{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:24:29.294705Z" }, "title": "Monotonicity Marking from Universal Dependency Trees", "authors": [ { "first": "Zeming", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Rose-Hulman Institute of Technology", "location": {} }, "email": "" }, { "first": "Qiyue", "middle": [], "last": "Gao", "suffix": "", "affiliation": { "laboratory": "", "institution": "Rose-Hulman Institute of Technology", "location": {} }, "email": "gaoq@rose-hulman.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Dependency parsing is a tool widely used in the field of Natural Language Processing and computational linguistics. However, there is hardly any work that connects dependency parsing to monotonicity, which is an essential part of logic and linguistic semantics. In this paper, we present a system that automatically annotates monotonicity information based on Universal Dependency parse trees. Our system utilizes surface-level monotonicity facts about quantifiers, lexical items, and token-level polarity information. We compared our system's performance with existing systems in the literature, including NatLog and ccg2mono, on a small evaluation dataset. Results show that our system outperforms NatLog and ccg2mono.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Dependency parsing is a tool widely used in the field of Natural Language Processing and computational linguistics. However, there is hardly any work that connects dependency parsing to monotonicity, which is an essential part of logic and linguistic semantics. In this paper, we present a system that automatically annotates monotonicity information based on Universal Dependency parse trees. Our system utilizes surface-level monotonicity facts about quantifiers, lexical items, and token-level polarity information. 
We compared our system's performance with existing systems in the literature, including NatLog and ccg2mono, on a small evaluation dataset. Results show that our system outperforms NatLog and ccg2mono.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The number of computational approaches for Natural Language Inference (NLI) has rapidly grown in recent years. Most of the approaches can be categorized as (1) Systems that translate sentences into first-order logic expressions and then apply theorem proving (Blackburn and Bos, 2005) . (2) Systems that use blackbox neural network approaches to learn the inference (Devlin et al., 2019; Liu et al., 2019) . (3) Systems that apply natural logic as a tool to make inferences (MacCartney and Manning, 2009; Angeli et al., 2016; Abzianidze, 2017) . Compared to neural network approaches, systems that apply natural logic are more robust, formally more precise, and more explainable. Several systems contributed to the third category (MacCartney and Manning, 2009; Angeli et al., 2016) to solve the NLI task using monotonicity reasoning, a type of logical inference that is based on word replacement. 
Below is an example of monotonicity reasoning:", "cite_spans": [ { "start": 259, "end": 284, "text": "(Blackburn and Bos, 2005)", "ref_id": "BIBREF3" }, { "start": 366, "end": 387, "text": "(Devlin et al., 2019;", "ref_id": "BIBREF4" }, { "start": 388, "end": 405, "text": "Liu et al., 2019)", "ref_id": "BIBREF13" }, { "start": 474, "end": 504, "text": "(MacCartney and Manning, 2009;", "ref_id": "BIBREF15" }, { "start": 505, "end": 525, "text": "Angeli et al., 2016;", "ref_id": "BIBREF1" }, { "start": 526, "end": 543, "text": "Abzianidze, 2017)", "ref_id": "BIBREF0" }, { "start": 746, "end": 760, "text": "Manning, 2009;", "ref_id": "BIBREF15" }, { "start": 761, "end": 781, "text": "Angeli et al., 2016)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. (a) All students\u2193 carry a MacBook\u2191.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(b) All students carry a laptop. (c) All new students carry a MacBook.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. (a) Not all new students\u2191 carry a laptop.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(b) Not all students carry a laptop.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As the example shows, the word replacement is based on the polarity mark (arrow) on each word. A monotone polarity (\u2191) allows an inference from (1a) to (1b), where a more general concept laptop replaces the more specific concept MacBook. 
An antitone polarity (\u2193) allows an inference from (1a) to (1c), where a more specific concept new students replaces the more general concept students.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The direction of the polarity marks can be reversed by adding a downward entailment operator like Not, which allows an inference from (2a) to (2b). Thus, successful word replacement relies on accurate polarity marks. To obtain the polarity mark for each word, an automatic polarity marking system is required to annotate a sentence by placing a polarity mark on each word. This is formally called the polarization process. Polarity markings support monotonicity reasoning, and thus are used by systems for Natural Language Inference and for data augmentation for language models (MacCartney and Manning, 2009; Angeli et al., 2016). In this paper, we introduce a novel automatic polarity marking system that annotates monotonicity information by applying a polarization algorithm to a Universal Dependency parse tree. Our system is inspired by ccg2mono, an automatic polarity marking system (Hu and Moss, 2018). In contrast to ccg2mono, which derives monotonicity information from CCG (Lewis and Steedman, 2014) parse trees, our system's polarization algorithm derives monotonicity information using Universal Dependency (Nivre et al., 2016) parse trees. There are several advantages of using UD parsing for polarity marking rather than CCG parsing. First, UD parsing is more accurate, since the amount of training data for UD parsing is larger than that for CCG parsing. The higher accuracy of UD parsing should lead to more accurate polarity annotation. Second, UD parsing works for more types of text. Overall, our system opens up a new framework for performing inference, semantics, and automated reasoning over UD representations. 
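As a toy illustration of the word-replacement idea just described (hypothetical code, not part of the system; the word lists are illustrative):

```python
# Toy illustration (hypothetical, not the paper's system) of monotonicity-based
# word replacement: an upward (monotone) mark licenses replacement by a more
# general term, a downward (antitone) mark by a more specific term.

GENERALIZES = {"MacBook": "laptop"}          # specific -> general (illustrative)
SPECIALIZES = {"students": "new students"}   # general -> specific (illustrative)

def replace(word, mark):
    """Replace a word according to its polarity mark ('up', 'down', or '=')."""
    if mark == "up":
        return GENERALIZES.get(word, word)
    if mark == "down":
        return SPECIALIZES.get(word, word)
    return word  # '=' licenses no replacement

print(replace("MacBook", "up"))     # laptop
print(replace("students", "down"))  # new students
```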
We will introduce the polarization algorithm's general steps, a set of rules we used to mark polarity on dependency parse trees, and comparisons between our system and some existing polarity marking tools, including NatLog (MacCartney and Manning, 2009; Angeli et al., 2016) and ccg2mono. Our evaluation focuses on a small dataset used to evaluate ccg2mono . Our system outperforms NatLog and ccg2mono. In particular, our system achieves the highest annotation accuracy on both the token level and the sentence level.", "cite_spans": [ { "start": 572, "end": 602, "text": "(MacCartney and Manning, 2009;", "ref_id": "BIBREF15" }, { "start": 603, "end": 623, "text": "Angeli et al., 2016)", "ref_id": "BIBREF1" }, { "start": 880, "end": 899, "text": "(Hu and Moss, 2018)", "ref_id": "BIBREF6" }, { "start": 983, "end": 1009, "text": "(Lewis and Steedman, 2014)", "ref_id": "BIBREF12" }, { "start": 1119, "end": 1139, "text": "(Nivre et al., 2016)", "ref_id": "BIBREF20" }, { "start": 1870, "end": 1884, "text": "Manning, 2009;", "ref_id": "BIBREF15" }, { "start": 1885, "end": 1905, "text": "Angeli et al., 2016)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Universal Dependencies (UD) (Nivre et al., 2016) was first designed to handle language tasks for many different languages. The syntactic annotation in UD mostly relies on dependency relations. Words enter into dependency relations, and that is what UD tries to capture. There are 40 grammatical dependency relations between words, such as nominal subject (nsubj), relative clause modifier (acl:relcl), and determiner (det). A dependency relation connects a headword to a modifier. For example, in the dependency parse tree for All dogs eat food (figure 1), the dependency relation nsubj connects the modifier dogs and the headword eat. The system presented in this paper utilizes Universal Dependencies to obtain a dependency parse tree from a sentence. 
We will explain the details of the parsing process in the implementation section.", "cite_spans": [ { "start": 28, "end": 48, "text": "(Nivre et al., 2016)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "There are two relevant systems of prior work: (1) the NatLog (MacCartney and Manning, 2009; Angeli et al., 2016) system included in the Stanford CoreNLP library; (2) the ccg2mono system (Hu and Moss, 2018). The NatLog system is a natural language inference system and a part of the Stanford CoreNLP library. NatLog marks polarity on each sentence by applying a pattern-based polarization algorithm to the dependency parse tree generated by the Stanford dependency parser. A list of downward-monotone and non-monotone expressions is defined, along with an arity and a Tregex pattern that lets the system identify whether an expression occurred.", "cite_spans": [ { "start": 77, "end": 91, "text": "Manning, 2009;", "ref_id": "BIBREF15" }, { "start": 92, "end": 112, "text": "Angeli et al., 2016)", "ref_id": "BIBREF1" }, { "start": 187, "end": 205, "text": "(Hu and Moss, 2018", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The ccg2mono system is a polarity marking tool that annotates a sentence by polarizing a CCG parse tree. The polarization algorithm of ccg2mono is based on van Benthem (1986)'s work and Moss (2012)'s continuation on the soundness of internalized polarity marking. The system uses a marked/order-enriched lexicon and can handle application rules, type-raising, and composition in CCG. The main polarization process contains two steps: mark and polarize. For the mark step, the system puts markings on each node in the parse tree from leaf to root. For the polarize step, the system assigns polarities to each node from root to leaf. 
Compared to NatLog, an advantage of ccg2mono is that it polarizes on both the word-level and the constituent level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "3 Universal Dependency to Polarity", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our system's polarization algorithm contains three steps: (1) Universal Dependency Parsing, which transforms a sentence to a UD parse tree, (2) Binarization, which converts a UD parse tree to a binary UD parse tree, and (3) Polarization, which places polarity marks on each node in a binary UD parse tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "3.1" }, { "text": "To preprocess the dependency parse graph, we designed a binarization algorithm that can map each dependency tree to an s-expression (Reddy et al., 2016) . Formally, an s-expression has the form (exp1 exp2 exp3), where exp1 is a dependency label, and both exp2 and exp3 are either (1) a word such as eat; or (2) an s-expression such as (det all dogs). The process of mapping a dependency tree to an s-expression is called binarization. Our system represents an s-expression as a binary tree. A binary tree has a root node, a left child node, and a right child node. In representing an s-expression, the root node can either be a single word or a dependency label. Both the left and the right child nodes can either be a sub-binary-tree, or null. The Figure 2 : A binarized dependency parse tree for \"All dogs eat apples.\" system always puts the modifiers on the left and the headwords on the right. For example, the sentence All dogs eat apples has an s-expression (nsubj (det All dogs) (obj eat apples)) and can be shown as a binary tree in figure 2. 
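The s-expression and its binary-tree form can be sketched with a minimal Python structure (an illustrative sketch under our own naming assumptions, not the authors' implementation):

```python
# Minimal sketch (hypothetical, not the paper's implementation) of the
# binary-tree representation of the s-expression
# (nsubj (det All dogs) (obj eat apples)).

class BinaryDependencyTree:
    def __init__(self, val, left=None, right=None):
        self.val = val      # a word or a dependency label
        self.left = left    # modifier sub-tree (None for a leaf)
        self.right = right  # headword sub-tree (None for a leaf)

    def to_sexpr(self):
        if self.left is None and self.right is None:
            return self.val
        return f"({self.val} {self.left.to_sexpr()} {self.right.to_sexpr()})"

# "All dogs eat apples": modifiers on the left, headwords on the right.
tree = BinaryDependencyTree(
    "nsubj",
    BinaryDependencyTree("det", BinaryDependencyTree("All"), BinaryDependencyTree("dogs")),
    BinaryDependencyTree("obj", BinaryDependencyTree("eat"), BinaryDependencyTree("apples")),
)
print(tree.to_sexpr())  # (nsubj (det All dogs) (obj eat apples))
```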
In the left sub-tree (All dogs), the dependency label det will be the root node, the modifier all will be the left child, and the headword dogs will be the right child.", "cite_spans": [ { "start": 132, "end": 152, "text": "(Reddy et al., 2016)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 749, "end": 757, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Binarization", "sec_num": "3.2" }, { "text": "Our binarization algorithm employs a dependency relation hierarchy to impose a strict traversal order from the root relation to each leaf word. The hierarchy allows for an ordering on the different modifier words. For example, in the binary dependency parse tree (nsubj (det All dogs) (obj eat apples)), the nominal subject (nsubj) goes above the determiner (det) in the tree because det is lower than nsubj in the hierarchy. We originally used the binarization hierarchy from Reddy et al. (2016)'s work, and later extended it with additional dependency relations such as oblique nominal (obl) and expletive (expl). Table 1 shows the complete hierarchy, where the level-id indicates a relation's level in the hierarchy. The smaller a relation's level-id is, the higher that relation is in the hierarchy.", "cite_spans": [ { "start": 285, "end": 286, "text": "(", "ref_id": null }, { "start": 470, "end": 489, "text": "Reddy et al. 
(2016)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 610, "end": 617, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Binarization", "sec_num": "3.2" }, { "text": "Algorithm 1 Binarization 1: root \u2190 GET_ROOT_NODE(G) 2: T \u2190 COMPOSE(root) 3: return T 4: 5: function COMPOSE(node): 6: C \u2190 GET_CHILDREN(node) 7:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binarization", "sec_num": "3.2" }, { "text": "Cs \u2190 SORT_BY_PRIORITY(C) 8:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binarization", "sec_num": "3.2" }, { "text": "if | Cs | == 0 then 9:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binarization", "sec_num": "3.2" }, { "text": "B \u2190 BINARYDEPENDENCYTREE() 10:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binarization", "sec_num": "3.2" }, { "text": "B.val = node 11:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binarization", "sec_num": "3.2" }, { "text": "return B 12:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binarization", "sec_num": "3.2" }, { "text": "else 13: top \u2190 C.pop() 14: B \u2190 BINARYDEPENDENCYTREE() 15: B.val = RELATE(top, node) 16: B.left = COMPOSE(top) 17: B.right = COMPOSE(node) 18: return B 19:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binarization", "sec_num": "3.2" }, { "text": "end if 20: end function", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Binarization", "sec_num": "3.2" }, { "text": "The polarization algorithm places polarities on each node of a UD parse tree based on a lexicon of polarization rules for each dependency relation and some special words. Our polarization algorithm is similar to the algorithms surveyed by Lavalle-Mart\u00ednez et al. (2018) . Like the algorithm of Sanchez (1991) , our algorithm computes polarity from leaves to root. 
One difference is that our algorithm often computes polarity following a left-to-right inorder traversal (left\u2192root\u2192right) or a right-to-left inorder traversal (right\u2192root\u2192left) in addition to the top-down traversal. In our algorithm, each node's polarity depends on both its parent node and its sibling node (left side or right side), which differs from the algorithms in Lavalle-Mart\u00ednez et al. (2018)'s survey. Our algorithm is deterministic, and thus never fails.", "cite_spans": [ { "start": 239, "end": 269, "text": "Lavalle-Mart\u00ednez et al. (2018)", "ref_id": "BIBREF11" }, { "start": 294, "end": 308, "text": "Sanchez (1991)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Polarization", "sec_num": "3.3" }, { "text": "The polarization algorithm takes in a binarized UD parse tree T and a set of polarization rules, both dependency-relation-level (L) and word-level (W). The algorithm outputs a polarized UD parse tree T * such that (1) each node is marked with Figure 3 : Visualization of a polarized binary dependency parse tree for the triple negation sentence No student refused to dance without shoes.", "cite_spans": [], "ref_spans": [ { "start": 243, "end": 251, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Polarization", "sec_num": "3.3" }, { "text": "a polarity of either monotone (\u2191), antitone (\u2193), or no monotonicity information (=), and (2) both T and T * have the same universal dependency structure except for the polarity marks. Figure 3 shows a visualization of the binary dependency parse tree after polarization completes. The general steps of the polarization start from the root node of the binary parse tree. The system gets the corresponding polarization rule from the lexicon according to the root node's dependency relation. 
The system applies the retrieved polarization rule and then continues these steps recursively down the left sub-tree and the right sub-tree. Each polarization rule is composed from a set of basic building blocks, including rules for negation, equalization, and monotonicity generation. When the recursion reaches a leaf node, which is an individual word in a sentence, a set of word-based polarization rules is retrieved from the lexicon, and the system polarizes the node according to the rule corresponding to that particular word. More details about word-based polarization rules are covered in section 3.4.2, Polarity Generation. An overview of the polarization algorithm and a general scheme of the implementation for dependency-level polarization rules are shown in Algorithm 2.", "cite_spans": [], "ref_spans": [ { "start": 176, "end": 184, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Polarization", "sec_num": "3.3" }, { "text": "Our polarization algorithm contains a lexicon of polarization rules corresponding to each dependency relation. Each polarization rule is composed from a set of building blocks divided into three categories: negation rules, equalization rules, and monotonicity generation rules. The generation rules will generate three types of monotonicity: monotone (\u2191), antitone (\u2193), and no monotonicity information (=), either by initialization or based on the words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Polarization Rules", "sec_num": "3.4" }, { "text": "Input: T : binary dependency tree L: dependency-level polarization rules W: word-level polarization rules Output: T * : polarized binary dependency tree 1: if T .is_tree then 2:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 2 Polarization", "sec_num": null }, { "text": "relation \u2190 T .val 3: POLARIZATION_RULE(.) 
\u2190 L[relation] 4:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 2 Polarization", "sec_num": null }, { "text": "POLARIZATION_RULE(T ) 5: end if 6: 7: General scheme of a polarization rule's implementation for a dependency relation 8: function POLARIZATION_RULE(T ) 9:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 2 Polarization", "sec_num": null }, { "text": "Initialize or inherit polarities 10:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 2 Polarization", "sec_num": null }, { "text": "if T .mark = NULL then 11:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 2 Polarization", "sec_num": null }, { "text": "T .right.mark = T .mark 12:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 2 Polarization", "sec_num": null }, { "text": "T .left.mark = T .mark 13: else 14:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 2 Polarization", "sec_num": null }, { "text": "T .right.mark = \u2191 15:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 2 Polarization", "sec_num": null }, { "text": "T ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 2 Polarization", "sec_num": null }, { "text": "Negation and Equalization The negation rule and the equalization rule are used by several core dependency relations such as nmod, obj, and acl:recl. Both negation and equalization have two ways of application: backward or top-down. A backward negation rule is triggered by a downward polarity (\u2193) on the right node of the tree (marked below as R), flipping every node's polarity under the left node (marked below as L). Similarly, a backward equalization rule is triggered by a no monotonicity information polarity (=) on the tree's right node, and it marks every node under the left node as =. 
Examples of trees before and after applying backward and forward negation and equalization are shown as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building Blocks", "sec_num": "3.4.1" }, { "text": "\u2022 Backward Negation: obj \u2191 \u00ac(L \u2191 ) R \u2193 obj \u2191 L \u2193 R \u2193 \u2022 Backward Equalization: obj \u2191 \u223c = (L \u2191 ) R = obj \u2191 L = R = \u2022 Forward Negation: advmod \u2191 L \u2193 \u00ac(R \u2191 ) advmod \u2191 L \u2193 R \u2193 \u2022 Forward Equalization: advmod \u2191 L = \u223c = (R \u2191 ) advmod \u2191 L = R =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building Blocks", "sec_num": "3.4.1" }, { "text": "where \u00ac means negation and \u223c = means equalization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building Blocks", "sec_num": "3.4.1" }, { "text": "A top-down negation is used by polarization rules like the determiner (det) and adverbial modifier (advmod) rules. It starts at the parent node of the current tree, and flips the arrow on each node under that parent node, excluding the current tree. This top-down negation is used by det, case, and advmod when a negation operator like no, not, or at most appears. Below is an example of a tree before and after applying the top-down negation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building Blocks", "sec_num": "3.4.1" }, { "text": "\u00ac(nsubj \u2191 ) det \u2191 No \u2191 cat \u2193 \u00ac(flies \u2191 ) nsubj \u2193 det \u2191 No \u2191 cat \u2193 flies \u2193", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building Blocks", "sec_num": "3.4.1" }, { "text": "Polarity Generation The polarity is generated by words. During polarization, the polarity can change based on a particular word that promotes a polarity over the part of the sentence it governs. These words include quantifiers and verbs. 
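The negation and equalization building blocks above can be sketched as arrow-flipping operations over a subtree (a hypothetical illustration under our own data layout, not the authors' code):

```python
# Hypothetical sketch of the negation and equalization building blocks:
# negation flips every polarity arrow in a subtree, equalization sets
# every polarity to "=" (no monotonicity information).

def negate(node):
    """Flip the polarity mark on every node of the (sub)tree."""
    if node is None:
        return
    flip = {"up": "down", "down": "up", "=": "="}
    node["mark"] = flip[node["mark"]]
    negate(node.get("left"))
    negate(node.get("right"))

def equalize(node):
    """Mark every node of the (sub)tree with '='."""
    if node is None:
        return
    node["mark"] = "="
    equalize(node.get("left"))
    equalize(node.get("right"))

# Backward negation on (obj L R): a "down" mark on the right node
# triggers negation of everything under the left node.
left = {"mark": "up", "left": None, "right": None}
tree = {"mark": "up", "left": left,
        "right": {"mark": "down", "left": None, "right": None}}
if tree["right"]["mark"] == "down":
    negate(tree["left"])
print(left["mark"])  # down
```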
For the monotonicity from quantifiers, we follow the monotonicity profiles listed in the work done by Icard III and Moss (2014) on monotonicity, which built on van Benthem (1986). Additionally, to extend to more quantifiers, we observed polarization results generated by ccg2mono. Overall, we categorized the quantifiers as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building Blocks", "sec_num": "3.4.1" }, { "text": "\u2022 Universal Type Every \u2193 \u2191 Each \u2193 \u2191 All \u2193 \u2191 \u2022 Negation Type No \u2193 \u2193 Less than \u2193 \u2193 At most \u2193 \u2193 \u2022 Exact Type Exactly n = = The = \u2191 This = \u2191 \u2022 Existential Type Some \u2191 \u2191 Several \u2191 \u2191 A, An \u2191 \u2191 \u2022 Other Type Most = \u2191 Few = \u2193", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building Blocks", "sec_num": "3.4.1" }, { "text": "Where the first mark is the monotonicity for the first argument after the quantifier and the second mark is the monotonicity for the second argument after the quantifier. For verbs, there are upward entailment operators and downward entailment operators. Verbs that are downward entailment operators, such as refuse, promote an antitone polarity, which will negate its dependents. For example, for the phrase refused to go, refused will promote an antitone polarity, which negates to dance:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building Blocks", "sec_num": "3.4.1" }, { "text": "xcomp \u2191 \u00ac(mark \u2191 ) \u00ac(to \u2191 ) \u00ac(go \u2191 ) refused \u2191 xcomp \u2191 mark \u2193 to \u2193 go \u2193 refused \u2191", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building Blocks", "sec_num": "3.4.1" }, { "text": "In addition to quantifiers and verbs, some other words also change the monotonicity of a sentence. For example, words like not, none, and nobody promote an antitone polarity. 
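The quantifier categories above can be captured in a simple lookup table mapping each quantifier to the monotonicity marks of its two argument positions (a hedged sketch; the table encodes the profiles listed in the text, and the function name is ours):

```python
# Illustrative table of quantifier monotonicity profiles from the text:
# each quantifier maps to (mark for first argument, mark for second argument),
# where "up" = monotone, "down" = antitone, "=" = no monotonicity information.

QUANTIFIER_PROFILES = {
    # universal type
    "every": ("down", "up"), "each": ("down", "up"), "all": ("down", "up"),
    # negation type
    "no": ("down", "down"), "less than": ("down", "down"), "at most": ("down", "down"),
    # exact type
    "exactly n": ("=", "="), "the": ("=", "up"), "this": ("=", "up"),
    # existential type
    "some": ("up", "up"), "several": ("up", "up"), "a": ("up", "up"), "an": ("up", "up"),
    # other type
    "most": ("=", "up"), "few": ("=", "down"),
}

def argument_marks(quantifier):
    """Look up a quantifier's monotonicity profile (default: upward, upward)."""
    return QUANTIFIER_PROFILES.get(quantifier.lower(), ("up", "up"))

print(argument_marks("All"))  # ('down', 'up')
print(argument_marks("No"))   # ('down', 'down')
```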
Our system also handles material implications with the form if x then y. Based on Moss (2012), the word if promotes an antitone polarity in the antecedent and positive polarity in the consequent. For background on monotonicity and semantics, see van Benthem (1986) , Keenan and Faltz (1984) , and also Karttunen (2012).", "cite_spans": [ { "start": 425, "end": 439, "text": "Benthem (1986)", "ref_id": "BIBREF2" }, { "start": 442, "end": 465, "text": "Keenan and Faltz (1984)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Building Blocks", "sec_num": "3.4.1" }, { "text": "Each dependency relation has a corresponding polarization rule. All the rules start with initializing the starting node as upward monotone polarity (\u2191). Alternatively, if the starting node has a polarity marked, each child node will inherit the root node's polarity. Each rule's core part is a combination of the default rules and monotonicity generation rules. In this section, we will briefly show three major types of dependency relation rules in the polarization algorithm. The relative clause modifier relation will represent rules for modifier relations. The determiner relation rule will represent rules containing monotonicity generation rules. 
The object and open clausal complement rules will represent rules containing word-level polarization rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Relation Rules", "sec_num": "3.4.2" }, { "text": "Input: T : binary dependency sub-tree Output: T * : polarized binary dependency sub-tree 1: if T .mark = NULL then 2:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 3 Polarize_acl:relcl", "sec_num": null }, { "text": "T .right.mark = T .mark 3: else 4:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 3 Polarize_acl:relcl", "sec_num": null }, { "text": "T .right.mark = \u2191 5: end if 6: T .left.mark = \u2191 7: 8: POLARIZE(T .right) 9: POLARIZE(T .left) 10: 11: if T .right.mark == \u2193 then 12: NEGATE(T .left) 13: else if T .right.mark == = then 14: EQUALIZE(T .left) 15: end if Relative Clause Modifier For the relative clause modifier relation (acl:relcl), the relative clause depends on the noun it modifies. First, polarization is performed on both the left and right nodes, and then, depending on the polarity of the right node, a negation or an equalization rule will be applied. The algorithm first applies a top-down inheritance if the root already has its polarity marked; otherwise, it initializes the left and right nodes as monotone. The algorithm polarizes both the left and right nodes. Next, the algorithm checks the right node's polarity. If the right node is marked as antitone, a backward negation is applied. Alternatively, if the right node is marked as no monotonicity information, a backward equalization is applied. During the experiments, we noticed that if the root node is marked antitone, and the left node inherits that, a negation later will cause a double negation, producing incorrect polarity marks. To avoid this double negation, we exclude the left node from the top-down inheritance rule by initializing the left node directly with a monotone mark. 
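The steps of this rule can be sketched in Python (a hedged illustration of the acl:relcl rule just described; the node layout and helper names are our own assumptions, not the system's implementation):

```python
# Hypothetical sketch of the acl:relcl polarization rule: inherit or
# initialize marks, recurse, then apply backward negation/equalization
# depending on the right (head) node's polarity.

def polarize(node):
    """Placeholder for the recursive dispatch over dependency-level rules."""
    pass  # in the full system this would dispatch on the node's relation label

def negate(node):
    """Flip the polarity mark on every node of the (sub)tree."""
    if node is None:
        return
    flip = {"up": "down", "down": "up", "=": "="}
    node["mark"] = flip[node["mark"]]
    negate(node.get("left"))
    negate(node.get("right"))

def equalize(node):
    """Mark every node of the (sub)tree with '='."""
    if node is None:
        return
    node["mark"] = "="
    equalize(node.get("left"))
    equalize(node.get("right"))

def polarize_acl_relcl(tree):
    # Inherit the root's mark on the right node, or initialize it to "up";
    # the left node always starts at "up" to avoid double negation.
    tree["right"]["mark"] = tree["mark"] if tree.get("mark") else "up"
    tree["left"]["mark"] = "up"
    polarize(tree["right"])
    polarize(tree["left"])
    # Backward negation / equalization triggered by the right node's mark.
    if tree["right"]["mark"] == "down":
        negate(tree["left"])
    elif tree["right"]["mark"] == "=":
        equalize(tree["left"])
```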
The rule for acl:relcl also applies to the adverbial clause modifier (advcl) and the clausal modifier of noun (acl). An overview of the algorithm is shown in Algorithm 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 3 Polarize_acl:relcl", "sec_num": null }, { "text": "Determiner For the determiner relation (det), each different determiner can assign a new monotonicity to the noun it modifies. First, the algorithm performs a top-down inheritance on the left node if the root already has polarity marked. Next, the algorithm assigns the polarity for the noun depending on the determiner's type. For example, if the determiner is a universal quantifier, an antitone polarity is assigned to the right node. For negation quantifiers like no, its right node also receives an antitone polarity. Thus, a top-down negation is applied at the determiner relation tree's parent. Algorithm 4 Polarize_det Input: T : binary dependency sub-tree D: determiner mark dictionary Output: T * : polarized binary dependency sub-tree 1: det_type \u2190 GET_DET_TYPE(T .left) 2: if T .mark = NULL then 3:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 3 Polarize_acl:relcl", "sec_num": null }, { "text": "T .left.mark = T .mark 4: else 5:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 3 Polarize_acl:relcl", "sec_num": null }, { "text": "T .left.mark = \u2191 6: end if 7: 8: T .right.mark = D[det_type] 9: POLARIZE(T .right) 10: 11: if det_type == negation then 12: NEGATE(T .parent) 13: end if Algorithm 4 shows an overview of the algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 3 Polarize_acl:relcl", "sec_num": null }, { "text": "Object and Open Clausal Complement For the object relation (obj) and the open clausal complement relation (xcomp), both the verb and the noun would inherit the monotonicity from the parent in the majority of cases. 
The inheritance procedure is the same as the one used in acl:relcl's rule. Similarly, after the inheritance, the rule will polarize both the right sub-tree and the left sub-tree. However, since obj and xcomp both have a verb under the relation, they require a word-level polarization rule that checks the verb to determine if it is a downward entailment operator, which promotes an antitone monotonicity. The algorithm takes in a dictionary that contains a list of verbs and their", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 3 Polarize_acl:relcl", "sec_num": null }, { "text": "Input: T : binary dependency sub-tree Output: T * : polarized binary dependency sub-tree 1: if T .mark = NULL then 2:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 5 Polarize_obj", "sec_num": null }, { "text": "T .right.mark = T .mark 3: else 4:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 5 Polarize_obj", "sec_num": null }, { "text": "T .right.mark = \u2191 5: end if 6: T .left.mark = \u2191 7: 8: POLARIZE(T .right) 9: POLARIZE(T .left) 10: 11: Word-level polarization rule for downward entailment operators 12: if IS_DOWNWARD_OPERATOR(T .right.mark) then 13: NEGATE(T .left) 14: end if 15:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 5 Polarize_obj", "sec_num": null }, { "text": "implicatives. The dictionary is generated from the implicative verb dataset made by Ross and Pavlick (2019). If a verb is a downward entailment operator, which has a negative implicative, the rule will apply a negation on the left sub-tree, flipping each node's arrow. 
An overview of the algorithm is shown in Algorithm 5.", "cite_spans": [ { "start": 84, "end": 107, "text": "Ross and Pavlick (2019)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Algorithm 5 Polarize_obj", "sec_num": null }, { "text": "We conducted several preliminary comparisons to two existing systems. First, we compared to NatLog's monotonicity annotator. NatLog's annotator also uses dependency parsing. Its polarization algorithm does pattern-based matching to find occurrences of downward monotonicity information, and it polarizes only at the word level. In contrast, our system uses a tree-based polarization algorithm that assigns both word-level and constituent-level polarities. Our intuition is that the Tregex patterns used in NatLog are not as common or as easily understandable as the binary tree structure, which is a classic data structure widely used in the field of computer science.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to Existing Systems", "sec_num": "4" }, { "text": "According to a comparison on a list of sentences, NatLog's annotator does not perform as well as our system. For example, for the phrase the rabbit, rabbit should have a polarity with no monotonicity information (=). However, NatLog marks rabbit with a monotone polarity (\u2191). NatLog also incorrectly polarizes sentences containing multiple negations. For example, for the triple negation sentence No newspapers did not report no bad news, NatLog gives: No \u2191 newspapers \u2193 did \u2193 not \u2193 report \u2191 no \u2191 bad \u2191 news \u2191 . This result has incorrect polarity marks on multiple words: report, bad, and news should be \u2193, and no should be \u2191.
Both of the scenarios above can be handled correctly by our system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to Existing Systems", "sec_num": "4" }, { "text": "Compared to ccg2mono, our algorithm shares some similarities with its polarization algorithm. Both systems polarize on a tree structure, rely on a lexicon of rules, and polarize on both the word level and the constituent level. One difference is that ccg2mono's algorithm contains two steps: the first puts markings on each node, and the second puts polarities on each node. Our system does not require the marking step and only adds polarities to each node.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to Existing Systems", "sec_num": "4" }, { "text": "Our system has multiple advantages over ccg2mono. For parsing, our system uses UD parsing, which is more accurate than the CCG parsing used by ccg2mono due to the large amount of available training data. Also, our system covers more types of text than ccg2mono because UD parsing works for a variety of text genres such as web texts, emails, reviews, and even informal texts like Twitter tweets (Silveira et al., 2014; Zeldes, 2017; Liu et al., 2018). Our system can also work for more languages than ccg2mono since UD parsing supports more languages than CCG parsing. Overall, our system delivers more accurate polarization than ccg2mono. The CCG parser often makes mistakes that lead to polarization mistakes later on. For example, in the annotation The \u2193 market \u2193 is \u2193 not \u2193 impossible \u2193 to \u2193 navigate \u2193 , ccg2mono incorrectly marks every word as \u2193. Our system, on the other hand, uses UD parsing, which has higher parsing accuracy than CCG parsing and thus leads to fewer polarization mistakes.
For the expression above, our system correctly polarizes it as The \u2191 market = is \u2191 not \u2191 impossible \u2193 to \u2191 navigate \u2191 .", "cite_spans": [ { "start": 381, "end": 404, "text": "(Silveira et al., 2014;", "ref_id": "BIBREF25" }, { "start": 405, "end": 418, "text": "Zeldes, 2017;", "ref_id": "BIBREF26" }, { "start": 419, "end": 436, "text": "Liu et al., 2018)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison to Existing Systems", "sec_num": "4" }, { "text": "Our system also handles multi-word quantifiers better than ccg2mono. For example, for a multi-word quantifier expression like all of the dogs, ccg2mono mistakenly marks dogs as =. Our system, however, correctly marks the expression:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to Existing Systems", "sec_num": "4" }, { "text": "all \u2191 of \u2191 the \u2191 dogs \u2193 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to Existing Systems", "sec_num": "4" }, { "text": "Moreover, the core of ccg2mono does not include aspects of the verbal semantics of downward-entailing operators like forgot and regret. For example, ccg2mono's polarization Every \u2191 member \u2193 forgot \u2191 to \u2191 attend \u2191 the \u2191 meeting = is not correct because it fails to flip the polarity of to attend the. In contrast, our system produces a correct result: Every \u2191 member \u2193 forgot \u2191 to \u2193 attend \u2193 the \u2193 meeting = .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to Existing Systems", "sec_num": "4" }, { "text": "All three systems have difficulty polarizing sentences containing numbers. A scalar number n's monotonicity information is hard to determine because it can appear in different contexts: a single number n, without additional quantifiers or adjectives, can mean at least n, at most n, exactly n, or around n.
These contexts are syntactically hard to identify for a dependency parser or a CCG parser because doing so would require pragmatics and background knowledge that the parsers do not have. For example, in the sentence A dog ate 2 rotten biscuits, the gold label for 2 is =, which indicates that the context is "exactly 2". However, our system marks it as \u2193 since it considers the context as "at least 2", which is different from the gold label. Sentence (type): More \u2191 dogs \u2191 than \u2191 cats \u2193 sit = (comparative); Less \u2191 than \u2191 5 \u2191 people \u2193 ran \u2193 (less-than); A \u2191 dog \u2191 who \u2191 ate \u2191 two = rotten \u2191 biscuits \u2191 was \u2191 sick \u2191 for \u2191 three \u2193 days \u2193 (number); Every \u2191 dog \u2193 who \u2193 likes \u2193 most \u2193 cats = was \u2191 chased \u2191 by \u2191 at \u2191 least \u2191 two \u2193 of \u2191 them \u2191 (every:most:at-least); Even \u2191 if \u2191 you \u2193 are \u2193 addicted \u2193 to \u2193 cigarettes \u2193 you \u2191 can \u2191 smoke \u2191 two \u2193 a \u2191 day \u2191 (conditional:number). Table 2: Example sentences in the evaluation dataset.", "cite_spans": [], "ref_spans": [ { "start": 1108, "end": 1115, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Comparison to Existing Systems", "sec_num": "4" }, { "text": "Dataset We obtained the small evaluation dataset used in the evaluation of ccg2mono from its authors. The dataset contains 56 hand-crafted English sentences, each with manually annotated monotonicity information. The sentences cover a wide range of linguistic phenomena such as quantifiers, conditionals, conjunctions, and disjunctions. The dataset also contains hard sentences involving scalar numbers.
Some example sentences from the dataset are shown in Table 2.", "cite_spans": [], "ref_spans": [ { "start": 459, "end": 466, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Experiment", "sec_num": "5" }, { "text": "Dependency Parser In order to obtain a universal dependency parse tree from a sentence, we utilize a parser from Stanza, a Python natural language analysis package developed by Stanford. The neural pipeline in Stanza allows us to use pretrained neural parsing models to generate universal dependency parse trees. To achieve optimal performance, we trained two neural parsing models, including one trained on the Universal Dependencies English GUM corpus (Zeldes, 2017). The pretrained parsing model achieved an LAS (Zeman et al., 2018) score of 90.0 on the test data.", "cite_spans": [ { "start": 447, "end": 461, "text": "(Zeldes, 2017)", "ref_id": "BIBREF26" }, { "start": 511, "end": 531, "text": "(Zeman et al., 2018)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment", "sec_num": "5" }, { "text": "We evaluated the polarization accuracy on both the token level and the sentence level, in a similar fashion to the evaluation for part-of-speech tagging (Manning, 2011). For both levels of accuracy, we conducted one evaluation on all tokens (acc(all-tokens) in Table 3 ) and another on key tokens, including content words (nouns, verbs, adjectives, adverbs), determiners, and numbers (acc(key-tokens) in Table 3 ). The key tokens contain most of the useful monotonicity information for inference. In token-level evaluation, we counted the number of correctly annotated tokens for acc(all-tokens) or the number of correctly annotated key tokens for acc(key-tokens). In sentence-level evaluation, we counted the number of correct sentences. A correct sentence has all tokens correctly annotated for acc(all-tokens) or all key tokens correctly annotated for acc(key-tokens). 
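The two accuracy levels just described can be made precise with a short sketch. This is illustrative code with our own variable names, not the evaluation script used in the paper:

```python
def token_accuracy(gold, pred):
    """Fraction of tokens whose predicted polarity matches the gold polarity.

    gold and pred are lists of per-sentence label sequences, e.g. ["up", "=", "down"].
    """
    pairs = [(g, p) for gs, ps in zip(gold, pred) for g, p in zip(gs, ps)]
    return sum(g == p for g, p in pairs) / len(pairs)

def sentence_accuracy(gold, pred):
    """A sentence counts as correct only if every token is annotated correctly."""
    return sum(gs == ps for gs, ps in zip(gold, pred)) / len(gold)
```

The acc(key-tokens) variants are the same computations restricted to the key-token positions.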
We also evaluated our system's robustness on the token level. We followed the robustness metrics for evaluating multi-class classification tasks, which use precision, recall, and F1 score to measure a system's robustness. We calculated these three metrics for each polarity label: monotone (\u2191), antitone (\u2193), and no monotonicity information (=). The robustness evaluation is also done both on all tokens and on key tokens. Table 3 shows the performance of our system compared with NatLog and ccg2mono, using the same evaluation process as prior work. From Table 3 , we first observe that our system consistently outperforms ccg2mono and NatLog on both the token level and the sentence level. For accuracy on the token level, our system has the highest accuracy for the evaluation on all tokens (96.5) and the highest accuracy for the evaluation on key tokens (96.5). Our system's accuracy on key tokens is as high as its accuracy on all tokens, which demonstrates our system's good performance on polarity annotation for the tokens that are most significant to monotonicity inference. Table 4 : Token-level robustness comparison between NatLog, ccg2mono, and our system. The robustness score is evaluated both on all tokens and on key tokens (content words + determiners + numbers). For each of the three polarities, monotone (\u2191), antitone (\u2193), and no monotonicity information (=), precision, recall, and F1 score are calculated.", "cite_spans": [ { "start": 153, "end": 168, "text": "(Manning, 2011)", "ref_id": "BIBREF17" }, { "start": 326, "end": 361, "text": "(nouns, verbs, adjectives, adverbs)", "ref_id": null } ], "ref_spans": [ { "start": 262, "end": 269, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 409, "end": 416, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 1306, "end": 1313, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 1431, "end": 1438, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 1926, "end": 1933, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Experiment Setup", "sec_num": null }, { "text": "For accuracy on the sentence level, our system again has the highest accuracy for the evaluation on all tokens (87.5) and the highest accuracy for the evaluation on key tokens (89.2). Such results suggest that our system performs well at determining the monotonicity of sentence constituents. Overall, the evaluation validates that our system has higher polarity annotation accuracy than existing systems. We compared our annotations to ccg2mono's and observed that, for every token in the 56 sentences that ccg2mono annotates correctly, our system annotates it correctly as well. This means our system's polarization covers at least the linguistic phenomena that ccg2mono covers, and more. Table 4 shows the robustness scores of our system and the two existing systems. Our system has much higher precision and recall on all three polarity labels than the other two systems, and it again achieves the highest F1 score.
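The per-label precision, recall, and F1 used for the robustness evaluation can be sketched as follows (our own illustrative code, computed over flat token lists):

```python
def label_prf(gold, pred, label):
    """Precision, recall, and F1 of one polarity label ("up", "down", or "=")."""
    tp = sum(g == p == label for g, p in zip(gold, pred))
    fp = sum(p == label and g != label for g, p in zip(gold, pred))
    fn = sum(g == label and p != label for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```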
The consistent and high robustness scores show that our system is considerably more robust on the given dataset than the existing systems.", "cite_spans": [], "ref_spans": [ { "start": 720, "end": 727, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "6" }, { "text": "In this paper, we have demonstrated our system's ability to automatically annotate monotonicity information (polarity) for a sentence by conducting polarization on a universal dependency parse tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7" }, { "text": "The system operates by first converting the parse tree to a binary parse tree and then marking polarity on each node according to a lexicon of polarization rules. The system produces accurate annotations on sentences involving many different linguistic phenomena such as quantifiers, double negation, relative clauses, and conditionals. Our system had better performance on polarity marking than existing systems including ccg2mono (Hu and Moss, 2018) and NatLog (MacCartney and Manning, 2009; Angeli et al., 2016). Additionally, by using UD parsing, our system offers many advantages: it supports a variety of text genres and can be applied to many languages. In general, this paper opens up a new framework for performing inference, semantics, and automated reasoning over UD representations. For future work, an inference system can be built that utilizes the monotonicity information annotated by our system, similar to the MonaLog system (Hu et al., 2020). Several improvements can be made to the system to obtain more accurate annotations.
One improvement would be to incorporate pragmatics to help determine the monotonicity of a scalar number.", "cite_spans": [ { "start": 432, "end": 451, "text": "(Hu and Moss, 2018)", "ref_id": "BIBREF6" }, { "start": 456, "end": 493, "text": "NatLog (MacCartney and Manning, 2009;", "ref_id": null }, { "start": 494, "end": 514, "text": "Angeli et al., 2016)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7" } ], "back_matter": [ { "text": "This research was advised by Dr. Lawrence Moss from Indiana University and Dr. Michael Wollowski from Rose-Hulman Institute of Technology. We thank them for their helpful advice and feedback on this research. We also thank the anonymous reviewers for their insightful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "LangPro: Natural language theorem prover", "authors": [ { "first": "", "middle": [], "last": "Lasha Abzianidze", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "115--120", "other_ids": { "DOI": [ "10.18653/v1/D17-2020" ] }, "num": null, "urls": [], "raw_text": "Lasha Abzianidze. 2017. LangPro: Natural language theorem prover. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 115-120, Copenhagen, Denmark. 
Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Combining natural logic and shallow reasoning for question answering", "authors": [ { "first": "Gabor", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Neha", "middle": [], "last": "Nayak", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "442--452", "other_ids": { "DOI": [ "10.18653/v1/P16-1042" ] }, "num": null, "urls": [], "raw_text": "Gabor Angeli, Neha Nayak, and Christopher D. Manning. 2016. Combining natural logic and shallow reasoning for question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 442-452, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Essays in Logical Semantics, volume 29 of Studies in Linguistics and Philosophy", "authors": [ { "first": "Johan", "middle": [], "last": "Van Benthem", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johan van Benthem. 1986. Essays in Logical Semantics, volume 29 of Studies in Linguistics and Philosophy. D. Reidel Publishing Co., Dordrecht.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Representation and inference for natural language -a first course in computational semantics", "authors": [ { "first": "P", "middle": [], "last": "Blackburn", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Bos", "suffix": "" } ], "year": 2005, "venue": "CSLI Studies in Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Blackburn and Johan Bos. 2005. 
Representation and inference for natural language - a first course in computational semantics. In CSLI Studies in Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. 
Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "MonaLog: a lightweight system for natural language inference based on monotonicity", "authors": [ { "first": "Hai", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Richardson", "suffix": "" }, { "first": "Atreyee", "middle": [], "last": "Mukherjee", "suffix": "" }, { "first": "S", "middle": [], "last": "Lawrence", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "Moss", "suffix": "" }, { "first": "", "middle": [], "last": "K\u00fcbler", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Society for Computation in Linguistics (SCiL) 2020", "volume": "", "issue": "", "pages": "319--329", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hai Hu, Qi Chen, Kyle Richardson, Atreyee Mukherjee, Lawrence S Moss, and Sandra K\u00fcbler. 2020. MonaLog: a lightweight system for natural language inference based on monotonicity. In Proceedings of the Society for Computation in Linguistics (SCiL) 2020, pages 319-329.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Polarity computations in flexible categorial grammar", "authors": [ { "first": "Hai", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Larry", "middle": [], "last": "Moss", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics", "volume": "", "issue": "", "pages": "124--129", "other_ids": { "DOI": [ "10.18653/v1/S18-2015" ] }, "num": null, "urls": [], "raw_text": "Hai Hu and Larry Moss. 2018. Polarity computations in flexible categorial grammar. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 124-129, New Orleans, Louisiana. 
Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "An automatic monotonicity annotation tool based on ccg trees", "authors": [ { "first": "Hai", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Lawrence", "middle": [ "S" ], "last": "Moss", "suffix": "" } ], "year": 2020, "venue": "Second Tsinghua Interdisciplinary Workshop on Logic, Language, and Meaning: Monotonicity in Logic and Language", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hai Hu and Lawrence S. Moss. 2020. An automatic monotonicity annotation tool based on ccg trees. In Second Tsinghua Interdisciplinary Workshop on Logic, Language, and Meaning: Monotonicity in Logic and Language.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Recent progress on monotonicity", "authors": [ { "first": "F", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Lawrence", "middle": [ "S" ], "last": "Icard", "suffix": "" }, { "first": "", "middle": [], "last": "Moss", "suffix": "" } ], "year": 2014, "venue": "Linguistic Issues in Language Technology", "volume": "9", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas F. Icard III and Lawrence S. Moss. 2014. Recent progress on monotonicity. In Linguistic Issues in Language Technology, Volume 9, 2014 - Perspectives on Semantic Representations for Textual Inference. CSLI Publications.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Simple and phrasal implicatives", "authors": [ { "first": "Lauri", "middle": [], "last": "Karttunen", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics", "volume": "1", "issue": "", "pages": "124--131", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lauri Karttunen. 2012. Simple and phrasal implicatives. 
In Proceedings of the First Joint Conference on Lexical and Computational Semantics - Volume 1: Proceedings of the Main Conference and the Shared Task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, SemEval '12, pages 124-131, USA. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Boolean Semantics for Natural Language", "authors": [ { "first": "L", "middle": [], "last": "Edward", "suffix": "" }, { "first": "Leonard", "middle": [ "M" ], "last": "Keenan", "suffix": "" }, { "first": "", "middle": [], "last": "Faltz", "suffix": "" } ], "year": 1984, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edward L. Keenan and Leonard M. Faltz. 1984. Boolean Semantics for Natural Language. Springer.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Equivalences among polarity algorithms", "authors": [ { "first": "J", "middle": [], "last": "Lavalle-Mart\u00ednez", "suffix": "" }, { "first": "M", "middle": [], "last": "Montes Y G\u00f3mez", "suffix": "" }, { "first": "L", "middle": [], "last": "Pineda", "suffix": "" }, { "first": "H\u00e9ctor", "middle": [], "last": "Jim\u00e9nez-Salazar", "suffix": "" }, { "first": "Ismael Everardo B\u00e1rcenas", "middle": [], "last": "Pati\u00f1o", "suffix": "" } ], "year": 2018, "venue": "Studia Logica", "volume": "106", "issue": "", "pages": "371--395", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Lavalle-Mart\u00ednez, M. Montes y G\u00f3mez, L. Pineda, H\u00e9ctor Jim\u00e9nez-Salazar, and Ismael Everardo B\u00e1rcenas Pati\u00f1o. 2018. Equivalences among polarity algorithms. 
Studia Logica, 106:371-395.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A* CCG parsing with a supertag-factored model", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "990--1000", "other_ids": { "DOI": [ "10.3115/v1/D14-1107" ] }, "num": null, "urls": [], "raw_text": "Mike Lewis and Mark Steedman. 2014. A* CCG parsing with a supertag-factored model. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 990-1000, Doha, Qatar. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Multi-lingual Wikipedia summarization and title generation on low resource corpus", "authors": [ { "first": "Wei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zuying", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Yinan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Workshop MultiLing 2019: Summarization Across Languages, Genres and Sources", "volume": "", "issue": "", "pages": "17--25", "other_ids": { "DOI": [ "10.26615/978-954-452-058-8_004" ] }, "num": null, "urls": [], "raw_text": "Wei Liu, Lei Li, Zuying Huang, and Yinan Liu. 2019. Multi-lingual Wikipedia summarization and title generation on low resource corpus. In Proceedings of the Workshop MultiLing 2019: Summarization Across Languages, Genres and Sources, pages 17-25, Varna, Bulgaria. 
INCOMA Ltd.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Parsing tweets into Universal Dependencies", "authors": [ { "first": "Yijia", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Schneider", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "965--975", "other_ids": { "DOI": [ "10.18653/v1/N18-1088" ] }, "num": null, "urls": [], "raw_text": "Yijia Liu, Yi Zhu, Wanxiang Che, Bing Qin, Nathan Schneider, and Noah A. Smith. 2018. Parsing tweets into Universal Dependencies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 965-975, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "An extended model of natural logic", "authors": [ { "first": "Bill", "middle": [], "last": "Maccartney", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Eighth International Conference on Computational Semantics", "volume": "", "issue": "", "pages": "140--156", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bill MacCartney and Christopher D. Manning. 2009. An extended model of natural logic. In Proceedings of the Eighth International Conference on Computational Semantics, pages 140-156, Tilburg, The Netherlands. 
Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "The Stanford CoreNLP natural language processing toolkit", "authors": [ { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "Mcclosky", "suffix": "" } ], "year": 2014, "venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "55--60", "other_ids": { "DOI": [ "10.3115/v1/P14-5010" ] }, "num": null, "urls": [], "raw_text": "Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55-60, Baltimore, Maryland. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Part-of-speech tagging from 97% to 100%: Is it time for some linguistics?", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. Manning. 2011. Part-of-speech tagging from 97% to 100%: Is it time for some linguistics? In CICLing.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The soundness of internalized polarity marking", "authors": [ { "first": "L", "middle": [], "last": "Moss", "suffix": "" } ], "year": 2012, "venue": "Studia Logica", "volume": "100", "issue": "", "pages": "683--704", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Moss. 2012. 
The soundness of internalized polarity marking. Studia Logica, 100:683-704.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Syllogistic logics with comparative adjectives", "authors": [ { "first": "Lawrence", "middle": [ "S" ], "last": "Moss", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lawrence S. Moss and Hai Hu. 2020. Syllogistic logics with comparative adjectives. Unpublished ms., Indiana University.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Universal Dependencies v1: A multilingual treebank collection", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Ginter", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Haji\u010d", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Sampo", "middle": [], "last": "Pyysalo", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Silveira", "suffix": "" }, { "first": "Reut", "middle": [], "last": "Tsarfaty", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Zeman", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "1659--1666", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Haji\u010d, Christopher D. 
Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1659-1666, Portoro\u017e, Slovenia. European Language Resources Association (ELRA).", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Stanza: A python natural language processing toolkit for many human languages", "authors": [ { "first": "Peng", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Yuhao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yuhui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Bolton", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "101--108", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-demos.14" ] }, "num": null, "urls": [], "raw_text": "Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 101-108, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Transforming dependency structures to logical forms for semantic parsing", "authors": [ { "first": "Siva", "middle": [], "last": "Reddy", "suffix": "" }, { "first": "Oscar", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "127--140", "other_ids": { "DOI": [ "10.1162/tacl_a_00088" ] }, "num": null, "urls": [], "raw_text": "Siva Reddy, Oscar T\u00e4ckstr\u00f6m, Michael Collins, Tom Kwiatkowski, Dipanjan Das, Mark Steedman, and Mirella Lapata. 2016. Transforming dependency structures to logical forms for semantic parsing. Transactions of the Association for Computational Linguistics, 4:127-140.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "How well do NLI models capture verb veridicality?", "authors": [ { "first": "Alexis", "middle": [], "last": "Ross", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2230--2240", "other_ids": { "DOI": [ "10.18653/v1/D19-1228" ] }, "num": null, "urls": [], "raw_text": "Alexis Ross and Ellie Pavlick. 2019. How well do NLI models capture verb veridicality?
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2230-2240, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Studies on natural logic and categorial grammar", "authors": [ { "first": "V", "middle": [], "last": "Sanchez", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "V. Sanchez. 1991. Studies on natural logic and categorial grammar.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "A gold standard dependency corpus for English", "authors": [ { "first": "Natalia", "middle": [], "last": "Silveira", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Dozat", "suffix": "" }, { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Miriam", "middle": [], "last": "Connor", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, and Christopher D. Manning. 2014. A gold standard dependency corpus for English. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014).", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "The GUM corpus: Creating multilayer resources in the classroom", "authors": [ { "first": "Amir", "middle": [], "last": "Zeldes", "suffix": "" } ], "year": 2017, "venue": "Language Resources and Evaluation", "volume": "51", "issue": "", "pages": "581--612", "other_ids": { "DOI": [ "10.1007/s10579-016-9343-x" ] }, "num": null, "urls": [], "raw_text": "Amir Zeldes. 2017. The GUM corpus: Creating multilayer resources in the classroom. Language Resources and Evaluation, 51(3):581-612.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "CoNLL 2018 shared task: Multilingual parsing from raw text to Universal Dependencies", "authors": [ { "first": "Daniel", "middle": [], "last": "Zeman", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Haji\u010d", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Popel", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Potthast", "suffix": "" }, { "first": "Milan", "middle": [], "last": "Straka", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Ginter", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies", "volume": "", "issue": "", "pages": "1--21", "other_ids": { "DOI": [ "10.18653/v1/K18-2001" ] }, "num": null, "urls": [], "raw_text": "Daniel Zeman, Jan Haji\u010d, Martin Popel, Martin Potthast, Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018. CoNLL 2018 shared task: Multilingual parsing from raw text to Universal Dependencies. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1-21, Brussels, Belgium. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "A dependency parse tree for \"All dogs eat food.\"" }, "TABREF1": { "content": "
: Universal Dependency relation hierarchy. The
smaller a relation's level-id is, the higher that relation
is in the hierarchy.
", "num": null, "type_str": "table", "html": null, "text": "" }, "TABREF4": { "content": "", "num": null, "type_str": "table", "html": null, "text": "This table shows the polarity annotation accuracy on the token level and the sentence level for three systems: NatLog, ccg2mono, and our system. The token level accuracy counts the number of correctly annotated tokens, and the sentence level accuracy counts the number of correctly annotated sentences. Two types of accuracy are used. For acc(all-tokens), all tokens are evaluated. For acc(key-tokens), only key tokens (content words + determiners + numbers) are evaluated." } } } }