{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:24:22.465425Z"
},
"title": "Do Natural Language Explanations Represent Valid Logical Arguments? Verifying Entailment in Explainable NLI Gold Standards",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Valentino",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Idiap Research Institute",
"location": {
"country": "Switzerland \u2021"
}
},
"email": "marco.valentino@manchester.ac.uk"
},
{
"first": "Ian",
"middle": [],
"last": "Pratt-Hartman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Idiap Research Institute",
"location": {
"country": "Switzerland \u2021"
}
},
"email": ""
},
{
"first": "Andr\u00e9",
"middle": [],
"last": "Freitas",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Idiap Research Institute",
"location": {
"country": "Switzerland \u2021"
}
},
"email": "andre.freitas@manchester.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "An emerging line of research in Explainable NLP is the creation of datasets enriched with human-annotated explanations and rationales, used to build and evaluate models with stepwise inference and explanation generation capabilities. While human-annotated explanations are used as ground-truth for the inference, there is a lack of systematic assessment of their consistency and rigour. In an attempt to provide a critical quality assessment of Explanation Gold Standards (XGSs) for NLI, we propose a systematic annotation methodology, named Explanation Entailment Verification (EEV), to quantify the logical validity of human-annotated explanations. The application of EEV on three mainstream datasets reveals the surprising conclusion that a majority of the explanations, while appearing coherent on the surface, represent logically invalid arguments, ranging from being incomplete to containing clearly identifiable logical errors. This conclusion confirms that the inferential properties of explanations are still poorly formalised and understood, and that additional work on this line of research is necessary to improve the way Explanation Gold Standards are constructed.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "An emerging line of research in Explainable NLP is the creation of datasets enriched with human-annotated explanations and rationales, used to build and evaluate models with stepwise inference and explanation generation capabilities. While human-annotated explanations are used as ground-truth for the inference, there is a lack of systematic assessment of their consistency and rigour. In an attempt to provide a critical quality assessment of Explanation Gold Standards (XGSs) for NLI, we propose a systematic annotation methodology, named Explanation Entailment Verification (EEV), to quantify the logical validity of human-annotated explanations. The application of EEV on three mainstream datasets reveals the surprising conclusion that a majority of the explanations, while appearing coherent on the surface, represent logically invalid arguments, ranging from being incomplete to containing clearly identifiable logical errors. This conclusion confirms that the inferential properties of explanations are still poorly formalised and understood, and that additional work on this line of research is necessary to improve the way Explanation Gold Standards are constructed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Explanation Gold Standards (XGSs) are emerging as a fundamental enabling tool for step-wise and explainable Natural Language Inference (NLI). Resources such as WorldTree Jansen et al., 2018) , QASC (Khot et al., 2020) , among others (Wiegreffe and Marasovi\u0107, 2021; Thayaparan et al., 2020b; Bhagavatula et al., 2020; Camburu et al., 2018) provide a corpus of linguistic evidence on how humans construct explanations that are perceived as plausible, coherent and complete.",
"cite_spans": [
{
"start": 170,
"end": 190,
"text": "Jansen et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 198,
"end": 217,
"text": "(Khot et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 233,
"end": 264,
"text": "(Wiegreffe and Marasovi\u0107, 2021;",
"ref_id": "BIBREF28"
},
{
"start": 265,
"end": 290,
"text": "Thayaparan et al., 2020b;",
"ref_id": "BIBREF23"
},
{
"start": 291,
"end": 316,
"text": "Bhagavatula et al., 2020;",
"ref_id": "BIBREF0"
},
{
"start": 317,
"end": 338,
"text": "Camburu et al., 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Designed for tasks such as Textual Entailment (TE) and Question Answering (QA), these refer-e-SNLI Premise: A man in an orange vest leans over a pickup truck. Hypothesis: A man is touching a truck. Label: entailment",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Man leans over a pickup truck implies that he is touching it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation:",
"sec_num": null
},
{
"text": "Question: Which of the following characteristics would best help a tree survive the heat of a forest fire? [A] : Does the answer logically follow from the explanation? While step-wise explanations are used as ground-truth for the inference, there is a lack of assessment of their consistency and rigour. We propose EEV , a methodology to quantify the logical validity of human-annotated explanations.",
"cite_spans": [
{
"start": 107,
"end": 110,
"text": "[A]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Worldtree",
"sec_num": null
},
{
"text": "ence datasets are used to build and evaluate models with step-wise inference and explanation generation capabilities (Valentino et al., 2021; Cartuyvels et al., 2020; Kumar and Talukdar, 2020; Rajani et al., 2019) . While these explanations are used as ground-truth for the inference, there is a lack of systematic assessment of their consistency and rigour, introducing inconsistency biases within the models.",
"cite_spans": [
{
"start": 117,
"end": 141,
"text": "(Valentino et al., 2021;",
"ref_id": "BIBREF24"
},
{
"start": 142,
"end": 166,
"text": "Cartuyvels et al., 2020;",
"ref_id": "BIBREF5"
},
{
"start": 167,
"end": 192,
"text": "Kumar and Talukdar, 2020;",
"ref_id": "BIBREF12"
},
{
"start": 193,
"end": 213,
"text": "Rajani et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Worldtree",
"sec_num": null
},
{
"text": "This paper aims to provide a critical quality assessment of Eplanation Gold Standards for NLI in terms of their logical inference properties. By systematically translating natural language explanations into corresponding logical forms, we induce a set of recurring logical violations which can then be used as testing conditions for quantifying quality and logical consistency in the annotated explanations. More fundamentally, the paper reveals the surprising conclusion that a majority of the explanations present in explanation gold standards contain one or more major logical fallacies, while appearing to be coherent on the surface. This study reveals that the inferential properties of explanations are still poorly formalised and understood.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Worldtree",
"sec_num": null
},
{
"text": "The main contributions of this paper can be summarised as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Worldtree",
"sec_num": null
},
{
"text": "1. Proposal of a systematic methodology, named",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Worldtree",
"sec_num": null
},
{
"text": "Explanation Entailment Verification (EEV ), for analysing the logical consistency of NLI explanation gold-standards.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Worldtree",
"sec_num": null
},
{
"text": "2. Validation of the quality assessment methodology for three contemporary and mainstream reference XGSs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Worldtree",
"sec_num": null
},
{
"text": "3. The conclusion that most of the annotated human-explanations in the analysed samples represent logically invalid arguments, ranging from being incomplete to containing clearly identifiable logical errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Worldtree",
"sec_num": null
},
{
"text": "An emerging line of research in Explainable NLP is focused on the creation of datasets enriched with human-annotated explanations and rationales (Wiegreffe and Marasovi\u0107, 2021) . These resources are often adopted as Explanation Gold Standards (XGSs), providing additional supervision for training and evaluating explainable models capable of generating natural language explanations in support of their predictions (Valentino et al., 2021 Kumar and Talukdar, 2020; Cartuyvels et al., 2020; Thayaparan et al., 2020a; Rajani et al., 2019) . XGSs are designed to support Natural Language Inference, asking human-annotators to transcribe the reasoning required for deriving the correct prediction (Thayaparan et al., 2020b) . Despite the popularity of these datasets, and their application for measuring explainability on tasks such as Textual Entailment (Camburu et al., 2018) , Multiple-choice Question Answering Jhamtani and Clark, 2020; Khot et al., 2020; Jansen et al., 2018) , and other inference tasks (Wang et al., 2020; Ferreira and Freitas, 2020b,a; Bhagavatula et al., 2020) , little has been done to provide a clear understanding on the nature and the quality of the reasoning encoded in the explanations.",
"cite_spans": [
{
"start": 145,
"end": 176,
"text": "(Wiegreffe and Marasovi\u0107, 2021)",
"ref_id": "BIBREF28"
},
{
"start": 415,
"end": 438,
"text": "(Valentino et al., 2021",
"ref_id": "BIBREF24"
},
{
"start": 439,
"end": 464,
"text": "Kumar and Talukdar, 2020;",
"ref_id": "BIBREF12"
},
{
"start": 465,
"end": 489,
"text": "Cartuyvels et al., 2020;",
"ref_id": "BIBREF5"
},
{
"start": 490,
"end": 515,
"text": "Thayaparan et al., 2020a;",
"ref_id": "BIBREF22"
},
{
"start": 516,
"end": 536,
"text": "Rajani et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 693,
"end": 719,
"text": "(Thayaparan et al., 2020b)",
"ref_id": "BIBREF23"
},
{
"start": 851,
"end": 873,
"text": "(Camburu et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 911,
"end": 936,
"text": "Jhamtani and Clark, 2020;",
"ref_id": "BIBREF10"
},
{
"start": 937,
"end": 955,
"text": "Khot et al., 2020;",
"ref_id": "BIBREF11"
},
{
"start": 956,
"end": 976,
"text": "Jansen et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 1005,
"end": 1024,
"text": "(Wang et al., 2020;",
"ref_id": "BIBREF27"
},
{
"start": 1025,
"end": 1055,
"text": "Ferreira and Freitas, 2020b,a;",
"ref_id": null
},
{
"start": 1056,
"end": 1081,
"text": "Bhagavatula et al., 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Previous work on explainability evaluation has mainly focused on methods for assessing the quality and faithfulness of explanations generated by deep learning models (Camburu et al., 2020; Subramanian et al., 2020; Kumar and Talukdar, 2020; Jain and Wallace, 2019; Wiegreffe and Pinter, 2019) . Our work is related to this research, but focuses instead on the resources on which explainable models are trained. In that sense, this paper is more aligned to gold standard evaluation methods, which aim to design systematic approaches to qualify the content and the inference capabilities involved in mainstream NLP benchmarks (Lewis et al., 2021; Bowman and Dahl, 2021; Schlegel et al., 2020; Ribeiro et al., 2020; Pavlick and Kwiatkowski, 2019; Min et al., 2019) . However, to the best of our knowledge, none of these methods have been adopted to provide a critical assessment of humanannotated explanations present in XGSs.",
"cite_spans": [
{
"start": 166,
"end": 188,
"text": "(Camburu et al., 2020;",
"ref_id": "BIBREF4"
},
{
"start": 189,
"end": 214,
"text": "Subramanian et al., 2020;",
"ref_id": "BIBREF21"
},
{
"start": 215,
"end": 240,
"text": "Kumar and Talukdar, 2020;",
"ref_id": "BIBREF12"
},
{
"start": 241,
"end": 264,
"text": "Jain and Wallace, 2019;",
"ref_id": "BIBREF8"
},
{
"start": 265,
"end": 292,
"text": "Wiegreffe and Pinter, 2019)",
"ref_id": "BIBREF29"
},
{
"start": 624,
"end": 644,
"text": "(Lewis et al., 2021;",
"ref_id": "BIBREF13"
},
{
"start": 645,
"end": 667,
"text": "Bowman and Dahl, 2021;",
"ref_id": "BIBREF2"
},
{
"start": 668,
"end": 690,
"text": "Schlegel et al., 2020;",
"ref_id": "BIBREF20"
},
{
"start": 691,
"end": 712,
"text": "Ribeiro et al., 2020;",
"ref_id": "BIBREF19"
},
{
"start": 713,
"end": 743,
"text": "Pavlick and Kwiatkowski, 2019;",
"ref_id": "BIBREF17"
},
{
"start": 744,
"end": 761,
"text": "Min et al., 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Given a generic classification task T , an Explanation Gold Standard (XGS) is a collection of distinct instances of T , XGS(T ) = {I 1 , I 2 , . . . , I n }, where each element of the set,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Gold Standards",
"sec_num": "3"
},
{
"text": "I i = {X i , s i , E i },",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Gold Standards",
"sec_num": "3"
},
{
"text": "includes a problem formulation X i , the expected solution s i for X i , and a human-annotated explanation E i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Gold Standards",
"sec_num": "3"
},
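{
"text": "To make this definition concrete, the following minimal sketch (in Python; the class and field names are our own and purely illustrative, not part of any dataset release) shows how an XGS instance I_i = {X_i, s_i, E_i} can be represented:\n\nfrom dataclasses import dataclass\nfrom typing import List\n\n@dataclass\nclass XGSInstance:\n    # Problem formulation X_i, e.g. a premise/hypothesis pair or a question\n    # with candidate answers, kept as raw text for generality.\n    problem: str\n    # Expected solution s_i, e.g. an entailment label or the correct answer.\n    solution: str\n    # Human-annotated explanation E_i: a list of natural language sentences.\n    explanation: List[str]\n\n# An XGS(T) is then simply a collection of such instances.\nXGS = List[XGSInstance]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Gold Standards",
"sec_num": "3"
},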
{
"text": "In general, the nature of the elements in a XGS can vary greatly according to the task T under consideration. In this work, we restrict our investigation to Natural Language Inference (NLI) tasks, such as Textual Entailment and Question Answering, where problem formulation, expected solution, and explanations are entirely expressed in natural language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Gold Standards",
"sec_num": "3"
},
{
"text": "For this class of problems, the explanation is typically a composition of sentences, whose role is to describe the reasoning required to arrive at the final solution. As shown in the examples depicted in Figure 1 , the explanations are constructed by human annotators transcribing the commonsense and world knowledge necessary for the correct answer to hold. Given the nature of XGSs for NLI, we hypothesise that a human-annotated explanation represents a valid set of premises from which the expected solution logically follows. ",
"cite_spans": [],
"ref_spans": [
{
"start": 204,
"end": 212,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Explanation Gold Standards",
"sec_num": "3"
},
{
"text": "Figure 2: Overview of the Explanation Entailment Verification (EEV ) applied to different NLI problems. EEV takes the form of a multi-label classification problem where, for a given NLI problem, a human annotator has to qualify the validity of the inference process described in the explanation through a pre-defined set of classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Gold Standards",
"sec_num": "3"
},
{
"text": "In order to validate or reject this hypothesis, we design a methodology aimed at evaluating XGSs in terms of logical entailment, quantifying the extent to which human-annotated explanations actually entail the final answer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Gold Standards",
"sec_num": "3"
},
{
"text": "We present a methodology, named Explanation Entailment Verification (EEV ), aimed at quantifying and assessing the quality of human-annotated explanations in XGS for NLI tasks, in terms of their logical inference properties.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Entailment Verification",
"sec_num": "4"
},
{
"text": "To this end, we design an annotation framework that takes the form of a multi-label classification problem defined on a XGS. Specifically, the goal of EEV is to label each element in a XGS,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Entailment Verification",
"sec_num": "4"
},
{
"text": "I i = {X i , s i , E i },",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Entailment Verification",
"sec_num": "4"
},
{
"text": "using one of a predefined set of classes qualifying the validity of the inference process described in the explanation E i . Figure 2 shows a schematic representation of the annotation pipeline. One of the challenges involved in the design of a standardised methodology for EEV is the formalisation of an annotation task that is applicable to NLI problems with different shapes, such as Textual Entailment (TE) and Multiple-choice Question Answering (MCQA). To minimise the ambiguity in the annotation and make it independent of the specific NLI task, we define a methodology composed of three major steps: (1) problem definition; (2) formalisation; and (3) verification.",
"cite_spans": [],
"ref_spans": [
{
"start": 125,
"end": 133,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Explanation Entailment Verification",
"sec_num": "4"
},
{
"text": "In the problem definition step, each example I i in the XGS is translated into an entailment form (P |= c), identifying a set of sentences P representing the premises for the entailment, and a single sentence c representing its conclusion. As illustrated in Figure 2 , this step defines an entailment problem with a single surface form that allows abstracting from the NLI task under investigation.",
"cite_spans": [],
"ref_spans": [
{
"start": 258,
"end": 267,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Explanation Entailment Verification",
"sec_num": "4"
},
{
"text": "In the formalisation step, the sentences in P and c are translated into a logical form (\u03a6 |= \u03c8). Specifically, the formalisation is performed using event-based semantics, in which verbs correspond to event-types, and their objects to semantic roles (additional details on the formalism are provided in section 4.3). This step aims to minimise the ambiguity in the interpretation of the meaning of the sentences, supporting the annotators in the identification of logical errors and gaps in the explanations, and maximise the inter-annotator agreement in the downstream verification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Entailment Verification",
"sec_num": "4"
},
{
"text": "The final step corresponds to the actual multilabel classification problem. Specifically, the annotators are asked to verify whether the formalised set of premises \u03a6 entails the conclusion \u03c8 (\u03a6 |= \u03c8) and to classify the explanation in the corresponding example After EEV is performed for each instance in the dataset, the frequencies of the classification labels can be adopted to estimate and evaluate the overall entailment properties of the explanations in the XGS under consideration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Entailment Verification",
"sec_num": "4"
},
{
"text": "The problem definition step consists in the identification of the sentences in I i = {X i , s i , E i } that will compose the set of premises P and the conclusion c for the entailment problem P |= c.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem definition",
"sec_num": "4.1"
},
{
"text": "Here, we describe the procedure adopted for translating a specific NLI task into the entailment problem of interest given its original surface form. In particular, we employ two different translation procedures for Textual Entailment (TE) and Multiple-choice Question Answering (MCQA) problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem definition",
"sec_num": "4.1"
},
{
"text": "Textual Entailment (TE). For a TE task, the problem formulation X i is generally composed of two sentences, p and h, representing a premise and a hypothesis (see e-SNLI in figure 1). Each example in a TE task can be classified using one of the following labels: entailment, neutral, and contradiction (Bowman et al., 2015) . In this work, we focus on examples where the expected solution s i is entailment, implying that the hypothesis h is a consequence of the premise p. Therefore, to define the entailment verification problem, we simply include the premise p in P and consider the hypothesis h as a the conclusion c. For this class of problems, the explanation E i describes additional factual knowledge necessary for the entailment p |= h to hold (Camburu et al., 2018) . Specifically, the sentences in E i can be interpreted as a further set of premises for the entailment verification problem and are included in P .",
"cite_spans": [
{
"start": 301,
"end": 322,
"text": "(Bowman et al., 2015)",
"ref_id": "BIBREF1"
},
{
"start": 752,
"end": 774,
"text": "(Camburu et al., 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem definition",
"sec_num": "4.1"
},
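{
"text": "As a minimal illustration (a sketch with a hypothetical helper, not the original implementation), the TE translation can be expressed as follows:\n\nfrom typing import List, Tuple\n\ndef te_to_entailment_problem(premise: str, hypothesis: str,\n                             explanation: List[str]) -> Tuple[List[str], str]:\n    # P contains the TE premise p plus every explanation sentence in E_i;\n    # the hypothesis h becomes the conclusion c of the problem P |= c.\n    P = [premise] + list(explanation)\n    c = hypothesis\n    return P, c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem definition",
"sec_num": "4.1"
},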
{
"text": "Multiple-choice Question Answering (MCQA). In the case of MCQA, X i is typically composed of a question Q i = {c 1 , . . . , c n , q}, and a set of mutually exclusive candidate answers A i = {a 1 , . . . , a m } (see QASC and Worldtree in figure 1). In this case, the expected label s i corresponds to one of the candidate answers in A i (Jansen et al., 2018; Khot et al., 2020) . Q i can include a set of introductory sentences c 1 , . . . , c n acting as a context for the question q. We consider each sentence c i in the context as a premise for q and include it in P . Similarly to TE, we interpret the explanation E i for a MCQA example as a set of premises that entails the correct answer s i . Therefore, the sentences in E i are included in P . The question q takes the form of an elliptical assertion, and the candidate answers are possible substitutions for the ellipsis. Therefore, to derive the conclusion c, we adopt the correct answer s i as a substitution for the ellipsis in q. Details on the formalisation adopted for MCQA problems are described in section 4.3.",
"cite_spans": [
{
"start": 338,
"end": 359,
"text": "(Jansen et al., 2018;",
"ref_id": "BIBREF9"
},
{
"start": 360,
"end": 378,
"text": "Khot et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem definition",
"sec_num": "4.1"
},
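{
"text": "The corresponding MCQA translation can be sketched in the same style (again hypothetical code; we additionally assume that the ellipsis in the question string is marked with '...', which is an assumption about the data format rather than a property of the datasets):\n\nfrom typing import List, Tuple\n\ndef mcqa_to_entailment_problem(context: List[str], question: str,\n                               correct_answer: str,\n                               explanation: List[str]) -> Tuple[List[str], str]:\n    # Premises P: the context sentences c_1, ..., c_n plus the explanation E_i.\n    P = list(context) + list(explanation)\n    # The question is an elliptical assertion; substituting the correct\n    # answer s_i for the ellipsis yields the conclusion c.\n    c = question.replace('...', correct_answer)\n    return P, c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem definition",
"sec_num": "4.1"
},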
{
"text": "In the verification step, the annotators adopt the formalised set of premises \u03a6 and conclusion \u03c8 to classify the entailment problem in one of the following categories:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verification",
"sec_num": "4.2"
},
{
"text": "1. Valid and non-redundant: The argument is formally valid, and all premises are required for the derivation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verification",
"sec_num": "4.2"
},
{
"text": "2. Valid, but redundant premises: The argument is formally valid, but some premises are not required for the derivation. This includes the cases where more than one premise is present, and the conclusion simply repeats one of the premises.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verification",
"sec_num": "4.2"
},
{
"text": "The argument is formally invalid, but would become valid on addition of a reasonable premise, such as, for example, \"If x affects y, then a change to x affects y\", or \"If x is the same height as y and y is not as tall as z then x is not as tall as z\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Missing plausible premise:",
"sec_num": "3."
},
{
"text": "The argument is formally invalid, apparently as a result of confusing \"and\" and \"or\" or \"some\" and \"all\", or of illicitly changing the direction of an implication.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical error:",
"sec_num": "4."
},
{
"text": "The argument is invalid, no obvious rescue exists in the form of a missing premise, and no simple logical error can be identified.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "No discernible argument:",
"sec_num": "5."
},
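{
"text": "For reference, this label set can be encoded as a simple enumeration (a sketch; the identifiers are our own):\n\nfrom enum import Enum\n\nclass EEVClass(Enum):\n    # The five entailment verification categories described above.\n    VALID_NON_REDUNDANT = 1        # valid, all premises required\n    VALID_REDUNDANT_PREMISES = 2   # valid, some premises not required\n    MISSING_PLAUSIBLE_PREMISE = 3  # invalid, rescued by a reasonable premise\n    LOGICAL_ERROR = 4              # invalid, identifiable logical error\n    NO_DISCERNIBLE_ARGUMENT = 5    # invalid, no obvious rescue or error",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verification",
"sec_num": "4.2"
},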
{
"text": "In this section, we describe an example of formalisation for a MCQA problem. A typical multiplechoice problem is a triple consisting of a question Q together with a set of candidate answers A 1 , . . . , A m . It is understood that Q takes the form of a elliptical assertion, and the candidate answers are possible substitutions for the ellipsis. The task is to determine which of the candidate answers would result in an assertion entailed by some putative knowledge-base. The corpora investigated feature a list of multiple-choice textual entailment problems together, in each case, with a specification of a correct answer and an explanation in the form of a set of assertions \u03a6 from the knowledge base providing a justification for the answer. For example, the following problem together with its resolution is taken from the Worldtree corpus (Jansen et al., 2018) .",
"cite_spans": [
{
"start": 847,
"end": 868,
"text": "(Jansen et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Formalisation",
"sec_num": "4.3"
},
{
"text": "Question: A group of students are studying bean plants. All of the following traits are affected by changes in the environment except . . . In formalising such problems, we represent the question as a sentence of first-order logic featuring a schematic formula variable P (corresponding to the ellipsis), and the candidate answers as firstorder formulas. In the above example, we assume that the essential force of the question to find a characteristic of plants not affected by those plants' environments. That is, we are asked for a P making the schematic formula",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalisation",
"sec_num": "4.3"
},
{
"text": "\u2200xyzwe(bnPlnt(x) \u2227 env(y, x)\u2227 changeIn(z, y) \u2227 trait(w, x) \u2227 affct(e)\u2227",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalisation",
"sec_num": "4.3"
},
{
"text": "agnt(e, z) \u2227 P \u2192 \u00acptnt(e, w)). 1into a true statement. We formalise the correct answer (B) by the atomic formula sdTp(w, x) \"w is the seed type of x\", with the other candidate answers formalised similarly. In choosing predicates for formalisation, we typically render common noun-phrases using predicates, taking these to be relational if the context demands (e.g. \"environment/seed type of a plant x\"). In addition, we typically render verbs as predicates whose arguments range over eventualities (events, processes, etc.), related to their participants via a standard list of binary \"semantic role\" predicates (agent, patient, theme) etc. Thus, to say that \"x affects y\" is to report the existence of an eventuality e of type \"affecting\", such that x is the agent of e and y its patient. This approach, although somewhat strained in many general contexts, aids standardization and, more importantly, also makes it easier to deal with adverbial phrases. Of course, many choices in formalisation strategy inevitably remain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalisation",
"sec_num": "4.3"
},
{
"text": "The knowledge-base excerpt \u03a6 is formalised straightforwardly as a finite set of first-order formulas, following the same general rendering policies. In the case of the above example, sentences (i), (ii) and (iv)-(vi) in \u03a6 might be formalised as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalisation",
"sec_num": "4.3"
},
{
"text": "\u2200xy(plnt(x)\u2227sdTp(y, x) \u2192 char(y, x)\u2227inhtd(y)) \u2200xy(char(x, y) \u2227 inhtd(x) \u2192 \u00acacqrd(x)) \u2200x(plnt(x) \u2192 orgnsm(x)) \u2200x(bnPlnt(x) \u2192 plnt(x)) \u2200xy(trait(x, y) \u2194 char(x, y)),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalisation",
"sec_num": "4.3"
},
{
"text": "with the more complicated sentence (iii) formalised as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalisation",
"sec_num": "4.3"
},
{
"text": "\u2200xyw(orgnsm(x) \u2227 env(y, x)\u2227 char(w, x) \u2227 acqrd(w) \u2192",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalisation",
"sec_num": "4.3"
},
{
"text": "\u2203e(affct(e) \u2227 agnt(e, y) \u2227 ptnt(e, w)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalisation",
"sec_num": "4.3"
},
{
"text": "Denoting by \u03c8 the result of substituting sdTp(w, x) for P in (1), we ask ourselves: Does \u03a6 entail \u03c8?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalisation",
"sec_num": "4.3"
},
{
"text": "A moment's thought shows that it does not. At the very least, statement (iii) in the explanation, whose prima facie formalisation is (2), must instead be read as asserting that an organism's environment affects only that organism's acquired characteristics, that is to say:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalisation",
"sec_num": "4.3"
},
{
"text": "\u2200xyw(orgnsm(x) \u2227 env(y, x) \u2227 char(w, x)\u2227",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalisation",
"sec_num": "4.3"
},
{
"text": "\u2203e(affct(e) \u2227 agnt(e, y) \u2227 ptnt(e, w)) \u2192 acqrd(w)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalisation",
"sec_num": "4.3"
},
{
"text": "This is not unreasonable, of course. Generalizations in natural language are notoriously vague as to the direction of implication; let \u03a6 be the result of substituting (3) for (2) in \u03a6. Does \u03a6 entail \u03c8? Again, no. The problem this time is that, modeltheoretically speaking, just because something is affected by a change in its environment, that does not mean to say it is affected by its environment. An assertion to the effect that it is would have to be postulated:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalisation",
"sec_num": "4.3"
},
{
"text": "\u2200xyzw(env(y, x) \u2227 changeIn(z, y)\u2227",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalisation",
"sec_num": "4.3"
},
{
"text": "\u2203e(affct(e) \u2227 agnt(e, z) \u2227 ptnt(e, w)) \u2192 \u2203e(affct(e) \u2227 agnt(e, y) \u2227 ptnt(e, w))).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalisation",
"sec_num": "4.3"
},
{
"text": "Let \u03a6 be the result of augmenting \u03a6 in this way. Then \u03a6 does indeed entail \u03c8. Applying a general principle of charity, it is reasonable to take the interpretation of the explanation to be given by \u03a6 . However, the additional premise required to obtain \u03a6 seems to have been forgotten. Although not a logical truth, it has the status of a plausible general principle of the kind that is frequently explicitly articulated in the Worldtree database. Therefore, we classify this example as a missing plausible premise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalisation",
"sec_num": "4.3"
},
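{
"text": "In principle, once \u03a6 and \u03c8 are formalised, the entailment question can also be checked mechanically. The following sketch uses NLTK's first-order resolution prover on a small fragment of the explanation above (sentences (iv) and (v)); this is our own illustration, not part of the annotation protocol, and a full problem such as (1)-(4) may be considerably harder for an automated prover.\n\nfrom nltk.sem import Expression\nfrom nltk.inference import ResolutionProver\n\nread = Expression.fromstring\n\n# Fragment of the formalised explanation Phi: sentences (iv) and (v).\nphi = [\n    read('all x.(plnt(x) -> orgnsm(x))'),  # a plant is an organism\n    read('all x.(bnPlnt(x) -> plnt(x))'),  # a bean plant is a plant\n]\n# A simple consequence psi: every bean plant is an organism.\npsi = read('all x.(bnPlnt(x) -> orgnsm(x))')\n\nprint(ResolutionProver().prove(psi, phi))  # True",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalisation",
"sec_num": "4.3"
},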
{
"text": "We employ EEV to analyse a set of contemporary XGSs designed for Textual Entailment and Multiple-choice Question Answering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Analysis",
"sec_num": "5"
},
{
"text": "In the following sections, we describe the methodology adopted for extracting a representative sample from the selected XGSs, and for implementing the annotation pipeline efficiently. Finally, we present the results of the annotation, reporting the frequency of each entailment verification class and presenting a list of qualitative examples to provide additional insights on the logical properties of the analysed explanations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Analysis",
"sec_num": "5"
},
{
"text": "We select three contemporary XGSs with different and complementary characteristics. In particular, we apply our methodology to two MCQA datasets (Worldtree (Jansen et al., 2018) , QASC (Khot et al., 2020) ) and one TE benchmark (e-SNLI (Camburu et al., 2018) ).",
"cite_spans": [
{
"start": 156,
"end": 177,
"text": "(Jansen et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 185,
"end": 204,
"text": "(Khot et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 236,
"end": 258,
"text": "(Camburu et al., 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selected Datasets",
"sec_num": "5.1"
},
{
"text": "The main features of the selected XGSs are reported in Table 1 . Multi-hop indicates whether the problem requires step-wise reasoning, combining more than one sentence to compose the final explanation. Crowd-sourced indicates whether the resource is curated using standard crowd-sourcing platforms. Explanation type represents the methodology adopted to construct the explanations. Generated means that the sentences in the explanations are entirely created by human annotators. On the other hand, composed means that the sentences are retrieved from an external knowledge resource. Fi-nally, the last row reports the average number of sentences composing the explanations.",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 62,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Selected Datasets",
"sec_num": "5.1"
},
{
"text": "The bottleneck of the annotation framework lies in the formalisation phase, which is generally time consuming and requires trained experts in the field. In order to make the application of EEV efficient in practice, we extract a sub-set of n = 100 examples from each XGS (Worldtree, QASC, and e-SNLI). To maximise the representativeness of the explanations in the subset, given a fixed size n, we combine a set of sampling methodologies with effect size analysis. The details of the sampling methodology are described in section 5.3 while the results are presented in section 5.4. Code and data adopted for the experiments are available online 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Task",
"sec_num": "5.2"
},
{
"text": "The extracted examples are randomly assigned to 2 annotators with an overlap of 20 instances to compute the inter-annotator agreement. All the annotators are active researchers in the field of Natural Language Processing and Computational Semantics. Table 2 reports the inter-annotator agreement achieved on each dataset separately. Overall, we observe an average of 72% accuracy in the multi-label classification task, computed considering the percentage of overlaps between the final entailment verification classes chosen by the annotators.",
"cite_spans": [],
"ref_spans": [
{
"start": 250,
"end": 257,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Annotation Task",
"sec_num": "5.2"
},
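{
"text": "The agreement figure is a plain percentage of matching labels on the shared instances; a minimal sketch (hypothetical helper):\n\nfrom typing import Sequence\n\ndef percentage_agreement(labels_a: Sequence, labels_b: Sequence) -> float:\n    # Fraction of the overlapping instances on which the two annotators\n    # chose the same entailment verification class.\n    assert len(labels_a) == len(labels_b) and len(labels_a) > 0\n    matches = sum(a == b for a, b in zip(labels_a, labels_b))\n    return matches / len(labels_a)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Task",
"sec_num": "5.2"
},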
{
"text": "To maximise the representativeness of the explanations for the subsequent annotation task, while analysing a fixed number n of examples for each dataset, we combine a set of sampling methodologies with effect size analysis. In this section, we describe the sampling techniques adopted for each dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Methodology",
"sec_num": "5.3"
},
{
"text": "A stratified sampling methodology has been adopted for the Worldtree corpus Jansen et al., 2018) . The stratified sampling con-sists in partitioning the dataset using a set of classes and performing random sampling from each class independently. This strategy guarantees that the same amount of examples is extracted from each class. The stratified technique requires the classes to be collectively exhaustive and mutually exclusive -i.e, each example has to belong to one and only one class. To apply stratified sampling on Worldtree, we consider the high-level topics introduced in (Xu et al., 2020) , which are used to classify each question in the dataset according to one of the following categories: Life, Earth, Forces, Materials, Energy, Scientific Inference, Celestial Objects, Safety, Other. The same technique cannot be applied to e-SNLI (Camburu et al., 2018) and QASC (Khot et al., 2020) since the examples in these datasets are not partitioned using any abstract set of classes. In this case, therefore, we use random sampling on the whole dataset to extract a fixed number n of examples.",
"cite_spans": [
{
"start": 76,
"end": 96,
"text": "Jansen et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 584,
"end": 601,
"text": "(Xu et al., 2020)",
"ref_id": "BIBREF31"
},
{
"start": 849,
"end": 871,
"text": "(Camburu et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 881,
"end": 900,
"text": "(Khot et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Methodology",
"sec_num": "5.3"
},
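{
"text": "A minimal sketch of the stratified procedure (hypothetical code; topic_of stands for the topic annotation of (Xu et al., 2020)):\n\nimport random\nfrom collections import defaultdict\n\ndef stratified_sample(examples, topic_of, n):\n    # Partition the dataset by topic; the classes are assumed to be\n    # collectively exhaustive and mutually exclusive.\n    strata = defaultdict(list)\n    for ex in examples:\n        strata[topic_of(ex)].append(ex)\n    # Draw the same number of examples from each class independently.\n    per_class = n // len(strata)\n    sample = []\n    for members in strata.values():\n        sample.extend(random.sample(members, min(per_class, len(members))))\n    return sample",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Methodology",
"sec_num": "5.3"
},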
{
"text": "Once a fixed number of examples n is extracted from each dataset, we consider the annotated explanation sentences of each example to verify whether the extracted set of explanations is representative of the whole dataset. To perform this analysis, we assume the predicates in the explanation sentences to be the expression of the type of knowledge of the whole explanation. Therefore, we consider the extracted sample of explanations representative if the distribution of predicates in the sample is correlated with the same distribution in the whole dataset. To this end, we compute the frequencies of the verbs appearing in the explanation sentences from the extracted sub-set and original dataset separately. Subsequently, we compare the frequencies in the sub-sample with the frequencies in the whole dataset computing a Pearson correlation coefficient. In this case, a coefficient greater than .7 indicates a strong correlation between the types of explanations in the sample and the types of explanations in the original dataset. After running the sampling for t times independently, we select the subset of explanations for each dataset with the highest Pearson correlation coefficient. Table 3 reports the Pearson correlation for the subsets adopted in our analysis with fixed sample size n = 100.",
"cite_spans": [],
"ref_spans": [
{
"start": 1194,
"end": 1201,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Sampling Methodology",
"sec_num": "5.3"
},
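{
"text": "A minimal sketch of this representativeness check (our own illustration; it assumes the verbs have already been extracted, e.g. with a POS tagger, and uses scipy for the Pearson coefficient):\n\nfrom collections import Counter\nfrom scipy.stats import pearsonr\n\ndef verb_frequency_correlation(sample_verbs, corpus_verbs):\n    # Compare the verb frequency distributions of the sampled explanations\n    # and of the whole dataset over their joint vocabulary; a coefficient\n    # greater than 0.7 is taken as a strong correlation.\n    sample_freq, corpus_freq = Counter(sample_verbs), Counter(corpus_verbs)\n    vocabulary = sorted(set(sample_freq) | set(corpus_freq))\n    x = [sample_freq[v] for v in vocabulary]\n    y = [corpus_freq[v] for v in vocabulary]\n    r, _ = pearsonr(x, y)\n    return r",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Methodology",
"sec_num": "5.3"
},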
{
"text": "The quantitative analysis presented in this section aims to empirically assess the hypothesis that human-annotated explanations in XGSs constitute valid and non-redundant logical arguments for the expected answers. We report the quantitative results of the explanation entailment verification in Table 4 . Specifically, the table reports the percentage of the frequency of each verification class in the analysed samples. The column AVG reports the average for each class. Overall, we observe that the results of the annotation task tend to reject our research hypothesis, with an average of only 20.42% of analysed explanations being classified as valid and non redundant arguments. When considering also valid, but redundant explanations (21.91%), the average percentage of valid arguments reaches a total of 42.33%. Therefore, we can conclude that the majority of the explanations represent invalid arguments (57.66%).",
"cite_spans": [],
"ref_spans": [
{
"start": 296,
"end": 303,
"text": "Table 4",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.4"
},
{
"text": "We observed that the majority of invalid arguments are classified as missing plausible premise. This finding implies that a significant percentage of annotated explanations are incomplete arguments (26.00%), that can be made valid on addition of a reasonable premise. We attribute this result to the tendency of human explainers to take for granted part of the world knowledge required in the explanation (Walton, 2004) .",
"cite_spans": [
{
"start": 405,
"end": 419,
"text": "(Walton, 2004)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.4"
},
{
"text": "A lower but significant percentage of explanations contain identifiable logical errors (11.19%), which result from confusing the set of quantifiers and logical operators, or from illicitly changing the direction of an implication. Similarly, 20.47% of the explanations where labeled as no discernible arguments, where no obvious premise can be added to make the argument valid and no simple logical error can be detected. This result can be attributed partly to natural errors occurring in a gold standard creation process, partly to the effort required for human-annotators to identify logical fallacies in their explanations. In the remaining of this section, we analyse the results obtained on each XGS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.4"
},
{
"text": "Worldtree. The analysed sample contains the highest percentage of incomplete arguments, with a total of 38.78% explanations classified as missing plausible premise. This result can be explained by the fact that the questions in Worldtree require complex forms of reasoning, facilitating the construction of arguments containing implicit world knowledge and missing premises. At the same time, the dataset contains the smallest percentage of logical errors (6.12%). We attribute this outcome to the fact that Worldtree is not crowd-sourced, implying that the quality of the annotated explanations is more easily controllable using internal verification methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.4"
},
{
"text": "QASC. This XGS contains the highest rate of invalid arguments (62.74%), with 35.29% of the explanations classified as no discernible argument. One of the factors contributing to these results might be related to the length of the constructed explanations, which is limited to 2 facts extracted from a predefined corpus of sentences. The high rate of no discernible arguments and missing premises (35.29% and 21.57% respectively) suggests that the majority of the questions require additional world knowledge and more detailed explanations. This conclusion is also supported by the percentage of valid, but redundant arguments, which is the lowest among the analysed samples (7.84%). Finally, the highest rate of logical errors (17.65%) might be due to a combination of factors, including the complexity of the question answering task and the adopted crowd-sourcing mechanism, which prevent a thorough quality assessment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.4"
},
{
"text": "e-SNLI. The sample includes the highest percentage of valid arguments with a total of 31.37%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.4"
},
{
"text": "However, we noticed that the complexity of the reasoning involved in e-SNLI is generally lower than Worldtree and QASC, with most of the textual entailment problems being an example of monotonicity reasoning. This observation is supported by the highest percentage of valid, but redundant cases (31.37%), where the explanation simply repeats the content of the conclusion. This occurrs quite often for examples of lexical entailment, where the words in the conclusion are a subset of the words in the premise. The lexical entailment instances, in fact, do not require any additional world knowledge, making any attempt of constructing an explanation redundant. Despite these characteristics, our evaluation suggests that a significant percentage of arguments are invalid (37.25%). Again, this percentage might be the results of different factors, including the errors produced by the crowd-sourcing process. Table 5 reports a set of representative cases extracted from the evaluated samples. For each entailment verification class, we report an example extracted from the XGS with the highest percentage of instances in that class.",
"cite_spans": [],
"ref_spans": [
{
"start": 908,
"end": 915,
"text": "Table 5",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.4"
},
{
"text": "Previous studies highlight the fact that explanations are contrastive in nature, that is, they describe why an event P happened instead of some counterfactual event Q (Miller, 2019; Lipton, 1990) . Following this definition, we perform an additional analysis to verify whether the explanations contained in MCQA datasets are contrastive with respect to the wrong candidate answers -i.e., the explanation supports the validity of the correct answer while excluding the set of alternative choices. In order to quantify this aspect, we asked the annotators to label the questions with more than one plausible answer, whose explanations do not mention any discriminative commonsense or world knowledge that explains why the gold answer is correct instead of the alternative choices. The results of this experiment are reported in Table 6 . Overall, we found that a significant percentage of explanations are labeled as non contrastive. This outcome is particularly evident for QASC. We attribute these results to the presence of multi-adversary answer choices in QASC, which are generated automatically to make the dataset more challenging for language models. However, we found that this mechanism can produce questions with more than one plausible correct answer, which can cause the explanation to loose its contrastive function (see QASC examples in Table 5 ).",
"cite_spans": [
{
"start": 167,
"end": 181,
"text": "(Miller, 2019;",
"ref_id": "BIBREF15"
},
{
"start": 182,
"end": 195,
"text": "Lipton, 1990)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 826,
"end": 833,
"text": "Table 6",
"ref_id": "TABREF11"
},
{
"start": 1350,
"end": 1357,
"text": "Table 5",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Contrastive Explanations",
"sec_num": "5.5"
},
{
"text": "This paper proposed a systematic annotation methodology to quantify the logical validity of human-annotated explanations in Explanation Gold Standards (XGSs). The application of the framework on three mainstream datasets led us to the conclusion that a majority of the explanations represent logically invalid arguments, ranging from being incomplete to containing clearly identifiable logical errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "The main limitation of the framework lies in the scalability of its current implementation, which is generally time consuming and requires trained semanticists. One way to improve its efficiency is to explore the adoption of supporting tools for the formalisation, such as semantic parsers and/or automatic theorem provers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "Despite the current limitations, this study offers some important pointers for future work. On the one hand, the results suggest that logical errors can be reduced by a careful design of the gold standard, such as authoring explanations with internal verification strategies or reducing the complexity of the reasoning task. On the other hand, the finding that a large percentage of curated explanations still represent incomplete arguments has a deeper implication on the nature of explanations and on what annotators perceive as a valid and complete logical argument. Therefore, we argue that future progress on the design of XGSs will depend, among other things, on a better formalisation and understanding of the inferential properties of explanations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "https://github.com/ai-systems/ explanation-entailment-verification/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Abductive commonsense reasoning",
"authors": [
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Le Bras",
"suffix": ""
},
{
"first": "Chaitanya",
"middle": [],
"last": "Malaviya",
"suffix": ""
},
{
"first": "Keisuke",
"middle": [],
"last": "Sakaguchi",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "Wen-tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Han- nah Rashkin, Doug Downey, Wen tau Yih, and Yejin Choi. 2020. Abductive commonsense reasoning. In International Conference on Learning Representa- tions.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "Samuel",
"middle": [
"R."
],
"last": "Bowman",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D."
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "632--642",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1075"
]
},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "What will it take to fix benchmarking in natural language understanding?",
"authors": [
{
"first": "Samuel",
"middle": [
"R."
],
"last": "Bowman",
"suffix": ""
},
{
"first": "George",
"middle": [
"E."
],
"last": "Dahl",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman and George E. Dahl. 2021. What will it take to fix benchmarking in natural language understanding?",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "e-snli: Natural language inference with natural language explanations",
"authors": [
{
"first": "Oana-Maria",
"middle": [],
"last": "Camburu",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Lukasiewicz",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems 31",
"volume": "",
"issue": "",
"pages": "9539--9549",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oana-Maria Camburu, Tim Rockt\u00e4schel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-snli: Nat- ural language inference with natural language expla- nations. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, ed- itors, Advances in Neural Information Processing Systems 31, pages 9539-9549. Curran Associates, Inc.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Make up your mind! adversarial generation of inconsistent natural language explanations",
"authors": [
{
"first": "Oana-Maria",
"middle": [],
"last": "Camburu",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "Shillingford",
"suffix": ""
},
{
"first": "Pasquale",
"middle": [],
"last": "Minervini",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Lukasiewicz",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4157--4165",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.382"
]
},
"num": null,
"urls": [],
"raw_text": "Oana-Maria Camburu, Brendan Shillingford, Pasquale Minervini, Thomas Lukasiewicz, and Phil Blunsom. 2020. Make up your mind! adversarial generation of inconsistent natural language explanations. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4157- 4165, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Autoregressive reasoning over chains of facts with transformers",
"authors": [
{
"first": "Ruben",
"middle": [],
"last": "Cartuyvels",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Spinks",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6916--6930",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.610"
]
},
"num": null,
"urls": [],
"raw_text": "Ruben Cartuyvels, Graham Spinks, and Marie- Francine Moens. 2020. Autoregressive reasoning over chains of facts with transformers. In Proceed- ings of the 28th International Conference on Com- putational Linguistics, pages 6916-6930, Barcelona, Spain (Online). International Committee on Compu- tational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Natural language premise selection: Finding supporting statements for mathematical text",
"authors": [
{
"first": "Deborah",
"middle": [],
"last": "Ferreira",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [],
"last": "Freitas",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "2175--2182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deborah Ferreira and Andr\u00e9 Freitas. 2020a. Natu- ral language premise selection: Finding supporting statements for mathematical text. In Proceedings of the 12th Language Resources and Evaluation Con- ference, pages 2175-2182, Marseille, France. Euro- pean Language Resources Association.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Premise selection in natural language mathematical texts",
"authors": [
{
"first": "Deborah",
"middle": [],
"last": "Ferreira",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [],
"last": "Freitas",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7365--7374",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.657"
]
},
"num": null,
"urls": [],
"raw_text": "Deborah Ferreira and Andr\u00e9 Freitas. 2020b. Premise selection in natural language mathematical texts. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 7365- 7374, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Attention is not Explanation",
"authors": [
{
"first": "Sarthak",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "3543--3556",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1357"
]
},
"num": null,
"urls": [],
"raw_text": "Sarthak Jain and Byron C. Wallace. 2019. Attention is not Explanation. In Proceedings of the 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 3543-3556, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "WorldTree: A corpus of explanation graphs for elementary science questions supporting multi-hop inference",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Wainwright",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Marmorstein",
"suffix": ""
},
{
"first": "Clayton",
"middle": [],
"last": "Morrison",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Jansen, Elizabeth Wainwright, Steven Mar- morstein, and Clayton Morrison. 2018. WorldTree: A corpus of explanation graphs for elementary sci- ence questions supporting multi-hop inference. In Proceedings of the Eleventh International Confer- ence on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Re- sources Association (ELRA).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning to explain: Datasets and models for identifying valid reasoning chains in multihop question-answering",
"authors": [
{
"first": "Harsh",
"middle": [],
"last": "Jhamtani",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "137--150",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.10"
]
},
"num": null,
"urls": [],
"raw_text": "Harsh Jhamtani and Peter Clark. 2020. Learning to ex- plain: Datasets and models for identifying valid rea- soning chains in multihop question-answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 137-150, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Qasc: A dataset for question answering via sentence composition",
"authors": [
{
"first": "Tushar",
"middle": [],
"last": "Khot",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Guerquin",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sabharwal",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "34",
"issue": "",
"pages": "8082--8090",
"other_ids": {
"DOI": [
"10.1609/aaai.v34i05.6319"
]
},
"num": null,
"urls": [],
"raw_text": "Tushar Khot, Peter Clark, Michal Guerquin, Peter Jansen, and Ashish Sabharwal. 2020. Qasc: A dataset for question answering via sentence compo- sition. Proceedings of the AAAI Conference on Arti- ficial Intelligence, 34(05):8082-8090.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "NILE : Natural language inference with faithful natural language explanations",
"authors": [
{
"first": "Sawan",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Talukdar",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8730--8742",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.771"
]
},
"num": null,
"urls": [],
"raw_text": "Sawan Kumar and Partha Talukdar. 2020. NILE : Natu- ral language inference with faithful natural language explanations. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 8730-8742, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Question and answer test-train overlap in open-domain question answering datasets",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "1000--1008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Lewis, Pontus Stenetorp, and Sebastian Riedel. 2021. Question and answer test-train overlap in open-domain question answering datasets. In Pro- ceedings of the 16th Conference of the European Chapter of the Association for Computational Lin- guistics: Main Volume, pages 1000-1008, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Contrastive explanation. Royal Institute of Philosophy Supplement",
"authors": [
{
"first": "",
"middle": [],
"last": "Peter Lipton",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "27",
"issue": "",
"pages": "247--266",
"other_ids": {
"DOI": [
"10.1017/S1358246100005130"
]
},
"num": null,
"urls": [],
"raw_text": "Peter Lipton. 1990. Contrastive explanation. Royal Institute of Philosophy Supplement, 27:247-266.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Explanation in artificial intelligence: Insights from the social sciences",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 2019,
"venue": "Artificial Intelligence",
"volume": "267",
"issue": "",
"pages": "1--38",
"other_ids": {
"DOI": [
"10.1016/j.artint.2018.07.007"
]
},
"num": null,
"urls": [],
"raw_text": "Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelli- gence, 267:1-38.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Compositional questions do not necessitate multi-hop reasoning",
"authors": [
{
"first": "Sewon",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Wallace",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4249--4257",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1416"
]
},
"num": null,
"urls": [],
"raw_text": "Sewon Min, Eric Wallace, Sameer Singh, Matt Gard- ner, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2019. Compositional questions do not necessitate multi-hop reasoning. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 4249-4257, Florence, Italy. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Inherent Disagreements in Human Textual Inferences. Transactions of the Association for Computational Linguistics",
"authors": [
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "7",
"issue": "",
"pages": "677--694",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00293"
]
},
"num": null,
"urls": [],
"raw_text": "Ellie Pavlick and Tom Kwiatkowski. 2019. Inherent Disagreements in Human Textual Inferences. Trans- actions of the Association for Computational Lin- guistics, 7:677-694.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Explain yourself! leveraging language models for commonsense reasoning",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Nazneen Fatema Rajani",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Mccann",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4932--4942",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1487"
]
},
"num": null,
"urls": [],
"raw_text": "Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! leveraging language models for commonsense rea- soning. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 4932-4942, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Beyond accuracy: Behavioral testing of NLP models with CheckList",
"authors": [
{
"first": "Tongshuang",
"middle": [],
"last": "Marco Tulio Ribeiro",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Guestrin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4902--4912",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.442"
]
},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Be- havioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4902- 4912, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A framework for evaluation of machine reading comprehension gold standards",
"authors": [
{
"first": "Viktor",
"middle": [],
"last": "Schlegel",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Valentino",
"suffix": ""
},
{
"first": "Andre",
"middle": [],
"last": "Freitas",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Nenadic",
"suffix": ""
},
{
"first": "Riza",
"middle": [],
"last": "Batista-Navarro",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "5359--5369",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Viktor Schlegel, Marco Valentino, Andre Freitas, Goran Nenadic, and Riza Batista-Navarro. 2020. A framework for evaluation of machine reading com- prehension gold standards. In Proceedings of the 12th Language Resources and Evaluation Confer- ence, pages 5359-5369, Marseille, France. Euro- pean Language Resources Association.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Obtaining faithful interpretations from compositional neural networks",
"authors": [
{
"first": "Sanjay",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Bogin",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Tomer",
"middle": [],
"last": "Wolfson",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5594--5608",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.495"
]
},
"num": null,
"urls": [],
"raw_text": "Sanjay Subramanian, Ben Bogin, Nitish Gupta, Tomer Wolfson, Sameer Singh, Jonathan Berant, and Matt Gardner. 2020. Obtaining faithful interpretations from compositional neural networks. In Proceed- ings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5594-5608, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Explanationlp: Abductive reasoning for explainable science question answering",
"authors": [
{
"first": "Mokanarangan",
"middle": [],
"last": "Thayaparan",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Valentino",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [],
"last": "Freitas",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mokanarangan Thayaparan, Marco Valentino, and Andr\u00e9 Freitas. 2020a. Explanationlp: Abductive rea- soning for explainable science question answering.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A survey on explainability in machine reading comprehension",
"authors": [
{
"first": "Mokanarangan",
"middle": [],
"last": "Thayaparan",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Valentino",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [],
"last": "Freitas",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mokanarangan Thayaparan, Marco Valentino, and Andr\u00e9 Freitas. 2020b. A survey on explainability in machine reading comprehension.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Unification-based reconstruction of multi-hop explanations for science questions",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Valentino",
"suffix": ""
},
{
"first": "Mokanarangan",
"middle": [],
"last": "Thayaparan",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [],
"last": "Freitas",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "200--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Valentino, Mokanarangan Thayaparan, and Andr\u00e9 Freitas. 2021. Unification-based reconstruc- tion of multi-hop explanations for science questions. In Proceedings of the 16th Conference of the Euro- pean Chapter of the Association for Computational Linguistics: Main Volume, pages 200-211, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Explainable natural language reasoning via conceptual unification",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Valentino",
"suffix": ""
},
{
"first": "Mokanarangan",
"middle": [],
"last": "Thayaparan",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [],
"last": "Freitas",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Valentino, Mokanarangan Thayaparan, and Andr\u00e9 Freitas. 2020. Explainable natural language reasoning via conceptual unification.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A new dialectical theory of explanation",
"authors": [
{
"first": "Douglas",
"middle": [],
"last": "Walton",
"suffix": ""
}
],
"year": 2004,
"venue": "Philosophical Explorations",
"volume": "7",
"issue": "1",
"pages": "71--89",
"other_ids": {
"DOI": [
"10.1080/1386979032000186863"
]
},
"num": null,
"urls": [],
"raw_text": "Douglas Walton. 2004. A new dialectical theory of ex- planation. Philosophical Explorations, 7(1):71-89.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "SemEval-2020 task 4: Commonsense validation and explanation",
"authors": [
{
"first": "Cunxiang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shuailong",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Yili",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Yilong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "307--321",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cunxiang Wang, Shuailong Liang, Yili Jin, Yi- long Wang, Xiaodan Zhu, and Yue Zhang. 2020. SemEval-2020 task 4: Commonsense validation and explanation. In Proceedings of the Four- teenth Workshop on Semantic Evaluation, pages 307-321, Barcelona (online). International Commit- tee for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Teach me to explain: A review of datasets for explainable nlp",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Wiegreffe",
"suffix": ""
},
{
"first": "Ana",
"middle": [],
"last": "Marasovi\u0107",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah Wiegreffe and Ana Marasovi\u0107. 2021. Teach me to explain: A review of datasets for explainable nlp.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Attention is not not explanation",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Wiegreffe",
"suffix": ""
},
{
"first": "Yuval",
"middle": [],
"last": "Pinter",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "11--20",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1002"
]
},
"num": null,
"urls": [],
"raw_text": "Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 11-20, Hong Kong, China. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "WorldTree v2: A corpus of sciencedomain structured explanations and inference patterns supporting multi-hop inference",
"authors": [
{
"first": "Zhengnan",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Thiem",
"suffix": ""
},
{
"first": "Jaycie",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Wainwright",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Marmorstein",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Jansen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "5456--5473",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhengnan Xie, Sebastian Thiem, Jaycie Martin, Eliz- abeth Wainwright, Steven Marmorstein, and Peter Jansen. 2020. WorldTree v2: A corpus of science- domain structured explanations and inference pat- terns supporting multi-hop inference. In Proceed- ings of the 12th Language Resources and Evaluation Conference, pages 5456-5473, Marseille, France. European Language Resources Association.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Multiclass hierarchical question classification for multiple choice science exams",
"authors": [
{
"first": "Dongfang",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "Jaycie",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Zhengnan",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Vikas",
"middle": [],
"last": "Yadav",
"suffix": ""
},
{
"first": "Oyvind",
"middle": [],
"last": "Harish Tayyar Madabushi",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Tafjord",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "5370--5382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dongfang Xu, Peter Jansen, Jaycie Martin, Zheng- nan Xie, Vikas Yadav, Harish Tayyar Madabushi, Oyvind Tafjord, and Peter Clark. 2020. Multi- class hierarchical question classification for multi- ple choice science exams. In Proceedings of the 12th Language Resources and Evaluation Confer- ence, pages 5370-5382, Marseille, France. Euro- pean Language Resources Association.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Differential heating of air can be harnessed for what? [*A] electricity production [B] erosion prevention [C] transfer of electrons [D] reduce acidity of food Explanation: Differential heating of air produces wind. Wind is used for producing electricity.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "Figure 1: Does the answer logically follow from the explanation? While step-wise explanations are used as ground-truth for the inference, there is a lack of assessment of their consistency and rigour. We propose EEV , a methodology to quantify the logical validity of human-annotated explanations.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF3": {
"text": "one of the following classes: (1) Valid and non redundant; (2) Valid, but redundant premises; (3) Missing plausible premise; (4) Logical error; (5) No discernible argument. The classes are mutually exclusive: each example can be assigned to one and only one label.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF4": {
"text": "Candidate answers: [A] leaf color. [B] seed type. [C] bean production. [D] plant height. Correct answer: B Explanation: (i) The type of seed of a plant is an inherited characteristic; (ii) Inherited characteristics are the opposite of learned characteristics; acquired characteristics; (iii) An organism's environment affects that organism's acquired characteristics; (iv) A plant is a kind of organism; (v) A bean plant is a kind of plant; (vi) Trait is synonymous with characteristic.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"html": null,
"num": null,
"text": "Protecting something means preventing harm. Fire causes harm to trees, forests, and other living things. Thickness is a measure of how thick an object is. A tree is a kind of living thing.",
"type_str": "table",
"content": "
large leaves [B] shallow roots |
[*C] thick bark [D] thin trunks |
Explanation: |
"
},
"TABREF1": {
"html": null,
"num": null,
"text": "",
"type_str": "table",
"content": ": Differential heating of air can [B] erosion prevention [D] reduce acidity of food [C] transfer of electrons [*A] electricity production be harnessed for what? | electricity. produces wind. Wind is used for producing Differential heating of air Premises (P) | \u03a6 Formulas | |
Explanation: Differential heating of air produces wind. Wind is used for producing electricity. | Differential heating of air can be harnessed for electricity production. | \u03c8 | Entailment? |
| Conclusion (c) | | |
: A man in an orange vest leans over a pickup truck. Hypothesis: A man is touching a truck. Label: entailment | A man in an orange vest leans over a pickup truck. Man leans over a pickup truck implies that he is touching it. | | |
Explanation: | | | |
Man leans over a pickup truck implies that he is touching it. | A man is touching a truck. | | |
"
},
"TABREF3": {
"html": null,
"num": null,
"text": "Features of the datasets selected for the Explanation Entailment Verification (EEV ).",
"type_str": "table",
"content": ""
},
"TABREF5": {
"html": null,
"num": null,
"text": "Inter-annotator agreement computed in terms of accuracy in the multi-label classification task considering the first annotator as a gold standard.",
"type_str": "table",
"content": "Dataset | Correlation Coefficient |
Worldtree | .964 |
QASC | .958 |
e-SNLI | .987 |
"
},
"TABREF6": {
"html": null,
"num": null,
"text": "Effect size analysis of the samples extracted from each XGS for the downstream EEV annotation.",
"type_str": "table",
"content": ""
},
"TABREF8": {
"html": null,
"num": null,
"text": "Results of the application of EEV for each entailment verification category.",
"type_str": "table",
"content": ""
},
"TABREF10": {
"html": null,
"num": null,
"text": "Examples of explanations classified with different entailment verification categories.",
"type_str": "table",
"content": "Dataset | Non contrastive explanations |
Worldtree | 26.53 |
QASC | 49.02 |
"
},
"TABREF11": {
"html": null,
"num": null,
"text": "Percentage of explanations in the MCQA sample labeled as non contrastive.",
"type_str": "table",
"content": ""
}
}
}
}