{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:33:53.011911Z" }, "title": "HILDIF: Interactive Debugging of NLI Models Using Influence Functions", "authors": [ { "first": "Hugo", "middle": [], "last": "Zylberajch", "suffix": "", "affiliation": { "laboratory": "", "institution": "Imperial College London", "location": { "country": "UK" } }, "email": "" }, { "first": "Piyawat", "middle": [], "last": "Lertvittayakumjorn", "suffix": "", "affiliation": { "laboratory": "", "institution": "Imperial College London", "location": { "country": "UK" } }, "email": "" }, { "first": "Francesca", "middle": [], "last": "Toni", "suffix": "", "affiliation": { "laboratory": "", "institution": "Imperial College London", "location": { "country": "UK" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Biases and artifacts in training data can cause unwelcome behavior in text classifiers (such as shallow pattern matching), leading to lack of generalizability. One solution to this problem is to include users in the loop and leverage their feedback to improve models. We propose a novel explanatory debugging pipeline called HILDIF, enabling humans to improve deep text classifiers using influence functions as an explanation method. We experiment on the Natural Language Inference (NLI) task, showing that HILDIF can effectively alleviate artifact problems in fine-tuned BERT models and result in increased model generalizability.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Biases and artifacts in training data can cause unwelcome behavior in text classifiers (such as shallow pattern matching), leading to lack of generalizability. One solution to this problem is to include users in the loop and leverage their feedback to improve models. 
We propose a novel explanatory debugging pipeline called HILDIF, enabling humans to improve deep text classifiers using influence functions as an explanation method. We experiment on the Natural Language Inference (NLI) task, showing that HILDIF can effectively alleviate artifact problems in fine-tuned BERT models and result in increased model generalizability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Given two sentences, a premise and a hypothesis, Natural Language Inference (NLI) is the task of determining whether the premise entails the hypothesis, and it has been considered by many as a sign of language understanding (Condoravdi et al., 2003; Dagan et al., 2005) . Although recent deep learning models have been shown to achieve good performance on different NLI datasets, they have, as in other tasks, also been shown to learn shallow heuristics. For example, a model is very likely to predict entailment for all hypotheses constructed from words in the premise (McCoy et al., 2019) . A key challenge is therefore to understand when and why state-of-the-art NLI models fail and to mitigate the problems accordingly.", "cite_spans": [ { "start": 224, "end": 249, "text": "(Condoravdi et al., 2003;", "ref_id": "BIBREF1" }, { "start": 250, "end": 269, "text": "Dagan et al., 2005)", "ref_id": "BIBREF2" }, { "start": 560, "end": 580, "text": "(McCoy et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To bring this kind of pathology to light, one can use explanation techniques to understand how a black-box model makes particular predictions. For instance, feature attribution methods explain predictions by identifying the parts of the input that contribute most to them (Smilkov et al., 2017; Sundararajan et al., 2016; Ribeiro et al., 2016; Lundberg and Lee, 2017) . 
Further, example-based methods, such as influence functions (Koh and Liang, 2017) , identify the training data points that are most important for particular predictions. Existing works have proposed ways to improve models by incorporating human feedback in response to the explanations: adding model constraints by fixing certain parameters (Stumpf et al., 2009; Lertvittayakumjorn et al., 2020) , adding training samples (Teso and Kersting, 2019) , and adjusting models' weights directly (Kulesza et al., 2015) .", "cite_spans": [ { "start": 271, "end": 293, "text": "(Smilkov et al., 2017;", "ref_id": "BIBREF15" }, { "start": 294, "end": 320, "text": "Sundararajan et al., 2016;", "ref_id": "BIBREF17" }, { "start": 321, "end": 342, "text": "Ribeiro et al., 2016;", "ref_id": "BIBREF14" }, { "start": 343, "end": 366, "text": "Lundberg and Lee, 2017)", "ref_id": "BIBREF12" }, { "start": 429, "end": 450, "text": "(Koh and Liang, 2017)", "ref_id": "BIBREF8" }, { "start": 716, "end": 737, "text": "(Stumpf et al., 2009;", "ref_id": "BIBREF16" }, { "start": 738, "end": 770, "text": "Lertvittayakumjorn et al., 2020)", "ref_id": "BIBREF10" }, { "start": 797, "end": 822, "text": "(Teso and Kersting, 2019)", "ref_id": "BIBREF18" }, { "start": 864, "end": 886, "text": "(Kulesza et al., 2015)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a novel interactive model debugging pipeline called HILDIF (Human In the Loop Debugging using Influence Functions). With the NLI task as a target, we use influence functions as an explanation method to help users understand the model's reasoning via influential training examples. Then, for each influential example shown, the users provide feedback to create augmented training samples for fine tuning the model. 
Using HILDIF, we effectively mitigate artifact issues of BERT models (Devlin et al., 2019) trained on the MNLI dataset (Williams et al., 2018) and tested on the HANS dataset (McCoy et al., 2019) , which is a known pathological setting for most deep NLI models working on the English language. Our code can be found at https://github.com/hugozylberajch/HILDIF.", "cite_spans": [ { "start": 506, "end": 527, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF3" }, { "start": 556, "end": 579, "text": "(Williams et al., 2018)", "ref_id": "BIBREF19" }, { "start": 611, "end": 632, "text": "(McCoy et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Influence Functions. Introduced by Hampel (1974) , influence functions compute how upweighting individual examples in the training loss changes the model parameters. Influential training examples can also be used to study models (Koh and Liang, 2017) . They are particularly useful when feature attribution scores are not sufficient to illustrate how the model reasons. In the NLI task, for example, single input words may not suffice to explain a certain prediction, and the overall semantics and structures of the input may need to be considered.", "cite_spans": [ { "start": 35, "end": 48, "text": "Hampel (1974)", "ref_id": "BIBREF5" }, { "start": 229, "end": 250, "text": "(Koh and Liang, 2017)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Recently, Han et al. (2020) showed that influence functions can capture key fine-grained interactions among input words and detect the presence of artifacts that lead to incorrect NLI predictions.", "cite_spans": [ { "start": 10, "end": 27, "text": "Han et al. (2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Although very appealing, influence functions are computationally expensive. 
Hence, Koh and Liang (2017) reduced the computational complexity by using the Linear time Stochastic Second-order Algorithm (LiSSA) to calculate approximations. Guo et al. (2020) proposed FASTIF, which further speeds up the calculation using the k-nearest neighbors algorithm. They also fine tuned the model with influential training samples of anchor points (i.e., some data points in the validation set) to correct model errors. We will use FASTIF as a tool to explain the BERT model's predictions on the NLI task in our experiment.", "cite_spans": [ { "start": 236, "end": 253, "text": "Guo et al. (2020)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Explanatory Interactive Debugging, where we improve a model by leveraging user feedback after presenting explanations for model predictions, was first introduced using simple statistical models, such as Na\u00efve Bayes models or Support Vector Machines, with simple explanatory techniques (Stumpf et al., 2009) . Recently, explanatory debugging has been applied to more complex models using refined interpretability methods. In FIND (Lertvittayakumjorn et al., 2020) , a masking matrix is added at the end of a CNN text classifier so as to disable particular CNN filters based on human feedback in response to LRP-based explanations (Arras et al., 2016) . In CAIPI (Teso and Kersting, 2019) , the user investigates and corrects a LIME-based explanation (Ribeiro et al., 2016) for each prediction. Then additional training samples, created based on the correction, are used to fine tune the model. 
For more details on explanatory debugging, we refer interested readers to the survey by Lertvittayakumjorn and Toni (2021) .", "cite_spans": [ { "start": 283, "end": 304, "text": "(Stumpf et al., 2009)", "ref_id": "BIBREF16" }, { "start": 427, "end": 460, "text": "(Lertvittayakumjorn et al., 2020)", "ref_id": "BIBREF10" }, { "start": 627, "end": 647, "text": "(Arras et al., 2016)", "ref_id": "BIBREF0" }, { "start": 659, "end": 684, "text": "(Teso and Kersting, 2019)", "ref_id": "BIBREF18" }, { "start": 978, "end": 1012, "text": "Lertvittayakumjorn and Toni (2021)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "As in CAIPI, we will exploit user feedback to control the generation of augmented samples for fine tuning the model. However, our explanations are influential training samples, which are more suitable for explaining NLI predictions. This is an improvement over Guo et al. (2020) , who simply fine tuned the model on influential samples without any human feedback involved.", "cite_spans": [ { "start": 260, "end": 277, "text": "Guo et al. (2020)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Algorithm 1: HILDIF. L is a labeled training set, V is a labeled validation set, T is the number of iterations, and g is a data augmentation method. t \u2190 0; f \u2190 FIT(L); while t < T do: X \u2190 SELECT ANCHORS(f, V); \u0176 \u2190 f(X); Z \u2190 EXPLAIN(f, X, \u0176); S \u2190 \u00d8; for x i \u2208 X do: for z ij \u2208 Z i do: present x i, \u0177 i, z ij to the user; obtain a similarity score s ij for the influential example z ij; S \u2190 S \u222a g(z ij, s ij); f \u2190 FINE TUNE(f, S); t \u2190 t + 1; Return: f.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HILDIF", "sec_num": "3" }, { "text": "We propose in Algorithm 1 a new pipeline called HILDIF (Human In the Loop Debugging with Influence Functions) for debugging deep text classifiers using influence functions. To the best of our knowledge, this is the first interactive explanatory debugging algorithm that makes effective use of influence functions. To improve a model f using HILDIF, a set of anchor points X = (x 1 , x 2 , ..., x n ) is first selected from the validation dataset V, and the predictions \u0176 = (\u0177 1 , \u0177 2 , ..., \u0177 n ) are computed using the model f . Then, for each anchor point x i , we use FASTIF to identify p influential training samples Z i = (z i1 , z i2 , ..., z ip ), and we define Z as the collection of Z i for all x i \u2208 X . Next, for each pair (x i , z ij ), i \u2208 {1, ..., n}, j \u2208 {1, ..., p}, the user gives a similarity score s ij that is used to generate synthetic data using a data augmentation function g. 
Finally, the model is fine tuned on the newly generated data samples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HILDIF", "sec_num": "3" }, { "text": "Next, we explain, in detail, each step of HILDIF, including explanation generation, user feedback collection, and data augmentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HILDIF", "sec_num": "3" }, { "text": "Explanation Generation. From the validation set V, we can either select anchor points randomly or handpick some that contain particular heuristics we want to debug. After that, the user is presented with a list of the top-p most negatively influential training data points for each anchor point. These influential data points contribute to the decrease of the model's loss when upweighted. Hence, fine-tuning the model using these data points should improve the model performance, as studied by Guo et al. (2020) . However, since HILDIF relies on FASTIF, which only approximates influence scores, we hypothesize that we can achieve better performance by asking humans to assess the relevance of the influential training samples returned by FASTIF before fine-tuning.", "cite_spans": [ { "start": 490, "end": 507, "text": "Guo et al. (2020)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "HILDIF", "sec_num": "3" }, { "text": "User Feedback Collection. For each anchor point x i and corresponding influential sample z ij , the user is asked the question: \"The test case and the presented sample are: (1) Very different; (2) Different; (3) Can't decide; (4) Similar; (5) Very similar\"; the user can then answer by selecting a radio button. Then z ij obtains a similarity score s ij from 1 to 5 based on the user's answer. \"Similar\" in this context means that both samples share the same type of heuristics or lexical artifacts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HILDIF", "sec_num": "3" }, { "text": "Data Augmentation. 
To create an augmented sample for the NLI task, we have to make sure that the overall semantics of the premise and the hypothesis, as well as the overall relation between the two sentences, are preserved. We therefore choose random word replacement with synonyms as well as back translation for data augmentation, since neither changes the semantics of the sentences. Moreover, we found empirically, by testing different configurations, that generating 10 \u00d7 s ij augmented samples for the influential sample z ij yielded the best results. For instance, an influential sample with the score 3 leads to 30 augmented samples with the same label as the original sample.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HILDIF", "sec_num": "3" }, { "text": "Datasets and Models. We evaluate our pipeline with a pretrained BERT-base cased model. We use the MNLI dataset (Williams et al., 2018) for training and validation, and the HANS dataset, which is known to be a dataset on which BERT performs poorly (McCoy et al., 2019) , for testing. For the MNLI training and validation sets, we merge the classes neutral and contradiction into a single non-entailment class, following the HANS dataset's setting. HANS targets three heuristics of NLI and includes examples showcasing these heuristics: Lexical Overlap, where the hypothesis is constructed with words from the premise; Constituent, where the hypothesis is a subtree of the premise's parse tree; and Subsequence, where the hypothesis is a contiguous subsequence of the premise (see Table 1 in the Appendix for some examples). NLI models almost always predict entailment for any example containing these heuristics, although sometimes the correct label is non-entailment. So, our goal is to make the model better detect non-entailment cases while maintaining its performance on the entailment cases. 
For the overall performance, we chose accuracy as our evaluation metric because HANS is a balanced dataset (containing, for each subgroup of heuristics, 5,000 samples of the entailment class and 5,000 samples of the non-entailment class).", "cite_spans": [ { "start": 111, "end": 134, "text": "(Williams et al., 2018)", "ref_id": "BIBREF19" }, { "start": 244, "end": 264, "text": "(McCoy et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 772, "end": 779, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "Implementation Details. All our models are implemented using the PyTorch library and trained using the AdamW optimizer. The HANS dataset is held out during training and fine-tuning and is only used for testing. For computing influence functions, we use the FASTIF algorithm and the FAISS library (Johnson et al., 2019) for k-nearest neighbors search. Finally, we ran all our experiments on a single 12GB NVIDIA Tesla K80 GPU. With this setting, the computation of influence scores of 5,000 training points for a corresponding anchor point takes approximately seven minutes. The BERT-base model is trained for two epochs on the MNLI training dataset.", "cite_spans": [ { "start": 292, "end": 314, "text": "(Johnson et al., 2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "Regarding user feedback collection, due to human resource constraints, we did our interactive experiments with one expert user. Further experiments could be conducted with more users, and results for the same pair of anchor point and influential point could be aggregated in order to reduce human bias.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "Comparison. We experimented with T = 1, using five anchor points with 10 and 20 influential samples each. 
We introduce three binary propositions that define the debugging pipeline: HS: Human scoring, DA: Data augmentation, and H: Handpicked anchor points. Without human scoring (\u00acHS), every influential sample receives a score of 5. Without data augmentation (\u00acDA), the fine tuning is done on each influential sample only, and without handpicked anchor points (\u00acH), anchor points are selected randomly. Note that our handpicked anchor points were chosen among the validation samples that contain either the lexical overlap or the subsequence heuristic (see Table 2 in the Appendix). We compared the performance of eight different configurations of debugging algorithms that stem from these three binary propositions. For each configuration, we trained and improved three models using different random seeds and averaged the final performance on the test set. Note that the (\u00acHS, \u00acDA, \u00acH) configuration is the algorithm used in Guo et al. (2020) , whereas the (HS, DA, H) and (HS, DA, \u00acH) configurations are our HILDIF algorithm. Figure 1b shows the accuracies on the HANS dataset of the baseline model (i.e., BERT trained on MNLI) and of four configurations, including (HS, DA, H), which is displayed as HILDIF in the figure. We can see that HILDIF consistently achieved a higher accuracy in all three categories of heuristics for the non-entailment class and a slightly lower accuracy for the entailment class. In fact, we observe a trade-off between the accuracies of the two classes in all four configurations. However, HILDIF still achieved a higher overall accuracy on the HANS dataset than the baseline and the other configurations. Moreover, interactive debugging with human scores yielded better accuracies than debugging without human scores for the Lexical Overlap and the Subsequence categories. 
Meanwhile, on the Constituent category, handpicking anchor points with the targeted heuristic led to a large jump in accuracy, outperforming the configuration with human feedback but random anchor points. Therefore, incorporating human knowledge as early as the anchor selection step is also helpful when we have prior knowledge about the model's bugs. Note also, in Figure 1a , that the model accuracies on MNLI for HILDIF and the other configurations stay close to the baseline model's accuracy, as desired.", "cite_spans": [ { "start": 1034, "end": 1051, "text": "Guo et al. (2020)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 662, "end": 669, "text": "Table 2", "ref_id": null }, { "start": 1134, "end": 1143, "text": "Figure 1b", "ref_id": "FIGREF0" }, { "start": 2266, "end": 2275, "text": "Figure 1a", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "The table in Figure 1c shows that fine tuning the model with augmented data samples, instead of with the influential samples only, gave better results in most cases. This was likely because data augmentation could help prevent the model from overfitting the influential samples. Besides, there was little to no improvement in the model performance when we added user feedback (i.e., human scores) for random anchor points, but a substantial improvement for handpicked anchor points. This may be because, during user feedback collection, most of the data samples are difficult to compare, as they satisfy either several heuristics or no heuristics relevant to the NLI task. When looking at some influential samples for handpicked anchor points, most satisfy the same heuristic and, if not, they can be easily spotted by the human eye. 
Although we are still far from chance performance on the non-entailment class, HILDIF achieved a substantial increase in accuracy with just five anchor points.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "We introduced HILDIF, an interactive explanatory debugging pipeline for deep text classifiers, and ran experiments on the NLI task, achieving high accuracies with MNLI-trained BERT across all categories of the pathological HANS dataset. Future work includes enhancing the data augmentation part, including the use of a variational autoencoder or a GPT-2-based generative model for synthetic data generation. Also, with more human resources, experiments can be conducted by fine tuning for more than one iteration (T > 1) with more anchor points for each iteration. Finally, it would be interesting to apply HILDIF to other text classification tasks, given that, except for the handpicked anchor points, which are chosen with knowledge of the task, every step of the pipeline is task-independent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" } ], "back_matter": [ { "text": "A Examples and Anchor Points Table 1 shows an example from the MNLI dataset (Williams et al., 2018) and three examples from the HANS dataset (each of which has a different heuristic type) (McCoy et al., 2019) . Besides, Table 2 shows the five handpicked anchor points used in the experiment. 
", "cite_spans": [ { "start": 76, "end": 99, "text": "(Williams et al., 2018)", "ref_id": "BIBREF19" }, { "start": 188, "end": 208, "text": "(McCoy et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 29, "end": 36, "text": "Table 1", "ref_id": null }, { "start": 220, "end": 227, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Explaining predictions of non-linear classifiers in NLP", "authors": [ { "first": "Leila", "middle": [], "last": "Arras", "suffix": "" }, { "first": "Franziska", "middle": [], "last": "Horn", "suffix": "" }, { "first": "Gr\u00e9goire", "middle": [], "last": "Montavon", "suffix": "" }, { "first": "Klaus-Robert", "middle": [], "last": "M\u00fcller", "suffix": "" }, { "first": "Wojciech", "middle": [], "last": "Samek", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 1st Workshop on Representation Learning for NLP", "volume": "", "issue": "", "pages": "1--7", "other_ids": { "DOI": [ "10.18653/v1/W16-1601" ] }, "num": null, "urls": [], "raw_text": "Leila Arras, Franziska Horn, Gr\u00e9goire Montavon, Klaus-Robert M\u00fcller, and Wojciech Samek. 2016. Explaining predictions of non-linear classifiers in NLP. In Proceedings of the 1st Workshop on Repre- sentation Learning for NLP, pages 1-7, Berlin, Ger- many. 
Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Entailment, intensionality and text understanding", "authors": [ { "first": "Cleo", "middle": [], "last": "Condoravdi", "suffix": "" }, { "first": "Dick", "middle": [], "last": "Crouch", "suffix": "" }, { "first": "Reinhard", "middle": [], "last": "Valeria De Paiva", "suffix": "" }, { "first": "Daniel", "middle": [ "G" ], "last": "Stolle", "suffix": "" }, { "first": "", "middle": [], "last": "Bobrow", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the HLT-NAACL 2003 Workshop on Text Meaning", "volume": "", "issue": "", "pages": "38--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cleo Condoravdi, Dick Crouch, Valeria de Paiva, Rein- hard Stolle, and Daniel G. Bobrow. 2003. Entail- ment, intensionality and text understanding. In Pro- ceedings of the HLT-NAACL 2003 Workshop on Text Meaning, pages 38-45.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The pascal recognising textual entailment challenge", "authors": [ { "first": "Oren", "middle": [], "last": "Ido Dagan", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Glickman", "suffix": "" }, { "first": "", "middle": [], "last": "Magnini", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the PASCAL Challenges Workshop on Recognising Textual Entailment", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The pascal recognising textual entailment challenge. 
In Proceedings of the PASCAL Chal- lenges Workshop on Recognising Textual Entail- ment.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Mohit Bansal, and Caiming Xiong. 2020. Fastif: Scalable influence functions for efficient model interpretation and debugging", "authors": [ { "first": "Han", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Nazneen Fatema Rajani", "suffix": "" }, { "first": "", "middle": [], "last": "Hase", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2012.15781" ] }, "num": null, "urls": [], "raw_text": "Han Guo, Nazneen Fatema Rajani, Peter Hase, Mohit Bansal, and Caiming Xiong. 2020. Fastif: Scalable influence functions for efficient model interpretation and debugging. 
arXiv preprint arXiv:2012.15781.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The influence curve and its role in robust estimation", "authors": [ { "first": "R", "middle": [], "last": "Frank", "suffix": "" }, { "first": "", "middle": [], "last": "Hampel", "suffix": "" } ], "year": 1974, "venue": "Journal of the American Statistical Association", "volume": "69", "issue": "346", "pages": "383--393", "other_ids": { "DOI": [ "10.1080/01621459.1974.10482962" ] }, "num": null, "urls": [], "raw_text": "Frank R. Hampel. 1974. The influence curve and its role in robust estimation. Journal of the American Statistical Association, 69(346):383-393.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Explaining black box predictions and unveiling data artifacts through influence functions", "authors": [ { "first": "Xiaochuang", "middle": [], "last": "Han", "suffix": "" }, { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5553--5563", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.492" ] }, "num": null, "urls": [], "raw_text": "Xiaochuang Han, Byron C. Wallace, and Yulia Tsvetkov. 2020. Explaining black box predictions and unveiling data artifacts through influence func- tions. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 5553-5563, Online. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Billionscale similarity search with gpus", "authors": [ { "first": "J", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "M", "middle": [], "last": "Douze", "suffix": "" }, { "first": "H", "middle": [], "last": "J\u00e9gou", "suffix": "" } ], "year": 2019, "venue": "IEEE Transactions on Big Data", "volume": "", "issue": "", "pages": "1--1", "other_ids": { "DOI": [ "10.1109/TBDATA.2019.2921572" ] }, "num": null, "urls": [], "raw_text": "J. Johnson, M. Douze, and H. J\u00e9gou. 2019. Billion- scale similarity search with gpus. IEEE Transac- tions on Big Data, pages 1-1.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Understanding black-box predictions via influence functions", "authors": [ { "first": "Wei", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Koh", "suffix": "" }, { "first": "", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning", "volume": "70", "issue": "", "pages": "1885--1894", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pang Wei Koh and Percy Liang. 2017. Understand- ing black-box predictions via influence functions. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Ma- chine Learning Research, pages 1885-1894. 
PMLR.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Principles of explanatory debugging to personalize interactive machine learning", "authors": [ { "first": "Todd", "middle": [], "last": "Kulesza", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Burnett", "suffix": "" }, { "first": "Weng-Keen", "middle": [], "last": "Wong", "suffix": "" }, { "first": "Simone", "middle": [], "last": "Stumpf", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 20th International Conference on Intelligent User Interfaces, IUI '15", "volume": "", "issue": "", "pages": "126--137", "other_ids": { "DOI": [ "10.1145/2678025.2701399" ] }, "num": null, "urls": [], "raw_text": "Todd Kulesza, Margaret Burnett, Weng-Keen Wong, and Simone Stumpf. 2015. Principles of explana- tory debugging to personalize interactive machine learning. In Proceedings of the 20th International Conference on Intelligent User Interfaces, IUI '15, page 126-137, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "FIND: Human-in-the-Loop Debugging Deep Text Classifiers", "authors": [ { "first": "Piyawat", "middle": [], "last": "Lertvittayakumjorn", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Francesca", "middle": [], "last": "Toni", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "332--348", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.24" ] }, "num": null, "urls": [], "raw_text": "Piyawat Lertvittayakumjorn, Lucia Specia, and Francesca Toni. 2020. FIND: Human-in-the-Loop Debugging Deep Text Classifiers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 332-348, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Explanation-based human debugging of NLP models: A survey", "authors": [ { "first": "Piyawat", "middle": [], "last": "Lertvittayakumjorn", "suffix": "" }, { "first": "Francesca", "middle": [], "last": "Toni", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2104.15135" ] }, "num": null, "urls": [], "raw_text": "Piyawat Lertvittayakumjorn and Francesca Toni. 2021. Explanation-based human debugging of NLP models: A survey. arXiv preprint arXiv:2104.15135.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A unified approach to interpreting model predictions", "authors": [ { "first": "Scott", "middle": [], "last": "Lundberg", "suffix": "" }, { "first": "Su-In", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1705.07874" ] }, "num": null, "urls": [], "raw_text": "Scott Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. arXiv preprint arXiv:1705.07874.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference", "authors": [ { "first": "Tom", "middle": [], "last": "McCoy", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3428--3448", "other_ids": { "DOI": [ "10.18653/v1/P19-1334" ] }, "num": null, "urls": [], "raw_text": "Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference.
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428-3448, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "\"Why should I trust you?\" Explaining the predictions of any classifier", "authors": [ { "first": "Marco Tulio", "middle": [], "last": "Ribeiro", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Guestrin", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining", "volume": "", "issue": "", "pages": "1135--1144", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \"Why should I trust you?\" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135-1144.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "SmoothGrad: removing noise by adding noise", "authors": [ { "first": "Daniel", "middle": [], "last": "Smilkov", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Thorat", "suffix": "" }, { "first": "Been", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Fernanda", "middle": [], "last": "Vi\u00e9gas", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Wattenberg", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1706.03825" ] }, "num": null, "urls": [], "raw_text": "Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Vi\u00e9gas, and Martin Wattenberg. 2017. SmoothGrad: removing noise by adding noise.
arXiv preprint arXiv:1706.03825.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Interacting meaningfully with machine learning systems: Three experiments", "authors": [ { "first": "Simone", "middle": [], "last": "Stumpf", "suffix": "" }, { "first": "Vidya", "middle": [], "last": "Rajaram", "suffix": "" }, { "first": "Lida", "middle": [], "last": "Li", "suffix": "" }, { "first": "Weng-Keen", "middle": [], "last": "Wong", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Burnett", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Dietterich", "suffix": "" }, { "first": "Erin", "middle": [], "last": "Sullivan", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Herlocker", "suffix": "" } ], "year": 2009, "venue": "International journal of human-computer studies", "volume": "67", "issue": "8", "pages": "639--662", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simone Stumpf, Vidya Rajaram, Lida Li, Weng-Keen Wong, Margaret Burnett, Thomas Dietterich, Erin Sullivan, and Jonathan Herlocker. 2009. Interacting meaningfully with machine learning systems: Three experiments. International journal of human-computer studies, 67(8):639-662.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Gradients of counterfactuals", "authors": [ { "first": "Mukund", "middle": [], "last": "Sundararajan", "suffix": "" }, { "first": "Ankur", "middle": [], "last": "Taly", "suffix": "" }, { "first": "Qiqi", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1611.02639" ] }, "num": null, "urls": [], "raw_text": "Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2016. Gradients of counterfactuals.
arXiv preprint arXiv:1611.02639.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Explanatory interactive machine learning", "authors": [ { "first": "Stefano", "middle": [], "last": "Teso", "suffix": "" }, { "first": "Kristian", "middle": [], "last": "Kersting", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, AIES '19", "volume": "", "issue": "", "pages": "239--245", "other_ids": { "DOI": [ "10.1145/3306618.3314293" ] }, "num": null, "urls": [], "raw_text": "Stefano Teso and Kristian Kersting. 2019. Explanatory interactive machine learning. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, AIES '19, page 239-245, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "authors": [ { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1112--1122", "other_ids": { "DOI": [ "10.18653/v1/N18-1101" ] }, "num": null, "urls": [], "raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana.
Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "HS stands for Human Scoring, DA for Data Augmentation and H for Handpicked anchor points. (a) Average accuracy on the MNLI test set (b) Accuracies on the HANS evaluation set, which has 3 heuristic categories and 2 classes. Dashed lines show chance performance. (c) Accuracies on the HANS evaluation set for different configurations of all the debugging procedures. LO stands for Lexical Overlap, SUB for Subsequence, and CON for Constituent category of heuristics. For each cell, the first value of the tuple is the accuracy on the entailment class and the second is the accuracy on the non-entailment class. Best scores for the non-entailment class in bold.", "num": null, "uris": null, "type_str": "figure" } } } }