{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:24:20.261401Z" }, "title": "Implicit representations of event properties within contextual language models: Searching for \"causativity neurons\"", "authors": [ { "first": "Esther", "middle": [], "last": "Seyffarth", "suffix": "", "affiliation": { "laboratory": "", "institution": "Heinrich Heine University D\u00fcsseldorf", "location": { "settlement": "D\u00fcsseldorf", "country": "Germany" } }, "email": "seyffarth@phil.hhu.de" }, { "first": "Younes", "middle": [], "last": "Samih", "suffix": "", "affiliation": { "laboratory": "", "institution": "Heinrich Heine University D\u00fcsseldorf", "location": { "settlement": "D\u00fcsseldorf", "country": "Germany" } }, "email": "samih@phil.hhu.de" }, { "first": "Laura", "middle": [], "last": "Kallmeyer", "suffix": "", "affiliation": { "laboratory": "", "institution": "Heinrich Heine University D\u00fcsseldorf", "location": { "settlement": "D\u00fcsseldorf", "country": "Germany" } }, "email": "kallmeyer@phil.hhu.de" }, { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "", "affiliation": { "laboratory": "", "institution": "Hamad Bin Khalifa University", "location": {} }, "email": "hsajjad@hbku.edu.qa" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper addresses the question to which extent neural contextual language models such as BERT implicitly represent complex semantic properties. More concretely, the paper shows that the neuron activations obtained from processing an English sentence provide discriminative features for predicting the (non-)causativity of the event denoted by the verb in a simple linear classifier. A layer-wise analysis reveals that the relevant properties are mostly learned in the higher layers. Moreover, further experiments show that appr. 10% of the neuron activations are enough to already predict causativity with a relatively high accuracy. 1", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "This paper addresses the question to which extent neural contextual language models such as BERT implicitly represent complex semantic properties. More concretely, the paper shows that the neuron activations obtained from processing an English sentence provide discriminative features for predicting the (non-)causativity of the event denoted by the verb in a simple linear classifier. A layer-wise analysis reveals that the relevant properties are mostly learned in the higher layers. Moreover, further experiments show that appr. 10% of the neuron activations are enough to already predict causativity with a relatively high accuracy. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In natural language processing (NLP), machine learning models based on artificial neural networks have achieved impressive results in recent years, due to large amounts of available training data and powerful computing infrastructures. Contextual language models (LMs) such as ELMO (Peters et al., 2018) , BERT (Devlin et al., 2019) , and XLNet (Yang et al., 2019) have particularly contributed to this. However, it is oftentimes not clear which kinds of generalizations these models make, i.e., what exactly they learn. In this respect, neural networks suffer from a lack of transparency and interpretability. Recent research has started to investigate these questions. 
Since neural word embeddings and LMs (e.g., Word2Vec, Mikolov et al. 2013; ELMO, Peters et al. 2018; BERT, Devlin et al. 2019) have been used successfully for a range of NLP/NLU tasks, it is clear that LMs capture meaning to a certain degree, in particular lexical meaning. Concerning syntactic information, work on different types of language models, in particular RNNs and transformer-based contextual language models, has shown that these models learn morphology (Liu et al., 2019a) , syntactic structure and syntactic preferences to a certain degree (see Lin et al., 2019; Hewitt and Manning, 2019; McCoy et al., 2020; Wilcox et al., 2019; Hu et al., 2020; Warstadt et al., 2020) . 1 Our datasets are available at https://github.com/eseyffarth/predicting-causativity-iwcs-2021", "cite_spans": [ { "start": 282, "end": 303, "text": "(Peters et al., 2018)", "ref_id": "BIBREF19" }, { "start": 311, "end": 332, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 345, "end": 364, "text": "(Yang et al., 2019)", "ref_id": "BIBREF29" }, { "start": 737, "end": 767, "text": "Word2Vec, Mikolov et al. 2013;", "ref_id": null }, { "start": 768, "end": 793, "text": "ELMO, Peters et al. 2018;", "ref_id": null }, { "start": 794, "end": 818, "text": "BERT, Devlin et al. 2019", "ref_id": null }, { "start": 992, "end": 993, "text": "1", "ref_id": null }, { "start": 1228, "end": 1247, "text": "(Liu et al., 2019a)", "ref_id": "BIBREF14" }, { "start": 1321, "end": 1338, "text": "Lin et al., 2019;", "ref_id": "BIBREF13" }, { "start": 1339, "end": 1364, "text": "Hewitt and Manning, 2019;", "ref_id": "BIBREF8" }, { "start": 1365, "end": 1384, "text": "McCoy et al., 2020;", "ref_id": "BIBREF17" }, { "start": 1385, "end": 1405, "text": "Wilcox et al., 2019;", "ref_id": "BIBREF28" }, { "start": 1406, "end": 1422, "text": "Hu et al., 2020;", "ref_id": "BIBREF9" }, { "start": 1423, "end": 1445, "text": "Warstadt et al., 2020)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction and motivation", "sec_num": "1" }, { "text": "In this paper, we expand the question of what linguistic properties these models learn to whether pretrained contextualized models capture more abstract semantic properties, in particular properties that contribute to the structure of the semantic representation underlying a given sentence. More concretely, we investigate whether an LM such as BERT represents whether a sentence denotes a causative event or not. If this were the case, we would expect a systematic difference between, for instance, BERT's neuron activations for (1-a) and for (1-b).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction and motivation", "sec_num": "1" }, { "text": "(1) a. Kim broke the window. b. Kim ate an apple.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction and motivation", "sec_num": "1" }, { "text": "Note that the two sentences share almost no lexical elements, so the neuron activations are expected to be mostly different. Our research question is whether there are observable, systematic activation patterns that are common to all instances of causative sentences, and others that are common to all instances of noncausative sentences, independent of sentence content. One of the common approaches to probe neural network models is to use a probing classifier. Given a linguistic property of interest, the idea is to extract contextualized activations of units (words/phrases/sentences) relevant to the property. 
A classifier is then trained to learn the property, using the extracted activations as features. The performance of the classifier is taken to approximate the degree to which the language model learned the linguistic property. We also use probing classifiers and probe the model as a whole, its individual layers, and its neurons with respect to causativity. We use the NeuroX toolkit (Dalvi et al., 2019b) to conduct the probing experiments.", "cite_spans": [ { "start": 1022, "end": 1043, "text": "(Dalvi et al., 2019b)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction and motivation", "sec_num": "1" }, { "text": "We experiment using two 12-layer pretrained models, BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019) , as well as a distilled version of BERT, DistilBERT (Sanh et al., 2019) . Our findings and contributions are as follows: We create a novel dataset of sentences with verbs that are labeled for causativity/non-causativity. Using this dataset for probing, we show that this abstract semantic property is learned by the pretrained models. It is better represented in the higher layers of the model and, furthermore, there is a subset of appr. 10% of the neurons that encodes the property in question.", "cite_spans": [ { "start": 57, "end": 78, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 89, "end": 108, "text": "(Yang et al., 2019)", "ref_id": "BIBREF29" }, { "start": 162, "end": 181, "text": "(Sanh et al., 2019)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction and motivation", "sec_num": "1" }, { "text": "A number of interpretation studies have analyzed representations of pre-trained models and shown that they learn linguistic information such as part-of-speech, semantic, and CCG tags (Conneau et al., 2018; Liu et al., 2019a; Tenney et al., 2019a,b; Voita et al., 2019) . A typical procedure to analyze representations is a post-hoc analysis using a probing classifier. It has been shown that word-level concepts are learned at lower layers while sentence-level concepts are learned at higher layers (Liu et al., 2019b) . Dalvi et al. (2019a) extended the layer-level analysis towards individual neurons of the network. They proposed linguistic correlation analysis (LCA) to identify neurons with respect to a linguistic property. Durrani et al. (2020) and Dalvi et al. (2020) later used LCA to analyze pre-trained models in the context of linguistic learning and of redundancy in the network, respectively.", "cite_spans": [ { "start": 202, "end": 224, "text": "(Conneau et al., 2018;", "ref_id": "BIBREF0" }, { "start": 225, "end": 243, "text": "Liu et al., 2019a;", "ref_id": "BIBREF14" }, { "start": 244, "end": 267, "text": "Tenney et al., 2019a,b;", "ref_id": null }, { "start": 268, "end": 287, "text": "Voita et al., 2019)", "ref_id": "BIBREF26" }, { "start": 517, "end": 536, "text": "(Liu et al., 2019b)", "ref_id": null }, { "start": 539, "end": 559, "text": "Dalvi et al. (2019a)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "In this work, we also aim to analyze pre-trained models at model-, layer- and neuron-level using post-hoc analysis methods. Different from others, we concentrate on an abstract, structure-building semantic property, namely causativity of events. Our focus is on lexical causatives, that is, verbs whose lexical meaning has a causative aspect (Dowty, 1979) . 
In Dowty's aspect calculus, such verbs are analyzed as [\u03c6 CAUSE \u03c8], where \u03c6 and \u03c8 are sentences and causation is a "two-place sentential connective", notably even for sentences that only contain a single verb phrase. Thus, John killed Bill is decomposed as in (2) (Dowty, 1979, p. 91) .", "cite_spans": [ { "start": 341, "end": 354, "text": "(Dowty, 1979)", "ref_id": "BIBREF5" }, { "start": 621, "end": 641, "text": "(Dowty, 1979, p. 91)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "(2) [[John does something] CAUSE [BECOME \u00ac[Bill is alive]]]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "The \"semantically bipartite\" nature of causative verbs means that sentences with such verbs actually express not one event, but two subevents, one being the causing event and the other one being the caused event, or result, of the first. This event structure is a challenge to model with NLP systems when no superficial indicators for causativity are available. While there are verbs that are lexically causative (such as refresh) and verbs that are lexically noncausative (such as prefer), there are also verbs that vary in their causativity depending on the context in which they appear (such as open). Our goal is to determine to what extent the causativity or noncausativity of these types of verbs is implicitly learned by large language models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Over the last few years, there has been an increasing interest in assessing linguistic properties encoded in neural representations. A common method to reveal these linguistic representations employs diagnostic classifiers or probes (Hupkes et al., 2018) . A common diagnostic classifier is a linear classifier trained for the underlying linguistic task, using the activations generated from the trained neural network model as features. The performance of the classifier is used as a proxy to measure the amount of linguistic information present in the activations. We also use a linear classifier for probing. Consider a pre-trained neural network model M with L layers:", "cite_spans": [ { "start": 229, "end": 250, "text": "(Hupkes et al., 2018)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "{l_1, l_2, . . . , l_L}, where each layer l_i is of size H. Given a dataset D = {s_1, s_2, ..., s_T} consisting of T sentences, the contextualized embedding of sentence s_j at layer l_i is z_j^i = l_i(s_j).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "In pretrained models like BERT, a special token [CLS] is prepended to every training instance during training. The token is later optimized for sentence embedding during transfer learning (Devlin et al., 2019) . We consider the representations of [CLS] for sentence embedding in this study.", "cite_spans": [ { "start": 191, "end": 212, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 250, "end": 255, "text": "[CLS]", "ref_id": null }, { "start": 298, "end": 303, "text": "[CLS]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }
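, { "text": "For illustration, the following minimal sketch shows how such layer-wise [CLS] activations can be extracted. It is an assumed reimplementation using the HuggingFace transformers API, not the NeuroX pipeline used in this paper, and the model name is a placeholder for the fine-tuned models described in Section 5.

import torch
from transformers import AutoModel, AutoTokenizer

# Assumed stand-in for the fine-tuned 12-layer models used in the paper.
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModel.from_pretrained('bert-base-uncased', output_hidden_states=True)
model.eval()

def cls_activations(sentence):
    # Returns a (num_layers + 1, hidden_size) tensor of [CLS] vectors,
    # one per layer (including the embedding layer); [CLS] is token 0.
    inputs = tokenizer(sentence, return_tensors='pt')
    with torch.no_grad():
        outputs = model(**inputs)
    return torch.stack([h[0, 0] for h in outputs.hidden_states])

z = cls_activations('Kim broke the window.')
print(z.shape)  # torch.Size([13, 768]) for a 12-layer base model

", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }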
, { "text": "The [CLS] representations extracted from the various layers are used as input features to the probing classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "To assess to what extent a linguistic property is learned in the model, we first take the sentence representations of all layers as features for linear classification, i.e., all z_j^i for 1 \u2264 i \u2264 L and 1 \u2264 j \u2264 T. The classifier is trained by minimizing the following loss function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model-level probing:", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(\u03b8) = \u2212\u2211_j log P_\u03b8(t_{s_j} | s_j)", "eq_num": "(1)" } ], "section": "Model-level probing:", "sec_num": null }, { "text": "where t_{s_j} is the predicted label for sentence s_j. In this work, binary labels are used to encode whether the property is present in a sentence or not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model-level probing:", "sec_num": null }
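, { "text": "As a concrete illustration, here is a minimal PyTorch sketch of this probe (an assumed implementation, not the authors' released code); the activation matrix X and label vector y are placeholders standing in for the extracted features and causativity labels, and the commented lines show the elastic-net variant of Equation 2 used for neuron-level probing below.

import torch
import torch.nn as nn

# Placeholder data: activations of T sentences (e.g., 13 layers x 768 units).
X = torch.randn(1000, 13 * 768)
y = torch.randint(0, 2, (1000,))  # binary (non)causativity labels

probe = nn.Linear(X.shape[1], 2)       # linear probing classifier
optimizer = torch.optim.Adam(probe.parameters())
cross_entropy = nn.CrossEntropyLoss() # categorical cross-entropy, Eq. (1)
l1, l2 = 1e-5, 1e-5                    # lambda_1, lambda_2 of Eq. (2)

for epoch in range(50):
    optimizer.zero_grad()
    loss = cross_entropy(probe(X), y)
    # Elastic-net regularization for neuron-level probing (Eq. 2):
    theta = probe.weight
    loss = loss + l1 * theta.abs().sum() + l2 * theta.pow(2).sum()
    loss.backward()
    optimizer.step()

# The magnitude of the learned weights then serves as a per-neuron saliency score.

", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model-level probing:", "sec_num": null }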
(2019a)", "ref_id": "BIBREF1" }, { "start": 700, "end": 722, "text": "(Zou and Hastie, 2005)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Model-level probing:", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(\u03b8) = \u2212 j log P \u03b8 (t s j |s j ) +\u03bb 1 \u03b8 1 + \u03bb 2 \u03b8 2 2", "eq_num": "(2)" } ], "section": "Model-level probing:", "sec_num": null }, { "text": "where \u03bb 1 and \u03bb 2 are parameters, for which we use the suggested value of 0.00001 (Dalvi et al., 2019a) .", "cite_spans": [ { "start": 82, "end": 103, "text": "(Dalvi et al., 2019a)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Model-level probing:", "sec_num": null }, { "text": "To prepare our datasets, we create different sets of verbs that are labeled for (non)causativity, and then use them as seeds to collect sentences from a corpus to be used as input to the classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4" }, { "text": "Causative and noncausative verbs We collect a set of English verbs that are either always causative or never causative when appearing in basic transitive sentences (NP V NP). This property is derived from VerbNet 3.3 (Kipper et al., 2000) according to the event-semantic description of each basic transitive syntactic frame in each verb class. We only consider members of VerbNet classes where either all basic transitive frames or none of them are associated with causativity. Two trained linguists manually prune the lists of causative and noncausative verbs to remove ambiguous verbs and other edge cases. This results in a list of 2157 causative and 617 noncausative verbs.", "cite_spans": [ { "start": 217, "end": 238, "text": "(Kipper et al., 2000)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Verb set selection", "sec_num": "4.1" }, { "text": "Alternating verbs We also create a set of verbs whose causativity property depends on whether they appear in transitive or intransitive sentences. This is the case for verbs in VerbNet that are marked with the \"Causative\" property in basic transitive syntactic frames, and with the \"Inchoative\" property in basic intransitive frames. These verbs participate in the causative-inchoative alternation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Verb set selection", "sec_num": "4.1" }, { "text": "They represent a special case for our experiments because the classifier needs to distinguish between causative and noncausative uses of identical verbs, whereas the sets of causative and noncausative verbs are completely distinct. In this setting, the classifier cannot rely purely on the verb lemma (because alternating verbs can appear in both classes), and it also cannot rely purely on the (in)transitivity of sentences (because verbs outside the alternation can be causative in intransitive sentences). Since this makes the task more difficult, we expect the classification accuracy to be lower in this setting than in settings with non-alternating verbs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Verb set selection", "sec_num": "4.1" }, { "text": "We collect three datasets for our experiments. 
, { "text": "We collect three datasets for our experiments. 2 All sentences are extracted from ENCOW (Sch\u00e4fer and Bildhauer, 2012; Sch\u00e4fer, 2015) , an English web corpus (9.6 billion tokens) annotated with dependencies created with MaltParser. Each dataset contains 40,000 sentences in the train portion, 5,000 sentences in the dev portion and 5,000 sentences in the test portion. Each portion contains an equal number of causative and noncausative instances. Each test set contains sentences that were not previously seen in the train set, but not all verbs in the test set are unseen.", "cite_spans": [ { "start": 88, "end": 117, "text": "(Sch\u00e4fer and Bildhauer, 2012;", "ref_id": "BIBREF21" }, { "start": 118, "end": 132, "text": "Sch\u00e4fer, 2015)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Sentence selection", "sec_num": "4.2" }, { "text": "2 All datasets are available at https://github.com/eseyffarth/predicting-causativity-iwcs-2021", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence selection", "sec_num": "4.2" }, { "text": "Transitive sentences, same sentence length The first dataset (D tr 5 ) is based on the sets of causative and noncausative verbs and contains only transitive sentences of length 5 (including punctuation). This yields a dataset where all sentences have the same basic syntactic pattern. Examples are given in (3) (root verbs in bold).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence selection", "sec_num": "4.2" }, { "text": "(3) a. The answer surprised me . (caus) b. It contains no surprises . (noncaus)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence selection", "sec_num": "4.2" }, { "text": "Transitive sentences, varying sentence length The second dataset (D tr ) is based on the same verb sets, but contains sentences of varying lengths between 5 and 20 tokens. Examples are given in (4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence selection", "sec_num": "4.2" }, { "text": "(4) a. This affects the calculation . (caus) f. The main console opens . (noncaus)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence selection", "sec_num": "4.2" }, { "text": "5 Evaluation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence selection", "sec_num": "4.2" }, { "text": "Pre-trained models We conduct experiments using three transformer-based pre-trained language models: BERT (Devlin et al., 2019) , DistilBERT (Sanh et al., 2019) , and XLNet (Yang et al., 2019 ). The BERT model is an auto-encoder trained with two unsupervised objectives: masked word prediction and next sentence prediction. It is pre-trained on Wikipedia text and BooksCorpus (Zhu et al., 2015) , and comes with hundreds of millions of parameters. DistilBERT is a distilled version of BERT. It comprises 6 encoder layers while retaining 97% of BERT's performance. We also employ XLNet-base in all our experiments. 
Although it is trained with the same parameter configurations as BERT-base, it uses an improved training methodology based on a permutation auto-regressive objective function.", "cite_spans": [ { "start": 106, "end": 127, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 141, "end": 160, "text": "(Sanh et al., 2019)", "ref_id": "BIBREF20" }, { "start": 173, "end": 191, "text": "(Yang et al., 2019", "ref_id": "BIBREF29" }, { "start": 376, "end": 394, "text": "(Zhu et al., 2015)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "5.1" }, { "text": "Since we are interested in analyzing sentence representations, we use the representation of the [CLS] token. However, the representation of [CLS] is not optimized for sentence embedding in the pretrained models. In order to tune it for sentence representation, we fine-tune the pre-trained model on a sentence classification task, the Stanford sentiment treebank (Socher et al., 2013) . Note that by fine-tuning the pre-trained model, the representations of the network are tuned for that task. An alternative strategy is to use the average activations of the words in a sentence as the sentence representation; we did not explore this in this paper.", "cite_spans": [ { "start": 363, "end": 384, "text": "(Socher et al., 2013)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "5.1" }, { "text": "We train a linear classifier using a categorical cross-entropy loss, optimized using Adam. For neuron-level analysis, we used elastic-net regularization. We used the recommended values of the elastic-net parameters, i.e., \u03bb_1 and \u03bb_2 each equal to 0.0001. Table 1 presents the results of using all neuron activations of the model as features for classification. The generally high classification results show that the model has learned causativity. However, as the dataset becomes harder in terms of varying sentence length and the inclusion of more challenging instances with alternating verbs, the performance drops to as low as 83.96% for DistilBERT, which is still substantially better than random performance (50%).", "cite_spans": [], "ref_spans": [ { "start": 252, "end": 259, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Probing Classifier", "sec_num": null }, { "text": "Layer-level Results Here we want to see which layers of pretrained models learn causativity. We train our probing classifier on individual layers. Figure 1 summarizes the results. As a general trend, causativity is best represented at the higher layers of the models, which is in line with previous findings that sentence-level properties such as syntax are better learned at higher layers. For all models, we see a slight drop in the performance for the last layer, which is due to the fact that the last layer is optimized for the objective function (Kovaleva et al., 2019) . Compared to BERT and DistilBERT, the middle layer of XLNet consistently shows a small drop in performance for all datasets. This trend is more prevalent in the neuron-level results. 
We discuss it later in this section.", "cite_spans": [ { "start": 553, "end": 576, "text": "(Kovaleva et al., 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 147, "end": 155, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Model-level Results", "sec_num": null }, { "text": "We use LCA to determine a minimal set of neurons that still achieves a classification performance (Acc_t) within 2% of the performance using all the neurons of the network for classification. We additionally evaluate the effectiveness of the LCA method by comparing the classification performance using the top selected neurons with that of randomly selected neurons. We found the salient neurons of LCA to perform substantially better than random neurons. Table 2 presents the number of salient neurons selected for each model and each dataset, together with the resulting classification accuracy. Note that in the case of BERT on the dataset D all , and also for XLNet on all datasets, the accuracy increased due to the elimination of nondiscriminative features.", "cite_spans": [], "ref_spans": [ { "start": 453, "end": 460, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Neuron-level Results", "sec_num": null }, { "text": "Given the salient neurons with respect to our task, we observe their distribution across the model. Figure 2 summarizes the results. Across all models and datasets, the LCA method never selected any neurons from the embedding layer. This is in line with the layer-wise results, where the performance using the embedding layer representation is similar to random classification, i.e., no causativity information is present. For BERT and DistilBERT, the distribution of salient neurons is skewed towards the higher layers (excluding the top layer), i.e., causativity information is more represented at the higher layers. XLNet presents a slightly different picture, where substantially fewer salient neurons are selected from the middle layers than from most of the other layers. As the task becomes harder, the contribution of the lower middle layers (3-4) substantially increases while the contribution of the last layer drops.", "cite_spans": [], "ref_spans": [ { "start": 96, "end": 102, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Neuron-level Results", "sec_num": null }, { "text": "The number of neurons selected from the middle layers (5-6 in the case of 12-layer models and 3 in the case of 6-layer models) is substantially lower than for the neighbouring layers across all models and datasets. We hypothesize that learning causativity requires word-level and sentence-level information, which dominates at the lower and higher layers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neuron-level Results", "sec_num": null }, { "text": "As shown in Table 1 , all classifiers performed best on D tr 5 . With little syntactic variation between instances in D tr 5 , this is the least challenging setting for the task: The verbs and arguments in each sentence are the main indicators for the classifiers to identify causativity. In D tr , all models achieve slightly lower accuracy. Longer sentences are more likely to contain conjunctions or subordinate clauses, which may distract the classifiers from the sentence's (non)causative root verb and its arguments. As expected, the lowest accuracy scores are observed in D all , which includes both transitive and intransitive sentences, as well as alternating verbs whose causativity property changes in these different environments. 
Table 3 shows that all three models mislabel alternating verbs more often than nonalternating verbs. Our datasets are randomly collected from a larger corpus with no regard for verb frequency. This results in datasets where some verbs occur only once or twice, some are never seen in the training data, and some are more common. Our goal is to determine whether the classifiers successfully learn to predict (non)causativity, independently of specific verb lemmas. The results reported so far are all averaged over all verbs in a dataset, illustrating that some models are more successful on the classification task than others (e.g., BERT achieving higher accuracy scores than the other models on the first two datasets). Additionally, it is also worth exploring the accuracy of the classifiers for individual verbs, particularly those that are most likely to be mislabeled by any of the classifiers. Table 4 reports the two most-mislabeled verbs of each type per dataset (across all models). Notably, the XLNet classifier consistently makes more mistakes with noncausative instances than with causative ones, as is also apparent from Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 19, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 743, "end": 750, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 1659, "end": 1666, "text": "Table 4", "ref_id": "TABREF7" }, { "start": 1893, "end": 1900, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Broadly, the frequently mislabeled verbs fall into three categories: 1. presumed errors due to parsing mistakes and subsequent errors in the gold data; 2. errors due to incorrect labels of ambiguous verbs in the gold data; 3. errors due to an ambiguity between full verb, light verb, and auxiliary verb.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Presumed errors due to parsing mistakes and subsequent errors in the gold data Most of the frequently mislabeled verbs in D tr 5 fall into this category. These verbs occur only a few times each, indicating that they do not represent a deeper structural issue with the classifiers; for instance, sentences with the root verb mark occasionally appear incomplete in ENCOW, as exemplified in (6).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "(6) the symptoms marked gr . (ENCOW-02-23709973)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "The verb sound is labeled as a causative verb in our gold data (e.g., \"to sound the bells\"), but often appears in another word sense, as exemplified in (7-a). In these sentences, the verb does not have a direct object as expected; the reason for their inclusion in our datasets is an incorrect dependency parse in ENCOW. In other words, the causative gold label is an artefact of the incorrect parse. (7) a. that sounds so scary !!! (ENCOW-05-11095175) b. 
you mean screw justice ? (ENCOW-14-01839826)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "[Figure 2 panels: (a) D tr 5 BERT (b) D tr BERT (c) D all BERT (d) D tr 5 XLNet (e) D tr XLNet (f) D all XLNet (g) D tr 5 DistilBERT (h) D tr DistilBERT (i) D all DistilBERT]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "D all also contains incorrect gold labels that are to a large extent due to parsing errors, for instance bring. All sentences included in (8) were parsed as having bring as their root verb. That the classifiers tended to assign a noncausative label to these sentences suggests that they instead assigned labels for take for granted, love, or be, respectively (which is actually correct). In future work, we will improve our datasets to minimize the number of errors of this type, using a more recent dependency parser and some manual checking.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Errors due to incorrect gold labels of ambiguous verbs In D tr , face is the most mislabeled causative verb. The presumed causative label for this verb comes from the VN class confront-98, which contains verbs such as target or combat. However, the mislabeled examples from the dataset seem to evoke a weaker, more passive sense of face, as in (9-a), where human annotators might not assign a causative label. In these cases, the label assigned by the classifier is actually correct, while the gold label is not. The mislabeled instances of cover in D all are, similarly to face, an artefact of verb polysemy and should in fact not be regarded as causative sentences, as exemplified in (9-b). b. the manual that comes with the game covers everything you need to know , including the mission editor . (ENCOW-08-06019647)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Sentences with the verb represent are frequently labeled as causative by one or more of the classifiers. When the verb is used in a legal or political sense, as in (10), this may in fact be appropriate. Since our verb sets are labeled on the lemma level and we do not perform any word sense disambiguation, these differences are not explicitly marked in our datasets, so these sentences are counted as mislabeled instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "(10) they represent the voice of over 80,000 students and 62,000 members in 155 countries . (ENCOW-09-01862399) In D tr , all classifiers occasionally label instances of noncausative leave as causative, particularly XLNet. leave is a member of the VN classes become-109.1-1-1, escape-51.1-1-1, fulfilling-13.4.1, future having-13.3, keep-15.2, and others. While not all of these classes license basic intransitive sentences of the type included in our datasets, this illustrates the polysemy of leave, which might be an explanation for the relatively high number of mislabeled instances in our experiments.", "cite_spans": [ { "start": 92, "end": 111, "text": "(ENCOW-09-01862399)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Generally, in D all , noncausative alternating verbs are among the most mislabeled verbs. 
Since the dataset contains different numbers of verbs of each type, this may be a sparsity effect more than an effect of these verbs being more difficult to label. This question will be approached with new datasets in future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "The reason for most errors of this type is that our datasets were created automatically with the help of a lexical resource. In order to avoid such polysemy issues, a version of the datasets with human annotations would be necessary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Errors due to an ambiguity between full verb, light verb, and auxiliary verb Finally, the verbs have and be are the most mislabeled nonalternating noncausative verbs in D all . These verbs appear in light verb constructions, as auxiliary verbs, and in a range of word senses that can be causative or noncausative. The examples in (11) illustrate why the classifiers struggle to label such sentences as noncausative. Note that in all cases, the MaltParser annotations provided alongside ENCOW mark a form of have as the root verb.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "(11) a. hi we have just moved house and the house has no tv aerial . (ENCOW-11-17855426) b. we had a small cup made up not long ago with a very simple design . (ENCOW-06-00570494) c. local people have the power to stop this by not buying counterfeit products . (ENCOW-08-19775040) ENCOW was parsed between 2015 and 2018 using the standard engmalt model available on the MaltParser website (Roland Sch\u00e4fer, p.c.). This type of error would be minimized if a more recent dependency parser were used.", "cite_spans": [ { "start": 261, "end": 280, "text": "(ENCOW-08-19775040)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "To summarize, many of the \"errors\" of the classifiers are actually not errors but incorrect labels in the gold data. This means that the classifiers might be better at predicting causativity than assessed by our evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "We set up a series of classification experiments with a range of datasets to determine whether large language models learn implicit representations of causativity, a linguistic property that is not necessarily represented syntactically or morphologically in English. We compare classifiers based on BERT, DistilBERT, and XLNet, and find that all learn to predict causativity to a large extent. Differences in classification accuracy are observed across different datasets (see Table 1 ). As expected, all models achieve the highest accuracy on D tr 5 and the lowest accuracy on D all . 
The latter set, in addition to verbs that are lexically causative or lexically noncausative, also includes verbs that participate in the causative-inchoative alternation, which presents an additional challenge to the classifiers.", "cite_spans": [], "ref_spans": [ { "start": 477, "end": 484, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "We also show that causativity is represented rather in the higher layers of the models and, furthermore, that reducing each model to only the 10% of its neurons that are most correlated with the causativity property only leads to small differences in accuracy, sometimes an increase in accuracy due to the elimination of non-discriminative features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Our error analysis suggests that many of the classification errors are actually labeling errors in the data, due either to a wrong parse of the sentence in our source corpus ENCOW or to the polysemy of verbs that can be causative in certain readings but are not causative in some of the readings mislabeled in the dataset. Put differently, the classifiers were probably better in identifying causativity than their accuracy scores suggest. While our datasets were created with little manual effort and already led to good results, we are planning on pursuing possible improvements in the future in order to avoid these labeling errors as far as possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" } ], "back_matter": [ { "text": "The work presented in this paper was partly financed by the Deutsche Forschungsgemeinschaft (DFG) within the project \"Unsupervised Frame Induction (FInd)\". We wish to thank three anonymous reviewers for their constructive feedback and helpful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "German", "middle": [], "last": "Kruszewski", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2126--2136", "other_ids": { "DOI": [ "10.18653/v1/P18-1198" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, German Kruszewski, Guillaume Lam- ple, Lo\u00efc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2126-2136, Melbourne, Aus- tralia. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "What is one grain of sand in the desert? 
analyzing individual neurons in deep nlp models", "authors": [ { "first": "Fahim", "middle": [], "last": "Dalvi", "suffix": "" }, { "first": "Nadir", "middle": [], "last": "Durrani", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "D", "middle": [ "Anthony" ], "last": "Bau", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, D. Anthony Bau, and James Glass. 2019a. What is one grain of sand in the desert? analyzing individual neurons in deep nlp models. In Proceed- ings of the Thirty-Third AAAI Conference on Artifi- cial Intelligence (AAAI, Oral presentation).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Neurox: A toolkit for analyzing individual neurons in neural networks", "authors": [ { "first": "Fahim", "middle": [], "last": "Dalvi", "suffix": "" }, { "first": "Avery", "middle": [], "last": "Nortonsmith", "suffix": "" }, { "first": "D", "middle": [ "Anthony" ], "last": "Bau", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "" }, { "first": "Nadir", "middle": [], "last": "Durrani", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" } ], "year": 2019, "venue": "AAAI Conference on Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "9851--9852", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fahim Dalvi, Avery Nortonsmith, D. Anthony Bau, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, and James Glass. 2019b. Neurox: A toolkit for an- alyzing individual neurons in neural networks. In AAAI Conference on Artificial Intelligence (AAAI), pages 9851-9852.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Analyzing redundancy in pretrained transformer models", "authors": [ { "first": "Fahim", "middle": [], "last": "Dalvi", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "" }, { "first": "Nadir", "middle": [], "last": "Durrani", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "4908--4926", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.398" ] }, "num": null, "urls": [], "raw_text": "Fahim Dalvi, Hassan Sajjad, Nadir Durrani, and Yonatan Belinkov. 2020. Analyzing redundancy in pretrained transformer models. In Proceedings of the 2020 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 4908- 4926, Online. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Word Meaning and Montague Grammar", "authors": [ { "first": "David", "middle": [ "R" ], "last": "Dowty", "suffix": "" } ], "year": 1979, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1007/978-94-009-9473-7" ] }, "num": null, "urls": [], "raw_text": "David R. Dowty. 1979. Word Meaning and Montague Grammar. Springer Netherlands.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Analyzing individual neurons in pre-trained language models", "authors": [ { "first": "Nadir", "middle": [], "last": "Durrani", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "" }, { "first": "Fahim", "middle": [], "last": "Dalvi", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "4865--4880", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.395" ] }, "num": null, "urls": [], "raw_text": "Nadir Durrani, Hassan Sajjad, Fahim Dalvi, and Yonatan Belinkov. 2020. Analyzing individual neu- rons in pre-trained language models. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4865-4880, Online. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Do RNNs learn human-like abstract word order preferences?", "authors": [ { "first": "Richard", "middle": [], "last": "Futrell", "suffix": "" }, { "first": "Roger", "middle": [ "P" ], "last": "Levy", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Society for Computation in Linguistics (SCiL) 2019", "volume": "", "issue": "", "pages": "50--59", "other_ids": { "DOI": [ "10.7275/jb34-9986" ] }, "num": null, "urls": [], "raw_text": "Richard Futrell and Roger P. Levy. 2019. Do RNNs learn human-like abstract word order preferences? 
In Proceedings of the Society for Computation in Linguistics (SCiL) 2019, pages 50-59.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A structural probe for finding syntax in word representations", "authors": [ { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4129--4138", "other_ids": { "DOI": [ "10.18653/v1/N19-1419" ] }, "num": null, "urls": [], "raw_text": "John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word repre- sentations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129-4138, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A systematic assessment of syntactic generalization in neural language models", "authors": [ { "first": "Jennifer", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Jon", "middle": [], "last": "Gauthier", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Qian", "suffix": "" }, { "first": "Ethan", "middle": [], "last": "Wilcox", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1725--1744", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.158" ] }, "num": null, "urls": [], "raw_text": "Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, and Roger Levy. 2020. A systematic assessment of syntactic generalization in neural language mod- els. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1725-1744, Online. Association for Compu- tational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure", "authors": [ { "first": "Dieuwke", "middle": [], "last": "Hupkes", "suffix": "" }, { "first": "Sara", "middle": [], "last": "Veldhoen", "suffix": "" }, { "first": "Willem", "middle": [], "last": "Zuidema", "suffix": "" } ], "year": 2018, "venue": "Journal of Artificial Intelligence Research", "volume": "61", "issue": "", "pages": "907--926", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and 'diagnostic classifiers' re- veal how recurrent and recursive neural networks process hierarchical structure. 
Journal of Artificial Intelligence Research, 61:907-926.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Class-Based Construction of a Verb Lexicon", "authors": [ { "first": "Karin", "middle": [], "last": "Kipper", "suffix": "" }, { "first": "Hoa", "middle": [ "Trang" ], "last": "Dang", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Seventeenth National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "691--696", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karin Kipper, Hoa Trang Dang, and Martha Palmer. 2000. Class-Based Construction of a Verb Lexicon. In Proceedings of the Seventeenth National Confer- ence on Artificial Intelligence, page 691-696.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Revealing the dark secrets of BERT", "authors": [ { "first": "Olga", "middle": [], "last": "Kovaleva", "suffix": "" }, { "first": "Alexey", "middle": [], "last": "Romanov", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rumshisky", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "4364--4373", "other_ids": { "DOI": [ "10.18653/v1/D19-1445" ] }, "num": null, "urls": [], "raw_text": "Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark secrets of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 4364-4373, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Open sesame: Getting inside BERT's linguistic knowledge", "authors": [ { "first": "Yongjie", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Chern Tan", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Frank", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "241--253", "other_ids": { "DOI": [ "10.18653/v1/W19-4825" ] }, "num": null, "urls": [], "raw_text": "Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open sesame: Getting inside BERT's linguistic knowledge. In Proceedings of the 2019 ACL Work- shop BlackboxNLP: Analyzing and Interpreting Neu- ral Networks for NLP, pages 241-253, Florence, Italy. 
Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Linguistic knowledge and transferability of contextual representations", "authors": [ { "first": "Nelson", "middle": [ "F" ], "last": "Liu", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1073--1094", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019a. Lin- guistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 1073-1094, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Mukund Yelahanka Raghuprasad, and Smaranda Muresan. 2019b. Columbia at SemEval", "authors": [ { "first": "Zhuoran", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Shivali", "middle": [], "last": "Goel", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/S19-2194" ] }, "num": null, "urls": [], "raw_text": "Zhuoran Liu, Shivali Goel, Mukund Yela- hanka Raghuprasad, and Smaranda Muresan. 2019b. Columbia at SemEval-2019 task 7:", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Multi-task learning for stance classification and rumour verification", "authors": [], "year": null, "venue": "Proceedings of the 13th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "1110--1114", "other_ids": { "DOI": [ "10.18653/v1/S19-2194" ] }, "num": null, "urls": [], "raw_text": "Multi-task learning for stance classification and rumour verification. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 1110-1114, Minneapolis, Minnesota, USA. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Does syntax need to grow on trees? sources of hierarchical inductive bias in sequence-to-sequence networks", "authors": [ { "first": "R", "middle": [], "last": "", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Mccoy", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2020, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "125--140", "other_ids": { "DOI": [ "10.1162/tacl_a_00304" ] }, "num": null, "urls": [], "raw_text": "R. Thomas McCoy, Robert Frank, and Tal Linzen. 2020. Does syntax need to grow on trees? sources of hierarchical inductive bias in sequence-to-sequence networks. 
Transactions of the Association for Computational Linguistics, 8:125-140.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Distributed Representations of Words and Phrases and their Compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed Representations of Words and Phrases and their Compositionality. In Advances in Neural Information Processing Systems, pages 3111-3119. Curran Associates, Inc.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2227--2237", "other_ids": { "DOI": [ "10.18653/v1/N18-1202" ] }, "num": null, "urls": [], "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter", "authors": [ { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019.
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Building large corpora from the web using a new efficient tool chain", "authors": [ { "first": "Roland", "middle": [], "last": "Sch\u00e4fer", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Bildhauer", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)", "volume": "", "issue": "", "pages": "486--493", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roland Sch\u00e4fer and Felix Bildhauer. 2012. Building large corpora from the web using a new efficient tool chain. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 486-493, Istanbul, Turkey. European Language Resources Association (ELRA).", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Processing and querying large web corpora with the COW14 architecture", "authors": [ { "first": "Roland", "middle": [], "last": "Sch\u00e4fer", "suffix": "" } ], "year": 2015, "venue": "Proceedings of Challenges in the Management of Large Corpora", "volume": "3", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roland Sch\u00e4fer. 2015. Processing and querying large web corpora with the COW14 architecture. In Proceedings of Challenges in the Management of Large Corpora 3 (CMLC-3), Lancaster. UCREL, IDS.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Perelygin", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1631--1642", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "BERT rediscovers the classical NLP pipeline", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4593--4601", "other_ids": { "DOI": [ "10.18653/v1/P19-1452" ] }, "num": null, "urls": [], "raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593-4601, Florence, Italy.
Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "What do you learn from context? Probing for sentence structure in contextualized word representations", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Berlin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Mccoy", "suffix": "" }, { "first": "Najoung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? Probing for sentence structure in contextualized word representations. In International Conference on Learning Representations.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned", "authors": [ { "first": "Elena", "middle": [], "last": "Voita", "suffix": "" }, { "first": "David", "middle": [], "last": "Talbot", "suffix": "" }, { "first": "Fedor", "middle": [], "last": "Moiseev", "suffix": "" }, { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P19-1580" ] }, "num": null, "urls": [], "raw_text": "Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "BLiMP: The benchmark of linguistic minimal pairs for English", "authors": [ { "first": "Alex", "middle": [], "last": "Warstadt", "suffix": "" }, { "first": "Alicia", "middle": [], "last": "Parrish", "suffix": "" }, { "first": "Haokun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Anhad", "middle": [], "last": "Mohananey", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Sheng-Fu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": 2020, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "377--392", "other_ids": { "DOI": [ "10.1162/tacl_a_00321" ] }, "num": null, "urls": [], "raw_text": "Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2020. BLiMP: The benchmark of linguistic minimal pairs for English.
Transactions of the Association for Computational Linguistics, 8:377-392.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Hierarchical representation in neural language models: Suppression and recovery of expectations", "authors": [ { "first": "Ethan", "middle": [], "last": "Wilcox", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Futrell", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "181--190", "other_ids": { "DOI": [ "10.18653/v1/W19-4819" ] }, "num": null, "urls": [], "raw_text": "Ethan Wilcox, Roger Levy, and Richard Futrell. 2019. Hierarchical representation in neural language models: Suppression and recovery of expectations. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 181-190, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "XLNet: Generalized autoregressive pretraining for language understanding", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "authors": [ { "first": "Yukun", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "Rich", "middle": [], "last": "Zemel", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Raquel", "middle": [], "last": "Urtasun", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Torralba", "suffix": "" }, { "first": "Sanja", "middle": [], "last": "Fidler", "suffix": "" } ], "year": 2015, "venue": "The IEEE International Conference on Computer Vision (ICCV)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books.
In The IEEE International Conference on Computer Vision (ICCV).", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Regularization and variable selection via the elastic net", "authors": [ { "first": "Hui", "middle": [], "last": "Zou", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Hastie", "suffix": "" } ], "year": 2005, "venue": "Journal of the Royal Statistical Society, Series B", "volume": "67", "issue": "", "pages": "301--320", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hui Zou and Trevor Hastie. 2005. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, Series B, 67:301-320.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "(caus) b. I envy you in that respect ! (noncaus) Intransitive and transitive sentences, varying length. The third set (D_all) is based on the verb set that includes verbs in the causative-inchoative alternation. Sentences in D_all are either transitive or intransitive and have a length between 5 and 20 tokens. Again, each portion contains an equal number of causative and noncausative instances, consisting of verbs of all three types (alternating, always causative, always noncausative). Examples are given in (5); note that (5-e) and (5-f) share the same alternating root verb. (5) a. I bring a book ! (caus) b. Everything about them intimidates . (caus) c. Each layer had its own opacity . (noncaus) d. A total of 24 people attended . (noncaus) e. He opened the pack .", "num": null }, "FIGREF1": { "uris": null, "type_str": "figure", "text": "Figure 1: Layer-wise results. X-axis = Layer number, Y-axis = Classification accuracy. Panels: (a) D_tr5-BERT, (b) D_tr-BERT, (c) D_all-BERT, (d) D_tr5-XLNet, (e) D_tr-XLNet, (f) D_all-XLNet, (g) D_tr5-DistilBERT, (h) D_tr-DistilBERT, (i) D_all-DistilBERT. [...] achieved the best accuracy for causative verbs in almost all experiments, while DistilBERT often performed better on noncausative verbs.", "num": null }, "FIGREF2": { "uris": null, "type_str": "figure", "text": "How top neurons spread across different layers for each causativity dataset. X-axis = Layer number, Y-axis = Number of neurons selected from that layer. [...] gold label is assigned by mistake. A similar case is mean; as with sound, many instances do not involve a direct object at all, as exemplified in (7-b), but are included because of an incorrect parse.", "num": null }, "FIGREF3": { "uris": null, "type_str": "figure", "text": "(8) a. people take for granted what tax money brings . (ENCOW-11-16881058) b. knowledge is power , and what americans really love is the power knowledge brings . (ENCOW-13-11898010) c. sugar is a barrow boy with all that epithet brings . (ENCOW-10-21805613)", "num": null }, "TABREF1": { "text": "Model-level results (accuracy) using all neurons for classification", "type_str": "table", "html": null, "content": "
", "num": null }, "TABREF3": { "text": "Selecting minimal number of neurons. Neu_a = Total number of neurons, Neu_t = Top selected neurons, Acc_t = Accuracy after retraining the classifier using only selected neurons.", "type_str": "table", "html": null, "content": "
", "num": null }, "TABREF5": { "text": "Accuracy per verb type and data set in all settings. D tr 5 , D tr and D all each contain an equal number of caus(ative) and noncaus(ative) instances.", "type_str": "table", "html": null, "content": "
", "num": null }, "TABREF7": { "text": "Most mislabeled verbs in all settings. Each cell states the number of instances with the given verb with an incorrect label, giving the absolute number followed by the percentage of all instances with this verb.", "type_str": "table", "html": null, "content": "
", "num": null } } } }