{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:24:25.517220Z" }, "title": "Breeding Fillmore's Chickens and Hatching the Eggs: Recombining Frames and Roles in Frame-Semantic Parsing", "authors": [ { "first": "Gosse", "middle": [], "last": "Minnema", "suffix": "", "affiliation": { "laboratory": "", "institution": "Cognition University of Groningen", "location": { "country": "The Netherlands" } }, "email": "g.f.minnema@rug.nl" }, { "first": "Malvina", "middle": [], "last": "Nissim", "suffix": "", "affiliation": { "laboratory": "", "institution": "Cognition University of Groningen", "location": { "country": "The Netherlands" } }, "email": "m.nissim@rug.nl" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Frame-semantic parsers traditionally predict predicates, frames, and semantic roles in a fixed order. This paper explores the 'chicken-or-egg' problem of interdependencies between these components theoretically and practically. We introduce a flexible BERT-based sequence labeling architecture that allows for predicting frames and roles independently from each other or combining them in several ways. Our results show that our setups can approximate more complex traditional models' performance, while allowing for a clearer view of the interdependencies between the pipeline's components, and of how frame and role prediction models make different use of BERT's layers.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Frame-semantic parsers traditionally predict predicates, frames, and semantic roles in a fixed order. This paper explores the 'chicken-or-egg' problem of interdependencies between these components theoretically and practically. We introduce a flexible BERT-based sequence labeling architecture that allows for predicting frames and roles independently from each other or combining them in several ways. 
Our results show that our setups can approximate more complex traditional models' performance, while allowing for a clearer view of the interdependencies between the pipeline's components, and of how frame and role prediction models make different use of BERT's layers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "FrameNet (Baker et al., 2003) is a computational framework implementing the theory of frame semantics (Fillmore, 2006) . At its core is the notion of linguistic frames, which are used both for classifying word senses and defining semantic roles. For example, in (1), \"bought\" is said to evoke the COMMERCE BUY frame, and \"Chuck\", \"some eggs\", and \"yesterday\" instantiate its associated roles.", "cite_spans": [ { "start": 9, "end": 29, "text": "(Baker et al., 2003)", "ref_id": "BIBREF2" }, { "start": 102, "end": 118, "text": "(Fillmore, 2006)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) In NLP, frame-semantic parsing is the task of automatically analyzing sentences in terms of FrameNet frames and roles. It is a form of semantic role labeling (SRL) which defines semantic roles (called frame elements) relative to frames (Gildea and Jurafsky, 2002) . Canonically (Baker et al., 2007) , frame-semantic parsing has been split up into a three-component pipeline: targetID (find frame-evoking predicates), then frameID (map each predicate to a frame), and lastly argID (given a predicate-frame pair, find and label its arguments). Some recent systems, such as the LSTM-based Open-SESAME (Swayamdipta et al., 2017) or the classical-statistical SEMAFOR, implement the full pipeline, but with a strong focus specifically on argID. 
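As a sketch, the canonical three-stage pipeline described above can be written as three chained functions. This is our illustration, not any existing system's code: the lexicon, role inventory, and the trivial argument-labeling heuristic are toy stand-ins for real FrameNet data and a real argID model.

```python
# Sketch of the canonical pipeline: targetID -> frameID -> argID.
# TOY_LEXICON and TOY_ROLES are hypothetical stand-ins for FrameNet.

TOY_LEXICON = {"bought": "COMMERCE_BUY"}
TOY_ROLES = {"COMMERCE_BUY": ["Buyer", "Goods", "Time"]}

def target_id(tokens):
    """Stage 1: find frame-evoking predicates (token indices)."""
    return [i for i, t in enumerate(tokens) if t.lower() in TOY_LEXICON]

def frame_id(tokens, target_idx):
    """Stage 2: map each predicate to a frame."""
    return TOY_LEXICON[tokens[target_idx].lower()]

def arg_id(tokens, target_idx, frame):
    """Stage 3: label arguments; a trivial placeholder that assigns the
    frame's first role to the token left of the target."""
    roles = {}
    if target_idx > 0:
        roles[tokens[target_idx - 1]] = TOY_ROLES[frame][0]
    return roles

def parse(tokens):
    """Run the full pipeline: each stage depends on the previous one."""
    analyses = []
    for i in target_id(tokens):
        frame = frame_id(tokens, i)
        analyses.append((tokens[i], frame, arg_id(tokens, i, frame)))
    return analyses
```

Note how the fixed order is baked in: argID cannot run without a frame, and frameID cannot run without a target, which is exactly the rigidity questioned in this paper.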
Other models implement some subset of the components (Tan, 2007; Hartmann et al., 2017; Yang and Mitchell, 2017; Peng et al., 2018) , while still implicitly adopting the pipeline's philosophy. 1 However, little focus has been given to frame-semantic parsing as an end-to-end task, which entails not only implementing the separate components of the pipeline, but also looking at their interdependencies. We highlight such interdependencies from a theoretical perspective, and investigate them empirically. Specifically, we propose a BERT-based (Devlin et al., 2019) sequence labeling system that allows for exploring frame and role prediction independently, sequentially, or jointly. Our results (i) suggest that the traditional pipeline is meaningful but only one of several viable approaches to end-to-end SRL, (ii) highlight the importance of the frameID component, and (iii) show that, despite their interdependence, frame and role prediction need different kinds of linguistic information.", "cite_spans": [ { "start": 240, "end": 267, "text": "(Gildea and Jurafsky, 2002)", "ref_id": "BIBREF9" }, { "start": 282, "end": 302, "text": "(Baker et al., 2007)", "ref_id": "BIBREF1" }, { "start": 605, "end": 631, "text": "(Swayamdipta et al., 2017)", "ref_id": "BIBREF19" }, { "start": 800, "end": 811, "text": "(Tan, 2007;", "ref_id": "BIBREF20" }, { "start": 812, "end": 834, "text": "Hartmann et al., 2017;", "ref_id": "BIBREF10" }, { "start": 835, "end": 859, "text": "Yang and Mitchell, 2017;", "ref_id": "BIBREF23" }, { "start": 860, "end": 878, "text": "Peng et al., 2018)", "ref_id": "BIBREF16" }, { "start": 1290, "end": 1311, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" }, { "start": 1618, "end": 1623, "text": "(iii)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "COMMERCE", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Contributions The main 
contributions of this paper are the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We identify theoretical and practical challenges in the traditional FrameNet SRL pipeline ( \u00a72);", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We introduce a flexible, BERT-based sequence-labeling architecture, and experiment with predicting parts of the pipeline separately ( \u00a73);", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We explore four methods for re-composing an end-to-end system ( \u00a74);", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Through two evaluation metrics, we empirically show the relative contribution of the single components and their reciprocal impact ( \u00a75-6).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "All of our source code and instructions for how to reproduce the experiments are publicly available at https://gitlab.com/gosseminnema/bert-for-framenet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "According to Fillmore (2006) , an essential feature of a frame is that it is \"any system of concepts related in such a way that to understand any one of them you have to understand the whole structure in which it fits.\" In particular, linguistic frames are systems of semantic roles, possible predicates, and other semantic information. 
In this section, we discuss the relationship between these concepts in the context of frame-semantic parsing and highlight interdependencies between the various components.", "cite_spans": [ { "start": 13, "end": 28, "text": "Fillmore (2006)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "On pipelines, chickens, and eggs", "sec_num": "2" }, { "text": "The following artificial examples display some of the challenges that frame-semantic parsers face:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Challenges for parsers", "sec_num": "2.1" }, { "text": "(2) SELF MOTION: [Self mover Angela] ran [Goal to school]. Each example contains \"ran\", but used in a different frame. In (2) and (3), the predicate is the verb \"run\", but used in two different senses (running of a person vs. running of a liquid), corresponding to two different frames. Here, the main parsing challenge is resolving this ambiguity and choosing the correct frame (frameID). By contrast, in (4), the predicate is \"run out\". This complex verb is not ambiguous, so the main challenge in this sentence would be targetID (i.e. identifying that the target consists of the two tokens \"ran\" and \"out\"). Similarly, in (5), \"run\" is used in a sense not listed in FrameNet, so the challenge here is to make sure nothing is tagged at all.", "cite_spans": [ { "start": 9, "end": 15, "text": "MOTION", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Challenges for parsers", "sec_num": "2.1" }, { "text": "The roles-make-the-frame problem In (2-3), given the target (\"ran\"), the task is to find the correct frame and its corresponding roles. In the traditional pipeline, we would do this by first predicting a frame, and then labeling the dependents of \"ran\" with roles from this frame. However, the question is what kinds of patterns a frame-finding model needs to learn in order to be successful. 
It is clearly not sufficient to learn a one-to-one mapping between word forms and frames, not just because of known ambiguous cases (\"Angela runs\" vs. \"a tear runs\"), but also because of gaps in FrameNet that conceal unknown ambiguities, such as in (5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Challenges for parsers", "sec_num": "2.1" }, { "text": "To distinguish between \"ran\" in (2) and (3), a model has to take into account the sentential context in some way, which is exactly what LSTM-based models or BERT-based models can do. But what kind of contextual information exactly do we need? SELF MOTION and FLUIDIC MOTION have a very similar syntax and semantics, the crucial difference being the semantic category of the \"mover\". Concretely, this means that in (2-3), we would benefit from recognizing that \"Angela\" denotes an animate entity while \"a tear\" denotes a fluid. Doing so would amount to doing partial semantic role labeling, since we are looking at the predicate's syntactic arguments and their semantic properties, which is exactly the information an argID model needs to tag \"Angela\" with \"Self mover\" and \"a tear\" with \"Fluid\". While it is possible to use contextual information without knowledge of dependency structure (perhaps simple co-occurrence is enough), we hypothesize that such knowledge would be helpful, and thus, that doing frameID and argID simultaneously, or even predicting frameID after argID, could be beneficial.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Challenges for parsers", "sec_num": "2.1" }, { "text": "The frames-make-the-targets problem In the literature, targetID has received even less attention than frameID -all models we are aware of use gold targetID inputs -but is crucial to the success of any end-to-end model. 
Theoretically speaking, the targetID problem is less interesting than frameID: since almost any content word can evoke a frame, assuming a fully complete FrameNet (containing all possible predicates), doing targetID would amount to a (simplified) POS-tagging task where content words are labeled as \"yes\", and (most) function words as \"no\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Challenges for parsers", "sec_num": "2.1" }, { "text": "However, in practice, FrameNet is far from complete, so that doing targetID means identifying all wordforms that correspond to some predicate evoking a frame present in FrameNet, making targetID dependent on frameID. 3 For example, to find the target in (2-3), it would suffice to lemmatize \"ran\" to \"run\", and check if \"run\" is listed under any FrameNet frames. But this strategy would fail in (4-5): in those cases, 'ran' is not the full target, but either only a part of it (4), or not at all (5). In order to predict this, we would need to recognize that \"run out\" is part of the EXPEND RESOURCE frame, and that \"run someone somewhere\" is a different sense of \"run\" that does not match either FLUIDIC MOTION or SELF MOTION. Hence, targetID seems to presuppose (partial) frameID in some cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Challenges for parsers", "sec_num": "2.1" }, { "text": "The type of problem that we identified in this section is not unique to frame-semantic parsing but also occurs in the standard NLP pipeline of tokenization, POS-tagging, lemmatization, etc. For example, for POS-tagging \"run\" as either a verb or a noun (as in \"we run\" vs. \"a long run\"), one (theoretically speaking) needs access to dependency information (i.e. is there a subject, adjectival modification, etc.). Conversely, dependency parsing benefits from access to POS tags. 
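The lemma-lookup strategy for targetID discussed above, and its failure mode on multiword targets, can be made concrete with a small sketch. The lemma table and predicate list below are toy stand-ins of our own, not real FrameNet data.

```python
# Naive targetID by lemma lookup: lemmatize each token and check
# whether the lemma is listed as a (single-word) FrameNet predicate.
# TOY_LEMMAS and TOY_PREDICATES are hypothetical stand-ins.

TOY_LEMMAS = {"ran": "run", "runs": "run"}
TOY_PREDICATES = {"run", "buy"}  # single-word predicates only

def naive_target_id(tokens):
    targets = []
    for i, tok in enumerate(tokens):
        lemma = TOY_LEMMAS.get(tok.lower(), tok.lower())
        if lemma in TOY_PREDICATES:
            targets.append((i, i))  # single-token span (start, end)
    return targets
```

For "Angela ran to school" this finds the correct target, but for "we ran out of eggs" it still returns the span for "ran" alone, missing the multiword target "ran out": recognizing that span requires frame knowledge, which is exactly the dependency of targetID on frameID argued above.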
This would imply that a traditional pipeline might need a lot of redundancy; e.g., a perfect POS-tagging model would also learn some dependency parsing. For (amongst others) this reason, the problem of pipelines versus joint prediction has been extensively studied in NLP in general and SRL in particular. For example, Toutanova et al. (2005) found that predicting all PropBank semantic roles together produced better results than predicting each role separately, Finkel and Manning (2009) proposed a joint model for syntactic parsing and named entity recognition as an alternative to separate prediction or a pipeline-based approach, and He et al. (2018) proposed predicting PropBank predicates and semantic roles together instead of sequentially. However, as far as we are aware, no work so far has systematically addressed the frame-semantic parsing pipeline and the possible ways for arranging its different components.", "cite_spans": [ { "start": 796, "end": 819, "text": "Toutanova et al. (2005)", "ref_id": "BIBREF22" }, { "start": 941, "end": 966, "text": "Finkel and Manning (2009)", "ref_id": "BIBREF7" }, { "start": 1115, "end": 1131, "text": "He et al. (2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Pipelines: NLP vs. SRL", "sec_num": "2.2" }, { "text": "In modern NLP, traditional pipelines have largely been replaced by neural models performing several tasks at once. However, a line of work initiated by Tenney et al. (2019) ; Jawahar et al. (2019) shows that neural models like BERT implicitly learn to reproduce the classical NLP pipeline, with different layers specializing in specific components of the pipeline, and the possibility for later layers to dynamically resolve ambiguities found in earlier layers. For the BERT-based models we propose, we study the relationship between different layers and the traditional FrameNet pipeline (cf. \u00a76.2).", "cite_spans": [ { "start": 152, "end": 172, "text": "Tenney et al. 
(2019)", "ref_id": "BIBREF21" }, { "start": 175, "end": 196, "text": "Jawahar et al. (2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Pipelines: NLP vs. SRL", "sec_num": "2.2" }, { "text": "We argued that the different components of the frame-semantic parsing task are mutually dependent on each other. Here, we take a more practical view and re-define the parsing problem in a way that allows for experimenting with individual parts of the pipeline and different combinations of them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dissecting the pipeline", "sec_num": "3" }, { "text": "For our purposes, a crucial limitation of existing frame-semantic parsing models is that they are relatively complex and not very flexible: the different components have to be executed in a fixed order and depend on each other in a fixed way, leaving no room for experimenting with different orders or alternative ways to combine the components. By contrast, we propose a maximally flexible architecture by redefining frame-semantic parsing as a sequence labeling task: given a tokenized sentence S = t_1, . . . , t_n, we predict a frame label sequence FL = l_1, . . . , l_n, where every l_i \u2208 (FID \u222a {\u2205}) \u00d7 2^AID is a pair of zero or one frame labels in FID = {F_Abandonment, . . . , F_Worry} and zero or more role labels in AID = {A_Abandonment@Agent, . . . , A_Worry@Result}. Note that there can be more than one frame in every sentence, and the spans of different roles can overlap. 
This is illustrated in Figure 1 : Boris has two role labels, each of which is associated with a different frame (Self mover belongs with SELF MOTION, while Sound source belongs to MAKE NOISE).", "cite_spans": [], "ref_spans": [ { "start": 921, "end": 929, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Strip the parser: just sequence labels", "sec_num": "3.1" }, { "text": "This problem definition comprises several simplifications. First of all, we integrate targetID and frameID into a single component. Moreover, we 'flatten' the role labels, discarding predicate-role dependency information, and assume that most of this information can be recovered during postprocessing (see \u00a75.2). We further simplify the role labels by removing frame names from argument labels, as in AID = {A_Agent, . . . , A_Result}. While this complicates recovering structural information, it also greatly condenses the label space and might improve generalization across frames: many frames share roles with identical names (e.g., Time, Location, or Agent), which we assume are likely to share at least some semantic properties. It should be noted that this assumption is not trivial, given that there is a long and controversial literature on the generalizability of semantic (proto-)roles (Reisinger et al., 2015) ; we will make it here nonetheless, especially since initial experiments on the development set showed a clear advantage of removing frame names from argument labels.", "cite_spans": [ { "start": 898, "end": 922, "text": "(Reisinger et al., 2015)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Strip the parser: just sequence labels", "sec_num": "3.1" }, { "text": "We implement our architecture using a BERT-based sequence labeler: given a sentence, we tokenize it into byte-pairs, compute BERT embeddings for every token, feed these (one-by-one) to a simple feed-forward neural network, and predict a label representation. 
By having BERT handle all preprocessing, we avoid making design choices (e.g. POS-tagging, dependency parsing) that can have a large impact on performance (cf. Kabbach et al., 2018) , and make our approach easier to adapt to other languages and datasets.", "cite_spans": [ { "start": 418, "end": 439, "text": "Kabbach et al., 2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Strip the parser: just sequence labels", "sec_num": "3.1" }, { "text": "Having maximally 'stripped down' the architecture of our parsing model, we can now define the two most basic tasks: frame prediction (equivalent to targetID plus frameID in the traditional pipeline) and role prediction (equivalent to argID, but without needing frames as input). We can then perform the tasks separately, but also jointly, or combine them in any desired way. FRAMESONLY The first basic task is predicting, given a token, whether this token 'evokes' a FrameNet frame, and if so, which one. We experiment with two label representation settings: Sparse, which represents each frame (and the empty symbol) as a one-hot vector, and Embedding, which defines dense embeddings for frames. The embedding of a frame F is defined as the centroid of the embeddings of all predicates in F, which in turn are taken from a pre-trained GloVe model (Pennington et al., 2014) . 4 This is very similar to the approach taken by Alhoshan et al. (2019). Prediction is done by regressing to the frame embedding space, and selecting the frame with the smallest cosine distance to the predicted embedding. 
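The Embedding representation and its nearest-frame decoding can be sketched as follows. The word vectors, frame inventory, and threshold value here are tiny hand-made stand-ins (the paper uses pre-trained GloVe vectors and the full FrameNet frame inventory).

```python
# Frame embeddings as centroids of predicate embeddings, with decoding
# by cosine similarity and an empty-symbol threshold t_f.
import math

WORD_VECS = {"buy": [1.0, 0.0], "purchase": [0.9, 0.1], "run": [0.0, 1.0]}
FRAME_PREDICATES = {"COMMERCE_BUY": ["buy", "purchase"], "SELF_MOTION": ["run"]}

def centroid(vecs):
    """Component-wise mean of a list of vectors."""
    return [sum(xs) / len(xs) for xs in zip(*vecs)]

FRAME_EMBS = {f: centroid([WORD_VECS[p] for p in ps])
              for f, ps in FRAME_PREDICATES.items()}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def decode_frame(predicted_vec, t_f=0.5):
    """Pick the frame whose embedding is closest in cosine similarity;
    fall back to the empty symbol (None) below threshold t_f."""
    best, sim = max(((f, cosine(predicted_vec, e))
                     for f, e in FRAME_EMBS.items()), key=lambda x: x[1])
    return best if sim >= t_f else None
```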
The empty symbol is predicted if the cosine similarity to the best frame is below a threshold t_f.", "cite_spans": [ { "start": 853, "end": 878, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Strip the tasks: just frames, just roles", "sec_num": "3.2" }, { "text": "ROLESONLY The other basic task predicts zero or more bare role labels for every token in the input. These labels are encoded in a binary vector that represents which roles are active for a given token. During decoding, role labels whose activation value exceeds a threshold t_r are kept as the final output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Strip the tasks: just frames, just roles", "sec_num": "3.2" }, { "text": "4 Re-composing an end-to-end system", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Strip the tasks: just frames, just roles", "sec_num": "3.2" }, { "text": "Having defined a basic setup for experimenting with predicting frames and roles alone, we can now design experiments for investigating any interactions between frames and roles. 
Figure 2 provides an overview of the possible ways for combining the FRAMESONLY and ROLESONLY models: simply merging the outputs, predicting the two tasks jointly, or using a sequential pipeline.", "cite_spans": [], "ref_spans": [ { "start": 178, "end": 186, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Strip the tasks: just frames, just roles", "sec_num": "3.2" }, { "text": "Given the overlap between the frame and role prediction tasks, we test whether predicting frames and roles jointly might help the two models mutually inform each other and learn more efficiently.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Do-it-together: multilabel or multitask", "sec_num": "4.1" }, { "text": "Joint(MULTILABEL) The first 'joint' approach is to predict, for every token in the input, a binary vector representing any frame target, as well as any role labels carried by the token. Hence, there is only one decoder and all parameters are shared.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Do-it-together: multilabel or multitask", "sec_num": "4.1" }, { "text": "As an alternative, we also try a setup with separate decoders for roles and frames, without any shared parameters (except for the BERT encoder). Backpropagation is done based on a weighted sum of the losses of the two decoders, where the 'loss weights' are learned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint(MULTITASK)", "sec_num": null }, { "text": "Neo-traditional In the Neo-traditional experiment, we test the traditional pipeline structure: i.e., learning frames first, and using these to (explicitly) inform role learning. In order to do this, we make two modifications to ROLESONLY: 1) we split the target role labels by frame, i.e. we ask the model to predict only one frame's roles at any given time, and 2) a representation of the 'active' frame is concatenated to the BERT embeddings as input to the model. 
This representation could be either Sparse or Embedding (see above). After role prediction, any roles that did not match the frame inputs are filtered out, and the predictions are merged with the frame model's output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Do-it-sequentially: what comes first?", "sec_num": "4.2" }, { "text": "Neo-traditional+MULTILABEL Following preliminary results, we repeat the experiment using MULTILABEL instead of ROLESONLY. In a final merging step, we keep all role predictions from MULTILABEL and any frame predictions that do not clash with the outputs of FRAMESONLY.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Do-it-sequentially: what comes first?", "sec_num": "4.2" }, { "text": "Reverse-traditional In this setup, we invert the traditional pipeline: given a sentence, we first predict role labels (using ROLESONLY), which are then used as input for the frame prediction model. 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Do-it-sequentially: what comes first?", "sec_num": "4.2" }, { "text": "Finally, we tried an approach assuming no interaction between frame and role prediction at all.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Do-it-separately: copy-and-paste", "sec_num": "4.3" }, { "text": "Merged In the Merged experiment, we simply merge the outputs of FRAMESONLY and ROLESONLY. 
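The Merged combination can be sketched as a per-token pairing of the two models' independent outputs (toy data structures of our own; the real system merges sequence-label predictions):

```python
# Merged setup: frame predictions from FRAMESONLY and role predictions
# from ROLESONLY are combined token by token, with no interaction
# between the two models.
def merge_outputs(frame_preds, role_preds):
    """frame_preds: per-token frame label or None;
    role_preds: per-token sets of role labels."""
    return [{"frame": f, "roles": r} for f, r in zip(frame_preds, role_preds)]
```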
In this scenario, both models are completely independent, without any possibility for frames and roles to inform each other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Do-it-separately: copy-and-paste", "sec_num": "4.3" }, { "text": "Merged+MULTILABEL Based on initial results showing that MULTILABEL beats ROLESONLY on roles while FRAMESONLY wins on frames, we also experiment with simply merging the output of these two 'winning' models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Merged+MULTILABEL Based on initial results", "sec_num": null }, { "text": "Since our setup diverges significantly from previous systems, testing our models is not trivial. Here, we propose two evaluation methods: a token-based metric that can directly score our models' output ( \u00a75.1), and an algorithm for 'recovering' full frame structures that can be checked using the standard SemEval 2007 method (Baker et al., 2007 ) ( \u00a75.2).", "cite_spans": [ { "start": 326, "end": 345, "text": "(Baker et al., 2007", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Merged+MULTILABEL Based on initial results", "sec_num": null }, { "text": "The simplest way of evaluating our models' performance is to simply count the number of correct frame and role labels per token. We compute this given a token sequence t_1, . . . , t_n, a sequence of gold labels G_1, . . . , G_n and a sequence of predicted labels P_1, . . . , P_n, where every G_i and P_i is either a set of frame or role labels, or the empty label {\u2205}. 
We can now define:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence-label evaluation", "sec_num": "5.1" }, { "text": "true positive = \u03a3_{i=1}^{n} |P_i \u2229 G_i|, false positive = \u03a3_{i=1}^{n} |P_i \\\\ G_i|, and false negative = \u03a3_{i=1}^{n} |G_i \\\\ P_i|.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence-label evaluation", "sec_num": "5.1" }, { "text": "Finally, we calculate micro-averaged precision, recall, and F-scores in the usual way.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence-label evaluation", "sec_num": "5.1" }, { "text": "Consistency scoring A limitation of our sequence-labeling approach is that there are no explicit constraints on the predicted role labels and it is not guaranteed that the set of role predictions for a given sentence will be compatible with the set of frame predictions. Hence, we need to evaluate not just the respective accuracy, but also the mutual consistency, of predicted roles and frames. We define this as \u03a3_{s\u2208S} \u03a3_{t\u2208tok(s)} |{r \u2208 R_{s,t} | r \u2208 allowed(F_s)}|, where S is the set of sentences in the evaluation set, tok(s) returns the sequence of tokens in a sentence, R_{s,t} is the set of predicted role labels for a particular token, F_s is the set of all predicted frame labels in the sentence, and allowed(F) returns the set of role labels that are consistent with a particular set of frame labels. For example, allowed({KILLING, USING}) gives {Killer, Victim, . . . , Agent, . . .}. 
The number of consistent roles is then divided by the total number of predicted roles, \u03a3_s \u03a3_t |R_{s,t}|, to yield a global consistency score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence-label evaluation", "sec_num": "5.1" }, { "text": "For comparing our models to existing work in frame-semantic parsing, and validating the assumptions underlying our sequence-labeling setup, we need to recover full frame structures from the output of our models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recovering frame structures", "sec_num": "5.2" }, { "text": "Formally, given a tokenized sentence t_1, . . . , t_n, and a sequence l_1, . . . , l_n of frame and role labels, we want to find the set of frame structures {\u27e8TI_1, RI_1\u27e9, . . . , \u27e8TI_n, RI_n\u27e9}. Here, every target instance TI_i = \u27e8FT_i, \u27e8t_j, . . . , t_k\u27e9\u27e9 is a pairing of a frame type FT \u2208 {F_Abandonment, . . . , F_Worry} and a sequence of tokens containing the lexical material that evokes the frame. Similarly, we define every role instance", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recovering frame structures", "sec_num": "5.2" }, { "text": "RI_i = {\u27e8RT_{i1}, \u27e8t_{j1}, . . . , t_{k1}\u27e9\u27e9, . . . , \u27e8RT_{in}, \u27e8t_{jn}, . . . , t_{kn}\u27e9\u27e9}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recovering frame structures", "sec_num": "5.2" }, { "text": "as a set of pairs of role types RT \u2208 {A_Abandonment@Agent, . . . , A_Worry@Result} and token spans instantiating these role types. See Figure 1 ( \u00a73) for an example sentence with sequence labels and corresponding frame structures.", "cite_spans": [], "ref_spans": [ { "start": 137, "end": 151, "text": "Figure 1 ( \u00a73)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Recovering frame structures", "sec_num": "5.2" }, { "text": "We propose a simple rule-based approach. First, we find the set of target instances in the sentence, and the corresponding set of frame types. 
6 Next, we find the set of (bare) role labels that can be associated with each of the predicted frame types, e.g. WORRY \u2192 {Experiencer, . . . , Result}. Next, for each of the predicted role spans t_i, . . . , t_j in the sentence, we find all of the compatible frame target instances. If there is more than one compatible target, we select the target that is closest in the sentence to the role span. Note that our algorithm would miss cases of more than one frame instance 'sharing' a role (i.e. all having a role with the same label and span), but we assume that such cases are rare. In cases where it is already known which role labels are associated with which frame types (i.e., in the SemEval'07 scoring setting), this association step is not needed. Having recovered the set of predicted frame structures, we can evaluate our models using the standard SemEval 2007 scoring method (Baker et al., 2007) . During evaluation on the development set, we noticed that our models frequently seem to make minor mistakes on role spans (i.e. erroneously missing or including an extra token). Since the SemEval script does not take into account partially matching role spans, we propose a modification to the script that gives partial credit for these role spans, and report this in addition to the scores from the original script.", "cite_spans": [ { "start": 982, "end": 1002, "text": "(Baker et al., 2007)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Recovery algorithm", "sec_num": null }, { "text": "All experiments were run using a pre-trained bert-base-cased model, fine-tuned with a simple feed-forward network decoder. Loss functions depend on the setup: we optimize Mean Squared Error Loss for ROLESONLY and MULTILABEL, Sequence Cross-Entropy Loss for FRAMESONLY/Sparse, and Cosine Embedding Loss for FRAMESONLY/Embedding. We found best performance using Adam optimization (Kingma and Ba, 2014) with lr = 5e\u22125, training for 12 epochs, with a single hidden layer of size 1000 in the decoder. 
Unless specified otherwise, the BERT embeddings are an automatically weighted sum (\"Scalar Mix\") of BERT's hidden layers. For implementation, we used AllenNLP (Gardner et al., 2017) . All models were trained and tested on the standard FrameNet 1.7 fulltext corpus (see Appendix B for more details on the data). While our main aim remains a deeper understanding of the components of frame-semantic parsing and their interdependencies, we still need to put our scores into perspective and legitimize our sequence labeling approach. Thus, we took Open-SESAME, the only existing open-source model that we are aware of that is capable of producing end-to-end predictions, as our baseline. 7 We used default settings (i.e., without scaffolding and ensembling) for better comparability with our own models. Hence, note that the results reported here for Open-SESAME are not the state-of-the-art results reported by Swayamdipta et al. (2017) . Table 1 reports the results on the sequence labeling task. 8 On the test set, Open-SESAME is the best model for both tasks. While best performance is not the core goal of this work, the fact that our best models perform in a similar range shows that our setup is sound to serve as a tool for comparing different pipeline variations.", "cite_spans": [ { "start": 656, "end": 678, "text": "(Gardner et al., 2017)", "ref_id": "BIBREF8" }, { "start": 1187, "end": 1188, "text": "7", "ref_id": null }, { "start": 1411, "end": 1436, "text": "Swayamdipta et al. (2017)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 1439, "end": 1446, "text": "Table 1", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Setups", "sec_num": "6.1" }, { "text": "Comparing our own models, we see that frame prediction performance is similar across setups: except for MULTILABEL, all F1-scores are within 3 points of each other. On role prediction, the setups that use MULTILABEL outperform the others. 
Neo-traditional performs the best on roles overall, whereas MULTITASK scores the worst. For frame prediction, performance does not seem to be boosted by joint role prediction. In fact, in MULTILABEL, performance on frames is very poor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence labeling", "sec_num": null }, { "text": "Similarly, adding roles as input for frame predictions (as in Reverse-traditional) does not help performance. Additional experiments to test the theoretical effectiveness of this strategy, using gold role labels as input, showed a slight improvement over FRAMESONLY (increasing F1 to 0.58 on test). However, when using predicted roles, we find no improvement and even see a small detrimental effect due to the poor performance of ROLESONLY. By contrast, Neo-traditional and Merged, when combining FRAMESONLY and MULTILABEL, perform well on both frames and roles. Lastly, MULTITASK does well on frames (but only slightly better than FRAMESONLY), but very poorly on roles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence labeling", "sec_num": null }, { "text": "Structural evaluation SemEval'07 scores are shown in Table 2. Note that two separate scores are reported for Open-SESAME: \"true\" and \"recovered\". For \"true\", we converted Open-SESAME predictions to SemEval format using all available structural information (i.e., links between roles, frames, and predicates); for \"recovered\", we first removed structural information and then attempted to recover it using our algorithm (see \u00a75.2).
The small difference between these scores suggests that recovery usually succeeds.", "cite_spans": [], "ref_spans": [ { "start": 53, "end": 60, "text": "Table 2", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Sequence labeling", "sec_num": null }, { "text": "In any case, Open-SESAME consistently outperforms our models, and the difference is, overall, larger on the SemEval task than on the sequence labeling task. On the test set, Merged is our best model and has an F1-score within 0.05 of Open-SESAME using strict evaluation, and within 0.03 using partial span scoring. Interestingly, whereas the sequence-labeling performance of all models drops dramatically on the test set compared to the development set, SemEval task scores are more stable. Finally, as expected, both Open-SESAME and our models get higher scores when partial credit is given to incomplete role spans, but our models benefit more from this than Open-SESAME does. Out of our own models, Merged clearly wins, with a five points' difference to MULTILABEL, the worst-scoring model. A possible explanation for this difference is that MULTILABEL has poor recall for frame prediction: since frame structures always need to have a frame target, missing many frames is likely to cause low SemEval scores. However, good frames are not enough: while Merged beats Open-SESAME on frames on the development set, it has lower SemEval scores. More generally, it is interesting to note that good sequence labeling scores do not guarantee good SemEval performance. On one hand, we find that Reverse-traditional has good SemEval scores, especially for precision, even though it has poor sequence labeling scores on roles.
On the other hand, Neo-traditional has good sequence labeling scores, but disappointing SemEval scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence labeling", "sec_num": null }, { "text": "Consistency A factor that would be expected to lead to better SemEval scores is consistency between role and frame prediction: predicting many correct frames, but also many roles inconsistent with these frames, might lead to overall worse structures. Table 4 gives consistency scores (see \u00a75.1) for all setups except Stripped. Open-SESAME and Neo-traditional score perfectly because frames are known at role prediction time, so that inconsistent roles are filtered out. There are large differences between the other setups: Merged has nearly 80% 'legal' roles, whereas Joint(MULTITASK) scores only 62%. Moreover, Merged outperforms MULTILABEL, despite getting its roles from MULTILABEL. We speculate that this is caused by MULTILABEL predicting 'orphaned' roles (i.e., correct roles lacking a matching frame) that are 'reparented' in Merged, which adds 'extra' frames from FRAMESONLY. Finally, Reverse-traditional's consistency is lower than would be expected given that frame prediction is constrained by information about roles, which we attribute to poor role prediction in ROLESONLY. Still, Reverse-traditional performs quite well on SemEval, meaning that role coherence alone does not predict structural quality.", "cite_spans": [], "ref_spans": [ { "start": 251, "end": 258, "text": "Table 4", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Sequence labeling", "sec_num": null }, { "text": "BERT layer analysis Analyzing the contributions of different BERT layers helps us better understand the implicit 'pipeline' learned by the model. Table 3 shows sequence labeling scores for the Stripped and MULTILABEL models, retrained using embeddings from individual layers.
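The \"Scalar Mix\" weighting used for the default embeddings (\u00a76.1) is, in essence, a softmax-weighted sum over the per-layer representations. A minimal, non-learned sketch is shown below; in AllenNLP's implementation the layer weights and the global scale are trained parameters, so this is only an illustration of the arithmetic:

```python
import numpy as np

def scalar_mix(layer_outputs, weights, gamma=1.0):
    """Softmax-weighted sum of per-layer representations
    (simplified sketch; the weights are fixed here, not learned)."""
    w = np.exp(weights - np.max(weights))
    w = w / w.sum()                       # softmax over the layer weights
    stacked = np.stack(layer_outputs)     # (num_layers, seq_len, hidden_dim)
    return gamma * np.einsum("l,lsd->sd", w, stacked)

# toy example: 13 "layers" (embedding output + 12 transformer layers),
# 5 tokens, hidden size 8 (shapes chosen arbitrarily for illustration)
rng = np.random.default_rng(0)
layers = [rng.random((5, 8)) for _ in range(13)]
mixed = scalar_mix(layers, np.zeros(13))  # zero logits -> uniform weights
```

With all-zero weight logits the mix reduces to a plain average of the layers; training the logits lets the model shift mass toward whichever layers help the task, which is what the per-layer comparison in this section probes.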
For comparison, the last row shows scores from Table 1.", "cite_spans": [], "ref_spans": [ { "start": 146, "end": 153, "text": "Table 3", "ref_id": "TABREF8" }, { "start": 323, "end": 330, "text": "Table 1", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Sequence labeling", "sec_num": null }, { "text": "We see an interesting discrepancy between frames and roles: role prediction clearly improves when using higher layers, but frame prediction is mostly stable, suggesting that the latter benefits from lexical information more than the former. This is true for both the Stripped and MULTILABEL models. Another interesting pattern is that role prediction is better for individual layers than for the \"ScalarMix\" setup, whereas this is not the case for frame prediction. This suggests that it is difficult to automatically learn which layers to use for role prediction, but it is as yet unclear why.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence labeling", "sec_num": null }, { "text": "We examined the frame-semantic parsing pipeline theoretically and practically, identifying 'chicken-or-egg' issues in the dependencies between subtasks, and studying them empirically within a BERT-based sequence-labeling framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "We found that joint frame and role prediction works well, but not always better than using frames as input to roles. By contrast, previous studies (Yang and Mitchell, 2017; Peng et al., 2018) found substantial improvements from joint prediction. However, these systems use gold targets as input. The main advantage of our sequence-labeling setup is the possibility to investigate frame and role prediction independently, as well as their mutual dependency. We found substantial benefits for role prediction from access to frame information through joint prediction or by receiving frames as input.
For frame prediction, instead, the picture is less clear: while we found a theoretical benefit of using (gold) roles as input, this benefit disappears when using predicted roles. Similarly, when jointly predicting frames and roles, the MULTITASK setup yielded a slight improvement for frame prediction, whereas MULTILABEL degraded it. These results can be taken as supporting the traditional pipeline approach, but our results using SemEval evaluation, which looks at full frame structures, do not unequivocally confirm this: Open-SESAME performs best, but amongst our models, Reverse-traditional and Merged outperform the others, including Neo-traditional. This suggests that there might be valid alternatives to the standard pipeline, and exploring these might lead to a deeper understanding of the frame-semantic parsing task itself.", "cite_spans": [ { "start": 147, "end": 172, "text": "(Yang and Mitchell, 2017;", "ref_id": "BIBREF23" }, { "start": 173, "end": 191, "text": "Peng et al., 2018)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "Our setup also allows for investigating which BERT layers both components use. Role prediction strongly prefers high BERT layers, while frame prediction is less picky, suggesting that the tasks use different linguistic information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "We see several logical extensions of our work. First, qualitative analysis of the overlaps in predictions from different models could shed light on the discrepancies between sequence labeling scores, consistency scores, and SemEval scores. A second direction would be to explore how our observations about the relationship between different components of the frame-semantic parsing pipeline and BERT layers could be used to improve models.
Finally, one could try more sophisticated architectures for sequence-labeling models, in particular by enforcing frame-role consistency within the model itself rather than during post-processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "In order to check for model stability, we repeated all our experiments (excl. the experiments for BERT layer analysis) three times. The standard deviations of the sequence-labeling scores are shown in Table A.1. F1-scores seem quite stable overall, but in a few cases there are larger deviations in precision and/or recall, especially in the Joint(MULTITASK) model.

Experiment                  | frames (R P F) | roles (R P F)
Joint(MULTILABEL)           | 01 00 00       | 00 01 01
Joint(MULTITASK)            | 02 02 00       | 04 07 01
Neotrad.(MULTILABEL, Sp.)   | 02 01 01       | 02 04 01
Merged(MULTILABEL)          | 01 01 00       | 00 01 01
Stripped(FRAMESONLY, Emb.)  | 02 04 01       | -  -  -
Stripped(FRAMESONLY, Sp.)   | 03 02 01       | -  -  -
Stripped(ROLESONLY)         | -  -  -        | 02 04 03

Table A.1: Model stability: standard deviation (%) across runs of sequence-labeling scores (on DEV)", "cite_spans": [ { "start": 445, "end": 456, "text": "(MULTITASK)", "ref_id": null }, { "start": 484, "end": 501, "text": "(MULTILABEL, Sp.)", "ref_id": null }, { "start": 605, "end": 622, "text": "(FRAMESONLY, Sp.)", "ref_id": null } ], "ref_spans": [ { "start": 201, "end": 208, "text": "Table A", "ref_id": null }, { "start": 667, "end": 674, "text": "Table A", "ref_id": null } ], "eq_spans": [], "section": "A.1 Model stability", "sec_num": null }, { "text": "A.2 FrameNet data Corpus We used the standard FrameNet corpus (release 1.7) for all experiments.
We used the fulltext.train split for training, the dev split for validation and evaluation, and the test split for final evaluation. Table A.2 shows the relative sizes of these splits.", "cite_spans": [], "ref_spans": [ { "start": 230, "end": 237, "text": "Table A", "ref_id": null } ], "eq_spans": [], "section": "A.1 Model stability", "sec_num": null }, { "text": "Distribution of roles One of the key simplifications of our sequence labeling setup is 'decoupling' frames and roles. This reduces the label space since some roles occur in many different frames. Figure A.1 shows the most frequent role names with the number of different frames that they occur in. As can be seen from the graph, the most frequent roles are very general ones such as 'Time', 'Place', 'Manner', etc. Although roles are, in the FrameNet philosophy, strictly defined relative to frames, we expect that roles sharing a name across frames will have a very similar semantics.", "cite_spans": [], "ref_spans": [ { "start": 196, "end": 204, "text": "Figure A", "ref_id": null } ], "eq_spans": [], "section": "A.1 Model stability", "sec_num": null }, { "text": "Yang and Mitchell (2017) and Peng et al.
(2018) learn frames and arguments jointly, but still need targetID as a separate step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "See sense #14 of \"run\" in https://www.oxfordlearnersdictionaries.com/definition/english/run_1?q=run", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "It also makes the task somewhat arbitrary (since it depends on what happens to be annotated in FrameNet), leading some researchers to ignore the problem altogether (Das, 2014).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use the model glove.42B.300d from https://nlp.stanford.edu/projects/glove/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Other setups, e.g. using MULTILABEL for role predictions, might give better performance, but would obfuscate the effect of predicting roles before frames. 5 Evaluation: tokens vs. structures", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "If more than one frame target label is predicted for a given token, we only keep the label with the highest probability. In the Neo-traditional setup, where the role labels associated to each frame type are known, we allow the algorithm to take this information into account, but we found that this has little impact on performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that the Open-SESAME paper only treats argID, but models published at https://github.com/swabhs/open-sesame use the same architecture for doing targetID and frameID and are discussed at https://github.com/swabhs/coling18tutorial. 8 For checking stability, all experiments were repeated three times and the scores averaged across runs. Overall, the models were quite stable and have F1-scores with standard deviations of \u2264 0.03.
See the Appendix for full stability scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The research reported in this article was funded by the Dutch National Science organisation (NWO) through the project Framing situations in the Dutch language, VC.GW17.083/6215. We would also like to thank our anonymous reviewers for their helpful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Semantic frame embeddings for detecting relations between software requirements", "authors": [ { "first": "Waad", "middle": [], "last": "Alhoshan", "suffix": "" }, { "first": "Riza", "middle": [], "last": "Batista-Navarro", "suffix": "" }, { "first": "Liping", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 13th International Conference on Computational Semantics -Student Papers", "volume": "", "issue": "", "pages": "44--51", "other_ids": { "DOI": [ "10.18653/v1/W19-0606" ] }, "num": null, "urls": [], "raw_text": "Waad Alhoshan, Riza Batista-Navarro, and Liping Zhao. 2019. Semantic frame embeddings for detecting relations between software requirements. In Proceedings of the 13th International Conference on Computational Semantics -Student Papers, pages 44-51, Gothenburg, Sweden.
Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "SemEval-2007 task 19: Frame semantic structure extraction", "authors": [ { "first": "Collin", "middle": [], "last": "Baker", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Ellsworth", "suffix": "" }, { "first": "Katrin", "middle": [], "last": "Erk", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)", "volume": "", "issue": "", "pages": "99--104", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collin Baker, Michael Ellsworth, and Katrin Erk. 2007. SemEval-2007 task 19: Frame semantic structure extraction. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 99-104, Prague, Czech Republic. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The structure of the FrameNet database", "authors": [ { "first": "Collin", "middle": [ "F" ], "last": "Baker", "suffix": "" }, { "first": "Charles", "middle": [ "J" ], "last": "Fillmore", "suffix": "" }, { "first": "Beau", "middle": [], "last": "Cronin", "suffix": "" } ], "year": 2003, "venue": "International Journal of Lexicography", "volume": "16", "issue": "3", "pages": "281--296", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collin F. Baker, Charles J. Fillmore, and Beau Cronin. 2003. The structure of the FrameNet database. International Journal of Lexicography, 16(3):281-296.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Statistical models for frame-semantic parsing", "authors": [ { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" } ], "year": 2014, "venue": "Proceedings of Frame Semantics in NLP: A Workshop in Honor of Chuck Fillmore", "volume": "", "issue": "", "pages": "26--29", "other_ids": { "DOI": [ "10.3115/v1/W14-3007" ] }, "num": null, "urls": [], "raw_text": "Dipanjan Das. 2014.
Statistical models for frame-semantic parsing. In Proceedings of Frame Semantics in NLP: A Workshop in Honor of Chuck Fillmore (1929-2014), pages 26-29, Baltimore, MD, USA. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Frame-semantic parsing", "authors": [ { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Desai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Andr\u00e9", "middle": [ "F", "T" ], "last": "Martins", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Schneider", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2014, "venue": "Computational Linguistics", "volume": "40", "issue": "1", "pages": "9--56", "other_ids": { "DOI": [ "10.1162/COLI_a_00163" ] }, "num": null, "urls": [], "raw_text": "Dipanjan Das, Desai Chen, Andr\u00e9 F. T. Martins, Nathan Schneider, and Noah A. Smith. 2014. Frame-semantic parsing. Computational Linguistics, 40(1):9-56.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Frame semantics", "authors": [ { "first": "C", "middle": [], "last": "Fillmore", "suffix": "" } ], "year": 2006, "venue": "Cognitive Linguistics: Basic Readings", "volume": "", "issue": "", "pages": "373--400", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Fillmore. 2006. Frame semantics. In D. Geeraerts, editor, Cognitive Linguistics: Basic Readings, pages 373-400. De Gruyter Mouton, Berlin, Boston. Originally published in 1982.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Joint parsing and named entity recognition", "authors": [ { "first": "Jenny", "middle": [ "Rose" ], "last": "Finkel", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "326--334", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jenny Rose Finkel and Christopher D. Manning. 2009. Joint parsing and named entity recognition. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 326-334, Boulder, Colorado.
Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "AllenNLP: A deep semantic natural language processing platform", "authors": [ { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Grus", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Oyvind", "middle": [], "last": "Tafjord", "suffix": "" }, { "first": "Pradeep", "middle": [], "last": "Dasigi", "suffix": "" }, { "first": "Nelson", "middle": [ "F" ], "last": "Liu", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Schmitz", "suffix": "" }, { "first": "Luke", "middle": [ "S" ], "last": "Zettlemoyer", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. AllenNLP: A deep semantic natural language processing platform.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Automatic labeling of semantic roles", "authors": [ { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2002, "venue": "Computational Linguistics", "volume": "28", "issue": "3", "pages": "245--288", "other_ids": { "DOI": [ "10.1162/089120102760275983" ] }, "num": null, "urls": [], "raw_text": "Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles.
Computational Linguistics, 28(3):245-288.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Out-of-domain FrameNet semantic role labeling", "authors": [ { "first": "Silvana", "middle": [], "last": "Hartmann", "suffix": "" }, { "first": "Ilia", "middle": [], "last": "Kuznetsov", "suffix": "" }, { "first": "Teresa", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "471--482", "other_ids": {}, "num": null, "urls": [], "raw_text": "Silvana Hartmann, Ilia Kuznetsov, Teresa Martin, and Iryna Gurevych. 2017. Out-of-domain FrameNet semantic role labeling. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 471-482, Valencia, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Jointly predicting predicates and arguments in neural semantic role labeling", "authors": [ { "first": "Luheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "364--369", "other_ids": { "DOI": [ "10.18653/v1/P18-2058" ] }, "num": null, "urls": [], "raw_text": "Luheng He, Kenton Lee, Omer Levy, and Luke Zettlemoyer. 2018. Jointly predicting predicates and arguments in neural semantic role labeling.
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 364-369, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "What does BERT learn about the structure of language", "authors": [ { "first": "Ganesh", "middle": [], "last": "Jawahar", "suffix": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Sagot", "suffix": "" }, { "first": "Djam\u00e9", "middle": [], "last": "Seddah", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3651--3657", "other_ids": { "DOI": [ "10.18653/v1/P19-1356" ] }, "num": null, "urls": [], "raw_text": "Ganesh Jawahar, Beno\u00eet Sagot, and Djam\u00e9 Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3651-3657, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Butterfly effects in frame semantic parsing: impact of data processing on model ranking", "authors": [ { "first": "Alexandre", "middle": [], "last": "Kabbach", "suffix": "" }, { "first": "Corentin", "middle": [], "last": "Ribeyre", "suffix": "" }, { "first": "Aur\u00e9lie", "middle": [], "last": "Herbelot", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "3158--3169", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexandre Kabbach, Corentin Ribeyre, and Aur\u00e9lie Herbelot. 2018. Butterfly effects in frame semantic parsing: impact of data processing on model ranking. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3158-3169, Santa Fe, New Mexico, USA.
Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Pytorch: An imperative style, high-performance deep learning library", "authors": [ { "first": "Adam", "middle": [], "last": "Paszke", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Massa", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lerer", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Chanan", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Killeen", "suffix": "" }, { "first": "Zeming", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Gimelshein", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Antiga", "suffix": "" }, { "first": "Alban", "middle": [], "last": "Desmaison", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Kopf", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zachary", "middle": [], "last": "Devito", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Raison", "suffix": "" }, { "first": "Alykhan", "middle": [], "last": "Tejani", "suffix": "" }, { "first": "Sasank", "middle": [], "last": "Chilamkurthy", "suffix": "" }, { "first": "Benoit", "middle": [], "last": "Steiner", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Fang", 
"suffix": "" }, { "first": "Junjie", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Soumith", "middle": [], "last": "Chintala", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "8024--8035", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Learning joint semantic parsers from disjoint data", "authors": [ { "first": "Hao", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Thomson", "suffix": "" }, { "first": "Swabha", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1492--1502", "other_ids": { "DOI": [ "10.18653/v1/N18-1135" ] }, "num": null, "urls": [], "raw_text": "Hao Peng, Sam Thomson, Swabha Swayamdipta, and Noah A. Smith. 2018. Learning joint semantic parsers from disjoint data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1492-1502, New Orleans, Louisiana.
Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": { "DOI": [ "10.3115/v1/D14-1162" ] }, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Semantic proto-roles", "authors": [ { "first": "Drew", "middle": [], "last": "Reisinger", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Rudinger", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Ferraro", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Harman", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Rawlins", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2015, "venue": "Transactions of the Association for Computational Linguistics", "volume": "3", "issue": "", "pages": "475--488", "other_ids": { "DOI": [ "10.1162/tacl_a_00152" ] }, "num": null, "urls": [], "raw_text": "Drew Reisinger, Rachel Rudinger, Francis Ferraro, Craig Harman, Kyle Rawlins, and Benjamin Van Durme. 2015. Semantic proto-roles.
Transactions of the Association for Computational Linguistics, 3:475-488.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Frame-semantic parsing with softmax-margin segmental RNNs and a syntactic scaffold", "authors": [ { "first": "Swabha", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Thomson", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Swabha Swayamdipta, Sam Thomson, Chris Dyer, and Noah A. Smith. 2017. Frame-semantic parsing with softmax-margin segmental RNNs and a syntactic scaffold. CoRR, abs/1706.09528.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Using error-correcting output codes with model-refinement to boost centroid text classifier", "authors": [ { "first": "Songbo", "middle": [], "last": "Tan", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions", "volume": "", "issue": "", "pages": "81--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Songbo Tan. 2007. Using error-correcting output codes with model-refinement to boost centroid text classifier. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 81-84, Prague, Czech Republic.
Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "BERT rediscovers the classical NLP pipeline", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4593--4601", "other_ids": { "DOI": [ "10.18653/v1/P19-1452" ] }, "num": null, "urls": [], "raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593-4601, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Joint learning improves semantic role labeling", "authors": [ { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Aria", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)", "volume": "", "issue": "", "pages": "589--596", "other_ids": { "DOI": [ "10.3115/1219840.1219913" ] }, "num": null, "urls": [], "raw_text": "Kristina Toutanova, Aria Haghighi, and Christopher Manning. 2005. Joint learning improves semantic role labeling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 589-596, Ann Arbor, Michigan.
Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A joint sequential and relational model for frame-semantic parsing", "authors": [ { "first": "Bishan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/D17-1128" ] }, "num": null, "urls": [], "raw_text": "Bishan Yang and Tom Mitchell. 2017. A joint sequential and relational model for frame-semantic parsing.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "Frame structures and sequence labels (N.B.: color added for illustrative purposes only)" }, "FIGREF1": { "num": null, "uris": null, "type_str": "figure", "text": "Top-20 role names by number of frames" }, "TABREF4": { "type_str": "table", "text": "Sequence labeling scores (avg. over three runs). N.B.: *For brevity reasons, for Merged and Neo-traditional, we only give results for the MULTILABEL setting, which performs better on role prediction than ROLESONLY. \u2020 Results on Neo-traditional are using Sparse frame inputs. \u2021 Results on Reverse-traditional are using Sparse frame outputs.", "num": null, "html": null, "content": "" }, "TABREF5": { "type_str": "table", "text": "and PyTorch (Paszke et al.,", "num": null, "html": null, "content": "
                                      DEV                              TEST
                                      strict          modified         strict          modified
Experiment                            R    P    F     R    P    F      R    P    F     R    P    F
Open-SESAME (true)                    0.47 0.52 0.49  0.51 0.57 0.54   0.44 0.50 0.47  0.47 0.53 0.50
Open-SESAME (recovered)               0.46 0.51 0.48  0.50 0.56 0.53   0.43 0.49 0.46  0.46 0.53 0.49
Joint (MULTILABEL)                    0.31 0.40 0.35  0.37 0.48 0.42   0.32 0.42 0.36  0.36 0.48 0.41
Joint (MULTITASK)                     0.34 0.44 0.38  0.38 0.50 0.43   0.36 0.43 0.39  0.40 0.48 0.43
Neo-traditional (MULTILABEL)* \u2020   0.33 0.38 0.35  0.42 0.49 0.45   0.34 0.37 0.35  0.42 0.46 0.44
Reverse-traditional (ROLESONLY) \u2021 0.33 0.48 0.39  0.38 0.55 0.45   0.34 0.47 0.40  0.38 0.53 0.45
Merged (MULTILABEL)*                  0.37 0.44 0.40  0.44 0.52 0.47   0.40 0.44 0.42  0.45 0.50 0.47
" }, "TABREF6": { "type_str": "table", "text": "SemEval'07 scores (avg. over three runs)", "num": null, "html": null, "content": "" }, "TABREF8": { "type_str": "table", "text": "Sequence-label scores (DEV) by BERT layer", "num": null, "html": null, "content": "
" }, "TABREF10": { "type_str": "table", "text": "Consistency scores and deviance across runs differ in architecture, and (partially) use different datasets, making direct comparison hard.", "num": null, "html": null, "content": "
" } } } }