{ "paper_id": "I17-1010", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:39:45.646600Z" }, "title": "Improving Implicit Semantic Role Labeling by Predicting Semantic Frame Arguments", "authors": [ { "first": "Quynh", "middle": [], "last": "Ngoc", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Thi", "middle": [], "last": "Do", "suffix": "", "affiliation": { "laboratory": "", "institution": "Katholieke Universiteit Leuven", "location": { "country": "Belgium" } }, "email": "" }, { "first": "Steven", "middle": [], "last": "Bethard", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Arizona", "location": { "country": "United States" } }, "email": "bethard@email.arizona.edu" }, { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "", "affiliation": { "laboratory": "", "institution": "Katholieke Universiteit Leuven", "location": { "country": "Belgium" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Implicit semantic role labeling (iSRL) is the task of predicting the semantic roles of a predicate that do not appear as explicit arguments, but rather regard common sense knowledge or are mentioned earlier in the discourse. We introduce an approach to iSRL based on a predictive recurrent neural semantic frame model (PRNSFM) that uses a large unannotated corpus to learn the probability of a sequence of semantic arguments given a predicate. We leverage the sequence probabilities predicted by the PRNSFM to estimate selectional preferences for predicates and their arguments. On the NomBank iSRL test set, our approach improves state-of-the-art performance on implicit semantic role labeling with less reliance than prior work on manually constructed language resources.", "pdf_parse": { "paper_id": "I17-1010", "_pdf_hash": "", "abstract": [ { "text": "Implicit semantic role labeling (iSRL) is the task of predicting the semantic roles of a predicate that do not appear as explicit arguments, but rather regard common sense knowledge or are mentioned earlier in the discourse. We introduce an approach to iSRL based on a predictive recurrent neural semantic frame model (PRNSFM) that uses a large unannotated corpus to learn the probability of a sequence of semantic arguments given a predicate. We leverage the sequence probabilities predicted by the PRNSFM to estimate selectional preferences for predicates and their arguments. On the NomBank iSRL test set, our approach improves state-of-the-art performance on implicit semantic role labeling with less reliance than prior work on manually constructed language resources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Semantic role labeling (SRL) has traditionally focused on semantic frames consisting of verbal or nominal predicates and explicit arguments that occur within the clause or sentence that contains the predicate. However, many predicates, especially nominal ones, may bear arguments that are left implicit because they regard common sense knowledge or because they are mentioned earlier in a discourse (Ruppenhofer et al., 2010; Gerber et al., 2009) . These arguments, called implicit arguments, are resolved by another semantic task, implicit semantic role labeling (iSRL). Consider a NomBank (Meyers et al., 2004) annotation example: The predicate loss in the first sentence has two arguments annotated explicitly: A0, the entity losing something, and A1, the thing lost. 
Meanwhile, the other instance of the same predicate in the second sentence has no associated arguments. However, a careful reader would naturally interpret the second loss as receiving the same A0 and A1 as the first instance. These arguments are implicit to the second loss.", "cite_spans": [ { "start": 399, "end": 425, "text": "(Ruppenhofer et al., 2010;", "ref_id": "BIBREF17" }, { "start": 426, "end": 446, "text": "Gerber et al., 2009)", "ref_id": "BIBREF5" }, { "start": 591, "end": 612, "text": "(Meyers et al., 2004)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As an emerging task, implicit semantic role labeling faces a lack of resources. First, hand-crafted implicit role annotations for use as training data are severely limited: SemEval 2010 Task 10 (Ruppenhofer et al., 2010) provided FrameNet-style (Baker et al., 1998) annotations for a fairly large number of predicates but with few annotations per predicate, while Gerber and Chai (2010) provided PropBank-style (Palmer et al., 2005) data with many more annotations per predicate but covering just 10 predicates. Second, most existing iSRL systems depend on other systems (explicit semantic role labelers, named entity taggers, lexical resources, etc.), and as a result need not only iSRL annotations to train the iSRL system itself, but also annotations or manually built resources for all of their sub-systems.", "cite_spans": [ { "start": 195, "end": 215, "text": "(Ruppenhofer et al., 2010)", "ref_id": "BIBREF17" }, { "start": 240, "end": 260, "text": "(Baker et al., 1998)", "ref_id": "BIBREF0" }, { "start": 359, "end": 381, "text": "Gerber and Chai (2010)", "ref_id": "BIBREF6" }, { "start": 406, "end": 427, "text": "(Palmer et al., 2005)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We propose an iSRL approach that addresses these challenges, requiring no manually annotated iSRL data and only a single sub-system, an explicit semantic role labeler. We introduce a predictive recurrent neural semantic frame model (PRNSFM), which can estimate the probability of a sequence of semantic arguments given a predicate, and can be trained on unannotated data drawn from the Wikipedia, Reuters, and Brown corpora, coupled with the predictions of the MATE (Bj\u00f6rkelund et al., 2010) explicit semantic role labeler on these texts. The PRNSFM forms the foundation for our iSRL system, where we use its probability estimates over sequences of semantic arguments to predict selectional preferences for associating predicates with their implicit semantic roles. 
Our PRNSFM-based iSRL model improves state-of-the-art performance, outperforming the only other system that depends on just an explicit semantic role labeler by 10% F1, and achieving an equal or better F1 score than several other models that require many more lexical resources.", "cite_spans": [ { "start": 466, "end": 491, "text": "(Bj\u00f6rkelund et al., 2010)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our work fits into the current interest in natural language understanding, which is hampered by the fact that content in a discourse is often not expressed explicitly, either because it was mentioned earlier or because it regards common sense or world knowledge that resides in the mind of the communicator or the audience. In contrast, humans easily combine relevant evidence to infer meaning, determine hidden meanings, and make explicit what was left implicit in the text, using the anticipatory power of the brain, which predicts or "imagines" circumstantial situations and outcomes of actions (Friston, 2010; Vernon, 2014) and thereby makes language processing extremely effective and fast (Kurby and Zacks, 2015; Schacter and Madore, 2016). The neural semantic frame representations inferred by our PRNSFM take a first step towards encoding this kind of anticipatory power in natural language understanding systems.", "cite_spans": [ { "start": 578, "end": 593, "text": "(Friston, 2010;", "ref_id": "BIBREF4" }, { "start": 594, "end": 607, "text": "Vernon, 2014)", "ref_id": "BIBREF21" }, { "start": 669, "end": 692, "text": "(Kurby and Zacks, 2015;", "ref_id": "BIBREF8" }, { "start": 693, "end": 719, "text": "Schacter and Madore, 2016)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of the paper is organized as follows: section 2 describes related work; section 3 proposes the predictive recurrent neural semantic frame model, including its formal definition, its architecture, and an algorithm for extracting selectional preferences from the trained model; section 4 introduces the application of our PRNSFM to implicit semantic role labeling; section 5 presents the experimental results and discussion; and section 6 concludes and outlines future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Language Modeling Language models, from n-gram models to continuous-space language models (Mikolov et al., 2013; Pennington et al., 2014), provide probability distributions over sequences of words and have proven useful in many natural language processing tasks. However, to our knowledge, they have not yet been used to model semantic frames. 
Recently, Peng and Roth (2016) developed two distinct models that capture semantic frame chains and discourse information while abstracting over the specific mentions of predicates and entities, but these models focus on discourse processing tasks, not semantic frame processing.", "cite_spans": [ { "start": 89, "end": 111, "text": "(Mikolov et al., 2013;", "ref_id": "BIBREF11" }, { "start": 112, "end": 136, "text": "Pennington et al., 2014)", "ref_id": "BIBREF15" }, { "start": 363, "end": 383, "text": "Peng and Roth (2016)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "In unsupervised SRL, Woodsend and Lapata (2015) and Titov and Khoddam (2015) induce embeddings to represent a predicate and its arguments from unannotated texts, but in their approaches the arguments are words only, without semantic role labels, whereas our model considers both.", "cite_spans": [ { "start": 21, "end": 47, "text": "Woodsend and Lapata (2015)", "ref_id": "BIBREF22" }, { "start": 52, "end": 76, "text": "Titov and Khoddam (2015)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Role Labeling", "sec_num": null }, { "text": "Low-resource Implicit Semantic Role Labeling Several approaches have attempted to address the lack of resources for training iSRL systems. Laparra and Rigau (2013) proposed an approach based on exploiting argument coherence over different instances of a predicate, which did not require any manual iSRL annotations but did require many other manually constructed resources: an explicit SRL system, WordNet super-senses, a named entity tagger, and a manual categorization of Super-SenseTagger semantic classes. Roth and Frank (2015) generated additional training data for iSRL through comparable texts, but the resulting model performed below the previous state of the art of Laparra and Rigau (2013). Schenk and Chiarcos (2016) proposed an approach to induce prototypical roles using distributed word representations, which required only an explicit SRL system and a large unannotated corpus, but their model's performance was almost 10 points lower than the state of the art of Laparra and Rigau (2013). Similar to Schenk and Chiarcos (2016), our model requires only an explicit SRL system and a large unannotated corpus, but we take a very different approach to leveraging them and, as a result, improve state-of-the-art performance.", "cite_spans": [ { "start": 510, "end": 531, "text": "Roth and Frank (2015)", "ref_id": "BIBREF16" }, { "start": 675, "end": 699, "text": "Laparra and Rigau (2013)", "ref_id": "BIBREF9" }, { "start": 702, "end": 728, "text": "Schenk and Chiarcos (2016)", "ref_id": "BIBREF19" }, { "start": 978, "end": 1002, "text": "Laparra and Rigau (2013)", "ref_id": "BIBREF9" }, { "start": 1016, "end": 1042, "text": "Schenk and Chiarcos (2016)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Role Labeling", "sec_num": null }, { "text": "Our goal is to use unlabeled data to acquire selectional preferences that characterize how likely a phrase is to be an argument of a semantic frame. We rely on the fact that current explicit SRL systems achieve high performance on verbal predicates, and run a state-of-the-art explicit SRL system on unlabeled data. We then construct a predictive recurrent neural semantic frame model (PRNSFM) from these explicit frames and roles. 
Our PRNSFM views each semantic frame as a sequence: a predicate, followed by its arguments in their textual order, and terminated by a special EOS symbol. We draw predicates from PropBank verbal semantic frames and represent arguments by their nominal/pronominal heads. For example, Michael Phelps swam at the Olympics is represented as [swam:PRED, Phelps:A0, Olympics:AM-LOC, EOS], where the predicate is labeled PRED and the arguments Phelps and Olympics are labeled A0 and AM-LOC, respectively. Our PRNSFM's task is thus to take a predicate and zero or more arguments, and predict the next argument in the sequence, or EOS if no more arguments will follow. We choose to model semantic frames as sequences (rather than, say, bags of arguments) because in English there are often fairly strict constraints on the order in which the arguments of a verb may appear. A sequential model should thus be able to capture these constraints and use them to improve its probability estimates. Moreover, a sequential model can learn interactions between arguments of the same semantic frame: considering a swimming event, if Phelps is the A0, then Olympics is more likely than lake to be the AM-LOC.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Predictive Recurrent Neural Semantic Frame Model", "sec_num": "3" }, { "text": "Formally, for the t-th argument of a semantic frame f, we denote its word (e.g., Phelps) as w_{f,t} and its semantic label (e.g., A0) as l_{f,t}, where w \u2208 V, the word vocabulary, and l \u2208 L \u222a [PRED], the set of semantic labels. We denote the predicate word and label, which always occupy the 0-th position in the sequence, in the same way as the arguments: w_{f,0} and l_{f,0}. We denote the sequence [w_{f,0}, w_{f,1}, . . . , w_{f,t\u22121}] as w_{f,
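To make the sequence formulation above concrete, here is a minimal sketch (our illustration, not the authors' released code; the function names and the toy model are hypothetical) of how a frame produced by an explicit semantic role labeler can be linearized into the predicate-argument sequence the PRNSFM is trained on, and how a next-argument distribution yields the probability of an argument sequence via the chain rule.

```python
# Minimal sketch of the sequence view described above: a frame from an explicit
# SRL system is linearized as [predicate:PRED, word:LABEL, ..., EOS], and a
# next-argument distribution gives the sequence probability by the chain rule.
# All names here are illustrative assumptions, not the paper's actual code.
from typing import Callable, Dict, List, Tuple

EOS = "EOS"  # special end-of-sequence symbol

def frame_to_sequence(predicate: str,
                      arguments: List[Tuple[str, str]]) -> List[str]:
    """Encode a frame as word:label tokens in textual order, ending with EOS,
    e.g. ['swam:PRED', 'Phelps:A0', 'Olympics:AM-LOC', 'EOS']."""
    tokens = [f"{predicate}:PRED"]
    tokens += [f"{word}:{label}" for word, label in arguments]
    tokens.append(EOS)
    return tokens

def sequence_probability(next_token_dist: Callable[[List[str]], Dict[str, float]],
                         sequence: List[str]) -> float:
    """P(argument sequence | predicate) as a product of next-argument probabilities.
    `next_token_dist(prefix)` stands in for the trained PRNSFM: it maps the
    predicate plus the arguments seen so far to a distribution over the next token."""
    prob = 1.0
    for i in range(1, len(sequence)):
        prefix, target = sequence[:i], sequence[i]
        prob *= next_token_dist(prefix).get(target, 0.0)
    return prob

if __name__ == "__main__":
    seq = frame_to_sequence("swam", [("Phelps", "A0"), ("Olympics", "AM-LOC")])
    # Toy stand-in for a trained model; the actual PRNSFM is a recurrent network.
    toy_model = lambda prefix: {"Phelps:A0": 0.4, "Olympics:AM-LOC": 0.3, EOS: 0.3}
    print(seq)                                   # the linearized frame
    print(sequence_probability(toy_model, seq))  # 0.4 * 0.3 * 0.3 = 0.036
```

In the full model, the role of `next_token_dist` is played by the recurrent network over word and label representations; the chain-rule decomposition of the sequence probability is the same regardless of how that distribution is produced.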