{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:24:37.255891Z" }, "title": "Tuning Deep Active Learning for Semantic Role Labeling", "authors": [ { "first": "Skatje", "middle": [], "last": "Myers", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Colorado at Boulder", "location": {} }, "email": "skatje.myers@colorado.edu" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Colorado at Boulder", "location": {} }, "email": "mpalmer@colorado.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Active learning has been shown to reduce annotation requirements for numerous natural language processing tasks, including semantic role labeling (SRL). SRL involves labeling argument spans for potentially multiple predicates in a sentence, which makes it challenging to aggregate the numerous decisions into a single score for determining new instances to annotate. In this paper, we apply two ways of aggregating scores across multiple predicates in order to choose query sentences with two methods of estimating model certainty: using the neural network's outputs and using dropout-based Bayesian Active Learning by Disagreement. We compare these methods with three passive baselines-random sentence selection, random whole-document selection, and selecting sentences with the most predicates-and analyse the effect these strategies have on the learning curve with respect to reducing the number of annotated sentences and predicates to achieve high performance.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Active learning has been shown to reduce annotation requirements for numerous natural language processing tasks, including semantic role labeling (SRL). SRL involves labeling argument spans for potentially multiple predicates in a sentence, which makes it challenging to aggregate the numerous decisions into a single score for determining new instances to annotate. In this paper, we apply two ways of aggregating scores across multiple predicates in order to choose query sentences with two methods of estimating model certainty: using the neural network's outputs and using dropout-based Bayesian Active Learning by Disagreement. We compare these methods with three passive baselines-random sentence selection, random whole-document selection, and selecting sentences with the most predicates-and analyse the effect these strategies have on the learning curve with respect to reducing the number of annotated sentences and predicates to achieve high performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The ability to identify the semantic elements of a sentence (who did what to whom, where and when) is crucial for machine understanding of natural language and downstream tasks such as information extraction (MacAvaney et al., 2017) and questionanswering systems (Yih et al., 2016) . The process of automatically identifying and classifying the predicates in a sentence and the arguments that relate to them is called semantic role labeling (SRL). The current state-of-the-art semantic role labeling systems are based on supervised machine learning and rely on large corpora in order to achieve good performance. 
Large corpora have been created for languages such as English (Weischedel et al., 2013) , but such resources are lacking in most other languages. Additionally, even the corpora that do exist may not transfer well to other in-language domains, due to differences in sentence structure or domain-specific vocabulary. Creation of additional annotated corpora requires a significant amount of time and often the hiring of domain experts, causing a bottleneck for developing advanced NLP tools for other languages and domains.", "cite_spans": [ { "start": 208, "end": 232, "text": "(MacAvaney et al., 2017)", "ref_id": "BIBREF12" }, { "start": 263, "end": 281, "text": "(Yih et al., 2016)", "ref_id": "BIBREF22" }, { "start": 675, "end": 700, "text": "(Weischedel et al., 2013)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Active learning (AL) focuses on choosing only the most informative and least repetitive instances to have annotated, thereby reducing the total annotation needed to train a supervised model, without sacrificing performance. This is done by iteratively re-training the model and assessing its confidence in its predictions in order to choose additional data for annotation that would have maximal impact on the learning rate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Traditionally, practitioners use the model's probability distributions for the annotation candidates to quantify how informative a new training instance would be for the model. However, state-of-the-art SRL systems rely on deep learning, whose predictive probabilities are not a reliable metric of uncertainty. To address this, Gal and Ghahramani (2016) found that we can estimate model confidence by calculating the rate of disagreement of multiple Monte Carlo draws from a stochastic model, accomplished by utilising dropout during forward passes. Previous work (Siddhant and Lipton, 2018) (Shen et al., 2017) has combined this finding with Bayesian Active Learning by Disagreement (Houlsby et al., 2011) as a way of selecting informative instances for active learning for SRL and other NLP tasks; hereafter referred to as DO-BALD.", "cite_spans": [ { "start": 327, "end": 352, "text": "Gal and Ghahramani (2016)", "ref_id": "BIBREF5" }, { "start": 563, "end": 590, "text": "(Siddhant and Lipton, 2018)", "ref_id": "BIBREF18" }, { "start": 591, "end": 610, "text": "(Shen et al., 2017)", "ref_id": "BIBREF16" }, { "start": 683, "end": 705, "text": "(Houlsby et al., 2011)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Semantic role labeling for a single sentence is a complicated structured prediction, involving multiple predicates and varying spans. This complexity makes identifying the training examples with maximal impact more challenging. In this work, we compare two ways of aggregating confidence scores for individual predicates into a unified score to assess the usefulness of selecting a sentence for active learning. We test these strategies with two active learning approaches to calculating certainty for a predicate instance: the model's output probabilities and a granular DO-BALD selection method. 
Additionally, we compare the benefits of these AL approaches with three baselines: random sentence selection, random document selection, and selecting sentences with the most predicates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We will discuss the practical workflow of SRL annotation and how it must be taken into account to use active learning effectively for creating new datasets. Although the current standard data selection methodology for SRL corpora, which typically involves selecting entire documents, leaves much room for improvement by even passive strategies, we will show that active learning can provide significant reductions in annotation of both the number of sentences and the number of predicates. We aim to provide this comparison within the broader context and understanding of SRL annotation in practice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Active learning begins with the selection of a classifier, a small pool of labeled training data (also referred to as a seed set) for the classifier to initially be trained on, and a large amount of unlabeled data. AL is an iterative process in which the classifier is trained on the labeled data and then, through some query selection strategy, an instance or instances are chosen from the unlabeled data for a human annotator to label. Typically, these instances are chosen after the classifier attempts to predict labels for the unlabeled data, providing feedback about which instances may be the most informative. The newly annotated data is then added to the pool of labeled data that will be used to train the classifier on the next iteration. This iteration continues until some stopping criteria are met, such as the classifier's confidences about the remaining unlabeled data exceeding a certain threshold, or simply until funds or time are exhausted. Proposition Bank (PropBank) (Palmer et al., 2005) is a verb-oriented semantic representation. Predicates in text are assigned a roleset ID based on the sense of the word, such as play.01 (to play a game) or play.02 (to play a role). The roleset determines the permissible semantic roles, or arguments, for that predicate. The core arguments are given generalised numbered labels, ARG0 through ARG5.", "cite_spans": [ { "start": 988, "end": 1009, "text": "(Palmer et al., 2005)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Table 1 shows the roleset for give.01 (transfer), whose core arguments are ARG0 (giver), ARG1 (thing given), and ARG2 (entity given to). Typically an ARG0 is the agent or experiencer, while ARG1 is typically the patient or theme of the predicate. Additionally, there are modifier arguments to incorporate other semantically relevant information such as location (ARGM-LOC) and direction (ARGM-DIR). The arguments of the predicate \"give\" are labeled according to the roleset in Table 1. Sentences may contain several predicates and each predicate has its own arguments. Predicates commonly consist of verbs, but also include nominalisations and predicative adjectives.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Many large corpora have been annotated in English, such as Ontonotes (Weischedel et al., 2013) . 
Although Ontonotes has since been retrofitted to unify different parts of speech into the same rolesets based on sense, and to expand coverage of nominalisations, light verb constructions, and other multiword expressions (O'Gorman et al., 2018) , an earlier version of it was released as the dataset for the CoNLL-2012 shared task. This dataset is still frequently used as an evaluation corpus for experimental SRL techniques. Additionally, there are many domain-specific SRL corpora, such as clinical records (Albright et al., 2013) and the geosciences (Duerr et al., 2016) . These domain-specific annotations are necessary because the vocabulary and sentence structure may differ too much for models trained on more general text to perform well.", "cite_spans": [ { "start": 69, "end": 94, "text": "(Weischedel et al., 2013)", "ref_id": null }, { "start": 310, "end": 333, "text": "(O'Gorman et al., 2018)", "ref_id": "BIBREF13" }, { "start": 599, "end": 622, "text": "(Albright et al., 2013)", "ref_id": "BIBREF0" }, { "start": 643, "end": 663, "text": "(Duerr et al., 2016)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Much of the text bearing PropBank annotations was annotated using Jubilee (Choi et al., 2010) . The text is presented to annotators grouped by the predicate's lemma, enabling annotators to concentrate on the differences between rolesets of particular lemmas and providing efficiency through minimising context-switching. With this methodology, annotation time can primarily be reduced by minimising the number of predicates being annotated.", "cite_spans": [ { "start": 81, "end": 100, "text": "(Choi et al., 2010)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "While this setup is typical of large-scale annotation projects, it's less feasible in the context of active learning. If each iteration results in querying annotators for only 100 sentences, there is little benefit to splitting annotation tasks based on lemmas. The more practical approach is to annotate on a sentence-by-sentence basis. In this case, reducing predicates is still beneficial, but since the cognitive burden of reading and understanding the sentence is incurred anyway, reducing the number of sentences is of high importance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "When new datasets are annotated, typically entire documents are chosen. Annotation projects frequently do several layers of annotation on the same text, which may include NER, syntactic parsing, SRL, coreference resolution, and event coreference. In the case of SRL, this results in numerous sentences with the same topic and vocabulary being used. The random selection of sentences used as a baseline in active learning studies may be an improvement over the selection criteria used in practice, since random sampling results in a more diverse dataset. For this reason, when discussing how much annotation reduction an AL technique provides by selecting individual sentences, it's important to compare against the learning curve of random sentence selection rather than against the full dataset. 
Our experiments include a whole-document selection method to provide comparison.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Active learning has been utilised with success in numerous NLP tasks, such as named entity recognition (Shen et al., 2017) , word sense disambiguation (Zhu and Hovy, 2007) , and sentiment classification (Li et al., 2013) . In recent years, active learning has been applied to SRL. Since probabilities from off-the-shelf NN models may sometimes be inaccessible, Wang et al. (2017) proposed working around this by designing an additional neural model to learn a strategy of selecting queries. Given an SRL model's predictions, this query model classifies instances as requiring human annotation or not. Their approach was a hybrid of active learning and self-training. The self-training is enacted by accepting the SRL model's predicted labels into the training pool for future iterations when the sentence was determined not to require human annotation. This approach required 31.5% less annotated data to achieve performance comparable to training on the entirety of the CoNLL-2009 dataset. Koshorek et al. (2019) compared data selection policies while simulating active learning for question-answer driven SRL (QA-SRL). QA-SRL is a form of representing the meaning of a sentence using question-answer pairs. Rather than annotating spans of text with argument names, such as PropBank's ARG0, annotators enumerate a list of questions relating to the actions in a sentence, such as who is performing an action and when is it happening, along with the corresponding answers from the original text. This representation provides similar coverage to PropBank, but can also represent implicit arguments that aren't directly represented by the syntax.", "cite_spans": [ { "start": 103, "end": 122, "text": "(Shen et al., 2017)", "ref_id": "BIBREF16" }, { "start": 151, "end": 171, "text": "(Zhu and Hovy, 2007)", "ref_id": "BIBREF23" }, { "start": 203, "end": 220, "text": "(Li et al., 2013)", "ref_id": "BIBREF11" }, { "start": 360, "end": 378, "text": "Wang et al. (2017)", "ref_id": "BIBREF20" }, { "start": 989, "end": 1011, "text": "Koshorek et al. (2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "The process of identifying spans that are arguments of a predicate and the generation of questions based on the arguments were treated as independent tasks. To provide an approximate upper bound on the learning curve, they simulated active learning on the dataset, splitting the unlabeled candidates into K subsets, and selecting the subset that improved the model the most on the evaluation data. Against this oracle policy, they compared the following selection strategies, sampling K random subsets to choose from: selecting a random subset, selecting the subset with the highest average token count among sentences, and selecting the subset that has the maximal average entropy over the model's predictions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "The uncertainty strategy performed worse than random selection for argument span detection, and was not tested for question generation. 
Selecting the sentences with high token counts tended to improve the F-score for argument span detection by 1-3% given an equal number of training instances (and attaining 60% on the full dataset), while being largely comparable to random selection for question generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "Active learning for SRL has also been applied in combination with multi-task learning (Ikhwantri et al., 2018) , using a subset of PropBank roles along with a new \"greet\" role. The authors compared single- and multi-task SRL, both with and without active learning. Under multi-task learning the model jointly learns to identify semantic roles as well as to classify tokens as entities such as \"Person\" or \"Location\". They introduced a set of semantic roles that accommodate conversational language and annotated a small corpus of Indonesian chatbot data to provide training and testing data. By selecting sentences using model uncertainty in the single-task context, F-score was improved by less than 1% compared to randomly selecting the data.", "cite_spans": [ { "start": 86, "end": 110, "text": "(Ikhwantri et al., 2018)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "Modern SRL systems utilise deep learning, which poses a challenge to assessing the model's certainty in its predictions. The predictive probabilities in the output layer cannot be reliably interpreted as a measure of model certainty. Gal and Ghahramani (2016) proposed using dropout as a Bayesian approximation for model certainty, estimating it using the variation in multiple forward passes.", "cite_spans": [ { "start": 234, "end": 259, "text": "(Gal and Ghahramani, 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "This dropout principle was tested on numerous NLP tasks by Siddhant and Lipton (2018) , including SRL. For their SRL experiments, they used a neural SRL model based on the He et al. (2017) model, with modifications to the decoding method (instead using a CRF decoder) and an increased dropout rate, from 0.2 to 0.25.", "cite_spans": [ { "start": 59, "end": 85, "text": "Siddhant and Lipton (2018)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "In comparison to the baseline of random selection, they tested the classic uncertainty measure of using the output probabilities of the model, normalised for sentence length, along with two Bayesian Active Learning by Disagreement methods for selecting additional instances: Monte Carlo Dropout Disagreement (DO-BALD) and Bayes-by-Backprop (BB-BALD). The DO-BALD method applies dropout during multiple predictions of instances in the unlabeled pool and selects instances based on how many of those predictions disagree on the most common label sequence for the entire sentence. This selection strategy is similar to the selection method we propose in this paper, but with several differences. The most significant difference is that the authors treat agreement between predictions as all-or-nothing, rather than allowing partial agreement based on arguments. They also use a much higher number of predictions (100 per sentence, as opposed to our 5 per predicate) to calculate disagreement, which may be necessary in this all-or-nothing approach. 
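To make the contrast concrete, the following is a minimal sketch of our reading of that all-or-nothing criterion (hypothetical data structures; this is illustrative, not code released by either paper):

```python
from collections import Counter

def sentence_disagreement(sampled_tag_sequences):
    """sampled_tag_sequences: one label sequence (list of BIO tags over the
    sentence) per dropout forward pass on the same sentence."""
    counts = Counter(tuple(seq) for seq in sampled_tag_sequences)
    _, mode_count = counts.most_common(1)[0]
    # Fraction of passes whose entire sequence differs from the modal sequence.
    return 1.0 - mode_count / len(sampled_tag_sequences)
```

Under this criterion, a single differing token label makes two passes disagree entirely, which helps explain why many more samples per sentence may be needed to obtain a stable ranking.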
In contrast, we consider each predicate-argument label sequence independently.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "They tested their methods on both the CoNLL-2005 and CoNLL-2012 datasets, which use Prop-Bank annotation. While the Bayesian methods were similar to the standard uncertainty selection method in the case of SRL, these methods resulted in approximately 2-3% increase for F-score compared to random selection when training on the same number of tokens. These results were much more modest than results for other tasks such as NER.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "We used two independent datasets for our experiments: The English section of Ontonotes (version 5.0) (Weischedel et al., 2013) with the latest frame updates (O'Gorman et al., 2018) and the colon cancer portion of THYME (Albright et al., 2013) .", "cite_spans": [ { "start": 101, "end": 126, "text": "(Weischedel et al., 2013)", "ref_id": null }, { "start": 157, "end": 180, "text": "(O'Gorman et al., 2018)", "ref_id": "BIBREF13" }, { "start": 219, "end": 242, "text": "(Albright et al., 2013)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4" }, { "text": "Ontonotes 5.0 consists of 1.5 million words across multiple genres. The majority of this data is sourced from news, but it also includes telephone conversations, text from The Bible, and web data. THYME is comprised of clinical notes and pathology reports of colon and brain cancer patients. For our experiments, we used only the colon cancer portion. The data is split into training, validation, and test subsets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4" }, { "text": "We simulated active learning on the training subset of each corpus, dividing it into an initial seed set and a set of sentences to select from. The initial seed sets for sentence-based experiments were 200 randomly chosen sentences. For the wholedocument baseline, the seed set is comprised either of documents from multiple genres, totalling 200 sentences, in the case of Ontonotes; or a single patient (consisting of two clinical notes and one pathology report, totalling 195 sentences) in the case of the THYME corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4" }, { "text": "In both cases, we utilised validation data to determine early stopping. Due to the excessive computational time required to predict the standard validation sets for these corpora for every epoch for every iteration, as well as the fact that a realworld scenario would be unlikely to have such a disproportionally large validation set to perform active learning, we selected a subset of the validation data for use. In the experiments involving selecting individual sentences, we used the same randomly chosen 250 sentences. In the case of the baselines of choosing random documents, we used validation datasets approximating 250 sentences, comprised of whole documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4" }, { "text": "Evaluation was performed on the standard test subset for each respective corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4" }, { "text": "We used AllenNLP's (Gardner et al., 2018) implementation of a state-of-the-art BERT-based model (Shi and Lin, 2019) . 
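For reference, a minimal sketch of obtaining per-predicate SRL predictions from an AllenNLP predictor is shown below; the model archive path is a placeholder, and the snippet illustrates only the standard prediction interface (assuming AllenNLP's usual SRL output format, with one tagged sequence per identified predicate), not our dropout-sampling or retraining setup:

```python
from allennlp.predictors.predictor import Predictor

# Placeholder path or URL to a pretrained BERT-based SRL model archive.
predictor = Predictor.from_path("/path/to/structured-prediction-srl-bert.tar.gz")

output = predictor.predict(
    sentence="The governor could n't make it, so the lieutenant governor came instead."
)
# One entry per identified predicate, each with BIO tags over the sentence tokens.
for verb in output["verbs"]:
    print(verb["verb"], verb["tags"])
```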
Our training procedure for this model used up to 25 epochs, stopping early with a patience of 5. Trained under the same experimental configuration on the full training subsets, this model achieves an F-score of 83.82 and 83.48 on the Ontonotes and THYME datasets respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "5" }, { "text": "After training on the initial seed dataset, each iteration of active learning selected a batch of 100 sentences and re-trained the model from scratch. In the case of the whole-document baseline, for the creation of each batch, we selected random documents until the number of sentences selected met or exceeded 100.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "5" }, { "text": "6 Selection Methods", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "5" }, { "text": "We used the classic approach of selecting query sentences based on the probability distribution over labels from the model's output. For each predicate in a sentence, we summed the highest probability for each token and then normalised by sentence length. This results in a single confidence score for the label sequence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Output", "sec_num": "6.1" }, { "text": "The output probabilities of neural networks are a poor estimate of confidence, due to their nonlinearity and tendency to overfit and be overconfident in their predictions (Gal and Ghahramani, 2016) (Dong et al., 2018) .", "cite_spans": [ { "start": 163, "end": 189, "text": "(Gal and Ghahramani, 2016)", "ref_id": "BIBREF5" }, { "start": 190, "end": 209, "text": "(Dong et al., 2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "DO-BALD", "sec_num": "6.2" }, { "text": "Using Monte Carlo dropout as a Bayesian approximation of uncertainty, as proposed by Gal and Ghahramani (2016) , we applied a dropout rate of 10% during the prediction stage. We employ the Bayesian Active Learning by Disagreement approach by predicting each candidate sentence multiple times and selecting sentences based on how often those predictions agree with each other.", "cite_spans": [ { "start": 85, "end": 110, "text": "Gal and Ghahramani (2016)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "DO-BALD", "sec_num": "6.2" }, { "text": "The number of predictions used correspondingly increases the time required to select data upon each iteration. Gal and Ghahramani (2016) used between 10 and 1000 forward passes in their experiments, and Siddhant and Lipton (2018) used 100 per sentence when applying DO-BALD to SRL. An ideal solution would minimise the number of passes for efficiency while losing as little as possible of the benefit gained by sampling the distribution. In our experiments, we chose to perform 5 predictions per predicate. Due to sentences containing multiple predicates, this typically results in 10-15 predictions per sentence. From these predictions, agreement was calculated based on entire argument spans. For each predicate in the sentence, we considered the percentage of predictions for each argument type that agreed with the most frequent span choice for that type. 
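The two per-predicate scores described above can be sketched as follows (a schematic illustration assuming each dropout pass is reduced to a mapping from argument type to its predicted span; this is not our exact implementation):

```python
from collections import Counter

def output_confidence(token_max_probs):
    # Section 6.1: sum of each token's highest label probability for one
    # predicate's label sequence, normalised by sentence length.
    return sum(token_max_probs) / len(token_max_probs)

def predicate_disagreement(dropout_passes):
    # Section 6.2: dropout_passes is a list (here, 5) of dicts mapping an
    # argument type such as 'ARG0' to the span predicted by one forward pass.
    arg_types = {arg for p in dropout_passes for arg in p}
    rates = []
    for arg in arg_types:
        spans = [p.get(arg) for p in dropout_passes]
        _, mode_count = Counter(spans).most_common(1)[0]
        # Fraction of passes that disagree with the most frequent span choice.
        rates.append(1.0 - mode_count / len(spans))
    # Averaging over argument types summarises consensus for this predicate.
    return sum(rates) / len(rates) if rates else 0.0
```

Allowing partial agreement at the argument level is what distinguishes this score from the all-or-nothing whole-sequence criterion described in Section 3.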
Referring to the example in Table 2 , the most frequently chosen span for ARG0 was \"John Smith\", although two of the predictions chose only the partial match of \"John\". In this case, since two out of the five disagree with the most common prediction, the argument ARG0 has a disagreement rate of 0.4. The rate of disagreement was calculated for each argument type present in the set of predictions and then averaged to summarise the consensus for the entire predicate-argument structure.", "cite_spans": [ { "start": 111, "end": 136, "text": "Gal and Ghahramani (2016)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 874, "end": 881, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "DO-BALD", "sec_num": "6.2" }, { "text": "By examining the forward-pass predictions predicate-by-predicate and argument-by-argument to determine agreement, our approach is more granular than Siddhant and Lipton (2018)'s method of determining disagreement from the mode of the entirety of the sentence's labels. Our strategy allows for partial credit when the predictions are in agreement about particular arguments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DO-BALD", "sec_num": "6.2" }, { "text": "Since sentences often contain multiple predicates, we must aggregate the scores into a single measure in order to rank sentences by their potential informativeness. We propose two such ways of combining the predicate scores, which we applied to both the Output and DO-BALD methods of calculating certainty of a single predicate-argument structure:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining Predicate Scores", "sec_num": "6.3" }, { "text": "\u2022 Average of Predicates (AP): The score for all predicate-argument structures in a sentence is averaged. This provides a balance between the predicates in the sentence, but high confidence for one predicate may diminish the value of a more uncertain predicate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining Predicate Scores", "sec_num": "6.3" }, { "text": "\u2022 Lowest Scoring Predicate (LSP): The score for a sentence is the lowest score of all the predicate-argument structures present in the sentence. This strategy prioritises sentences that contain a predicate that is most likely to have a high impact on learning, although this may allow selecting for sentences that require annotating additional predicates that have already been learned well by the model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining Predicate Scores", "sec_num": "6.3" }, { "text": "In the case of DO-BALD, a sentence with two predicates will have ten total forward-passes, five for each predicate. In the following example, a sentence contains one predicate that's very common and may likely already occur in the dataset, come.01 (motion), and a second predicate that's less common, make it.14 (achieve or arrive at). A plausible scenario is that the predictions of the arguments for the rarer predicate \"make it\" will be in higher disagreement compared to the predictions of the arguments for \"came\". 
In this case, the LSP method will be more likely to select the sentence than AP, since it will rank this sentence's likely informativeness based only on the disagreement rate of \"make it\", whereas AP will average the two disagreement rates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining Predicate Scores", "sec_num": "6.3" }, { "text": "We include three passive baseline measurements:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "6.4" }, { "text": "\u2022 Random Sentences (RandSent): Choose random batches of sentences on each iteration of active learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "6.4" }, { "text": "\u2022 Random Documents (RandDoc): Choose random batches of entire documents, until the chosen sentence batch size is reached.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "6.4" }, { "text": "\u2022 Most Predicates (MostPred): Choose batches of sentences, selecting those with the highest number of predicates present. Identification of predicates was done automatically using AllenNLP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "6.4" }, { "text": "Our results are reported as learning curves across the number of sentences (Figures 1, 3) and the number of predicates ( Figures 2, 4 ) present in the training pool after each iteration. Selected F-scores for the methods are reported according to number of sentences (Table 3 ) and approximate number of predicates (Table 4) in the training pool at various points.", "cite_spans": [], "ref_spans": [ { "start": 72, "end": 86, "text": "(Figures 1, 3)", "ref_id": "FIGREF1" }, { "start": 89, "end": 101, "text": "Figures 2, 4", "ref_id": "FIGREF2" }, { "start": 235, "end": 244, "text": "(Table 3", "ref_id": "TABREF5" }, { "start": 284, "end": 293, "text": "(Table 4)", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Results", "sec_num": "7" }, { "text": "We can estimate the annotation savings gained by the tested methods by examining the statistics required for each curve to reach a particular F-score. For this purpose, we will choose 78% as a benchmark for a viable SRL model that can produce sufficiently accurate results to feed into downstream NLP applications. The passive selection of random sentences attains this score after 3,000 sentences. The DO-BALD LSP and MostPred methods achieve this score after 1,400 and 1,200 sentences respectively, providing a 53%-60% reduction in data. Using the model's output with LSP provided a smaller, but still significant, reduction of 10%. When selecting whole documents, this performance was not achieved until 4,126 sentences were in the training pool. Both of the AP methods, which averaged the predicates in the sentences, performed significantly worse than the baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ontonotes", "sec_num": "7.1" }, { "text": "On the other hand, the reduction in predicate annotation offered by active learning was more modest. The passive strategies of selecting random sentences and documents required 9,333 and 11,598 predicates, respectively. DO-BALD LSP required 7,673 predicates (18% fewer). The MostPred strategy, which offered the best performance on reducing sentences, didn't achieve this until 11,460 predicates, almost comparable to random whole-document selection. 
Output LSP provided a negligible reduction, with 9,073 predicates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ontonotes", "sec_num": "7.1" }, { "text": "The two selection methods that averaged the predicates performed worse than the baselines in terms of sentences. One reason for this may be that frequent, but easily learned, predicates such as copulas inflate the average confidence of the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ontonotes", "sec_num": "7.1" }, { "text": "In terms of assessing the impact of whole-document selection, which is necessary for other NLP tasks such as coreference, compared to sampling individual sentences, the difference between the sentences (4,126 vs 3,000, respectively) and predicates (11,598 vs. 9,333) required to reach our benchmark was significant. Sampling individual sentences reduces sentence annotation by 27% and predicate annotation by 20% to reach our benchmark.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ontonotes", "sec_num": "7.1" }, { "text": "Due to the weak performance of the AP aggregation method on the Ontonotes dataset, we did not perform those experiments on the THYME dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THYME", "sec_num": "7.2" }, { "text": "As with our evaluation on the Ontonotes dataset, we can consider the annotation requirements to reach an F-score of 78.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THYME", "sec_num": "7.2" }, { "text": "The baseline sentence selection method obtains this benchmark after 1,600 sentences. Consistent with the results on the Ontonotes dataset, the DO-BALD LSP and MostPred methods are the most efficient ways of selecting sentences, with both requiring 60% fewer sentences to train a model with a test F-score of 78. The Output LSP method requires 18% fewer sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THYME", "sec_num": "7.2" }, { "text": "With respect to predicates, once again we see the baseline RandSent performance (4,355 predicates) significantly improved upon by DO-BALD LSP (20% fewer) and Output LSP (16% fewer, at 3,666 predicates), while MostPred is a detriment (30% more annotation, at 5,651 predicates).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THYME", "sec_num": "7.2" }, { "text": "Between the two proposed methods of aggregating predicate-argument structure scores into a single value to represent a sentence, averaging across them (AP) or only considering the weakest predicate (LSP), our results show the latter to be substantially better.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "Both selecting sentences with the most predicates and selecting sentences containing the predicate with the lowest DO-BALD agreement offer a significant 53%-60% decrease in the number of sentences required to train the model to a viable performance level. These findings are consistent for both the broad, general Ontonotes corpus and the niche colon cancer clinical note domain of the THYME corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "We assessed the performance of these selection strategies in terms of reducing both the number of sentences and the number of predicates annotated. 
Typically, the SRL annotation process of a large annotation project benefits most from a reduction of predicates, due to presenting annotators with batches of a specific predicate to annotate, thereby reducing the cognitive load of switching between different predicate frames. But in the case of projects attempting to develop new corpora with significant budget constraints that would most benefit from an active learning approach, the piecemeal nature of each annotation iteration makes this approach less viable and likely necessitates presenting annotators with the data sentence-by-sentence. In this case, reducing the number of sentences will have a more substantial impact than reducing the number of predicates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "While both DO-BALD LSP and the simpler strategy of selecting sentences with high predicate density provide significant reduction in sentence annotation, only DO-BALD LSP simultaneously reduced predicate annotation as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "Smaller batch sizes per iteration allow more efficient selection of data since the model is updated more frequently and we can reduce redundant information content within the batch that would waste annotation time. Using very small batches is not tractable in tasks that require long model training times. Koshorek et al. (2019) tested selection strategies on randomly sampled batches of data, rather than determining priority of individual instances, but that waters down the benefits of using the selection heuristic. In the future, we plan to investigate ways to balance syntactico-semantic redundancy with the model-based selection techniques in order to improve the learning rate for SRL, while reducing training time for each iteration.", "cite_spans": [ { "start": 306, "end": 328, "text": "Koshorek et al. (2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "9" }, { "text": "We chose to use a random 200 sentences as our seed set, but the ideal amount and method of selection for active learning for SRL remains an open question. If too few sentences are chosen, or they're not sufficiently diverse, we may encounter the missed class effect (Tomanek et al., 2009) , where the model becomes overconfident about instances that greatly differ from what's present in its current training pool, and fails to select them for annotation. On the other hand, selecting too large of a seed set negates the benefits of active learning. In future work we plan to explore unsupervised methods of selecting a semantically diverse seed set. 
Prior work (Dligach and Palmer, 2011) (Peterson et al., 2014) shows that language models may offer an unsupervised way of selecting rare verb instances and thus beneficial SRL instances.", "cite_spans": [ { "start": 266, "end": 288, "text": "(Tomanek et al., 2009)", "ref_id": "BIBREF19" }, { "start": 662, "end": 688, "text": "(Dligach and Palmer, 2011)", "ref_id": "BIBREF2" }, { "start": 689, "end": 712, "text": "(Peterson et al., 2014)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "9" } ], "back_matter": [ { "text": "We gratefully acknowledge the support of DARPA AIDA FA8750-18-2-0016 (RAMFIS), NIH: 5R01LM010090-09 THYME, Temporal Relation Discovery for Clinical Text, and NSF ACI 1443085: DIBBS Porting Practical NLP and ML Semantics from Biomedicine to the Earth, Ice and Life Sciences. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of any government agency. Finally, we thank the anonymous IWCS reviewers for their insightful comments and suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Towards comprehensive syntactic and semantic annotations of the clinical narrative", "authors": [ { "first": "Daniel", "middle": [], "last": "Albright", "suffix": "" }, { "first": "Arrick", "middle": [], "last": "Lanfranchi", "suffix": "" }, { "first": "Anwen", "middle": [], "last": "Fredriksen", "suffix": "" }, { "first": "", "middle": [], "last": "Styler", "suffix": "" }, { "first": "F", "middle": [], "last": "William", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Warner", "suffix": "" }, { "first": "Jena", "middle": [ "D" ], "last": "Hwang", "suffix": "" }, { "first": "D", "middle": [], "last": "Jinho", "suffix": "" }, { "first": "Dmitriy", "middle": [], "last": "Choi", "suffix": "" }, { "first": "", "middle": [], "last": "Dligach", "suffix": "" }, { "first": "D", "middle": [], "last": "Rodney", "suffix": "" }, { "first": "James", "middle": [], "last": "Nielsen", "suffix": "" }, { "first": "Wayne", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Guergana K", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "", "middle": [], "last": "Savova", "suffix": "" } ], "year": 2013, "venue": "Journal of the American Medical Informatics Association", "volume": "20", "issue": "5", "pages": "922--930", "other_ids": { "DOI": [ "10.1136/amiajnl-2012-001317" ] }, "num": null, "urls": [], "raw_text": "Daniel Albright, Arrick Lanfranchi, Anwen Fredriksen, IV Styler, William F, Colin Warner, Jena D Hwang, Jinho D Choi, Dmitriy Dligach, Rodney D Nielsen, James Martin, Wayne Ward, Martha Palmer, and Guergana K Savova. 2013. Towards comprehensive syntactic and semantic annotations of the clinical narrative. 
Journal of the American Medical Infor- matics Association, 20(5):922-930.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Multilingual Propbank annotation tools: Cornerstone and jubilee", "authors": [ { "first": "Jinho", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Bonial", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the NAACL HLT 2010 Demonstration Session", "volume": "", "issue": "", "pages": "13--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jinho Choi, Claire Bonial, and Martha Palmer. 2010. Multilingual Propbank annotation tools: Corner- stone and jubilee. In Proceedings of the NAACL HLT 2010 Demonstration Session, pages 13-16, Los Angeles, California. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Good seed makes a good crop: Accelerating active learning using language modeling", "authors": [ { "first": "Dmitriy", "middle": [], "last": "Dligach", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "6--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dmitriy Dligach and Martha Palmer. 2011. Good seed makes a good crop: Accelerating active learning using language modeling. In Proceedings of the 49th Annual Meeting of the Association for Com- putational Linguistics: Human Language Technolo- gies, pages 6-10, Portland, Oregon, USA. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Confidence modeling for neural semantic parsing", "authors": [ { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "743--753", "other_ids": { "DOI": [ "10.18653/v1/P18-1069" ] }, "num": null, "urls": [], "raw_text": "Li Dong, Chris Quirk, and Mirella Lapata. 2018. Confi- dence modeling for neural semantic parsing. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 743-753, Melbourne, Australia. As- sociation for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The ClearEarth Project: Preliminary Findings from Experiments in Applying the CLEARTK NLP Pipeline and Annotation Tools Developed for Biomedicine to the Earth Sciences", "authors": [ { "first": "R", "middle": [], "last": "Duerr", "suffix": "" }, { "first": "A", "middle": [], "last": "Thessen", "suffix": "" }, { "first": "C", "middle": [ "J" ], "last": "Jenkins", "suffix": "" }, { "first": "M", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "S", "middle": [], "last": "Myers", "suffix": "" }, { "first": "S", "middle": [], "last": "Ramdeen", "suffix": "" } ], "year": 2016, "venue": "AGU Fall Meeting Abstracts", "volume": "2016", "issue": "", "pages": "11--1625", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Duerr, A. Thessen, C. J. Jenkins, M. Palmer, S. My- ers, and S. Ramdeen. 2016. 
The ClearEarth Project: Preliminary Findings from Experiments in Applying the CLEARTK NLP Pipeline and Annotation Tools Developed for Biomedicine to the Earth Sciences. In AGU Fall Meeting Abstracts, volume 2016, pages IN11B-1625.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "authors": [ { "first": "Yarin", "middle": [], "last": "Gal", "suffix": "" }, { "first": "Zoubin", "middle": [], "last": "Ghahramani", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 33rd International Conference on International Conference on Machine Learning", "volume": "48", "issue": "", "pages": "1050--1059", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a bayesian approximation: Representing model un- certainty in deep learning. In Proceedings of the 33rd International Conference on International Con- ference on Machine Learning -Volume 48, ICML'16, page 1050-1059. JMLR.org.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "AllenNLP: A deep semantic natural language processing platform", "authors": [ { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Grus", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Oyvind", "middle": [], "last": "Tafjord", "suffix": "" }, { "first": "Pradeep", "middle": [], "last": "Dasigi", "suffix": "" }, { "first": "Nelson", "middle": [ "F" ], "last": "Liu", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Schmitz", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of Workshop for NLP Open Source Software (NLP-OSS)", "volume": "", "issue": "", "pages": "1--6", "other_ids": { "DOI": [ "10.18653/v1/W18-2501" ] }, "num": null, "urls": [], "raw_text": "Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Pe- ters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language pro- cessing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1- 6, Melbourne, Australia. Association for Computa- tional Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Deep semantic role labeling: What works and what's next", "authors": [ { "first": "Luheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "473--483", "other_ids": { "DOI": [ "10.18653/v1/P17-1044" ] }, "num": null, "urls": [], "raw_text": "Luheng He, Kenton Lee, Mike Lewis, and Luke Zettle- moyer. 2017. Deep semantic role labeling: What works and what's next. In Proceedings of the 55th Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 473-483, Vancouver, Canada. 
Association for Com- putational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Bayesian active learning for classification and preference learning", "authors": [ { "first": "Neil", "middle": [], "last": "Houlsby", "suffix": "" }, { "first": "Ferenc", "middle": [], "last": "Husz\u00e1r", "suffix": "" }, { "first": "Zoubin", "middle": [], "last": "Ghahramani", "suffix": "" }, { "first": "M\u00e1t\u00e9", "middle": [], "last": "Lengyel", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1112.5745" ] }, "num": null, "urls": [], "raw_text": "Neil Houlsby, Ferenc Husz\u00e1r, Zoubin Ghahramani, and M\u00e1t\u00e9 Lengyel. 2011. Bayesian active learn- ing for classification and preference learning. arXiv preprint arXiv:1112.5745.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Multitask active learning for neural semantic role labeling on low resource conversational corpus", "authors": [ { "first": "Fariz", "middle": [], "last": "Ikhwantri", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Louvan", "suffix": "" }, { "first": "Kemal", "middle": [], "last": "Kurniawan", "suffix": "" }, { "first": "Bagas", "middle": [], "last": "Abisena", "suffix": "" }, { "first": "Valdi", "middle": [], "last": "Rachman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Workshop on Deep Learning Approaches for Low-Resource NLP", "volume": "", "issue": "", "pages": "43--50", "other_ids": { "DOI": [ "10.18653/v1/W18-3406" ] }, "num": null, "urls": [], "raw_text": "Fariz Ikhwantri, Samuel Louvan, Kemal Kurniawan, Bagas Abisena, Valdi Rachman, Alfan Farizki Wicaksono, and Rahmad Mahendra. 2018. Multi- task active learning for neural semantic role labeling on low resource conversational corpus. In Proceed- ings of the Workshop on Deep Learning Approaches for Low-Resource NLP, pages 43-50, Melbourne. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "On the limits of learning to actively learn semantic representations", "authors": [ { "first": "Omri", "middle": [], "last": "Koshorek", "suffix": "" }, { "first": "Gabriel", "middle": [], "last": "Stanovsky", "suffix": "" }, { "first": "Yichu", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Vivek", "middle": [], "last": "Srikumar", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", "volume": "", "issue": "", "pages": "452--462", "other_ids": { "DOI": [ "10.18653/v1/K19-1042" ] }, "num": null, "urls": [], "raw_text": "Omri Koshorek, Gabriel Stanovsky, Yichu Zhou, Vivek Srikumar, and Jonathan Berant. 2019. On the limits of learning to actively learn semantic rep- resentations. In Proceedings of the 23rd Confer- ence on Computational Natural Language Learning (CoNLL), pages 452-462, Hong Kong, China. 
Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Active learning for crossdomain sentiment classification", "authors": [ { "first": "Shoushan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yunxia", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Zhongqing", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, IJCAI '13", "volume": "", "issue": "", "pages": "2127--2133", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shoushan Li, Yunxia Xue, Zhongqing Wang, and Guodong Zhou. 2013. Active learning for cross- domain sentiment classification. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, IJCAI '13, page 2127-2133. AAAI Press.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "GUIR at SemEval-2017 task 12: A framework for cross-domain clinical temporal information extraction", "authors": [ { "first": "Sean", "middle": [], "last": "Macavaney", "suffix": "" }, { "first": "Arman", "middle": [], "last": "Cohan", "suffix": "" }, { "first": "Nazli", "middle": [], "last": "Goharian", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)", "volume": "", "issue": "", "pages": "1024--1029", "other_ids": { "DOI": [ "10.18653/v1/S17-2180" ] }, "num": null, "urls": [], "raw_text": "Sean MacAvaney, Arman Cohan, and Nazli Goharian. 2017. GUIR at SemEval-2017 task 12: A frame- work for cross-domain clinical temporal information extraction. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1024-1029, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The new Propbank: Aligning Propbank with AMR through POS unification", "authors": [ { "first": "Sameer", "middle": [], "last": "Tim O'gorman", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Katie", "middle": [], "last": "Bonn", "suffix": "" }, { "first": "James", "middle": [], "last": "Conger", "suffix": "" }, { "first": "", "middle": [], "last": "Gung", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tim O'Gorman, Sameer Pradhan, Martha Palmer, Ju- lia Bonn, Katie Conger, and James Gung. 2018. The new Propbank: Aligning Propbank with AMR through POS unification. In Proceedings of the Eleventh International Conference on Language Re- sources and Evaluation (LREC 2018), Miyazaki, Japan. 
European Language Resources Association (ELRA).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The Proposition Bank: An annotated corpus of semantic roles", "authors": [ { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Kingsbury", "suffix": "" } ], "year": 2005, "venue": "Computational Linguistics", "volume": "31", "issue": "1", "pages": "71--106", "other_ids": { "DOI": [ "10.1162/0891201053630264" ] }, "num": null, "urls": [], "raw_text": "Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An annotated cor- pus of semantic roles. Computational Linguistics, 31(1):71-106.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Focusing annotation for semantic role labeling", "authors": [ { "first": "Daniel", "middle": [], "last": "Peterson", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Shumin", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Peterson, Martha Palmer, and Shumin Wu. 2014. Focusing annotation for semantic role labeling. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), Reykjavik, Iceland. European Language Resources Association (ELRA).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Deep active learning for named entity recognition", "authors": [ { "first": "Yanyao", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Hyokun", "middle": [], "last": "Yun", "suffix": "" }, { "first": "Zachary", "middle": [], "last": "Lipton", "suffix": "" }, { "first": "Yakov", "middle": [], "last": "Kronrod", "suffix": "" }, { "first": "Animashree", "middle": [], "last": "Anandkumar", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP", "volume": "", "issue": "", "pages": "252--256", "other_ids": { "DOI": [ "10.18653/v1/W17-2630" ] }, "num": null, "urls": [], "raw_text": "Yanyao Shen, Hyokun Yun, Zachary Lipton, Yakov Kronrod, and Animashree Anandkumar. 2017. Deep active learning for named entity recognition. In Proceedings of the 2nd Workshop on Representa- tion Learning for NLP, pages 252-256, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Simple BERT models for relation extraction and semantic role labeling", "authors": [ { "first": "Peng", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1904.05255" ] }, "num": null, "urls": [], "raw_text": "Peng Shi and Jimmy Lin. 2019. Simple BERT mod- els for relation extraction and semantic role labeling. 
arXiv preprint arXiv:1904.05255.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Deep Bayesian active learning for natural language processing: Results of a large-scale empirical study", "authors": [ { "first": "Aditya", "middle": [], "last": "Siddhant", "suffix": "" }, { "first": "Zachary", "middle": [ "C" ], "last": "Lipton", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2904--2909", "other_ids": { "DOI": [ "10.18653/v1/D18-1318" ] }, "num": null, "urls": [], "raw_text": "Aditya Siddhant and Zachary C. Lipton. 2018. Deep Bayesian active learning for natural language pro- cessing: Results of a large-scale empirical study. In Proceedings of the 2018 Conference on Em- pirical Methods in Natural Language Processing, pages 2904-2909, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "On proper unit selection in active learning: Co-selection effects for named entity recognition", "authors": [ { "first": "Katrin", "middle": [], "last": "Tomanek", "suffix": "" }, { "first": "Florian", "middle": [], "last": "Laws", "suffix": "" }, { "first": "Udo", "middle": [], "last": "Hahn", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Language Processing", "volume": "", "issue": "", "pages": "9--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katrin Tomanek, Florian Laws, Udo Hahn, and Hin- rich Sch\u00fctze. 2009. On proper unit selection in active learning: Co-selection effects for named en- tity recognition. In Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Lan- guage Processing, pages 9-17, Boulder, Colorado. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Active learning for black-box semantic role labeling with neural factors", "authors": [ { "first": "Chenguang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Chiticariu", "suffix": "" }, { "first": "Yunyao", "middle": [], "last": "Li", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17", "volume": "", "issue": "", "pages": "2908--2914", "other_ids": { "DOI": [ "10.24963/ijcai.2017/405" ] }, "num": null, "urls": [], "raw_text": "Chenguang Wang, Laura Chiticariu, and Yunyao Li. 2017. Active learning for black-box semantic role labeling with neural factors. 
In Proceedings of the Twenty-Sixth International Joint Conference on Arti- ficial Intelligence, IJCAI-17, pages 2908-2914.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "The value of semantic parse labeling for knowledge base question answering", "authors": [ { "first": "Matthew", "middle": [], "last": "Wen-Tau Yih", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Richardson", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Meek", "suffix": "" }, { "first": "Jina", "middle": [], "last": "Chang", "suffix": "" }, { "first": "", "middle": [], "last": "Suh", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "201--206", "other_ids": { "DOI": [ "10.18653/v1/P16-2033" ] }, "num": null, "urls": [], "raw_text": "Wen-tau Yih, Matthew Richardson, Chris Meek, Ming- Wei Chang, and Jina Suh. 2016. The value of se- mantic parse labeling for knowledge base question answering. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (Volume 2: Short Papers), pages 201-206, Berlin, Germany. Association for Computational Linguis- tics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Active learning for word sense disambiguation with methods for addressing the class imbalance problem", "authors": [ { "first": "Jingbo", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", "volume": "", "issue": "", "pages": "783--790", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingbo Zhu and Eduard Hovy. 2007. Active learn- ing for word sense disambiguation with methods for addressing the class imbalance problem. In Pro- ceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Com- putational Natural Language Learning (EMNLP- CoNLL), pages 783-790, Prague, Czech Republic. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "ARG0 The governor] [ ARGM-OutputD could] [ ARGM-NEG n't] [ Pred make it], so the lieutenant governor came instead. The governor could n't make it, so [ ARG1 the lieutenant governor] [ Pred came] instead." }, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "Learning curve of F-score by number of sentences in Ontonotes training data." }, "FIGREF2": { "uris": null, "num": null, "type_str": "figure", "text": "Learning curve of F-score by number of predicates in Ontonotes training data." }, "FIGREF3": { "uris": null, "num": null, "type_str": "figure", "text": "Learning curve of F-score by number of sentences in THYME training data." }, "FIGREF4": { "uris": null, "num": null, "type_str": "figure", "text": "Learning curve of F-score by number of predicates in THYME training data." }, "TABREF0": { "type_str": "table", "num": null, "text": "PropBank roleset for give.01.", "html": null, "content": "" }, "TABREF1": { "type_str": "table", "num": null, "text": "", "html": null, "content": "
[ARG0 She] had [Pred given] [ARG1 the answers] [ARG2 to two low-ability geography classes].
" }, "TABREF2": { "type_str": "table", "num": null, "text": "ARG0 John Smith] [ Pred bought] [ ARG1 apples]. Prediction 2 [ ARG0 John] Smith [ Pred bought] [ ARG1 apples]. Prediction 3 [ ARG0 John Smith] [ Pred bought] [ ARG1 apples]. Prediction 4 [ ARG0 John Smith] [ Pred bought] [ ARG1 apples]. Prediction 5 [ ARG0 John] Smith [ Pred bought] [ ARG1 apples].", "html": null, "content": "" }, "TABREF3": { "type_str": "table", "num": null, "text": "An example of varying argument predictions for a predicate, bought, by multiple forward-passes with dropout.", "html": null, "content": "
" }, "TABREF4": { "type_str": "table", "num": null, "text": "64.32 71.00 72.02 74.95 RandDoc 61.26 64.27 70.20 72.31 73.59 MostPred 59.39 74.60 76.13 77.55 77.52 DO-BALD LSP 60.25 73.48 74.80 76.23 78.13 DO-BALD AP 62.26 63.92 66.28 69.83 67.29", "html": null, "content": "
# sentences     300    600    900    1200   1500
Ontonotes
RandSent        55.48  64.32  71.00  72.02  74.95
RandDoc         61.26  64.27  70.20  72.31  73.59
MostPred        59.39  74.60  76.13  77.55  77.52
DO-BALD LSP     60.25  73.48  74.80  76.23  78.13
DO-BALD AP      62.26  63.92  66.28  69.83  67.29
Output LSP      61.91  70.29  71.08  73.27  74.87
Output AP       62.12  58.52  64.52  62.28  68.39
THYME
RandSent        64.53  72.07  74.23  75.67  76.88
RandDoc         49.32  64.23  67.11  73.62  75.21
MostPred        66.66  74.61  76.37  77.49  78.66
DO-BALD LSP     58.01  74.66  75.81  76.91  79.03
Output LSP      64.80  72.87  76.24  77.03  78.69
" }, "TABREF5": { "type_str": "table", "num": null, "text": "", "html": null, "content": "
F-score for number of sentences for each query selection method: random sentences, random documents, most predicates, DO-BALD (Lowest Scoring Predicate and Average of Predicates), and model output (Lowest Scoring Predicate and Average of Predicates). Sentence count is approximate for whole-document selection.
" }, "TABREF7": { "type_str": "table", "num": null, "text": "", "html": null, "content": "
F-score for approximate number of predicates for each query selection method: random sentences, random documents, most predicates, DO-BALD (Lowest Scoring Predicate and Average of Predicates), and model output (Lowest Scoring Predicate and Average of Predicates). MostPred selects batches too large to always fall within range of these predicate counts.
" } } } }