{ "paper_id": "I17-1036", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:39:24.857494Z" }, "title": "Exploiting Document Level Information to Improve Event Detection via Recurrent Neural Networks", "authors": [ { "first": "Shaoyang", "middle": [], "last": "Duan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tianjin University", "location": { "settlement": "Tianjin", "country": "China" } }, "email": "syduan@tju.edu.cn" }, { "first": "Ruifang", "middle": [], "last": "He", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tianjin University", "location": { "settlement": "Tianjin", "country": "China" } }, "email": "rfhe@tju.edu.cn" }, { "first": "Wenli", "middle": [], "last": "Zhao", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tianjin University", "location": { "settlement": "Tianjin", "country": "China" } }, "email": "wlzhao@gmail.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper tackles the task of event detection, which involves identifying and categorizing events. The previous work mainly exists two problems: (1) the traditional feature-based methods apply crosssentence information, yet need taking a large amount of human effort to design complicated feature sets and inference rules; (2) the representation-based methods though overcome the problem of manually extracting features, while just depend on local sentence representation. Considering local sentence context is insufficient to resolve ambiguities in identifying particular event types, therefore, we propose a novel document level Recurrent Neural Networks (DLRNN) model, which can automatically extract cross-sentence clues to improve sentence level event detection without designing complex reasoning rules. Experiment results show that our approach outperforms other state-ofthe-art methods on ACE 2005 dataset neither the external knowledge base nor the event arguments are used explicitly.", "pdf_parse": { "paper_id": "I17-1036", "_pdf_hash": "", "abstract": [ { "text": "This paper tackles the task of event detection, which involves identifying and categorizing events. The previous work mainly exists two problems: (1) the traditional feature-based methods apply crosssentence information, yet need taking a large amount of human effort to design complicated feature sets and inference rules; (2) the representation-based methods though overcome the problem of manually extracting features, while just depend on local sentence representation. Considering local sentence context is insufficient to resolve ambiguities in identifying particular event types, therefore, we propose a novel document level Recurrent Neural Networks (DLRNN) model, which can automatically extract cross-sentence clues to improve sentence level event detection without designing complex reasoning rules. Experiment results show that our approach outperforms other state-ofthe-art methods on ACE 2005 dataset neither the external knowledge base nor the event arguments are used explicitly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Event detection is a crucial subtask of event extraction, which aims to extract event triggers (most often a single verb or noun) and classify them into specific types in text. 
For instance, according to the ACE 2005 annotation guideline 1 , in the sentence \"central command says troops were involved in a gun battle yesterday\", an event detection system should be able to detect an Attack event with the trigger word \"battle\". However, this 1 https://www.ldc.upenn.edu/sites/www.ldc.upenn. edu/files/english-events-guidelines-v5.4.3.pdf.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "task is very challenging, as the same event might appear with various trigger words and a trigger expression might evoke different event types in different context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most of the existing methods either employed feature-based models with cross-sentence level information (Ji and Grishman, 2008) (Liao and Grishman, 2010) (Hong et al., 2011) (Huang and Riloff, 2012) or followed representation-based architectures with sentence level context (Chen et al., 2015) (Nguyen and Grishman, 2015) (Liu et al., 2016) . Both models have some inherent flaws:", "cite_spans": [ { "start": 104, "end": 127, "text": "(Ji and Grishman, 2008)", "ref_id": "BIBREF11" }, { "start": 128, "end": 153, "text": "(Liao and Grishman, 2010)", "ref_id": "BIBREF15" }, { "start": 154, "end": 173, "text": "(Hong et al., 2011)", "ref_id": "BIBREF9" }, { "start": 174, "end": 198, "text": "(Huang and Riloff, 2012)", "ref_id": "BIBREF10" }, { "start": 274, "end": 293, "text": "(Chen et al., 2015)", "ref_id": "BIBREF4" }, { "start": 294, "end": 321, "text": "(Nguyen and Grishman, 2015)", "ref_id": "BIBREF22" }, { "start": 322, "end": 340, "text": "(Liu et al., 2016)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) feature-based approaches not only need to elaborately design rich features and often suffer error propagation from the existing natural language processing tools (i.e part of speech tags and dependency), but also the cross-sentence clues are embodied by devising complex inference rules, which is difficult to cover all the semantic laws;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(2) though representation-based models can effectively alleviate the problem of manually extract features, local sentence context information may be insufficient for event detection models or even humans to classify events from isolated sentences. For example, consider the following sentences from ACE2005 dataset: S1: Saba hasn't delivered yet. 2 S2: I knew it was time to leave. 3 It is very difficult to identify S1 as a Be-Born event with the trigger \"delivered\", which means that a person entity is given birth to. Similarly, we have low confidence to tag \"leave\" as a trigger for End-Position event in the S2, which means that a person entity stops working for an organization. However, the wider context that \"She wants to cal-l her pregnant daughter Saba in Sweden to see if she has delivered.\" would give us more confidence to tag \"delivered\" as a Be-Born event in the S1. It is easy to identify the \"leave\" as a trigger for End-Position event in the S2, if we know the previous information that \" this is when you were in the Senate -less and less information was new, fewer and fewer arguments were fresh, and the repetitiveness of the old arguments became tiresome. 
I was becoming almost as cynical as my constituents\".", "cite_spans": [ { "start": 382, "end": 383, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In fact, each document often has a main content in ACE 2005 English corpus. For example, if the content of a document is about terrorist attack, the document is more likely to contain Injure events, Die events, Attack events, and is unlikely to describe Be-Born events. In other words, there is a strong association between events appearing in a document. In addition, event types contained in documents with the related topics are also consistent. Therefore how to use intra and inter document information becomes particularly important. Although there have been already some work to capture the clues beyond sentence to improve sentence level event detection (Ji and Grishman, 2008) (Liao and Grishman, 2010) (Hong et al., 2011) , they still exist the following disadvantages: (1) inherent defects in feature-based models;", "cite_spans": [ { "start": 661, "end": 684, "text": "(Ji and Grishman, 2008)", "ref_id": "BIBREF11" }, { "start": 685, "end": 710, "text": "(Liao and Grishman, 2010)", "ref_id": "BIBREF15" }, { "start": 711, "end": 730, "text": "(Hong et al., 2011)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(2) document level information was used by a large number of inference rules, this is not only complicated and time-consuming, but also difficult to cover all of the semantic laws.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a document level recurrent neural networks (DLRNN) model for event detection to solve the above problems. Firstly, to capture lexical-level clues and minimize the dependence on supervised tools and resources for features, we introduce a distributed word representation model (Mikolov et al., 2013a) , which has been proved very effective for event detection (Chen et al., 2015 )(Nguyen and Grishman, 2015 . Secondly, we employ bidirectional recurrent networks to encode sentence level clues, which can effectively reserve the history clues and the following information of the current word. Thirdly, to capture document level and cross-document level clues without complicated inference rules. We introduce a document representation, which uses a distributed vector to represent a document and has been showed to be able to get better performance on text classification and sentiment analysis tasks (Le and Mikolov, 2014) . Finally, we use BILOU labeling method to solve the problem that a trigger contains multiple words.", "cite_spans": [ { "start": 301, "end": 324, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF19" }, { "start": 384, "end": 402, "text": "(Chen et al., 2015", "ref_id": "BIBREF4" }, { "start": 403, "end": 430, "text": ")(Nguyen and Grishman, 2015", "ref_id": "BIBREF22" }, { "start": 925, "end": 947, "text": "(Le and Mikolov, 2014)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In summary, our main contributions are as follows: (1) we prove the importance of document level information for event detection. (2) to capture document level clues, we devise a document level Recurrent Neural Networks (DLRNN) model for event detection, which can automatically learn features beyond sentence. 
(3) moreover, to solve the problem that a trigger word contains multiple words, we introduce BILOU labeling method. (4) finally, we improve the performance and achieve the best performance on ACE 2005 dataset neither the external knowledge base nor the event arguments are used explicitly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper focuses on addressing event detection task, which is a crucial subtask of event extraction. According to Automatic Context Extraction (ACE) evaluation 4 , which annotates 8 types and 33 subtypes for event mention. An event is defined as a specific occurrence involving one or more participants. Firstly, we introduce some ACE terminologies to facilitate the understanding of event extraction task:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "2" }, { "text": "Entity: an object or a set of objects in one of the semantic categories of interests.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "2" }, { "text": "Entity mention: a reference to an entity (typically, a noun phrase).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "2" }, { "text": "Event trigger: the main word that most clearly expresses an event occurrence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "2" }, { "text": "Event arguments: the mentions that are involved in an event (participants).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "2" }, { "text": "Event mention: a phrase or sentence within which an event is described, including the trigger and arguments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "2" }, { "text": "Given an English document, an event extraction system should identify event triggers and their corresponding arguments with specific subtypes or the roles for each sentence, but an event detection system only needs to identify event trigger and their subtype. For instance, for the sentence \"central command says troops were involved in a gun battle yesterday\", an event extraction system is supposed to detect the word \"battle\" as the event trigger of Attack event and identify the word Figure 1 : An illustraction of our DLRNN model for detecting the trigger word \"battle\" in the input sentence \"central command says troops were involved in a gun battle yesterday\". \"troops\", \"gun\" and \"yesterday\" as event argument whose roles are Attacker, Instrument and Time-Within. However, for an event detection system, identifying the word \"troops\", \"gun\" and \"yesterday\" as event argument whose roles are Attacker, Instrument and Time-Within is not involved. Following previous work, we treat these simply as 33 separate event types and ignore the hierarchical structure among them.", "cite_spans": [], "ref_spans": [ { "start": 488, "end": 496, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Task Description", "sec_num": "2" }, { "text": "In this section, we give the details for the DLRNN model (show in Figure 1 ). First of all, we formalize the event detection task as a multi-classes classification problem following previous work. 
More precisely, for each word in a sentence, our goal is to classify them into one of 34 classes (33 trigger types and None class).", "cite_spans": [], "ref_spans": [ { "start": 66, "end": 74, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "Our DLRNN model primarily includes four parts: (i) word embedding, which contains lexical information for each word and is trained from external corpus in an unsupervised manner; (ii) document vector, which reveals the topic of a document is trained in an unsupervised mechanism; (iii) bidirectional recurrent neural networks encoding, which can learn the historical and future abstractive representation of a candidate trigger; (iv) trigger prediction, which calculates a confidence score for each event subtype candidate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "The representation of the words as continuous vectors (word embedding) are proved more powerful than discrete representation (Bengio et al., 2003) (Mikolov et al., 2013b) . Word embedding not only addresses the problem of dimension disaster, but also makes the word vector contain richer semantic information. The closer the vector space, the closer the semantic. In addition, word embedding can automatically learn lexical-level clues in the process of pre-training. Not only does not require human ingenuity, but also effectively alleviates the error propagation brought by other NLP lexical analysis toolkits. Recent work has demonstrated that using word embedding can enhance the robustness of event detection model (Nguyen and Grishman, 2015) (Chen et al., 2015)(Nguyen and .", "cite_spans": [ { "start": 125, "end": 146, "text": "(Bengio et al., 2003)", "ref_id": "BIBREF1" }, { "start": 147, "end": 170, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF20" }, { "start": 748, "end": 778, "text": "(Chen et al., 2015)(Nguyen and", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Word Embedding", "sec_num": "3.1" }, { "text": "In this paper, we pre-trained word embedding via skip-gram model (Mikolov et al., 2013b) and New York Times corpus 5 . Given a sequence of training words w 1 ,w 2 ,w 3 ,...,w T , the skip-gram model trains the embedding by maximizing the average log probability:", "cite_spans": [ { "start": 65, "end": 88, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Word Embedding", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 T T \u2212k t=k log p(w t\u2212k , ..., w t+k |w t )", "eq_num": "(1)" } ], "section": "Word Embedding", "sec_num": "3.1" }, { "text": "where w t\u2212k ,...,w t+k is the context of w t and the window size is k, usually it is expressed by the concatenation or sum of all word vectors in the context; p(w t\u2212k ,...,w t+k |w t ) is calculated via softmax. 
Here, we have:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Embedding", "sec_num": "3.1" }, { "text": "p(w_{t-k}, ..., w_{t+k} | w_t) = exp(y_{w_{t-k}, ..., w_{t+k}}) / \u2211_i exp(y_i) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Embedding", "sec_num": "3.1" }, { "text": "where each y_i is the un-normalized log-probability for output context i, computed as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Embedding", "sec_num": "3.1" }, { "text": "y = b + U w_t (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Embedding", "sec_num": "3.1" }, { "text": "where U and b are the softmax parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Embedding", "sec_num": "3.1" }, { "text": "To illustrate the importance of the document vector for disambiguation in event detection, we propose three hypotheses from intra- and inter-document context perspectives. H1: The same word often has different meanings in different contexts. For instance, the word \"delivered\" in S1 can mean that someone is given birth to or that something is brought to a destination, depending on the context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Vector", "sec_num": "3.2" }, { "text": "H2: The events in a document are consistent. For example, Die events and Marry events almost never appear in the same document, whereas Die events often co-occur with Attack events and Injure events.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Vector", "sec_num": "3.2" }, { "text": "H3: The event types in documents with related topics are also consistent. For instance, if a document describing a financial crisis contains End-Position events and End-Org events, then another document on the financial crisis topic is also likely to contain End-Position events and End-Org events.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Vector", "sec_num": "3.2" }, { "text": "Based on the above three hypotheses, we introduce a distributed document representation. Like word embeddings, each document is represented by a distributed vector that captures its main content, and the more related two documents are, the closer their vectors. The document vector is shared by all the words in a document and is concatenated with the word embedding, serving as the semantic representation of a word, as shown in Figure 1. Concatenating the document vector to the word embedding has the following advantages: (i) a word is no longer represented by a single fixed vector, but by different vectors in different documents, which helps the event detection model to disambiguate event types; (ii) the consistency of events in a document is encouraged, since all the words in a document share the same document vector, which passes on information about the event subtypes already identified. For example, if the candidate triggers associated with a particular document vector are mostly identified as Attack events, Die events, and Injure events, then the remaining candidate triggers associated with that document vector are less likely to be identified as Marry events or Be-Born events. (iii) documents with related topics tend to contain the same event types. 
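As a purely illustrative sketch of how these two unsupervised representations could be pre-trained, the snippet below uses the gensim library, which is our own assumption rather than a toolkit named in the paper. The Word2Vec call with sg=1 corresponds to the skip-gram objective in Eq. (1)-(3), and the Doc2Vec call with dm=1 corresponds to the PV-DM training described below; the toy corpus, window size, and min_count are placeholders, while the 200- and 100-dimensional settings follow Section 4.1.

```python
# Hypothetical pre-training sketch (gensim >= 4.0 assumed; not the authors' code).
from gensim.models import Word2Vec
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy corpus of one tokenized document; the paper pre-trains word embeddings on
# the New York Times corpus, while the corpus for document vectors is an
# assumption here.
tokenized_docs = [["central", "command", "says", "troops", "were",
                   "involved", "in", "a", "gun", "battle", "yesterday"]]

# Skip-gram (sg=1) word embeddings, 200 dimensions (Section 4.1).
w2v = Word2Vec(sentences=tokenized_docs, vector_size=200, window=5,
               sg=1, min_count=1, workers=4)

# PV-DM (dm=1) document vectors, 100 dimensions; each document gets a tag
# whose learned vector is shared by every word in that document.
tagged = [TaggedDocument(words=doc, tags=[str(i)])
          for i, doc in enumerate(tokenized_docs)]
d2v = Doc2Vec(documents=tagged, vector_size=100, window=5, dm=1, min_count=1)

word_vec = w2v.wv["battle"]  # 200-dim lexical representation of "battle"
doc_vec = d2v.dv["0"]        # 100-dim representation of document "0"
```

Each document tag thus yields a single vector that is later concatenated with the embedding of every word in that document, as described above.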
Due to the fact that the more relevant topic of the documents, the closer document vectors, the model will be given high confidence to label candidate trigger in a document as the types that appearing in the relevant topic of the documents.", "cite_spans": [], "ref_spans": [ { "start": 461, "end": 469, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Document Vector", "sec_num": "3.2" }, { "text": "In this paper, we trained document vectors by using the PV-DM model (Le and Mikolov, 2014) , which is very similar to the CBOW model that is another word embedding model (Mikolov et al., 2013a) . Unlike the skip-gram model, given a document that contains training words w 1 ,w 2 ,w 3 ,...,w T , document vector is trained by maximizing the average log probability:", "cite_spans": [ { "start": 68, "end": 90, "text": "(Le and Mikolov, 2014)", "ref_id": "BIBREF13" }, { "start": 170, "end": 193, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Document Vector", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 T T \u2212k t=k log p(w t |w t\u2212k , ..., w t+k , doc)", "eq_num": "(4)" } ], "section": "Document Vector", "sec_num": "3.2" }, { "text": "where w t\u2212k ,...,w t+k is the context of w t and the window size is k; doc is the document vector containing the training words, which can be randomly initialized to a fixed dimension of vector like word embedding, see (Le and Mikolov, 2014) for details.", "cite_spans": [ { "start": 219, "end": 241, "text": "(Le and Mikolov, 2014)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Document Vector", "sec_num": "3.2" }, { "text": "Recurrent neural networks (RNN) has been shown to perform considerably better than standard feedforward architecture (Hammerton, 2003 )(Sutskever et al., 2011) (Sundermeyer et al., 2014) . In this paper, we used RNN to encode word level information and document level clues. In the following, we describe our encoding model in detail. The traditional RNN predicts the current tag with the consideration of the current input and history information before the current input. It loses the following information after the current input. In order to address this problem, we ran two RNNs, one of the RNNs is responsible for encoding the history information, and the other one is responsible for encoding the future information. In addition, the standard RNN often suf-fers from gradient vanishing or gradient exploding problems during training via backpropagation (Bengio et al., 1994) . To remedy this problem, we used long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) that is a variant of RNN to replace the standard RNN.", "cite_spans": [ { "start": 117, "end": 133, "text": "(Hammerton, 2003", "ref_id": "BIBREF7" }, { "start": 160, "end": 186, "text": "(Sundermeyer et al., 2014)", "ref_id": "BIBREF24" }, { "start": 860, "end": 881, "text": "(Bengio et al., 1994)", "ref_id": "BIBREF2" }, { "start": 946, "end": 980, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Bidirectional Recurrent Neural Networks Encoding", "sec_num": "3.3" }, { "text": "Formally, given candidate input sequence X = {x 1 ,x 2 , ...,x n }. 
We run LSTM1 to get the hidden representation {h f 1 ,h f 2 , ...,h fn } and run LSTM2 to get the hidden representation {h b 1 ,h b 2 , ...,h bn }. Each h f i and h b i are computed by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bidirectional Recurrent Neural Networks Encoding", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h f i = \u2212\u2212\u2212\u2212\u2192 LST M (x i , h f i\u22121 ) (5) h b i = \u2190\u2212\u2212\u2212\u2212 LST M (x i , h b i+1 )", "eq_num": "(6)" } ], "section": "Bidirectional Recurrent Neural Networks Encoding", "sec_num": "3.3" }, { "text": "where x i is the concatenation of the word embedding of token i in candidate sentence and document vector that contains token i, as shown in Figure 1 ; h f i\u22121 contains the historical information before x i ; h b i+1 contains the future clues after x i . Eventually, we obtain the context information over the whole sentence {h 1 ,h 2 , ...,h n } with a greater focus on the position i by concentrating", "cite_spans": [], "ref_spans": [ { "start": 141, "end": 149, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Bidirectional Recurrent Neural Networks Encoding", "sec_num": "3.3" }, { "text": "{h f 1 ,h f 2 , ...,h fn } and {h b 1 ,h b 2 , ...,h bn }, where h i = [h f i ,h b i ].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bidirectional Recurrent Neural Networks Encoding", "sec_num": "3.3" }, { "text": "In the actual situation, due to the fact that a trigger may contain multiple words, we introduce the BILOU labeling method, which has been shown to be able to achieve better results than BIO labeling in entity recognition tasks (Gupta et al., 2016) . In the BILOU labeling method, B represents the beginning of a trigger word, I indicates that the word is inside a trigger word, L represents that the word is the last word for a trigger word, O signifies that the word is not a trigger word, U denotes the trigger word contains unique word. After bidirectional long short-term memory (BiLSTM) encoding, we get the global abstract representation h i that encapsulates all context of the input sentence (see in section 3.3). And then, we feed h i into a feed-forward neural network with a softmax layer (as shown in Figure 1 ). 
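To make the encoding and prediction steps concrete, the following is a minimal PyTorch sketch of the DLRNN forward pass; it is an illustration under our own assumptions (the class name, tensor shapes, and the use of PyTorch itself), not the authors' implementation. Each input x_i is the concatenation of a word embedding and the shared document vector, the BiLSTM yields h_i = [h_f_i, h_b_i], and a feed-forward layer scores the BILOU-encoded event-type tags; the dimensions follow Section 4.1.

```python
# Minimal DLRNN forward-pass sketch (PyTorch assumed; hypothetical names/shapes).
import torch
import torch.nn as nn

class DLRNN(nn.Module):
    def __init__(self, vocab_size, num_tags,
                 word_dim=200, doc_dim=100, hidden_dim=300):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)  # initialized from pre-trained skip-gram vectors
        self.bilstm = nn.LSTM(word_dim + doc_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids, doc_vec):
        # token_ids: (batch, seq_len); doc_vec: (batch, doc_dim), one vector per document
        words = self.word_emb(token_ids)                           # (batch, seq_len, word_dim)
        docs = doc_vec.unsqueeze(1).expand(-1, words.size(1), -1)  # repeat doc vector for every token
        x = torch.cat([words, docs], dim=-1)                       # x_i = [word_i ; doc]
        h, _ = self.bilstm(x)                                      # h_i = [h_f_i ; h_b_i]
        return self.classifier(h)                                  # unnormalized scores per tag

# Tags follow the BILOU scheme, e.g. "U-Attack" for a single-word trigger,
# "B-Attack" ... "L-Attack" for a multi-word one, and "O" for non-triggers.
# Training minimizes the average negative log-likelihood (Section 3.4), e.g. via
# nn.CrossEntropyLoss (which applies the softmax) and the Adam optimizer.
```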
In the end, we get a 34 dimensions vector 6 , where the k-th term o k is the probability value for classifying x i to the k-th event type.", "cite_spans": [ { "start": 228, "end": 248, "text": "(Gupta et al., 2016)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 814, "end": 822, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Trigger Prediction", "sec_num": "3.4" }, { "text": "Given all of our (suppose T) training samples (x (i) ;y (i) ), we can then define the loss function as the average negative log-likelihood:", "cite_spans": [ { "start": 49, "end": 52, "text": "(i)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Trigger Prediction", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "J(\u03b8) = \u2212 1 T T i=1 log p(y (i) |x (i) , \u03b8)", "eq_num": "(7)" } ], "section": "Trigger Prediction", "sec_num": "3.4" }, { "text": "In order to compute the network parameter \u03b8, we minimize the average negative log-likelihood J(\u03b8) via stochastic gradient descent (SGD) over shuffled mimi-batches with Adam update rule (Kingma and Ba, 2014) and the dropout regularization (Zaremba et al., 2014) .", "cite_spans": [ { "start": 238, "end": 260, "text": "(Zaremba et al., 2014)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Trigger Prediction", "sec_num": "3.4" }, { "text": "We evaluate our DLRNN model on the ACE2005 English corpus. For fair comparisons, the same with (Ji and Grishman, 2008) to judge the correctness of the predicted event mentions and use Precision (P), Recall (R), F-measure (F 1 ) as the evaluation metrics. We set the the dimension of word embedding to 200, the dimension of document vectors to 100, the size of hidden layer to 300, the size of minibatch to 100, the dropout rate to 0.5, the learning rate to 0.002. All of the above hyper-parameter are adjusted on the development set.", "cite_spans": [ { "start": 95, "end": 118, "text": "(Ji and Grishman, 2008)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments 4.1 Dataset and Experimental Setup", "sec_num": "4" }, { "text": "In order to validate our DLRNN model, we choose the following models as our baselines, which are the state-of-the-art methods in sentence level and cross-sentence level event detection models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Methods", "sec_num": "4.2" }, { "text": "1) Cross-Document Inference: It is the feature-based model proposed by (Ji and Grishman, 2008) , which is the first time to use document information to assist in sentence level event detection. 
They employed document theme clustering and designed a lot of reasoning rules to ensure event consistency within the scope of the document and clustering.", "cite_spans": [ { "start": 71, "end": 94, "text": "(Ji and Grishman, 2008)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-Sentence Level Baselines:", "sec_num": null }, { "text": "2) Cross-Event Inference: This is the featurebased method proposed by (Liao and 2010), which not only used the consistency information of the same type events in a document, but also explored the clues from the co-occurrence of different event types in the same document.", "cite_spans": [ { "start": 70, "end": 79, "text": "(Liao and", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Cross-Sentence Level Baselines:", "sec_num": null }, { "text": "3) Cross-Entity Inference: It is the featurebased approach proposed by (Hong et al., 2011) , which used the entity co-occurrence as a key feature to predict event mention. Sentence Level Baselines: 4) Joint Model: It is the feature-based model proposed by (Li et al., 2013) , which exploited argument information implicitly and captured the dependencies between two triggers within the same sentence. 5) Joint RNN: It is the representation-based method proposed by (Gupta et al., 2016) , which exploited the inter-dependence of event trigger and event argument. 6) DMCNN + Distant Supervision: It is the representation-based method proposed by , which used the Freebase and FrameNet to extend the training corpus through distant supervision. 7) ANN + Attention: It is the representationbased approach proposed by , which exploited argument information explicitly for event detection via supervised attention mechanisms. 1) The performance of representation-based models is better than that of feature-based models. It indicates the artificially well-designed features are not sufficient for event detection, and automatically extracting features based on neural networks can capture richer semantic clues. In detail, the F 1 score of our DLRNN model is higher than state-of-the-arts feature-based model (Liao's cross-event) by 1.7%; the other three representation-based models achieved better experimental results than that of Liao's cross-event model, which gain 0.5%,1.7% and 2.9% improvement, respectively.", "cite_spans": [ { "start": 71, "end": 90, "text": "(Hong et al., 2011)", "ref_id": "BIBREF9" }, { "start": 256, "end": 273, "text": "(Li et al., 2013)", "ref_id": "BIBREF14" }, { "start": 465, "end": 485, "text": "(Gupta et al., 2016)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-Sentence Level Baselines:", "sec_num": null }, { "text": "2) The feature-based models that using crosssentence information is more advantageous than the sentence level model. More accurately, in the cross-sentence models, only the performance of the Ji's cross-document method is slightly lower than Li's joint model (-0.2%), but the performance of the remaining models is better than Li's joint model (an improvement of 0.8% and 1.3% in F 1 score). It proves the clues beyond sentence are very important for event detection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance Comparison", "sec_num": "4.3" }, { "text": "3) Our DLRNN method outperforms all crosssentence level feature-based event detection models. 
In detail, DLRNN gains 3.2% improvement on F 1 score than Ji's cross-document, gains 1.7% improvement on F 1 score than Liao's cross-event and gains 2.2% improvement on F 1 score than Hong's cross-entity. The reasons are as follows: on the one hand, artificially constructed inference rules are difficult to cover all semantic laws; on the other hand, our DLRNN is better able to capture document level clues (including intra and interdocument context).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance Comparison", "sec_num": "4.3" }, { "text": "In spite that the performance of our DLRNN model does not improve the F 1 score compared with Chen's DMCNN+DS model, even the performance is not as good as Liu's ANN+Attention model. However, our method neither explicitly utilized event argument information, nor extended training data through using world knowledge (Freebase) and linguistic knowledge (FrameNet). If removed the event argument information and the knowledge base (Chen's DMCNN and Liu's ANN) , the F 1 score of our DLRNN model is superior to the DMCNN and ANN methods, which are -1.4% and -1.7% lower, respectively. This not only illustrates that document level clues are very effective for the representation-based model, but also prove that the effectiveness of the proposed method.", "cite_spans": [ { "start": 429, "end": 457, "text": "(Chen's DMCNN and Liu's ANN)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "4)", "sec_num": null }, { "text": "In order to verify the effectiveness of the document vector trained by PV-DM model for event detection, we design four experiments as baselines for comparison with our DLRNN (as shown in Table 2 ): BiLSTM, BiLSTM+TF-IDF, BiLST-M+AVE and BiLSTM+LDA. 1) BiLSTM: BiLSTM is similar to DLRNN except for removing the document vectors, only uses word embedding as the input of model.", "cite_spans": [], "ref_spans": [ { "start": 187, "end": 194, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "The Effectiveness of Document Vector", "sec_num": "4.4" }, { "text": "2) BiLSTM+TF-IDF: Selected the word vector of the most important word for each document as the document vector for the document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Effectiveness of Document Vector", "sec_num": "4.4" }, { "text": "3) BiLSTM+AVE: The document vector is obtained by averaging the vector of each word in the document. 4) BiLSTM+LDA: The probability that each document corresponds to each topic is the document vector of the document. 5) DLRNN: DLRNN model uses the document vector, which is trained by PV-DM approach instead of averaging the word vector in the document 7 . Table 2 : Overall Performance on Blind Test Data. \" \u2020\" designates the model that employs the evidences beyond sentence level.\"+TF-IDF\" represents the document vector was obtained by TF-IDF.\"+LDA\" represents the document vector was obtained by LDA.\"+AVE\" represents the document vector was obtained by averaging the word vector in the document.", "cite_spans": [], "ref_spans": [ { "start": 357, "end": 364, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "The Effectiveness of Document Vector", "sec_num": "4.4" }, { "text": "Seen from Table 2 , we get the following observations: 1) in addition to BiLSTM+TF-IDF, the event detection models with the document vector can achieve better experimental results. 
In detail, BiLSTM+AVE, BiLSTM+LDA and DLRN-N are 0.3%, 0.7% as well as 1.2% better than 7 we clean the documents up by converting everything to lower case and removing punctuation and the stop words.", "cite_spans": [ { "start": 269, "end": 270, "text": "7", "ref_id": null } ], "ref_spans": [ { "start": 10, "end": 17, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "The Effectiveness of Document Vector", "sec_num": "4.4" }, { "text": "BiLSTM on F 1 score, respectively. This indicates that document level clues can contribute to sentence level event detection model. 2) compared to BiLSTM+TF-IDF, BiLSTM+AVE, BiL-STM+LDA, DLRNN gains 1.4%, 0.9%, 0.5% on F 1 score. This illustrates PV-DM model is able to capture richer semantic information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Effectiveness of Document Vector", "sec_num": "4.4" }, { "text": "In addition, in order to illustrate the documents that their vectors are similar contain the consistent event types. We visualize the document vectors. In detail, we randomly selected a document containing the events from ACE2005 English corpus, and found a document that is most similar to the selected document by calculating the cosine similarity of document vectors. Finally, we systematically compared the events contained in the two documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Effectiveness of Document Vector", "sec_num": "4.4" }, { "text": "We randomly selected the document C-NNHL ENG 20030624 133331.33 as a source document, and found the document CNNHL ENG 20030624 230338.34 is most similar to it by computing the cosine similarity of document vectors 8 . Seen from Figure 2 , we observe that the two documents contain the same event types, except that the document CNNHL ENG 20030624 133331.33 does not contain Attack event. Event type overlapping rate is up to 80%. This proves that there is correlation between the documents of similar document vectors. ", "cite_spans": [], "ref_spans": [ { "start": 229, "end": 237, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "The Effectiveness of Document Vector", "sec_num": "4.4" }, { "text": "Seen from the Table 3 , we observe that the Injure event often appears along with the Attack events, the Die events, and the Transport events in the same document. The total probability of the above three types of events concurrence with Injure event is about 0.797. Furthermore, the Nominate events, the Elect events, and so on, have never been appeared in the same document containing the Injure events. This indicates that only certain types of events can occur in the same document, therefore the introduction of the document vector will help to predict event types in a document. Thus, the inter-document information reflected in document vector is useful to event detection.", "cite_spans": [], "ref_spans": [ { "start": 14, "end": 21, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "The Event Consistency in a Document", "sec_num": "4.5" }, { "text": "According to statistics, ACE2005 English corpus contains 235 trigger words, which are composed of multiple words, about 4.39% of the total trigger words. It is not appropriate to treat identifying the triggers that contains multiple words as a word classification task, because most of the triggers of multiple words contain prepositions. However, the prepositions in such triggers do not trigger event independently. 
Therefore, using BILOU encoding helps to treat the multiple words trigger as a whole. Table 3 demonstrates the effectiveness of the BILOU encoding (an improvement of 0.2% on F 1 score).", "cite_spans": [], "ref_spans": [ { "start": 504, "end": 511, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "The Effectiveness of BILOU Labeling", "sec_num": "4.6" }, { "text": "Methods P R F 1 DLRNN-BILOU 78.8 63.5 70.3 DLRNN 77.2 64.9 70.5 Table 4 : Overall Performance on Blind Test Data. \"-BILOU\" indicates that the model has not the BILOU labeling.", "cite_spans": [], "ref_spans": [ { "start": 64, "end": 71, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "The Effectiveness of BILOU Labeling", "sec_num": "4.6" }, { "text": "Event detection is a challenging task in the field of natural language processing, which has attract-ed more and more researchers' attention in recent years. The current event detection models can roughly be divided into: (1) the sentence level event detection models and (2) the cross-sentence level event detection models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "(1) The sentence level event detection models: they are designed to use the sentence information for event classification. According to the differences in how to use sentence information, they can be divided into two categories: the feature-based models and the representation-based models. The early event detection models are almost all featurebased models, which transformed lexical features, syntactic features and semantic features into onehot vectors by other natural language processing toolkits, and then sended these well-designed features into the classifiers (eg: structure perceptron or support vector machine) and eventually completed the event classification (Ahn, 2006) (Li et al., 2013) . With the success of deep learning in entity identification and relationship classification (Collobert and Weston, 2008) (Zeng et al., 2014) , many event detection researchers turned to focus on the representation-based models. This kind of models do not need to extract the features manually. They used the distributed word vector as the input and encoded the word vector into lowdimensional abstractive representation by the neural network to complete event detection (Nguyen and Grishman, 2015) (Chen et al., 2015 (Liu et al., 2016) .", "cite_spans": [ { "start": 673, "end": 684, "text": "(Ahn, 2006)", "ref_id": "BIBREF0" }, { "start": 685, "end": 702, "text": "(Li et al., 2013)", "ref_id": "BIBREF14" }, { "start": 796, "end": 824, "text": "(Collobert and Weston, 2008)", "ref_id": "BIBREF5" }, { "start": 825, "end": 844, "text": "(Zeng et al., 2014)", "ref_id": "BIBREF27" }, { "start": 1202, "end": 1220, "text": "(Chen et al., 2015", "ref_id": "BIBREF4" }, { "start": 1221, "end": 1239, "text": "(Liu et al., 2016)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "(2) The cross-sentence level event detection models: they aim to explore the clues beyond sentence to improve sentence level event detection. Remarkable researches are cross-document inference (Ji and Grishman, 2008) , cross-event inference (Liao and Grishman, 2010) , cross-entity inference (Hong et al., 2011) and modeling textual cohesion (Huang and Riloff, 2012) . 
These methods mainly have two disadvantages: 1) the existing cross-sentence event detection models are feature-based, so they require complex manually constructed features and lack generalization ability; 2) they exploit clues beyond the sentence by designing numerous complex reasoning rules, which is not only laborious but also cannot cover all semantic phenomena. Different from the above methods, our approach automatically learns document level information in a representation-based way to improve the performance of event detection.", "cite_spans": [ { "start": 193, "end": 216, "text": "(Ji and Grishman, 2008)", "ref_id": "BIBREF11" }, { "start": 241, "end": 266, "text": "(Liao and Grishman, 2010)", "ref_id": "BIBREF15" }, { "start": 292, "end": 311, "text": "(Hong et al., 2011)", "ref_id": "BIBREF9" }, { "start": 342, "end": 366, "text": "(Huang and Riloff, 2012)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "In this paper, we propose a novel model (DLRNN) that automatically extracts cross-sentence level clues for event detection by concatenating word vectors and document vectors. Moreover, we use BILOU encoding to handle triggers that contain multiple words. To prove the effectiveness of the proposed method, we systematically conduct a series of experiments on the ACE 2005 dataset. Experimental results show that the proposed method outperforms state-of-the-art cross-sentence level feature-based models and sentence level representation-based models without using argument information or external resources such as Freebase and FrameNet, which demonstrates that intra- and inter-document context is effective for event detection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Selected from the file \"CNN IP 20030414.1600.04\". 3 Selected from the file \"CNN CF 20030303.1900.05\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://project.ldc.upenn.edu/ace", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://catalog.ldc.upenn.edu/LDC2008T19", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "As a result of the BILOU tags, the actual dimension is more than 34, but it is described as 34 dimensions for ease of understanding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The cosine similarity is 0.992.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The stages of event extraction", "authors": [ { "first": "David", "middle": [], "last": "Ahn", "suffix": "" } ], "year": 2006, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Ahn. 2006. The stages of event extraction. 
In Proceedings of ACL, pages 1-8.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A neural probabilistic language model", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Rejean", "middle": [], "last": "Ducharme", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Jauvin", "suffix": "" } ], "year": 2003, "venue": "The Journal of Machine Learning Research", "volume": "3", "issue": "6", "pages": "1137--1155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, Rejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic lan- guage model. The Journal of Machine Learning Re- search 3(6):1137-1155.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning long-term dependencies with gradient descent is difficult", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Patrice", "middle": [], "last": "Simard", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Frasconi", "suffix": "" } ], "year": 1994, "venue": "The Journal of IEEE transactions on neural networks", "volume": "5", "issue": "2", "pages": "157--166", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradi- ent descent is difficult. The Journal of IEEE trans- actions on neural networks 5(2):157-166.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Automatically labeled data generation for large scale event extraction", "authors": [ { "first": "Yubo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Shulin", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2017, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yubo Chen, Shulin Liu, Xiang Zhang, Kang Liu, and Jun Zhao. 2017. Automatically labeled data genera- tion for large scale event extraction. In Proceedings of ACL .", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Event extraction via dynamic multi-pooling convolutional neural networks", "authors": [ { "first": "Yubo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Liheng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Daojian", "middle": [], "last": "Zeng", "suffix": "" } ], "year": 2015, "venue": "Proceedings of IJCNLP", "volume": "", "issue": "", "pages": "167--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynam- ic multi-pooling convolutional neural networks. In Proceedings of IJCNLP, pages 167-176.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A unified architecture for natural language processing:deep neural networks with multitask learning", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ICML", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert and Jason Weston. 2008. 
A unified ar- chitecture for natural language processing:deep neu- ral networks with multitask learning. In Proceed- ings of ICML, pages 160-167.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Table filling multi-task recurrent neural network for joint entity and relation extraction", "authors": [ { "first": "Pankaj", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Schtze", "suffix": "" }, { "first": "Bernt", "middle": [], "last": "Andrassy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "2537--2547", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pankaj Gupta, Hinrich Schtze, and Bernt Andrassy. 2016. Table filling multi-task recurrent neural net- work for joint entity and relation extraction. In Pro- ceedings of COLING, pages 2537-2547.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Named entity recognition with long short-term memory", "authors": [ { "first": "James", "middle": [], "last": "Hammerton", "suffix": "" } ], "year": 2003, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "172--175", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Hammerton. 2003. Named entity recognition with long short-term memory. In Proceedings of NAACL, pages 172-175.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "Jurgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "The Journal of Neural Computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and Jurgen Schmidhuber. 1997. Long short-term memory. The Journal of Neural Compu- tation, 9(8):1735-1780.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Using cross-entity inference to improve event extraction", "authors": [ { "first": "Yu", "middle": [], "last": "Hong", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Jianmin", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Qiaoming", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2011, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "1127--1136", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu Hong, Jianfeng Zhang, Bin Ma, Jianmin Yao, Guodong Zhou, and Qiaoming Zhu. 2011. Using cross-entity inference to improve event extraction. In Proceedings of ACL, pages 1127-1136.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Modeling textual cohesion for event extracion", "authors": [ { "first": "Ruihong", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" } ], "year": 2012, "venue": "Proceedings of AAAI", "volume": "", "issue": "", "pages": "1664--1670", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruihong Huang and Ellen Riloff. 2012. Modeling tex- tual cohesion for event extracion. 
In Proceedings of AAAI, pages 1664-1670.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Refining event extraction through cross-document inference", "authors": [ { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "254--262", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heng Ji and Ralph Grishman. 2008. Refining even- t extraction through cross-document inference. In Proceedings of ACL, pages 254-262.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 .", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Distributed representations of sentences and documents", "authors": [ { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ICML", "volume": "", "issue": "", "pages": "1188--1196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed repre- sentations of sentences and documents. In Proceed- ings of ICML, pages 1188-1196.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Joint event extraction via structured prediction with global features", "authors": [ { "first": "Qi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ji", "middle": [], "last": "Heng", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2013, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "73--82", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global fea- tures. In Proceedings of ACL, pages 73-82.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Using document level cross-event inference to improve event extraction", "authors": [ { "first": "Shasha", "middle": [], "last": "Liao", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2010, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "789--797", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shasha Liao and Ralph Grishman. 2010. Using doc- ument level cross-event inference to improve event extraction. In Proceedings of ACL, pages 789-797.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A recursive recurrent neural network for statistical machine translation", "authors": [ { "first": "Shujie", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Mu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "1491--1500", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shujie Liu, Nan Yang, Mu Li, and Ming Zhou. 2014. 
A recursive recurrent neural network for statistical machine translation. In Proceedings of ACL, pages 1491-1500.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Leveraging framenet to improve automatic event detection", "authors": [ { "first": "Shulin", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yubo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Shizhu", "middle": [], "last": "He", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of ACL pages", "volume": "", "issue": "", "pages": "2134--2143", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shulin Liu, Yubo Chen, Shizhu He, Kang Liu, and Jun Zhao. 2016. Leveraging framenet to improve auto- matic event detection. In Proceedings of ACL pages 2134-2143.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Exploiting argument information to improve event detection via supervised attention mechanisms", "authors": [ { "first": "Shulin", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yubo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2017, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shulin Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2017. Exploiting argument information to improve event detection via supervised attention mechanisms. In Proceedings of ACL .", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013a. Efficient estimation of word rep- resentations in vector space. arXiv preprint arX- iv:1301.3781 .", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corra- do, and Jeffrey Dean. 2013b. Distributed represen- tations of words and phrases and their composition- ality. 
In Proceedings of Advances in Neural Information Processing Systems, pages 3111-3119.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Joint event extraction via recurrent neural networks", "authors": [ { "first": "Thien Huu", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2016, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "300--309", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. In Proceedings of NAACL, pages 300-309.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Event detection and domain adaptation with convolution neural networks", "authors": [ { "first": "Thien Huu", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2015, "venue": "Proceedings of IJCNLP", "volume": "", "issue": "", "pages": "365--371", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thien Huu Nguyen and Ralph Grishman. 2015. Event detection and domain adaptation with convolution neural networks. In Proceedings of IJCNLP, pages 365-371.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Modeling skip-grams for event detection with convolution neural networks", "authors": [ { "first": "Thien Huu", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2016, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "886--891", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thien Huu Nguyen and Ralph Grishman. 2016. Modeling skip-grams for event detection with convolution neural networks. In Proceedings of EMNLP, pages 886-891.", "links": null },
"BIBREF24": { "ref_id": "b24", "title": "Translation modeling with bidirectional recurrent neural networks", "authors": [ { "first": "Martin", "middle": [], "last": "Sundermeyer", "suffix": "" }, { "first": "Tamer", "middle": [], "last": "Alkhouli", "suffix": "" }, { "first": "Joern", "middle": [], "last": "Wuebker", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2014, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "14--25", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Sundermeyer, Tamer Alkhouli, Joern Wuebker, and Hermann Ney. 2014. Translation modeling with bidirectional recurrent neural networks. In Proceedings of EMNLP, pages 14-25.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Generating text with recurrent neural networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "James", "middle": [], "last": "Martens", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2011, "venue": "Proceedings of ICML", "volume": "", "issue": "", "pages": "1017--1024", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, James Martens, and Geoffrey Hinton. 2011. Generating text with recurrent neural networks. In Proceedings of ICML, pages 1017-1024.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Recurrent neural network regularization", "authors": [ { "first": "Wojciech", "middle": [], "last": "Zaremba", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1409.2329" ] }, "num": null, "urls": [], "raw_text": "Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Relation classification via convolutional deep neural network", "authors": [ { "first": "Daojian", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Siwei", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Guangyou", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2014, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "2335--2344", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING, pages 2335-2344.", "links": null } },
"ref_entries": { "FIGREF0": { "uris": null, "text": "The comparison of event types on the most similar documents.", "type_str": "figure", "num": null }, "TABREF2": { "html": null, "text": "are the comparisons of experimental results of our method with the baseline methods on the same blind test dataset. Seen from Table 1, we make the following observations:", "content": "", "num": null, "type_str": "table" }, "TABREF5": { "html": null, "text": "The ranking probability of events co-occurring with Injure events.", "content": "", "num": null, "type_str": "table" } } } }