{
"paper_id": "N15-1002",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:33:02.425974Z"
},
"title": "Predicate Argument Alignment using a Global Coherence Model",
"authors": [
{
"first": "Travis",
"middle": [],
"last": "Wolfe",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University Baltimore",
"location": {
"region": "MD",
"country": "USA"
}
},
"email": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University Baltimore",
"location": {
"region": "MD",
"country": "USA"
}
},
"email": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University Baltimore",
"location": {
"region": "MD",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a joint model for predicate argument alignment. We leverage multiple sources of semantic information, including temporal ordering constraints between events. These are combined in a max-margin framework to find a globally consistent view of entities and events across multiple documents, which leads to improvements over a very strong local baseline.",
"pdf_parse": {
"paper_id": "N15-1002",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a joint model for predicate argument alignment. We leverage multiple sources of semantic information, including temporal ordering constraints between events. These are combined in a max-margin framework to find a globally consistent view of entities and events across multiple documents, which leads to improvements over a very strong local baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Natural language understanding (NLU) requires analysis beyond the sentence-level. For example, an entity may be mentioned multiple times in a discourse, participating in various events, where each event may itself be referenced elsewhere in the text. Traditionally the task of coreference resolution has been defined as finding those entity mentions within a single document that co-refer, while crossdocument coreference resolution considers a wider discourse context across many documents, yet still pertains strictly to entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Predicate argument alignment, or entity-event cross-document coreference resolution, enlarges the set of possible co-referent elements to include the mentions of situations in which entities participate. This expanded definition drives practitioners towards a more complete model of NLU, where systems must not only consider who is mentioned, but also what happened. However, despite the drive towards an expanded notion of discourse, models typically are formulated with strong notions of localindependence: viewing a multi-document task as one limited to individual pairs of sentences. This creates a mis-match between the goals of such work -considering entire documents -with the systemsconsider individual sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we consider a system that takes a document level view in considering coreference for entities and predictions: the task of predicate argument linking. We treat this task as a global inference problem, leveraging multiple sources of semantic information identified at the document level. Global inference for this problem is mostly unexplored, with the exception of Lee et al. (2012) (discussed in \u00a7 8). Especially novel here is the use of document-level temporal constraints on events, representing a next step forward on the path to full understanding.",
"cite_spans": [
{
"start": 379,
"end": 396,
"text": "Lee et al. (2012)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach avoids the pitfalls of local inference while still remaining fast and exact. We use the pairwise features of a very strong predicate argument aligner (Wolfe et al., 2013 ) (competitive with the state-of-the-art (Roth, 2014) ), and add quadratic factors that constrain local decisions based on global document information. These global factors lead to superior performance compared to the previous state-of-the-art. We release both our code and data. 1",
"cite_spans": [
{
"start": 163,
"end": 182,
"text": "(Wolfe et al., 2013",
"ref_id": "BIBREF23"
},
{
"start": 224,
"end": 236,
"text": "(Roth, 2014)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Consider the two sentences from the document pair shown in Figure 1 . These sentences describe the same event, although with different details. The source sentence has four predicates and four arguments, while the target has three predicates and three arguments. In this case, one of the predicates from each sentence aligns, as do three of the arguments. We also show additional information potentially helpful to determining alignments: temporal relations between the predicates. The goal of predicate argument alignment is to assign these links indicating coreferent predicates and arguments across a document pair (Roth and Frank, 2012) .",
"cite_spans": [
{
"start": 618,
"end": 640,
"text": "(Roth and Frank, 2012)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 59,
"end": 67,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "Previous work by Wolfe et al. (2013) formulated Predicates appear as hollow ovals, have blue mentions, and are aligned considering their arguments (dashed lines). Arguments, in black diamonds with green mentions, represent a document-level entity (coreference chain), and are aligned using their predicate structure and mention-level features. The alignment choices appear in the middle in red. Temporal relation information is lifted into the global inference over alignments.",
"cite_spans": [
{
"start": 17,
"end": 36,
"text": "Wolfe et al. (2013)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "this as a binary classification problem: given a pair of arguments or predicates, construct features and score the pair, where scores above threshold indicate links. A binary classification framework has advantages: it's fast since individual decisions can be made quickly, but it comes at the cost of global information across links. The result may be links that conflict in their interpretation of the document. The global nature of this task is similar to word alignment for machine translation (MT). Many systems consider alignment links between words individually, selecting the best link for each word independently of the other words in the sentence. Just as with an independent linking strategy in predicate argument alignment, this can lead to inconsistencies in the output. Lacoste-Julien et al. (2006) introduced a model that jointly resolved word alignments based on the introduction of quadratic variables, factors that depend on two alignment decisions which characterize patterns that span word-word links. Their approach achieved improved results even in the presence of little training data.",
"cite_spans": [
{
"start": 784,
"end": 812,
"text": "Lacoste-Julien et al. (2006)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "We present a global predicate argument alignment model based on considering quadratic interactions between alignment variables to captures patterns we expect in coherent discourse. We introduce factors which are comprised of a binary variable, multiple quadratic constraints on that variable, and features that determine the cost associated with that variable in order to characterize the dependence between alignment decisions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "While the mathematical framework we use is similar to Lacoste-Julien et al. (2006) , predicate argument alignment greatly differs from word alignment; thus our joint factors are based on different sources of regularity. Word alignment favors monotonicity in word order, but this effect is very weak in predicate argument alignment: aligned items can be spread throughout a document, and are often nested, gapped, or shuffled. Instead, we encode assumptions about consistency of temporal relations between coreferent events, coherence between predicates and arguments that appear in both documents, and fertility (to prevent over-alignment). We also note that our setting has much less data than typical word alignment tasks, as well as richer features that utilize semantic resources.",
"cite_spans": [
{
"start": 54,
"end": 82,
"text": "Lacoste-Julien et al. (2006)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "Notation An alignment between an item indexed by i in the source document and j in the target document is represented by variable z ij \u2208 {0, 1}, where z ij = 1 indicates that items i and j are aligned. In some cases, we will explicitly indicate when the two items are predicates as z p ij ; an argument alignment will be z a ij . We represent all alignments for a document pair as matrix z.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "For clarity, we omit any variable representing observed data when discussing feature functions; alignment variables are endowed with this information. For each pair of items we use \"local\" feature functions f (\u2022) and corresponding parameters w, which capture the similarity between two items without the context of other alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s ij = w \u2022 f (z ij )",
"eq_num": "(1)"
}
],
"section": "Model",
"sec_num": "2"
},
{
"text": "where s ij is the score of linking items i and j. Using only local features, our system would greedily select alignments. To capture global aspects we add joint factors that capture effects between alignment variables. Each joint factor \u03c6 is comprised of a constrained binary variable z \u03c6 associated with features f (\u03c6) that indicates when the factor is active. Together with parameters w these form additional scores s \u03c6 for the objective:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s \u03c6 = w \u2022 f (\u03c6)",
"eq_num": "(2)"
}
],
"section": "Model",
"sec_num": "2"
},
{
"text": "The full linear scoring function on alignments sums over both local similarity and joint factors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "ij s ij z ij + \u03c6\u2208\u03a6 s \u03c6 z \u03c6 .",
"eq_num": "(3)"
}
],
"section": "Model",
"sec_num": "2"
},
{
"text": "Lastly, it is convenient to describe the local feature functions and their corresponding alignment variable as factors with no constraints, and we will do so when describing the full score function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
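{
"text": "As an illustrative sketch only (Python, with variable names such as local_scores and factors that are ours, not the paper's), the objective in Equation 3 can be read as summing a local score s_ij over every active alignment variable plus a factor score s_phi over every active joint-factor variable:

# Minimal sketch of the alignment objective in Eq. 3.
def objective(z, local_scores, factors):
    # z: dict mapping (i, j) -> 0/1 alignment decisions (z_ij).
    # local_scores[i][j] plays the role of s_ij = w . f(z_ij).
    # factors: list of (z_phi, s_phi) pairs, where z_phi is 0 or 1.
    local = sum(local_scores[i][j] for (i, j), on in z.items() if on)
    joint = sum(s_phi for z_phi, s_phi in factors if z_phi)
    return local + joint

# Tiny usage example on a 2x2 alignment grid with one joint factor.
scores = [[0.5, -0.2], [-0.1, 0.8]]
z = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}
print(objective(z, scores, factors=[(1, -0.3)]))  # 0.5 + 0.8 - 0.3 = 1.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},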
{
"text": "Local factors encode features based on the mention pair, which include a wide variety of similarity measures, e.g. whether two headwords appear as synonyms in WordNet, gender agreement based on possessive pronouns. We adopt the features of Wolfe et al. (2013) , a strong baseline system which doesn't use global inference. 2 These features are built on top of a variety of semantic resources (PPDB (Ganitkevitch et al., 2013) , WordNet (Miller, 1995) , FrameNet (Baker et al., 1998) ) and methods for comparing mentions (tree edit distance , string transducer (Andrews et al., 2012) ).",
"cite_spans": [
{
"start": 240,
"end": 259,
"text": "Wolfe et al. (2013)",
"ref_id": "BIBREF23"
},
{
"start": 398,
"end": 425,
"text": "(Ganitkevitch et al., 2013)",
"ref_id": "BIBREF6"
},
{
"start": 436,
"end": 450,
"text": "(Miller, 1995)",
"ref_id": "BIBREF13"
},
{
"start": 462,
"end": 482,
"text": "(Baker et al., 1998)",
"ref_id": "BIBREF1"
},
{
"start": 560,
"end": 582,
"text": "(Andrews et al., 2012)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Local Factors",
"sec_num": "3"
},
{
"text": "Our goal is to develop joint factors that improve over the feature rich local factors baseline by considering global information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Factors",
"sec_num": "4"
},
{
"text": "Fertility A common mistake when making independent classification decisions is to align many source items to a single target item. While each link looks promising on its own, they clearly cannot all be right. Empirically, the training set reveals that many to one alignments are uncommon; thus many to one predictions are likely errors. We add a fertility factor for predicates and arguments, where fertility is defined as the number of links to an item. Higher fertilities are undesired and are thus penalized. Formally, for matrix z, the fertility of a row i or column j is the sum of that row or column. We discuss fertility in terms of rows below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Factors",
"sec_num": "4"
},
{
"text": "We include two types of fertility factors. First, factor \u03c6 fert1 distinguishes between rows with at least one link from those with none. For row i, we add one instance of the linear factor \u03c6 fert1 with constraints",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Factors",
"sec_num": "4"
},
{
"text": "z \u03c6 fert1 \u2265 z ij \u2200j (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Factors",
"sec_num": "4"
},
{
"text": "The cost associated with z \u03c6 fert1 , which we will refer to as s fert1 , will be incurred any time an item is mentioned in both documents. For data sets with many singletons, s fert1 more strongly penalizes nonsingleton rows, reflecting this pattern in the training data. We make s fert1 parametric, where the features of the \u03c6 fert1 factor allow us to learn different weights for predicates and arguments, as well as the size of the row, i.e. number of items in the pairing. The second fertility factory \u03c6 fert2 considers items with a fertility greater than one, penalizing items for having too many links. Its binary variable has the quadratic constraints:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Factors",
"sec_num": "4"
},
{
"text": "z \u03c6 fert2 \u2265 z ij z ik \u2200j < k (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Factors",
"sec_num": "4"
},
{
"text": "This factor penalizes rows that have fertility of at least two, but does not distinguish beyond that. An alternative would be to introduce a factor for every pair of variables in a row, each with one constraint. This would heavily penalize fertilities greater than two. We found that the resulting quadratic program took longer to solve and gave worse results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Factors",
"sec_num": "4"
},
{
"text": "Since documents have been processed to identify in-document coreference chains, we do not expect multiple arguments from a source document to align to a single target item. For this reason, we expect \u03c6 fert2 for arguments to have a large negative weight. In contrast, since predicates do not form chains, we may have multiple source predicates for one target.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Factors",
"sec_num": "4"
},
{
"text": "We note an important difference between our fertility factor compared with Lacoste-Julien et al. (2006) . We parameterize fertility for only two cases (1 and 2) whereas they consider fertility factors from 2 to D. We do not parameterize fertilities higher than two because they are not common in our dataset and come at a high computational cost.",
"cite_spans": [
{
"start": 97,
"end": 103,
"text": "(2006)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Factors",
"sec_num": "4"
},
{
"text": "The features f (\u03c6) for both \u03c6 fert1 and \u03c6 fert2 are an intercept feature (which always fires), indicator features for whether this row corresponds to an argument or a predicate, and a discretized feature for how many alignments are in this row.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Factors",
"sec_num": "4"
},
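{
"text": "As a hedged illustration (Python; the symbolic constraint encoding below is ours, not the paper's implementation), the two fertility factors for one row of the alignment grid can be enumerated directly from Equations 4 and 5, with each constraint expressed as a factor variable bounded below by a product of alignment variables:

# Sketch: generate fert1 / fert2 constraints for row i of an m-by-n grid.
def fertility_constraints(i, n_cols):
    constraints = []
    # fert1: z_fert1_i >= z_ij for every column j  (Eq. 4)
    for j in range(n_cols):
        constraints.append(('z_fert1_%d' % i, [('z', i, j)]))
    # fert2: z_fert2_i >= z_ij * z_ik for every pair j < k  (Eq. 5)
    for j in range(n_cols):
        for k in range(j + 1, n_cols):
            constraints.append(('z_fert2_%d' % i, [('z', i, j), ('z', i, k)]))
    return constraints

for lhs, rhs in fertility_constraints(i=0, n_cols=3):
    print(lhs, '>=', ' * '.join('z_%d%d' % (r, c) for _, r, c in rhs))

A solver layer would translate each (factor variable, product of alignment variables) pair into its own constraint objects; the penalties s fert1 and s fert2 then attach to the factor variables in the objective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Factors",
"sec_num": "4"
},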
{
"text": "Predicate Argument Structure We expect structure among links that involve a predicate and its associated arguments. Therefore, we add joint factors that consider a predicate and its associated alignments: the predicate argument structure. We determine this structure from a dependency parse, though the idea is general to any semantic binding, e.g. FrameNet or Propbank style parses. Given a coherent discourse, there are several expected types of patterns in the PAS; we add factors for these.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Factors",
"sec_num": "4"
},
{
"text": "We begin with a predicatecentric factor, which views scores an alignment between predicates based on their arguments, i.e. the two predicates share the same arguments. Ideally, two predicates can only align when their arguments are coreferent. However, in practice we may incorrectly resolve argument links, or there may be implicit arguments that do not appear as syntactic dependencies of the predicate trigger. Therefore, we settle for a weaker condition, that there should be some overlap in the arguments of two coreferent predicates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate-centric",
"sec_num": null
},
{
"text": "For every predicate alignment z p ij , we add a factor \u03c6 psa whose score s psa is a penalty for having no argument overlap; predicates share arguments (psa). To constrain the variable of \u03c6 psa , we add a quadratic constraint that considers every possible pair of argument alignments that might overlap:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate-centric",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "z \u03c6psa \u2265 z p ij 1 \u2212 max k\u2208args(p i ) l\u2208args(p j ) z a kl",
"eq_num": "(6)"
}
],
"section": "Predicate-centric",
"sec_num": null
},
{
"text": "where args(p i ) finds the indices of all arguments governed by the predicate p i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate-centric",
"sec_num": null
},
{
"text": "We expect similar behavior from arguments (entities). If an entity appears in two documents, it is likely that this entity will be mentioned in the context of a common predicate, i.e. arguments share predicates (asp). For a given argument alignment z a ij we add quadratic constraints so that z \u03c6asp represents a penalty for two arguments not sharing a single predicate:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-centric",
"sec_num": null
},
{
"text": "z \u03c6asp \u2265 z a ij 1 \u2212 max k\u2208preds(a i ) l\u2208preds(a j ) z p kl (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-centric",
"sec_num": null
},
{
"text": "where preds(a i ) finds the indices of all predicates that govern any mention of argument a i . The features f (\u03c6) for both psa and asp are an intercept feature and a bucketed count of the size of args(p i ) \u00d7 args(p j ) or preds(a i ) \u00d7 preds(a j ) respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-centric",
"sec_num": null
},
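{
"text": "As an informal sketch (Python, with hypothetical helper names such as args_of_src and arg_align), the predicates-share-arguments condition of Equation 6 can be checked for a candidate predicate alignment by testing whether any pair of arguments governed by the two predicates is itself aligned; the arguments-share-predicates condition of Equation 7 is symmetric:

# Sketch: does a predicate alignment (i, j) have any argument overlap?
def has_argument_overlap(i, j, args_of_src, args_of_tgt, arg_align):
    # args_of_src[i] / args_of_tgt[j]: argument indices governed by the
    # two predicates; arg_align: current argument alignment decisions.
    return any(arg_align.get((k, l), 0) == 1
               for k in args_of_src[i]
               for l in args_of_tgt[j])

# If a predicate alignment is on but has no argument overlap, the psa
# factor variable must switch on and its penalty s_psa is paid (Eq. 6).
def psa_penalty_active(pred_align, i, j, args_of_src, args_of_tgt, arg_align):
    return pred_align.get((i, j), 0) == 1 and not has_argument_overlap(
        i, j, args_of_src, args_of_tgt, arg_align)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-centric",
"sec_num": null
},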
{
"text": "Temporal Information Temporal ordering, in contrast to textual ordering, can indicate when predicates cannot align: we expect aligned predicates in both documents to share the same temporal relations. SemEval 2013 included a task on predicting temporal relations between events (UzZaman et al., 2013). Many systems produced partial relations of events in a document based on lexical aspect and tense, as well as discourse connectives like \"during\" or \"after\". We obtain temporal relations with CAEVO, a state-of-the-art sieve-based system (Chambers et al., 2014) .",
"cite_spans": [
{
"start": 539,
"end": 562,
"text": "(Chambers et al., 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-centric",
"sec_num": null
},
{
"text": "TimeML (Pustejovsky et al., 2003) , the format for specifying temporal relations, defines relations between predicates (e.g. immediately before and simultaneous), each with an inverse (e.g. immediately after and simultaneous respectively). We will refer to a relation as R and its inverse as R \u22121 . Suppose we had p a and p b in the source document, p x and p y in the target document, and p a R 1 p b , p x R 2 p y . Given this configuration the following alignments conflict with the in-doc relations:",
"cite_spans": [
{
"start": 7,
"end": 33,
"text": "(Pustejovsky et al., 2003)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-centric",
"sec_num": null
},
{
"text": "z ax z by z ay z bx In-Doc Relations * * 1 1 R 1 = R 2 1 1 * * R 1 = R \u22121 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-centric",
"sec_num": null
},
{
"text": "where 1 means there is a link and * means there is a link or no link (wildcard). The simplest example that fits this pattern is: 'a before b', 'x before y', 'a corefers with y', and 'b corefers with x' implies a conflict.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-centric",
"sec_num": null
},
{
"text": "We introduce a factor that penalizes these conflicting configurations. In every instance where the predicted temporal relation for a pair of predicate alignments matches one of the conflict patterns above, we add a factor using z \u03c6temp :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-centric",
"sec_num": null
},
{
"text": "z \u03c6temp \u2265 z ay z bx if p a R 1 p b , p x R 2 p y , R 1 = R 2 z \u03c6temp \u2265 z ax z by if p a R 1 p b , p x R 2 p y , R 1 = R \u22121 2 (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-centric",
"sec_num": null
},
{
"text": "Thus s \u03c6temp is the cost of disagreeing with the indoc temporal relations. This is a general technique for incorporating relational information into coreference decisions. It only requires specifying when two relations are incompatible, e.g. spouseOf and siblingOf are incompatible relations (in most states). We leave this for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-centric",
"sec_num": null
},
{
"text": "Since CAEVO gives each relation prediction a probability, we incorporate this into the feature by indicating the probability of a conflict not arising:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-centric",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f (\u03c6 temp ) = log 1 \u2212 p(R 1 )p(R 2 ) +",
"eq_num": "(9)"
}
],
"section": "Entity-centric",
"sec_num": null
},
{
"text": "avoids large negative values since CAEVO probabilities are not perfectly calibrated. We use = 0.1, allowing feature values of at most \u22122.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-centric",
"sec_num": null
},
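{
"text": "For concreteness, a minimal sketch of the feature in Equation 9 (Python; the probability inputs stand in for whatever the relation classifier reports): the feature is the log probability that the conflicting configuration does not arise, and the small constant keeps poorly calibrated predictions from producing arbitrarily large negative values:

import math

# Sketch of Eq. 9: f(phi_temp) = log(1 - p(R1) * p(R2) + eps), eps = 0.1,
# so the feature is never below log(0.1), roughly -2.3.
def temporal_conflict_feature(p_r1, p_r2, eps=0.1):
    return math.log(1.0 - p_r1 * p_r2 + eps)

print(temporal_conflict_feature(0.9, 0.9))   # about -1.24
print(temporal_conflict_feature(1.0, 1.0))   # floor case: log(0.1), about -2.3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-centric",
"sec_num": null
},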
{
"text": "Summary The objective is a linear function over binary variables. There is a local similarity score coefficient on every alignment variable, and a joint factor similarity score on every quadratic variable. These quadratic variables are constrained by products of the original alignment variables. Decoding an alignment requires solving this quadratically constrained integer program; in practice is can be solved quickly without relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-centric",
"sec_num": null
},
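{
"text": "The actual decoder hands this quadratically constrained integer program to an off-the-shelf solver; purely as an illustration of the search space (and feasible only for toy grids), the brute-force sketch below (Python, ours) enumerates all 2^(mn) alignments and scores each with local scores plus a single fertility-style penalty standing in for a joint factor:

from itertools import product

# Brute-force decoding sketch for a tiny m x n grid: try every 0/1
# assignment, penalize any row that links more than once, keep the best.
def brute_force_decode(local_scores, fert2_penalty=-1.0):
    m, n = len(local_scores), len(local_scores[0])
    best, best_score = None, float('-inf')
    for bits in product([0, 1], repeat=m * n):
        z = [list(bits[r * n:(r + 1) * n]) for r in range(m)]
        score = sum(local_scores[r][c] * z[r][c]
                    for r in range(m) for c in range(n))
        score += sum(fert2_penalty for row in z if sum(row) >= 2)
        if score > best_score:
            best, best_score = z, score
    return best, best_score

print(brute_force_decode([[0.6, 0.4], [-0.2, 0.9]]))  # ([[1, 0], [0, 1]], 1.5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-centric",
"sec_num": null
},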
{
"text": "Learning We use the supervised structured SVM formulation of Joachims et al. (2009) . As is common in structure prediction we use margin rescaling and 1 slack variable, with the structural SVM objective:",
"cite_spans": [
{
"start": 61,
"end": 83,
"text": "Joachims et al. (2009)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "min w ||w|| 2 2 + C\u03be s.t. \u03be \u2265 0 \u03be + N i=1 w \u2022 f (z i ) \u2265 N i=1 w \u2022 f (\u1e91 i ) + \u2206(z i ,\u1e91 i ) \u2200\u1e91 i \u2208 Z i",
"eq_num": "(10)"
}
],
"section": "Inference",
"sec_num": "5"
},
{
"text": "where Z i is the set of all possible alignments that have the same shape as z i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5"
},
{
"text": "The score function for an alignment uses three types of terms: weights, features, and alignment variables. When we decode, we take the product of the weights and the features to get the costs for the ILP (e.g. s \u03c6 = w \u2022 f (\u03c6)). When we optimize our SVM objective, we take the product of the alignment variables and the features to get modified features for the SVM:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f (z) = ij z ij f (z ij ) + \u03c6\u2208\u03a6 z \u03c6 f (\u03c6)",
"eq_num": "(11)"
}
],
"section": "Inference",
"sec_num": "5"
},
{
"text": "Since we cannot iterate over the exponentially many margin constraints, we solve for this optimization using the cutting-plane learning algorithm. This algorithm repeatedly asks the \"separation oracle\" for the most violated SVM constraint, which finds this constraint by solving:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "arg max z 1 ...\u1e91 N i w \u2022 f (\u1e91 i ) + \u2206(z i ,\u1e91 i )",
"eq_num": "(12)"
}
],
"section": "Inference",
"sec_num": "5"
},
{
"text": "subject to the constraints defined by the joint factors. When the separation oracle returns a constraint that is not violated or is already in the working set, then we have a guarantee that we solved the original SVM problem with exponentially many constraints. This is the most time-consuming aspect of learning, but since the problem decomposes over document alignments, we cache solutions on a per document alignment basis. With caching, we only call the separation oracle around 100-300 times. We implement the separation oracle using an ILP solver, CPLEX, 3 due to complexity of the discrete optimization problem: there are 2 m n possible alignments for and m \u00d7 n alignment grid. In practice this is solved very efficiently, taking less than a third of a second per document alignment on average. We would like \u2206 to be F1, but we need a decomposable loss to include it in a linear objective (Taskar et al., 2003) . Instead, we use Hamming loss as a surrogate, as in Lacoste-Julien et al. (2006) .",
"cite_spans": [
{
"start": 896,
"end": 917,
"text": "(Taskar et al., 2003)",
"ref_id": "BIBREF20"
},
{
"start": 971,
"end": 999,
"text": "Lacoste-Julien et al. (2006)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5"
},
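{
"text": "As a high-level sketch only (Python, with separation_oracle and solve_qp abstracted away as assumed callables rather than the paper's actual implementation), cutting-plane training alternates between solving the loss-augmented decoding problem of Equation 12 and re-solving the QP over the working set until no new violated constraint appears:

# Schematic cutting-plane loop for the 1-slack structural SVM.
def cutting_plane_train(examples, separation_oracle, solve_qp, max_iters=100):
    working_set = []
    w = None
    for _ in range(max_iters):
        w = solve_qp(working_set)                    # re-fit weights and slack
        constraint = separation_oracle(w, examples)  # most violated constraint
        if constraint is None or constraint in working_set:
            break  # nothing new is violated: the original problem is solved
        working_set.append(constraint)
    return w",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5"
},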
{
"text": "Our training data is heavily biased towards negative examples, performing poorly on F1 since precision and recall are unbalanced. We use an asymmetric version of Hamming loss that incurs c F P cost for predicting an alignment for two unaligned items and c F N for predicting no alignment for two aligned items. We fixed c F P = 1 and tuned c F N \u2208 {1, 2, 3, 4} on dev data. Additionally we found it useful to tune the scale of the loss function across { 1 2 , 1, 2, 4}. Previous work, such as Joachims et al. (2009) , use a hand-chosen constant for the scale of the Hamming loss, but we observe some sensitivity in this parameter and choose to optimize it.",
"cite_spans": [
{
"start": 493,
"end": 515,
"text": "Joachims et al. (2009)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5"
},
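{
"text": "The asymmetric Hamming surrogate can be written down directly; a minimal sketch (Python, with the cost values shown as illustrative defaults rather than the tuned ones):

# Sketch of the asymmetric Hamming loss: false positives cost c_fp, false
# negatives cost c_fn (c_fn was tuned over {1, 2, 3, 4} on dev data).
def asymmetric_hamming(gold, pred, c_fp=1.0, c_fn=2.0):
    loss = 0.0
    for key, g in gold.items():
        p = pred.get(key, 0)
        if g == 0 and p == 1:
            loss += c_fp
        elif g == 1 and p == 0:
            loss += c_fn
    return loss

gold = {(0, 0): 1, (0, 1): 0, (1, 1): 1}
pred = {(0, 0): 1, (0, 1): 1, (1, 1): 0}
print(asymmetric_hamming(gold, pred))  # one FP + one FN = 1.0 + 2.0 = 3.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5"
},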
{
"text": "Decoding Following Wolfe et al. 2013, we tune the threshold for classification \u03c4 on dev data to maximize F1 (via linesearch). For SVMs \u03c4 is typically fixed at 0: this is not necessarily good practice when your training loss differs from test loss (Hamming vs F1). In our case this extra parameter is worth allocating a portion of training data to enable tuning. Tuning \u03c4 addresses the same problem as using an asymmetric Hamming loss, but we found that doing both led to better results. 4 Since we are using a global scoring function rather than a set of classifications, \u03c4 is implemented as a test-time unary factor on every alignment.",
"cite_spans": [
{
"start": 487,
"end": 488,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "5"
},
{
"text": "Data We consider two datasets for evaluation. The first is a cross-document entity and event coreference resolution dataset called the Extended Event Coref Bank (EECB) created by Lee et al. (2012) and based on a corpus from Bejan and Harabagiu (2010). The dataset contains clusters of news articles taken from Google News with annotations about coreference over entities and events. Following the procedure of Wolfe et al. (2013) , we select the first document in every cluster and pair it with every other document in the cluster.",
"cite_spans": [
{
"start": 179,
"end": 196,
"text": "Lee et al. (2012)",
"ref_id": "BIBREF10"
},
{
"start": 410,
"end": 429,
"text": "Wolfe et al. (2013)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "The second dataset (RF) comes from Roth and Frank (2012) . The dataset contains pairs of news articles that describe the same news story, and are annotated for predicate links between the document pairs. Due to the lack of annotated arguments, we can only report predicate linking performance and the psa and asp factors do not apply. Lastly, the size of the RF data should be noted as it is much smaller than EECB: the test set has 60 document pairs and the dev set has 10 document pairs. Both datasets are annotated with parses and indocument coreference labels provided by the toolset of Napoles et al. (2012) 5 and are available with our code release. Due to the small data size, we use kfold cross validation for both datasets. We choose k = 10 for RF due to its very small size (more folds give more training examples) and k = 5 on EECB to save computation time (amount of training data in EECB is less of a concern). Hyperparameters were chosen by hand using using cross validation on the EECB dataset using F1 as the criteria (rather than Hamming). Figures report averages across these folds.",
"cite_spans": [
{
"start": 35,
"end": 56,
"text": "Roth and Frank (2012)",
"ref_id": "BIBREF17"
},
{
"start": 591,
"end": 612,
"text": "Napoles et al. (2012)",
"ref_id": "BIBREF14"
},
{
"start": 613,
"end": 614,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "Systems Following Roth and Frank (2012) and Wolfe et al. (2013) we include a Lemma baseline for identifying alignments which will align any two predicates or arguments that have the same lemmatized head word. 6 The Local baseline uses the same features as Wolfe et al., but none of our joint factors. In addition to running our joint model with all factors, we measure the efficacy of each individual factor by evaluating each with the local features.",
"cite_spans": [
{
"start": 18,
"end": 39,
"text": "Roth and Frank (2012)",
"ref_id": "BIBREF17"
},
{
"start": 44,
"end": 63,
"text": "Wolfe et al. (2013)",
"ref_id": "BIBREF23"
},
{
"start": 209,
"end": 210,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
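{
"text": "A minimal sketch of the Lemma baseline (Python; the lemmatized head words are assumed to be precomputed, e.g. by a pipeline such as Stanford CoreNLP as in footnote 6): it links any source/target pair whose lemmatized head words match exactly:

# Sketch of the Lemma baseline: align any two predicates (or arguments)
# whose lemmatized head words are identical.
def lemma_baseline(src_lemmas, tgt_lemmas):
    return {(i, j)
            for i, s in enumerate(src_lemmas)
            for j, t in enumerate(tgt_lemmas)
            if s == t}

src = ['attack', 'kill', 'soldier']
tgt = ['strike', 'kill', 'soldier']
print(sorted(lemma_baseline(src, tgt)))  # [(1, 1), (2, 2)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},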
{
"text": "For evaluation we use a generous version of F1 that is defined for alignment labels composed of sure, G s , and possible links, G p and the system's proposed links H (following Cohn et al. (2008) , Roth and Frank (2012) and Wolfe et al. (2013) ).",
"cite_spans": [
{
"start": 177,
"end": 195,
"text": "Cohn et al. (2008)",
"ref_id": "BIBREF5"
},
{
"start": 198,
"end": 219,
"text": "Roth and Frank (2012)",
"ref_id": "BIBREF17"
},
{
"start": 224,
"end": 243,
"text": "Wolfe et al. (2013)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "P = |H \u2229 G p | |H| R = |H \u2229 G s | |G s | F = 2P R P + R",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
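{
"text": "For concreteness, a small sketch (Python, ours) of the generous precision/recall/F1 computation over sure links G s, possible links G p, and system links H as defined above:

# Sketch of the sure/possible evaluation: precision is measured against the
# possible links G_p, recall against the sure links G_s.
def generous_f1(hypothesis, sure, possible):
    H, Gs, Gp = set(hypothesis), set(sure), set(possible)
    p = len(H & Gp) / len(H) if H else 0.0
    r = len(H & Gs) / len(Gs) if Gs else 0.0
    f = 2 * p * r / (p + r) if (p + r) > 0 else 0.0
    return p, r, f

sure = {(0, 0), (1, 1)}
possible = sure | {(2, 2)}
print(generous_f1({(0, 0), (2, 2)}, sure, possible))  # (1.0, 0.5, 0.666...)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},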
{
"text": "Note that the EECB data does not have a sure and possible distinction, so G s = G p , resulting in standard F1. In addition to F1, we separately measure predicate and argument F1 to demonstrate where our model makes the largest improvements. We performed a one-sided paired-bootstrap test where the null hypothesis was that the joint model was no better than the Local baseline (described in Koehn (2004) ). Cases where p < 0.05 are bolded.",
"cite_spans": [
{
"start": 392,
"end": 404,
"text": "Koehn (2004)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "Results for EECB and RF are reported in Table 7 . As previously reported, using just local factors (features on pairs) improves over lemma baselines (Wolfe et al., 2013) . The joint factors make statistically significant gains over local factors in almost all experiments. Fertility factors provide the largest improvements from any single constraint. A fertility penalty actually allows the pairwise weights to be more optimistic in that they can predict more alignments for reasonable pairs, allowing the fertility penalty to ensure only the best is chosen. This penalty also prevents the \"garbage collecting\" effect that arises for instances that have rare features (Brown et al., 1993) .",
"cite_spans": [
{
"start": 149,
"end": 169,
"text": "(Wolfe et al., 2013)",
"ref_id": "BIBREF23"
},
{
"start": 669,
"end": 689,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 40,
"end": 47,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "Temporal constraints are relatively sparse, appearing just 2.8 times on average. Nevertheless, it was very helpful across all experiments, though only statistically significantly on the RF dataset. This is one of the first results to demonstrate benefits of temporal relations affecting an downstream task. Perhaps surprisingly, these improvements result from a a temporal relation system that has relatively poor absolute performance. Despite this, improvements are possibly due to the orthogonal nature of temporal information; no other feature captures this signal. This suggests that future work on temporal relation prediction may yield further improvements and deserves more attention as a useful feature for semantic tasks in NLP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "The predicate-centric factors improved performance significantly on both datasets. For the predicate-centric factor, when a predicate was aligned there is a 72.3% chance that there was at least one argument aligned as well, compared to only 14.1% of case of non-aligned predicates. As mentioned before, the reason the former number isn't 100% is primarily due to implicit arguments and errors in argument identification. The argument-centric features helped almost as much as the predicate-centric version, but the improvements were not significant on the EECB dataset. Running the same diagnostic as the predicate-centric feature reveals similar support: in 57.1% of the cases where an argument was aligned, at least one predicate it partook in was aligned too, compared to 7.6% of cases for non-aligned arguments. Figure 3 : Cross validation results for EECB (above) (Lee et al., 2012) and RF (left) (Roth and Frank, 2012) . Statistically significant improvements from Local marked * (p < 0.05 using a one-sided pairedbootstrap test) and best results are bolded.",
"cite_spans": [
{
"start": 869,
"end": 887,
"text": "(Lee et al., 2012)",
"ref_id": "BIBREF10"
},
{
"start": 902,
"end": 924,
"text": "(Roth and Frank, 2012)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 816,
"end": 824,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "predicate-and argument-centric improve similarly across both predicates and arguments on EECB. While each of the joint factors all improve over the baselines on RF, the full model with all the joint factors does not perform as well as with some factors excluded. Specifically, the fertility model performs the best. We attribute this small gap to lack of training data (RF only contains 64 training document pairs in our experiments), as this is not a problem on the larger EECB dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "Additionally, the joint models seem to trade precision for recall on the RF dataset compared to the Local baseline. Note that both models are tuned to maximize F1, so this tells you more about the shape of the ROC curve as opposed to either models' ability to achieve either high precision or recall. Since we don't see this behavior on the EECB corpus, it is more likely that this is a property of the data than the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "The task of predicate argument linking was introduced by Roth and Frank (2012) , who used a graph parameterized by a small number of semantic features to express similarities between predicates and used min-cuts to produce an alignment. This was followed by Wolfe et al. (2013) , who gave a locallyindependent, feature-rich log-linear model that utilized many lexical semantic resources, similar to the sort employed in RTE challenges. Lee et al. (2012) considered a similar problem but sought to produce clusters of entities and events rather than an alignment between two documents with the goal of improving coreference resolution. They used features which consider previous event and entity coreference decisions to make future coreference decisions in a greedy manner. This differs from our model which is built on non-greedy joint inference, but much of the signal indicating when two mentions corefer or are aligned is similar.",
"cite_spans": [
{
"start": 57,
"end": 78,
"text": "Roth and Frank (2012)",
"ref_id": "BIBREF17"
},
{
"start": 258,
"end": 277,
"text": "Wolfe et al. (2013)",
"ref_id": "BIBREF23"
},
{
"start": 436,
"end": 453,
"text": "Lee et al. (2012)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "In the context of in-document coreference resolution, Recasens et al. (2013) sought to overcome the problem of opaque mentions 7 by finding highprecision paraphrases of entities by pivoting off verbs mentioned in similar documents. We address the issue of opaque mentions not by building a paraphrase table, but by jointly reasoning about entities that participate in coreferent events (c.f. \u00a74); the approaches are complementary.",
"cite_spans": [
{
"start": 54,
"end": 76,
"text": "Recasens et al. (2013)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "In this work we incorporate ordering information of events. Though we consider it an upstream task, there is a line of work trying to predict temporal relations between events (Pustejovsky et al., 2003; Mani et al., 2006; Chambers et al., 2014) . Our results indicate this is a useful source of information, one of the first results to show an improvement from this type of system (Glava\u0161 and\u0160najder, 2013) .",
"cite_spans": [
{
"start": 176,
"end": 202,
"text": "(Pustejovsky et al., 2003;",
"ref_id": "BIBREF15"
},
{
"start": 203,
"end": 221,
"text": "Mani et al., 2006;",
"ref_id": "BIBREF11"
},
{
"start": 222,
"end": 244,
"text": "Chambers et al., 2014)",
"ref_id": "BIBREF4"
},
{
"start": 381,
"end": 406,
"text": "(Glava\u0161 and\u0160najder, 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "We utilize an ILP to improve upon a pipelined system, similar to Roth and Yih (2004) , but our work differs in that we do not use piecewise-trained classifiers. Our local similarity scores are calibrated according to a global objective by propagating the gradient back from the loss to every parameter in the model. When using piecewise training, local classifiers must focus more on recall (in the spirit of Weiss and Taskar (2010)) than they would for an ordinary classification task with no global objective. Our method trains classifiers jointly with a global convex objective. While our training procedure requires decoding an integer program, the parameters we learn are globally optimal.",
"cite_spans": [
{
"start": 65,
"end": 84,
"text": "Roth and Yih (2004)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "We presented a max-margin quadratic cost model for predicate argument alignment, seeking to exploit discourse level semantic features to improve on previous, locally independent approaches. Our model includes factors that consider fertility of predicates and arguments, the predicate argument structure present in coherent discourses, and soft constraints on predicate coreference determined by a temporal relation classifier. We have shown that this model significantly improves upon prior work which uses extensive lexical resources but without the benefit of joint inference. Additionally, this is one of the first demonstrations of the benefits of temporal relation identification. Overall, this work demonstrates the benefits of considering global document information as part of natural language understanding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "Future work should extend the problem formulation of predicate argument alignment to consider incremental linking: starting with a pair of documents, perform linking, and then continue to add in documents over time. This problem formulation would capture the evolution of a breaking news story, which closely matches the type of data (news articles) considered in this work (EECB and RF datasets). This formulation ties into existing work on news summarization, topic detection and tracking, an multi-document NLU. This goes hand with work on better intra-document relation prediction methods, such as the temporal relation model used in this work, to lead to better joint linking decisions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "https://github.com/hltcoe/parma2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Some features inspect the apparent predicate argument structure, based on things like dependency parses, but the model may not inspect more than one of its own decisions (joint factors) while scoring an alignment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www-01.ibm.com/software/ commerce/optimization/cplex-optimizer/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Only tuning \u03c4 performed almost as well as tuning \u03c4 and the Hamming loss, but not tuning \u03c4 performed much worse than only tuning the Hamming loss at train time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/cnap/anno-pipeline 6 The lemma baseline is obviously sensitive to the lemmatizer used. We used the Stanford CoreNLP lemmatizer (Manning et al., 2014) and found it yielded slightly better results than previously reported as the lemma baseline(Roth and Frank, 2012), so we used it for all systems to ensure fairness and that the baseline is as strong as it could be.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A lexically disparate description of an entity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Name phylogeny: A generative model of string variation",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Andrews",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2012,
"venue": "EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "344--355",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas Andrews, Jason Eisner, and Mark Dredze. 2012. Name phylogeny: A generative model of string variation. In EMNLP-CoNLL, pages 344-355. ACL.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The berkeley framenet project",
"authors": [
{
"first": "Collin",
"middle": [
"F"
],
"last": "Baker",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"J"
],
"last": "Fillmore",
"suffix": ""
},
{
"first": "John",
"middle": [
"B"
],
"last": "Lowe",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The berkeley framenet project. In Proceed- ings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics -Volume 1, ACL '98, pages 86-90, Stroudsburg, PA, USA. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unsupervised event coreference resolution with rich linguistic features",
"authors": [
{
"first": "Adrian",
"middle": [],
"last": "Cosmin",
"suffix": ""
},
{
"first": "Sanda",
"middle": [],
"last": "Bejan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Harabagiu",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL '10",
"volume": "",
"issue": "",
"pages": "1412--1422",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cosmin Adrian Bejan and Sanda Harabagiu. 2010. Un- supervised event coreference resolution with rich lin- guistic features. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguis- tics, ACL '10, pages 1412-1422, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "But dictionaries are data too",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A Della"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Meredith",
"middle": [
"J"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Goldsmith",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Hajic",
"suffix": ""
},
{
"first": "Surya",
"middle": [],
"last": "Mercer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mohanty",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the Workshop on Human Language Technology, HLT '93",
"volume": "",
"issue": "",
"pages": "202--205",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, Meredith J. Goldsmith, Jan Hajic, Robert L. Mercer, and Surya Mohanty. 1993. But dictionaries are data too. In Proceedings of the Workshop on Human Language Technology, HLT '93, pages 202-205, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Dense event ordering with a multi-pass architecture",
"authors": [
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Cassidy",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Mcdowell",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathanael Chambers, Taylor Cassidy, Bill McDowell, and Steven Bethard. 2014. Dense event ordering with a multi-pass architecture. Transactions of the Associ- ation for Computational Linguistics, 2.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Constructing corpora for the development and evaluation of paraphrase systems",
"authors": [
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2008,
"venue": "Comput. Linguist",
"volume": "34",
"issue": "4",
"pages": "597--614",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trevor Cohn, Chris Callison-Burch, and Mirella Lapata. 2008. Constructing corpora for the development and evaluation of paraphrase systems. Comput. Linguist., 34(4):597-614, December.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "PPDB: The paraphrase database",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "758--764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. In Proceedings of NAACL-HLT, pages 758- 764, Atlanta, Georgia, June. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Recognizing identical events with graph kernels",
"authors": [
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jan\u0161najder",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "797--803",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goran Glava\u0161 and Jan\u0160najder. 2013. Recognizing identical events with graph kernels. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 797-803, Sofia, Bulgaria, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Statistical significance tests for machine translation evaluation",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Finley",
"suffix": ""
},
{
"first": "Chun-Nam John",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of EMNLP 2004",
"volume": "77",
"issue": "",
"pages": "388--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Joachims, Thomas Finley, and Chun-Nam John Yu. 2009. Cutting-plane training of structural svms. Mach. Learn., 77(1):27-59, October. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Dekang Lin and Dekai Wu, editors, Proceedings of EMNLP 2004, pages 388-395, Barcelona, Spain, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Word alignment via quadratic assignment",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Lacoste-Julien",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2006,
"venue": "HLT-NAACL. The Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Lacoste-Julien, Benjamin Taskar, Dan Klein, and Michael I. Jordan. 2006. Word alignment via quadratic assignment. In Robert C. Moore, Jeff A. Bilmes, Jennifer Chu-Carroll, and Mark Sanderson, editors, HLT-NAACL. The Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Joint entity and event coreference resolution across documents",
"authors": [
{
"first": "Heeyoung",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Marta",
"middle": [],
"last": "Recasens",
"suffix": ""
},
{
"first": "Angel",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "12",
"issue": "",
"pages": "489--500",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heeyoung Lee, Marta Recasens, Angel Chang, Mihai Surdeanu, and Dan Jurafsky. 2012. Joint entity and event coreference resolution across documents. In Proceedings of the 2012 Joint Conference on Empir- ical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP- CoNLL '12, pages 489-500, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Machine learning of temporal relations",
"authors": [
{
"first": "Inderjeet",
"middle": [],
"last": "Mani",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Verhagen",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Wellner",
"suffix": ""
},
{
"first": "Chong",
"middle": [
"Min"
],
"last": "Lee",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "753--760",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Inderjeet Mani, Marc Verhagen, Ben Wellner, Chong Min Lee, and James Pustejovsky. 2006. Machine learn- ing of temporal relations. In Proceedings of the 21st International Conference on Computational Linguis- tics and the 44th annual meeting of the Association for Computational Linguistics, pages 753-760. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language pro- cessing toolkit. In Proceedings of 52nd Annual Meet- ing of the Association for Computational Linguistics: System Demonstrations, pages 55-60.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Wordnet: A lexical database for english",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller. 1995. Wordnet: A lexical database for english. Communications of the ACM, 38:39-41.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Annotated gigaword",
"authors": [
{
"first": "Courtney",
"middle": [],
"last": "Napoles",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Gormley",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2012,
"venue": "AKBC-WEKEX Workshop at NAACL 2012",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated gigaword. In AKBC- WEKEX Workshop at NAACL 2012, June.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Timeml: Robust specification of event and temporal expressions in text",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
},
{
"first": "Jos",
"middle": [],
"last": "Castao",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Ingria",
"suffix": ""
},
{
"first": "Roser",
"middle": [],
"last": "Saur",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Setzer",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Katz",
"suffix": ""
}
],
"year": 2003,
"venue": "Fifth International Workshop on Computational Semantics (IWCS-5)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Pustejovsky, Jos Castao, Robert Ingria, Roser Saur, Robert Gaizauskas, Andrea Setzer, and Graham Katz. 2003. Timeml: Robust specification of event and temporal expressions in text. In in Fifth Interna- tional Workshop on Computational Semantics (IWCS- 5).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Same referent, different words: Unsupervised mining of opaque coreferent mentions",
"authors": [
{
"first": "Marta",
"middle": [],
"last": "Recasens",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Can",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "897--906",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marta Recasens, Matthew Can, and Daniel Jurafsky. 2013. Same referent, different words: Unsupervised mining of opaque coreferent mentions. In Proceed- ings of the 2013 Conference of the North American Chapter of the Association for Computational Linguis- tics: Human Language Technologies, pages 897-906, Atlanta, Georgia, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Aligning predicate argument structures in monolingual comparable texts: a new corpus for a new task",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Sixth International Workshop on Semantic Evaluation, SemEval '12",
"volume": "1",
"issue": "",
"pages": "218--227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Roth and Anette Frank. 2012. Aligning pred- icate argument structures in monolingual comparable texts: a new corpus for a new task. In Proceedings of the First Joint Conference on Lexical and Com- putational Semantics -Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, SemEval '12, pages 218-227, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A linear programming formulation for global inference in natural language tasks",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of CoNLL-2004",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Roth and Wen-tau Yih. 2004. A linear programming formulation for global inference in natural language tasks. In In Proceedings of CoNLL-2004, pages 1-8.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Inducing Implicit Arguments via Cross-document Alignment: A Framework and its Applications",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Roth. 2014. Inducing Implicit Arguments via Cross-document Alignment: A Framework and its Ap- plications. Ph.D. thesis, Heidelberg University, June.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Max-margin markov networks",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
},
{
"first": "Daphne",
"middle": [],
"last": "Koller",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ben Taskar, Carlos Guestrin, and Daphne Koller. 2003. Max-margin markov networks. MIT Press.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Semeval-2013 task 1: Tempeval-3: Evaluating time expressions, events, and temporal relations",
"authors": [
{
"first": "Naushad",
"middle": [],
"last": "Uzzaman",
"suffix": ""
},
{
"first": "Hector",
"middle": [],
"last": "Llorens",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Allen",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Verhagen",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation",
"volume": "2",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naushad UzZaman, Hector Llorens, Leon Derczynski, James Allen, Marc Verhagen, and James Pustejovsky. 2013. Semeval-2013 task 1: Tempeval-3: Evaluat- ing time expressions, events, and temporal relations. In Second Joint Conference on Lexical and Compu- tational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Eval- uation (SemEval 2013), pages 1-9, Atlanta, Georgia, USA, June. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Structured prediction cascades",
"authors": [
{
"first": "David",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Taskar",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Machine Learning Research -Proceedings Track",
"volume": "9",
"issue": "",
"pages": "916--923",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Weiss and Benjamin Taskar. 2010. Structured pre- diction cascades. Journal of Machine Learning Re- search -Proceedings Track, 9:916-923.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Parma: A predicate argument aligner",
"authors": [
{
"first": "Travis",
"middle": [],
"last": "Wolfe",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Andrews",
"suffix": ""
},
{
"first": "Charley",
"middle": [],
"last": "Bellar",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Jay",
"middle": [],
"last": "Deyoung",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Snyder",
"suffix": ""
},
{
"first": "Jonathann",
"middle": [],
"last": "Weese",
"suffix": ""
},
{
"first": "Tan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xuchen",
"middle": [],
"last": "Yao",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Travis Wolfe, Benjamin Van Durme, Mark Dredze, Nicholas Andrews, Charley Bellar, Chris Callison- Burch, Jay DeYoung, Justin Snyder, Jonathann Weese, Tan Xu, and Xuchen Yao. 2013. Parma: A predicate argument aligner. In Proceedings of the 51th Annual Meeting of the Association for Computational Linguis- tics (Volume 2: Short Papers). Association for Compu- tational Linguistics, July.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Answer extraction as sequence tagging with tree edit distance",
"authors": [
{
"first": "Xuchen",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callisonburch",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2013,
"venue": "North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuchen Yao, Benjamin Van Durme, Chris Callison- burch, and Peter Clark. 2013. Answer extraction as sequence tagging with tree edit distance. In In North American Chapter of the Association for Computa- tional Linguistics (NAACL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "An example analysis and predicate argument alignment task between a source and target document."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Figure 1 makes clear that jointly considering all links at once can aid individual decisions, for example, by including temporal ordering of predicates."
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "factors: costs = dot(w, phi.features) z_mv.add_terms(costs, phi.vars) z_mv.add_constraints(phi.constraints) solve_ILP(z_mv) mu = (z.size + k) / (avg_z_size + k) delta_features += mu * (f(z) -f(z_mv)) loss += mu * Delta(z, z_mv) return Constraint(delta_features, loss) def hinge(c, w):return max(0, c.loss -dot(w, c.delta_features)) Learning algorithm (caching and ILP solver not shown). The sum in each constraint is performed once when finding the constraint, and implicitly thereafter."
}
}
}
}