{
"paper_id": "N06-1021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:46:33.221774Z"
},
"title": "Multilingual Dependency Parsing using Bayes Point Machines",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Corston-Oliver",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research One Microsoft Way Redmond",
"location": {
"postCode": "98052",
"region": "WA"
}
},
"email": "simonco@microsoft.com"
},
{
"first": "Anthony",
"middle": [],
"last": "Aue",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research",
"location": {
"addrLine": "One Microsoft Way Redmond",
"postCode": "98052",
"region": "WA"
}
},
"email": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": "",
"affiliation": {},
"email": "duh@ee.washington.edu"
},
{
"first": "Eric",
"middle": [],
"last": "Ringger",
"suffix": "",
"affiliation": {},
"email": "ringger@cs.byu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We develop dependency parsers for Arabic, English, Chinese, and Czech using Bayes Point Machines, a training algorithm which is as easy to implement as the perceptron yet competitive with large margin methods. We achieve results comparable to state-of-the-art in English and Czech, and report the first directed dependency parsing accuracies for Arabic and Chinese. Given the multilingual nature of our experiments, we discuss some issues regarding the comparison of dependency parsers for different languages.",
"pdf_parse": {
"paper_id": "N06-1021",
"_pdf_hash": "",
"abstract": [
{
"text": "We develop dependency parsers for Arabic, English, Chinese, and Czech using Bayes Point Machines, a training algorithm which is as easy to implement as the perceptron yet competitive with large margin methods. We achieve results comparable to state-of-the-art in English and Czech, and report the first directed dependency parsing accuracies for Arabic and Chinese. Given the multilingual nature of our experiments, we discuss some issues regarding the comparison of dependency parsers for different languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Dependency parsing is an alternative to constituency analysis with a venerable tradition going back at least two millenia. The last century has seen attempts to formalize dependency parsing, particularly in the Prague School approach to linguistics (Tesni\u00e8re, 1959; Mel\u010duk, 1988) .",
"cite_spans": [
{
"start": 249,
"end": 265,
"text": "(Tesni\u00e8re, 1959;",
"ref_id": "BIBREF15"
},
{
"start": 266,
"end": 279,
"text": "Mel\u010duk, 1988)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In a dependency analysis of syntax, words directly modify other words. Unlike constituency analysis, there are no intervening non-lexical nodes. We use the terms child and parent to denote the dependent term and the governing term respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Parsing has many potential applications, ranging from question answering and information retrieval to grammar checking. Our intended application is machine translation in the Microsoft Research Treelet Translation System . This system expects an analysis of the source language in which words are related by directed, unlabeled dependencies. For the purposes of developing machine translation for several language pairs, we are interested in dependency analyses for multiple languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The contributions of this paper are two-fold: First, we present a training algorithm called Bayes Point Machines (Herbrich et al., 2001; Harrington et al., 2003) , which is as easy to implement as the perceptron, yet competitive with large margin methods. This algorithm has implications for anyone interested in implementing discriminative training methods for any application. Second, we develop parsers for English, Chinese, Czech, and Arabic and probe some linguistic questions regarding dependency analyses in different languages. To the best of our knowledge, the Arabic and Chinese results are the first reported results to date for directed dependencies. In the following, we first describe the data (Section 2) and the basic parser architecture (Section 3). Section 4 introduces the Bayes Point Machine while Section 5 describes the features for each language. We conclude with experimental results and discussions in Sections 6 and 7.",
"cite_spans": [
{
"start": 113,
"end": 136,
"text": "(Herbrich et al., 2001;",
"ref_id": "BIBREF8"
},
{
"start": 137,
"end": 161,
"text": "Harrington et al., 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We utilize publicly available resources in Arabic, Chinese, Czech, and English for training our dependency parsers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "For Czech we used the Prague Dependency Treebank version 1.0 (LDC2001T10). This is a corpus of approximately 1.6 million words. We divided the data into the standard splits for training, devel-opment test and blind test. The Prague Czech Dependency Treebank is provided with human-edited and automatically-assigned morphological information, including part-of-speech labels. Training and evaluation was performed using the automaticallyassigned labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "For Arabic we used the Prague Arabic Dependency Treebank version 1.0 (LDC2004T23). Since there is no standard split of the data into training and test sections, we made an approximate 70%/15%/15% split for training/development test/blind test by sampling whole files. The Arabic Dependency Treebank is considerably smaller than that used for the other languages, with approximately 117,000 tokens annotated for morphological and syntactic relations. The relatively small size of this corpus, combined with the morphological complexity of Arabic and the heterogeneity of the corpus (it is drawn from five different newspapers across a three-year time period) is reflected in the relatively low dependency accuracy reported below. As with the Czech data, we trained and evaluated using the automatically-assigned part-of-speech labels provided with the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "Both the Czech and the Arabic corpora are annotated in terms of syntactic dependencies. For English and Chinese, however, no corpus is available that is annotated in terms of dependencies. We therefore applied head-finding rules to treebanks that were annotated in terms of constituency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "For English, we used the Penn Treebank version 3.0 (Marcus et al., 1993) and extracted dependency relations by applying the head-finding rules of (Yamada and Matsumoto, 2003) . These rules are a simplification of the head-finding rules of (Collins, 1999) . We trained on sections 02-21, used section 24 for development test and evaluated on section 23. The English Penn Treebank contains approximately one million tokens. Training and evaluation against the development test set was performed using human-annotated part-of-speech labels. Evaluation against the blind test set was performed using part-of-speech labels assigned by the tagger described in (Toutanova et al., 2003) .",
"cite_spans": [
{
"start": 51,
"end": 72,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF9"
},
{
"start": 146,
"end": 174,
"text": "(Yamada and Matsumoto, 2003)",
"ref_id": "BIBREF19"
},
{
"start": 239,
"end": 254,
"text": "(Collins, 1999)",
"ref_id": "BIBREF1"
},
{
"start": 654,
"end": 678,
"text": "(Toutanova et al., 2003)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "For Chinese, we used the Chinese Treebank version 5.0 (Xue et al., 2005) . This corpus contains approximately 500,000 tokens. We made an approximate 70%/15%/15% split for training/development test/blind test by sampling whole files. As with the English Treebank, training and evaluation against the development test set was performed using human-annotated part-of-speech labels. For evaluation against the blind test section, we used an implementation of the tagger described in (Toutanova et al., 2003) . Trained on the same training section as that used for training the parser and evaluated on the development test set, this tagger achieved a token accuracy of 92.2% and a sentence accuracy of 63.8%.",
"cite_spans": [
{
"start": 54,
"end": 72,
"text": "(Xue et al., 2005)",
"ref_id": "BIBREF18"
},
{
"start": 479,
"end": 503,
"text": "(Toutanova et al., 2003)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "The corpora used vary in homogeneity from the extreme case of the English Penn Treebank (a large corpus drawn from a single source, the Wall Street Journal) to the case of Arabic (a relatively small corpus-approximately 2,000 sentences-drawn from multiple sources). Furthermore, each language presents unique problems for computational analysis. Direct comparison of the dependency parsing results for one language to the results for another language is therefore difficult, although we do attempt in the discussion below to provide some basis for a more direct comparison. A common question when considering the deployment of a new language for machine translation is whether the natural language components available are of sufficient quality to warrant the effort to integrate them into the machine translation system. It is not feasible in every instance to do the integration work first and then to evaluate the output. Table 1 summarizes the data used to train the parsers, giving the number of tokens (excluding traces and other empty elements) and counts of sentences. 1",
"cite_spans": [],
"ref_spans": [
{
"start": 925,
"end": 932,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "We take as our starting point a re-implementation of McDonald's state-of-the-art dependency parser (McDonald et al., 2005a) . Given a sentence x, the goal of the parser is to find the highest-scoring pars\u00ea y among all possible parses y \u2208 Y : For a given parse y, its score is the sum of the scores of all its dependency links (i, j) \u2208 y:",
"cite_spans": [
{
"start": 99,
"end": 123,
"text": "(McDonald et al., 2005a)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parser Architecture",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y = arg max y\u2208Y s(x, y)",
"eq_num": "(1)"
}
],
"section": "Parser Architecture",
"sec_num": "3"
},
{
"text": "s(x, y) = (i,j)\u2208y d(i, j) = (i,j)\u2208y w \u2022 f (i, j) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser Architecture",
"sec_num": "3"
},
{
"text": "where the link (i, j) indicates a head-child dependency between the token at position i and the token at position j. The score d(i, j) of each dependency link (i, j) is further decomposed as the weighted sum of its features f (i, j). This parser architecture naturally consists of three modules: (1) a decoder that enumerates all possible parses y and computes the argmax; (2) a training algorithm for adjusting the weights w given the training data; and (3) a feature representation f (i, j). Two decoders will be discussed here; the training algorithm and feature representation are discussed in the following sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser Architecture",
"sec_num": "3"
},
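{
"text": "As an illustration of the decomposition in Equation 2, the following sketch scores a candidate parse as the sum over its links of the dot product between a weight vector and the link's feature vector. This is our own minimal example rather than the authors' implementation; the token representation, the feature names and the sparse dictionary encoding of w and f(i, j) are assumptions made for readability.\n\ndef link_features(sentence, i, j):\n    # Hypothetical feature extractor: returns a sparse feature vector f(i, j)\n    # for a dependency link with head at position i and child at position j.\n    head, child = sentence[i], sentence[j]\n    direction = 'R' if i < j else 'L'\n    return {\n        ('head_word', head['word']): 1.0,\n        ('head_pos', head['pos']): 1.0,\n        ('child_word', child['word']): 1.0,\n        ('child_pos', child['pos']): 1.0,\n        ('pos_pair_dir', head['pos'], child['pos'], direction): 1.0,\n    }\n\ndef score_parse(sentence, links, w):\n    # s(x, y) = sum over links (i, j) in y of w . f(i, j), as in Equation 2.\n    total = 0.0\n    for i, j in links:\n        features = link_features(sentence, i, j)\n        total += sum(w.get(name, 0.0) * value for name, value in features.items())\n    return total",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser Architecture",
"sec_num": "3"
},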
{
"text": "A good decoder should satisfy several properties: ideally, it should be able to search through all valid parses of a sentence and compute the parse scores efficiently. Efficiency is a significant issue since there are usually an exponential number of parses for any given sentence, and the discriminative training methods we will describe later require repeated decoding at each training iteration. We reimplemented Eisner's decoder (Eisner, 1996) , which searches among all projective parse trees, and the Chu-Liu-Edmonds' decoder (Chu and Liu, 1965; Edmonds, 1967) , which searches in the space of both projective and non-projective parses. (A projective tree is a parse with no crossing dependency links.) For the English and Chinese data, the headfinding rules for converting from Penn Treebank analyses to dependency analyses creates trees that are guaranteed to be projective, so Eisner's algorithm suffices. For the Czech and Arabic corpora, a non-projective decoder is necessary. Both algorithms are O(N 3 ), where N is the number of words in a sentence. 2 Refer to (McDonald et al., 2005b) for a detailed treatment of both algorithms.",
"cite_spans": [
{
"start": 433,
"end": 447,
"text": "(Eisner, 1996)",
"ref_id": "BIBREF5"
},
{
"start": 532,
"end": 551,
"text": "(Chu and Liu, 1965;",
"ref_id": "BIBREF0"
},
{
"start": 552,
"end": 566,
"text": "Edmonds, 1967)",
"ref_id": "BIBREF4"
},
{
"start": 1074,
"end": 1098,
"text": "(McDonald et al., 2005b)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parser Architecture",
"sec_num": "3"
},
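{
"text": "The projectivity condition mentioned above (no crossing dependency links) can be checked directly. The following small function is an illustration we add, not part of either decoder; it assumes a parse is given as a list of (head, child) position pairs. For example, it returns False for a parse containing the crossing links (1, 3) and (2, 4).\n\ndef is_projective(links):\n    # links: list of (head, child) position pairs for one sentence.\n    # Two links cross if one of them starts strictly inside the span of the\n    # other and ends strictly outside it.\n    spans = [(min(h, c), max(h, c)) for h, c in links]\n    for a, (lo1, hi1) in enumerate(spans):\n        for lo2, hi2 in spans[a + 1:]:\n            if lo1 < lo2 < hi1 < hi2 or lo2 < lo1 < hi2 < hi1:\n                return False\n    return True",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser Architecture",
"sec_num": "3"
},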
{
"text": "In this section, we describe an online learning algorithm for training the weights w. First, we argue why an online learner is more suitable than a batch learner like a Support Vector Machine (SVM) for this task. We then review some standard online learners (e.g. perceptron) before presenting the Bayes Point Machine (BPM) (Herbrich et al., 2001; Harrington et al., 2003) .",
"cite_spans": [
{
"start": 324,
"end": 347,
"text": "(Herbrich et al., 2001;",
"ref_id": "BIBREF8"
},
{
"start": 348,
"end": 372,
"text": "Harrington et al., 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training: The Bayes Point Machine",
"sec_num": "4"
},
{
"text": "An online learner differs from a batch learner in that it adjusts w incrementally as each input sample is revealed. Although the training data for our parsing problem exists as a batch (i.e. all input samples are available during training), we can apply online learning by presenting the input samples in some sequential order. For large training set sizes, a batch learner may face computational difficulties since there already exists an exponential number of parses per input sentence. Online learning is more tractable since it works with one input at a time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Online Learning",
"sec_num": "4.1"
},
{
"text": "A popular online learner is the perceptron. It adjusts w by updating it with the feature vector whenever a misclassification on the current input sample occurs. It has been shown that such updates converge in a finite number of iterations if the data is linearly separable. The averaged perceptron (Collins, 2002) is a variant which averages the w across all iterations; it has demonstrated good generalization especially with data that is not linearly separable, as in many natural language processing problems.",
"cite_spans": [
{
"start": 298,
"end": 313,
"text": "(Collins, 2002)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Online Learning",
"sec_num": "4.1"
},
{
"text": "Recently, the good generalization properties of Support Vector Machines have prompted researchers to develop large margin methods for the online setting. Examples include the margin perceptron (Duda et al., 2001) , ALMA (Gentile, 2001) , and MIRA (which is used to train the parser in (McDonald et al., 2005a) ). Conceptually, all these methods attempt to achieve a large margin and approximate the maximum margin solution of SVMs.",
"cite_spans": [
{
"start": 193,
"end": 212,
"text": "(Duda et al., 2001)",
"ref_id": "BIBREF3"
},
{
"start": 220,
"end": 235,
"text": "(Gentile, 2001)",
"ref_id": "BIBREF6"
},
{
"start": 285,
"end": 309,
"text": "(McDonald et al., 2005a)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Online Learning",
"sec_num": "4.1"
},
{
"text": "The Bayes Point Machine (BPM) achieves good generalization similar to that of large margin methods, but is motivated by a very different philosophy of Bayesian learning or model averaging. In the Bayesian learning framework, we assume a prior distribution over w. Observations of the training data revise our belief of w and produce a posterior distribution. The posterior distribution is used to create the final w BPM for classification:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayes Point Machines",
"sec_num": "4.2"
},
{
"text": "w BPM = E p(w|D) [w] = |V (D)| i=1 p(w i |D) w i (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayes Point Machines",
"sec_num": "4.2"
},
{
"text": "where p(w|D) is the posterior distribution of the weights given the data D and E p(w|D) is the expectation taken with respect to this distribution. The term |V (D)| is the size of the version space V (D), which is the set of weights w i that is consistent with the training data (i.e. the set of w i that classifies the training data with zero error). This solution achieves the so-called Bayes Point, which is the best approximation to the Bayes optimal solution given finite training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayes Point Machines",
"sec_num": "4.2"
},
{
"text": "In practice, the version space may be large, so we approximate it with a finite sample of size I. Further, assuming a uniform prior over weights, we get the following equation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayes Point Machines",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w BPM = E p(w|D) [w] \u2248 I i=1 1 I w i",
"eq_num": "(4)"
}
],
"section": "Bayes Point Machines",
"sec_num": "4.2"
},
{
"text": "Equation 4 can be computed by a very simple algorithm: (1) Train separate perceptrons on different random shuffles of the entire training data, obtaining a set of w i . (2) Take the average (arithmetic mean) of the weights w i . It is well-known that perceptron training results in different weight vector solutions",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayes Point Machines",
"sec_num": "4.2"
},
{
"text": "Input: Training set D = ((x 1 , y 1 ), (x 2 , y 2 ), . . . , (x T , y T )) Output: w BPM",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayes Point Machines",
"sec_num": "4.2"
},
{
"text": "Initialize: wBPM = 0 for i = 1 to I; do Randomly shuffle the sequential order of samples in D Initialize: wi = 0 for t = 1 to T; d\u00f4 if the data samples are presented sequentially in different orders. Therefore, random shuffles of the data and training a perceptron on each shuffle is effectively equivalent to sampling different models (w i ) in the version space. Note that this averaging operation should not be confused with ensemble techniques such as Bagging or Boosting-ensemble techniques average the output hypotheses, whereas BPM averages the weights (models).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayes Point Machines",
"sec_num": "4.2"
},
{
"text": "The BPM pseudocode is given in Figure 1 . The inner loop is simply a perceptron algorithm, so the BPM is very simple and fast to implement. The outer loop is easily parallelizable, allowing speedups in training the BPM. In our specific implementation for dependency parsing, the line of the pseudocode corresponding to [\u0177 t = w i \u2022 x t ] is replaced by Eq. 1 and updates are performed for each incorrect dependency link. Also, we chose to average each individual perceptron (Collins, 2002) prior to Bayesian averaging.",
"cite_spans": [
{
"start": 474,
"end": 489,
"text": "(Collins, 2002)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 31,
"end": 39,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Bayes Point Machines",
"sec_num": "4.2"
},
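{
"text": "To make the training loop of Figure 1 concrete, the sketch below trains I perceptrons on independent random shuffles of the data and returns the uniform average of their weights, following Equation 4. It is a simplified binary-classification version added for illustration only; the parsing version would replace the prediction step with the argmax of Equation 1 and update on every incorrect dependency link, and none of the helper names come from the paper.\n\nimport random\n\ndef train_bpm(data, num_perceptrons=10, epochs=1, seed=0):\n    # data: list of (sparse_feature_dict, label) pairs with label in {-1, +1}.\n    # Returns w_BPM, the average of perceptrons trained on different shuffles.\n    rng = random.Random(seed)\n    w_bpm = {}\n    for _ in range(num_perceptrons):\n        shuffled = list(data)\n        rng.shuffle(shuffled)  # a different presentation order yields a different perceptron\n        w_i = {}\n        for _ in range(epochs):\n            for x, y in shuffled:\n                score = sum(w_i.get(k, 0.0) * v for k, v in x.items())\n                y_hat = 1 if score >= 0 else -1\n                if y_hat != y:  # perceptron update on a mistake\n                    for k, v in x.items():\n                        w_i[k] = w_i.get(k, 0.0) + y * v\n        for k, v in w_i.items():  # accumulate 1/I of each perceptron's weights\n            w_bpm[k] = w_bpm.get(k, 0.0) + v / num_perceptrons\n    return w_bpm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayes Point Machines",
"sec_num": "4.2"
},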
{
"text": "Finally, it is important to note that the definition of the version space can be extended to include weights with non-zero training error, so the BPM can handle data that is not linearly separable. Also, although we only presented an algorithm for linear classifiers (parameterized by the weights), arbitrary kernels can be applied to BPM to allow non-linear decision boundaries. Refer to (Herbrich et al., 2001 ) for a comprehensive treatment of BPMs.",
"cite_spans": [
{
"start": 389,
"end": 411,
"text": "(Herbrich et al., 2001",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bayes Point Machines",
"sec_num": "4.2"
},
{
"text": "Dependency parsers for all four languages were trained using the same set of feature types. The feature types are essentially those described in (Mc-Donald et al., 2005a) . For a given pair of tokens, where one is hypothesized to be the parent and the other to be the child, we extract the word of the parent token, the part of speech of the parent token, the word of the child token, the part of speech of the child token and the part of speech of certain adjacent and intervening tokens. Some of these atomic features are combined in feature conjunctions up to four long, with the result that the linear classifiers described below approximate polynomial kernels. For example, in addition to the atomic features extracted from the parent and child tokens, the feature [Par-entWord, ParentPOS, ChildWord, ChildPOS] is also added to the feature vector representing the dependency between the two tokens. Additional features are created by conjoining each of these features with the direction of the dependency (i.e. is the parent to the left or right of the child) and a quantized measure of the distance between the two tokens. Every token has exactly one parent. The root of the sentence has a special synthetic token as its parent.",
"cite_spans": [
{
"start": 145,
"end": 170,
"text": "(Mc-Donald et al., 2005a)",
"ref_id": null
},
{
"start": 770,
"end": 815,
"text": "[Par-entWord, ParentPOS, ChildWord, ChildPOS]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5"
},
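{
"text": "To illustrate the kind of feature conjunctions just described, the sketch below builds a few atomic features for a hypothesized parent-child pair, the four-way conjunction mentioned in the text, and variants conjoined with link direction and a quantized distance. The specific feature names, the distance buckets and the token representation are our assumptions, not the authors' feature inventory.\n\ndef quantize_distance(i, j):\n    # Bucket the absolute distance between parent and child positions.\n    d = abs(i - j)\n    for bucket in (1, 2, 3, 4, 5, 10):\n        if d <= bucket:\n            return str(bucket)\n    return '10+'\n\ndef pair_features(parent, child, i, j):\n    direction = 'R' if i < j else 'L'\n    dist = quantize_distance(i, j)\n    atomic = [\n        ('PW', parent['word']), ('PP', parent['pos']),\n        ('CW', child['word']), ('CP', child['pos']),\n        # the four-way conjunction [ParentWord, ParentPOS, ChildWord, ChildPOS]\n        ('PW+PP+CW+CP', parent['word'], parent['pos'], child['word'], child['pos']),\n    ]\n    features = list(atomic)\n    for feat in atomic:  # conjoin each feature with direction and distance\n        features.append(feat + ('dir=' + direction, 'dist=' + dist))\n    return features",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5"
},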
{
"text": "Like McDonald et al, we add features that consider the first five characters of words longer than five characters. This truncated word crudely approximates stemming. For Czech and English the addition of these features improves accuracy. For Chinese and Arabic, however, it is clear that we need a different backoff strategy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5"
},
{
"text": "For Chinese, we truncate words longer than a single character to the first character. 3 Experimental results on the development test set suggested that an alternative strategy, truncation of words longer than two characters to the first two characters, yielded slightly worse results.",
"cite_spans": [
{
"start": 86,
"end": 87,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5"
},
{
"text": "The Arabic data is annotated with gold-standard morphological information, including information about stems. It is also annotated with the output of an automatic morphological analyzer, so that researchers can experiment with Arabic without first needing to build these components. For Arabic, we truncate words to the stem, using the value of the lemma attribute.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5"
},
{
"text": "All tokens are converted to lowercase, and numbers are normalized. In the case of English, Czech and Arabic, all numbers are normalized to a sin-gle token. In Chinese, months are normalized to a MONTH token, dates to a DATE token, years to a YEAR token. All other numbers are normalized to a single NUMBER token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5"
},
{
"text": "The feature types were instantiated using all oracle combinations of child and parent tokens from the training data. It should be noted that when the feature types are instantiated, we have considerably more features than McDonald et al. For example, for English we have 8,684,328 whereas they report 6,998,447 features. We suspect that this is mostly due to differences in implementation of the features that backoff to stems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5"
},
{
"text": "The averaged perceptrons were trained on the one-best parse, updating the perceptron for every edge and averaging the accumulated perceptrons after every sentence. Experiments in which we updated the perceptron based on k-best parses tended to produce worse results. The Chu-Liu-Edmonds algorithm was used for Czech. Experiments with the development test set suggested that the Eisner decoder gave better results for Arabic than the Chu-Liu-Edmonds decoder. We therefore used the Eisner decoder for Arabic, Chinese and English. Table 2 presents the accuracy of the dependency parsers. Dependency accuracy indicates for how many tokens we identified the correct head. Root accuracy, i.e. for how many sentences did we identify the correct root or roots, is reported as F1 measure, since sentences in the Czech and Arabic corpora can have multiple roots and since the parsing algorithms can identify multiple roots. Complete match indicates how many sentences were a complete match with the oracle dependency parse.",
"cite_spans": [],
"ref_spans": [
{
"start": 528,
"end": 535,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "5"
},
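{
"text": "The three measures just defined (dependency accuracy, root accuracy as F1, and complete match) can be computed as in the sketch below. This is our own illustration of the definitions rather than the authors' evaluation code; it assumes each parse is represented as a map from child position to head position, with head 0 marking attachment to the synthetic root token, and it omits the filtering of punctuation tokens used for the excluding-punctuation variant.\n\ndef evaluate(gold_parses, pred_parses):\n    # Each parse maps child position -> head position; head 0 denotes the\n    # synthetic root parent (token positions are assumed to be 1-based).\n    tokens = correct = sentences = exact = 0\n    gold_roots = pred_roots = root_hits = 0\n    for gold, pred in zip(gold_parses, pred_parses):\n        hits = sum(1 for child, head in gold.items() if pred.get(child) == head)\n        tokens += len(gold)\n        correct += hits\n        sentences += 1\n        exact += int(hits == len(gold))\n        g_roots = {c for c, h in gold.items() if h == 0}\n        p_roots = {c for c, h in pred.items() if h == 0}\n        gold_roots += len(g_roots)\n        pred_roots += len(p_roots)\n        root_hits += len(g_roots & p_roots)\n    dep_acc = correct / tokens\n    precision = root_hits / pred_roots if pred_roots else 0.0\n    recall = root_hits / gold_roots if gold_roots else 0.0\n    root_f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0\n    return dep_acc, root_f1, exact / sentences",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5"
},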
{
"text": "A convention appears to have arisen when reporting dependency accuracy to give results for English excluding punctuation (i.e., ignoring punctuation tokens in the output of the parser) and to report results for Czech including punctuation. In order to facilitate comparison of the present results with previously published results, we present measures including and excluding punctuation for all four languages. We hope that by presenting both sets of measurements, we also simplify one dimension along which published results of parse accuracy differ. A direct comparison of parse results across languages is still difficult for reasons to do with the different nature of the languages, the corpora and the differing standards of linguistic detail annotated, but a comparison of parsers for two different languages where both results include punctuation is at least preferable to a comparison of results including punctuation to results excluding punctuation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "The results reported here for English and Czech are comparable to the previous best published numbers in (McDonald et al., 2005a) , as Table 3 shows. This table compares McDonald et al.'s results for an averaged perceptron trained for ten iterations with no check for convergence (Ryan McDonald, pers. comm.) , MIRA, a large margin classifier, and the current Bayes Point Machine results. To determine statistical significance we used confidence intervals for p=0.95. For the comparison of English dependency accuracy excluding punctuation, MIRA and BPM are both statistically significantly better than the averaged perceptron result reported in (McDonald et al., 2005a) . MIRA is significantly better than BPM when measuring dependency accuracy and root accuracy, but BPM is significantly better when measuring sentences that match completely. From the fact that neither MIRA nor BPM clearly outperforms the other, we conclude that we have successfully replicated the results reported in (Mc-Donald et al., 2005a) for English.",
"cite_spans": [
{
"start": 105,
"end": 129,
"text": "(McDonald et al., 2005a)",
"ref_id": "BIBREF10"
},
{
"start": 280,
"end": 308,
"text": "(Ryan McDonald, pers. comm.)",
"ref_id": null
},
{
"start": 646,
"end": 670,
"text": "(McDonald et al., 2005a)",
"ref_id": "BIBREF10"
},
{
"start": 989,
"end": 1014,
"text": "(Mc-Donald et al., 2005a)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 135,
"end": 142,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "For Czech we also determined significance using confidence intervals for p=0.95 and compared results including punctuation. For both dependency accuracy and root accuracy, MIRA is statisticallty significantly better than averaged perceptron, and BPM is statistically significantly better than MIRA. Measuring the number of sentences that match completely, BPM is statistically significantly better than averaged perceptron, but MIRA is significantly better than BPM. Again, since neither MIRA nor BPM outperforms the other on all measures, we conclude that the results constitute a valiation of the results reported in (McDonald et al., 2005a) .",
"cite_spans": [
{
"start": 619,
"end": 643,
"text": "(McDonald et al., 2005a)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "For every language, the dependency accuracy of the Bayes Point Machine was greater than the accuracy of the best individual perceptron that contributed to that Bayes Point Machine, as Table 4 shows. As previously noted, when measuring against the development test set, we used humanannotated part-of-speech labels for English and Chinese.",
"cite_spans": [],
"ref_spans": [
{
"start": 184,
"end": 191,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Although the Prague Czech Dependency Treebank is much larger than the English Penn Treebank, all measurements are lower than the corresponding measurements for English. This reflects the fact that Czech has considerably more inflectional morphology than English, leading to data sparsity for the lexical features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "The results reported here for Arabic are, to our knowledge, the first published numbers for dependency parsing of Arabic. Similarly, the results for Chinese are the first published results for the dependency parsing of the Chinese Treebank 5.0. 4 Since the Arabic and Chinese numbers are well short of the numbers for Czech and English, we attempted to determine what impact the smaller corpora used for training the Arabic and Chinese parsers might have. We performed data reduction experiments, training the parsers on five random samples at each size smaller than the entire training set. Figure 2 shows the dependency accuracy measured on the complete development test set when training with samples of the data. The graph shows the average dependency accuracy for five runs at each sample size up to 5,000 sentences. English and Chinese accuracies in this graph use oracle part-of-speech tags. At all sample sizes, the dependency accuracy for English exceeds the dependency accuracy of the other languages. This difference is perhaps partly attributable to the use of oracle part-of-speech tags. However, we suspect that the major contributor to this difference is the part-of-speech tag set. The tags used in the English Penn Treebank encode traditional lexical categories such as noun, preposition, and verb. They also encode morphological information such as person (the VBZ tag for example is used for verbs that are third person, present tense-typically with the suffix -s), tense, number and degree of comparison. The part-of-speech tag sets used for the other languages encode lexical categories, but do not encode morphological information. 5 With small amounts of data, the perceptrons do not encounter sufficient instances of each lexical item to calculate reliable weights. The perceptrons are therefore forced to rely on the part-of-speech information. It is surprising that the results for Arabic and Chinese should be so close as we vary the size of the training data ( Figure 2) given that Arabic has rich morphology and Chinese very little. One possible explanation for the similarity in accuracy is that the rather poor root accuracy in Chinese indicates parses that have gone awry. Anecdotal inspection of parses suggests that when the root is not correctly identified, there are usually cascading related errors.",
"cite_spans": [
{
"start": 1654,
"end": 1655,
"text": "5",
"ref_id": null
}
],
"ref_spans": [
{
"start": 592,
"end": 600,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 1989,
"end": 1998,
"text": "Figure 2)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Czech, a morphologically complex language in which root identification is far from straightforward, exhibits the worst performance at small sample sizes. But (not shown) as the sample size increases, the accuracy of Czech and Chinese converge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "We have successfully replicated the state-of-the-art results for dependency parsing (McDonald et al., 2005a) for both Czech and English, using Bayes Point Machines. Bayes Point Machines have the appealing property of simplicity, yet are competitive with online wide margin methods.",
"cite_spans": [
{
"start": 84,
"end": 108,
"text": "(McDonald et al., 2005a)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "We have also presented first results for dependency parsing of Arabic and Chinese, together with some analysis of the performance on those languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "In future work we intend to explore the discriminative reranking of n-best lists produced by these parsers and the incorporation of morphological features. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "The files in each partition of the Chinese and Arabic data are given at http://research.microsoft.com/\u02dcsimonco/ HLTNAACL2006.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The Chu-Liu-Edmonds' decoder, which is based on a maximal spanning tree algorithm, can run in O(N 2 ), but our simpler implementation of O(N 3 ) was sufficient.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "There is a near 1:1 correspondence between characters and morphemes in contemporary Mandarin Chinese. However, most content words consist of more than one morpheme, typically two.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "(Wang et al., 2005) report numbers for undirected dependencies on the Chinese Treebank 3.0. We cannot meaningfully compare those numbers to the numbers here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For Czech and Arabic we followed the convention established in previous parsing work on the Prague Czech Dependency Treebank of using the major and minor part-of-speech tags but ignoring other morphological information annotated on each node.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Ryan McDonald, Otakar Smr\u017e and Hiroyasu Yamada for help in various stages of the project.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "On the shortest arborescence of a directed graph",
"authors": [
{
"first": "J",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "T",
"middle": [
"H"
],
"last": "Liu",
"suffix": ""
}
],
"year": 1965,
"venue": "Science Sinica",
"volume": "14",
"issue": "",
"pages": "1396--1400",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Chu and T.H. Liu. 1965. On the shortest arbores- cence of a directed graph. Science Sinica, 14:1396- 1400.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Head-Driven Statistical Models for Natural Language Processing",
"authors": [
{
"first": "Michael John",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael John Collins. 1999. Head-Driven Statistical Models for Natural Language Processing. Ph.D. the- sis, University of Pennsylvania.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms",
"authors": [
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of EMNLP.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Pattern Classification",
"authors": [
{
"first": "R",
"middle": [
"O"
],
"last": "Duda",
"suffix": ""
},
{
"first": "P",
"middle": [
"E"
],
"last": "Hart",
"suffix": ""
},
{
"first": "D",
"middle": [
"G"
],
"last": "Stork",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. O. Duda, P. E. Hart, and D. G. Stork. 2001. Pattern Classification. John Wiley & Sons, Inc.: New York.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Journal of Research of the National Bureau of Standards",
"authors": [
{
"first": "J",
"middle": [],
"last": "Edmonds",
"suffix": ""
}
],
"year": 1967,
"venue": "",
"volume": "71",
"issue": "",
"pages": "233--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Edmonds. 1967. Optimum branchings. Journal of Re- search of the National Bureau of Standards, 71B:233- 240.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Three new probabilistic models for dependency parsing: An exploration",
"authors": [
{
"first": "Jason",
"middle": [
"M"
],
"last": "Eisner",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of COLING 1996",
"volume": "",
"issue": "",
"pages": "340--345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason M. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proceed- ings of COLING 1996, pages 340-345.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A new approximate maximal margin classification algorithm",
"authors": [
{
"first": "Claudio",
"middle": [],
"last": "Gentile",
"suffix": ""
}
],
"year": 2001,
"venue": "Journal of Machine Learning Research",
"volume": "2",
"issue": "",
"pages": "213--242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claudio Gentile. 2001. A new approximate maximal margin classification algorithm. Journal of Machine Learning Research, 2:213-242.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Online bayes point machines",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Harrington",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Herbrich",
"suffix": ""
},
{
"first": "Jyrki",
"middle": [],
"last": "Kivinen",
"suffix": ""
},
{
"first": "John",
"middle": [
"C"
],
"last": "Platt",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"C"
],
"last": "Williamson",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. 7th Pacific-Asia Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "241--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Harrington, Ralf Herbrich, Jyrki Kivinen, John C. Platt, and Robert C. Williamson. 2003. On- line bayes point machines. In Proc. 7th Pacific-Asia Conference on Knowledge Discovery and Data Min- ing, pages 241-252.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Bayes point machines",
"authors": [
{
"first": "Ralf",
"middle": [],
"last": "Herbrich",
"suffix": ""
},
{
"first": "Thore",
"middle": [],
"last": "Graepel",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Campbell",
"suffix": ""
}
],
"year": 2001,
"venue": "Journal of Machine Learning Research",
"volume": "",
"issue": "",
"pages": "245--278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ralf Herbrich, Thore Graepel, and Colin Campbell. 2001. Bayes point machines. Journal of Machine Learning Research, pages 245-278.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Building a large annotated corpus of english: The Penn Treebank",
"authors": [
{
"first": "M",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Marcus, B. Santorini, and M. Marcinkiewicz. 1993. Building a large annotated corpus of english: The Penn Treebank. Computational Linguistics, 19(2):313-330.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Online large-margin training of dependency parsers",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Assocation for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005a. Online large-margin training of dependency parsers. In Proceedings of the 43rd Annual Meeting of the Assocation for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Online large-margin training of dependency parsers",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005b. Online large-margin training of dependency parsers. Technical Report MS-CIS-05-11, Dept. of Computer and Information Science, Univ. of Pennsyl- vania.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Dependency Syntax: Theory and Practice",
"authors": [
{
"first": "Igor",
"middle": [
"A"
],
"last": "Mel\u010duk",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Igor A. Mel\u010duk. 1988. Dependency Syntax: Theory and Practice. State University of New York Press.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Microsoft research treelet translation system: IWSLT evaluation",
"authors": [
{
"first": "Arul",
"middle": [],
"last": "Menezes",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the International Workshop on Spoken Language Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arul Menezes and Chris Quirk. 2005. Microsoft re- search treelet translation system: IWSLT evaluation. In Proceedings of the International Workshop on Spo- ken Language Translation.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Dependency treelet translation: Syntactically informed phrasal SMT",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Arul",
"middle": [],
"last": "Menezes",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Quirk, Arul Menezes, and Colin Cherry. 2005. De- pendency treelet translation: Syntactically informed phrasal SMT. In Proceedings of the 43rd annual meet- ing of the Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "\u00c9l\u00e9ments de syntaxe structurale",
"authors": [
{
"first": "Lucien",
"middle": [],
"last": "Tesni\u00e8re",
"suffix": ""
}
],
"year": 1959,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucien Tesni\u00e8re. 1959.\u00c9l\u00e9ments de syntaxe structurale. Librairie C. Klincksieck.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Feature-rich part-of-speech tagging with a cyclic dependency network",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of HLT-NAACL 2003",
"volume": "",
"issue": "",
"pages": "252--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Pro- ceedings of HLT-NAACL 2003, pages 252-259.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Strictly lexical dependency parsing",
"authors": [
{
"first": "Qin Iris",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Dale",
"middle": [],
"last": "Schuurmans",
"suffix": ""
},
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Ninth International Workshop on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "152--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qin Iris Wang, Dale Schuurmans, and Dekang Lin. 2005. Strictly lexical dependency parsing. In Proceedings of the Ninth International Workshop on Parsing Tech- nologies, pages 152-159.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The penn chinese treebank: Phrase structure annotation of a large corpus",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Fu-Dong",
"middle": [],
"last": "Chiou",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2005,
"venue": "Natural Language Engineering",
"volume": "",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianwen Xue, Fei Xia, Fu-Dong Chiou, and Martha Palmer. 2005. The penn chinese treebank: Phrase structure annotation of a large corpus. Natural Lan- guage Engineering, 11(2).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Statistical dependency analysis with support vector machines",
"authors": [
{
"first": "Hiroyasu",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of IWPT",
"volume": "",
"issue": "",
"pages": "195--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proceedings of IWPT, pages 195-206.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "Bayes Point Machine pseudo-code.",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "Dependency accuracy at various sample sizes. Graph shows average of five samples at each size and measures accuracy against the development test set.",
"type_str": "figure"
},
"TABREF1": {
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Summary of data used to train parsers.",
"num": null
},
"TABREF3": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>Language</td><td>Algorithm</td><td/><td colspan=\"3\">DA RA CM</td></tr><tr><td>English</td><td colspan=\"2\">Avg. Perceptron</td><td colspan=\"3\">90.6 94.0 36.5</td></tr><tr><td colspan=\"2\">(exc punc) MIRA</td><td/><td colspan=\"3\">90.9 94.2 37.5</td></tr><tr><td/><td colspan=\"5\">Bayes Point Machine 90.8 93.7 37.6</td></tr><tr><td>Czech</td><td colspan=\"2\">Avg. Perceptron</td><td colspan=\"3\">82.9 88.0 30.3</td></tr><tr><td colspan=\"2\">(inc punc) MIRA</td><td/><td colspan=\"3\">83.3 88.6 31.3</td></tr><tr><td/><td colspan=\"5\">Bayes Point Machine 84.0 88.8 30.9</td></tr><tr><td/><td/><td colspan=\"4\">Arabic Chinese Czech English</td></tr><tr><td colspan=\"2\">Bayes Point Machine</td><td>78.4</td><td>83.8</td><td>84.5</td><td>91.2</td></tr><tr><td colspan=\"2\">Best averaged perceptron</td><td>77.9</td><td>83.1</td><td>83.5</td><td>90.8</td></tr><tr><td colspan=\"3\">Worst averaged perceptron 77.4</td><td>82.6</td><td>83.3</td><td>90.5</td></tr></table>",
"text": "Comparison to previous best published results reported in(McDonald et al., 2005a).",
"num": null
},
"TABREF4": {
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Bayes Point Machine accuracy vs. averaged perceptrons, measured on development test set, excluding punctuation.",
"num": null
}
}
}
}