{ "paper_id": "I05-1008", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:24:49.076338Z" }, "title": "Using a Partially Annotated Corpus to Build a Dependency Parser for Japanese", "authors": [ { "first": "Manabu", "middle": [], "last": "Sassano", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fujitsu Laboratories, Ltd", "location": { "addrLine": "4-1-1, Nakahara-ku", "postCode": "211-8588", "settlement": "Kamikodanaka, Kawasaki", "country": "Japan" } }, "email": "sassano@jp.fujitsu.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We explore the use of a partially annotated corpus to build a dependency parser for Japanese. We examine two types of partially annotated corpora. It is found that a parser trained with a corpus that does not have any grammatical tags for words can demonstrate an accuracy of 87.38%, which is comparable to the current state-of-the-art accuracy on the Kyoto University Corpus. In contrast, a parser trained with a corpus that has only dependency annotations for each two adjacent bunsetsus (chunks) shows moderate performance. Nonetheless, it is notable that features based on character n-grams are found very useful for a dependency parser for Japanese.", "pdf_parse": { "paper_id": "I05-1008", "_pdf_hash": "", "abstract": [ { "text": "We explore the use of a partially annotated corpus to build a dependency parser for Japanese. We examine two types of partially annotated corpora. It is found that a parser trained with a corpus that does not have any grammatical tags for words can demonstrate an accuracy of 87.38%, which is comparable to the current state-of-the-art accuracy on the Kyoto University Corpus. In contrast, a parser trained with a corpus that has only dependency annotations for each two adjacent bunsetsus (chunks) shows moderate performance. Nonetheless, it is notable that features based on character n-grams are found very useful for a dependency parser for Japanese.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Corpus-based supervised learning is now a standard approach to build a system which shows high performance for a given task in NLP. However, the weakness of such approach is to need an annotated corpus. Corpus annotation is labor intensive and very expensive. 
To reduce or avoid the cost of annotation, various approaches have been proposed, including unsupervised learning, minimally supervised learning (e.g., [1]), and active learning (e.g., [2, 3]).", "cite_spans": [ { "start": 410, "end": 413, "text": "[1]", "ref_id": "BIBREF0" }, { "start": 444, "end": 447, "text": "[2,", "ref_id": "BIBREF1" }, { "start": 448, "end": 450, "text": "3]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To discuss the cost of corpus annotation clearly, we here consider a simple model of the cost:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "annotation cost \u221d \u2211_t c(t) n(t)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "where t ranges over types of annotation such as POS tagging, chunk tagging, etc., c(t) is the cost per annotation of type t, and n(t) is the number of annotations of type t. For example, under this model halving the per-annotation cost of POS tagging reduces the total cost exactly as much as halving the number of POS annotations does.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous work tackling the problem of annotation cost has mainly focused on reducing n(t). For example, in active learning, the examples to be annotated are selected according to some criterion, which considerably reduces the number of annotations required. In contrast, we here focus on reducing c(t) rather than n(t). Obviously, if some portion of the annotations is not given, the performance of an NLP system will deteriorate. The question is how much it deteriorates: is there a good trade-off between the cost saved and the performance lost?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Minimizing the amount of annotation is also very important from an engineering point of view. Suppose that we want to build an annotated corpus in order to construct a parser for some real-world application. The design and strategy of corpus annotation are then crucial for obtaining a good parser at a reasonable cost. Furthermore, we have to keep in mind the maintenance cost of both the corpus and the parser: we may, for example, find errors in the annotations or in the design of the linguistic categories. In such situations, fewer annotations save cost because the corpus is more stable and less prone to errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main purpose of this study is to explore the use of a partially annotated corpus to build a dependency parser for Japanese. In this paper, we describe experiments that investigate the feasibility of partially annotated corpora. In addition, we propose features for parsing based on character n-grams. Even when grammatical tags are not given, a parser with these features outperforms the maximum entropy parser [4] with full grammatical features. We have also conducted experiments on chunking into bunsetsus (described in Sect. 2.1) trained with a corpus that has no grammatical tags. Finally, we have tested a parser trained with a corpus that is only partially annotated for dependency structure.", "cite_spans": [ { "start": 452, "end": 455, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Japanese language is basically an SOV language. Word order is relatively free. 
In English the syntactic function of each word is indicated by word order, while in Japanese it is indicated by postpositions. For example, one or more postpositions following a noun play a role similar to the declension of nouns in German, which indicates grammatical case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Properties of Japanese", "sec_num": "2.1" }, { "text": "Based on these properties, the concept of bunsetsus 1 was devised and has been used to describe the structure of a sentence in Japanese. A bunsetsu consists of one or more content words followed by zero or more function words. With bunsetsus defined in this way, we can analyze a sentence in much the same way as one analyzes the grammatical roles of words in inflecting languages like German.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Properties of Japanese", "sec_num": "2.1" }, { "text": "Thus, strictly speaking, it is bunsetsu order rather than word order that is free, except for the bunsetsu that contains the main verb of a sentence, which must be placed at the end of the sentence. For example, the following two sentences have an identical meaning: (1) Ken-ga kanojo-ni hon-wo age-ta. (2) Ken-ga hon-wo kanojo-ni age-ta. (-ga: subject marker, -ni: dative case particle, -wo: accusative case particle. English translation: Ken gave a book to her.) Note that the rightmost bunsetsu 'age-ta,' which is composed of a verb stem and a past tense marker, has to be placed at the end of the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Properties of Japanese", "sec_num": "2.1" }, { "text": "We here list the constraints on Japanese dependency structures, including those mentioned above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Properties of Japanese", "sec_num": "2.1" }, { "text": "Each bunsetsu has exactly one head, except the rightmost one. C2. Each head bunsetsu is always placed to the right of its modifier. C3. Dependencies do not cross one another.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C1.", "sec_num": null }, { "text": "These properties are basically shared with Korean and Mongolian as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C1.", "sec_num": null }, { "text": "Because Japanese has the properties above, the following steps are very common in parsing Japanese: S1. Break a sentence into morphemes (i.e., morphological analysis). S2. Chunk them into bunsetsus. S3. Analyze dependencies between these bunsetsus. S4. Label each dependency with a semantic role such as agent, object, location, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Typical Steps of Parsing Japanese", "sec_num": "2.2" }, { "text": "Note that since Japanese does not have explicit word delimiters such as white space, we first have to tokenize a sentence into morphemes and at the same time give a POS tag to each morpheme (S1). 
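To make these steps concrete, the following small Python sketch (ours, not part of the original work) encodes the example sentence of Sect. 2.1 as the structures that S1-S3 produce and checks the constraints C1-C3; all names are our own:

bunsetsus = ['Ken-ga', 'kanojo-ni', 'hon-wo', 'age-ta']   # output of S1 + S2
head = [3, 3, 3, -1]   # output of S3: head[j] is the head of bunsetsu j

def is_valid_dependency_structure(head):
    # C1/C2: every bunsetsu but the rightmost has exactly one head,
    # and that head lies to its right. C3: no two dependencies cross.
    n = len(head)
    if head[n - 1] != -1:          # the rightmost bunsetsu has no head
        return False
    for j in range(n - 1):
        if not (j < head[j] <= n - 1):
            return False
        for k in range(j + 1, n - 1):
            if k < head[j] < head[k]:   # arcs (j, head[j]) and (k, head[k]) cross
                return False
    return True

assert is_valid_dependency_structure(head)
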
Therefore, when building an annotated corpus of Japanese, we have to decide the boundaries of each word (morpheme) as well as the POS tags of all the words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Typical Steps of Parsing Japanese", "sec_num": "2.2" }, { "text": "3 Experimental Setup", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Typical Steps of Parsing Japanese", "sec_num": "2.2" }, { "text": "We employ the Stack Dependency Analysis (SDA) algorithm [7] to analyze the dependency structure of a sentence in Japanese. This algorithm, which takes advantage of C1, C2, and C3 in Sect. 2.1, is very simple and easy to implement. Sassano [7] proved its efficiency in terms of time complexity and reported the best accuracy on the Kyoto University Corpus [8] . The SDA algorithm, like the Cascaded Chunking Model [9] , is a shift-reduce type algorithm. The pseudo code of SDA is shown in Fig. 1 . The algorithm can be used with any estimator that decides whether one bunsetsu modifies another; a trainable classifier, such as an SVM or a decision tree, is a typical choice.", "cite_spans": [ { "start": 56, "end": 59, "text": "[7]", "ref_id": "BIBREF6" }, { "start": 239, "end": 242, "text": "[7]", "ref_id": "BIBREF6" }, { "start": 359, "end": 362, "text": "[8]", "ref_id": "BIBREF7" }, { "start": 418, "end": 421, "text": "[9]", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 491, "end": 497, "text": "Fig. 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Parsing Algorithm", "sec_num": "3.1" }, { "text": "To facilitate comparison with previous results, we used the Kyoto University Corpus Version 2 [8] . Parsers used in the experiments were trained on the articles of January 1st through 8th (7,958 sentences) and tested on the articles of January 9th (1,246 sentences). The articles of January 10th were used for development. The usage of these articles is the same as in [4, 10, 9, 7] .", "cite_spans": [ { "start": 94, "end": 97, "text": "[8]", "ref_id": "BIBREF7" }, { "start": 365, "end": 368, "text": "[4,", "ref_id": "BIBREF3" }, { "start": 369, "end": 372, "text": "10,", "ref_id": "BIBREF9" }, { "start": 373, "end": 375, "text": "9,", "ref_id": "BIBREF8" }, { "start": 376, "end": 378, "text": "7]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus", "sec_num": "3.2" }, { "text": "We use SVMs [11] for estimating dependencies between two bunsetsus because of their excellent properties; in particular, with polynomial kernels, combinations of the features in an example are considered automatically. Excellent performance has been reported for many NLP tasks including Japanese dependency parsing, e.g., [9] . See [11] for a formal description of SVMs.", "cite_spans": [ { "start": 12, "end": 16, "text": "[11]", "ref_id": "BIBREF10" }, { "start": 324, "end": 327, "text": "[9]", "ref_id": "BIBREF8" }, { "start": 341, "end": 345, "text": "[11]", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Choice for Classifiers", "sec_num": "3.3" }, { "text": "A polynomial kernel of degree 3 is used, and the misclassification cost is set to 1. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SVM Setting", "sec_num": "3.4" }, { "text": "First we conducted experiments on dropping POS tags. In corpus building for a parser, disambiguating POS tags is one of the most time-consuming tasks. In addition, preparing guidelines for POS tagging takes much time. 
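Before turning to these experiments, the following Python sketch of ours renders the SDA procedure of Fig. 1 runnable (it is not the paper's implementation; EMPTY is assumed to be -1, and estimate_dependency may be any callable, e.g., a wrapper around the SVM described above):

def analyze(w, estimate_dependency):
    n = len(w)
    outdep = [-1] * n              # outdep[j]: head of the j-th bunsetsu
    stack = [0]                    # IDs of modifiers still waiting for a head
    for i in range(1, n):          # i: head candidate, j: modifier
        j = stack.pop() if stack else -1        # pop returns EMPTY (-1) when empty
        while j != -1 and (i == n - 1 or estimate_dependency(j, i, w)):
            outdep[j] = i          # the j-th bunsetsu modifies the i-th one
            j = stack.pop() if stack else -1
        if j != -1:
            stack.append(j)
        stack.append(i)
    return outdep

# A degenerate 'classifier' that always attaches a bunsetsu to its right-hand
# neighbour yields the chain structure; Sect. 5 notes that about 65% of real
# dependencies are of this adjacent kind.
print(analyze(['Ken-ga', 'kanojo-ni', 'hon-wo', 'age-ta'],
              lambda j, i, w: i - j == 1))      # -> [1, 2, 3, -1]

Note that the loop itself never consults POS tags; whatever information the parser exploits must reach it through the features handed to the classifier. 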
Furthermore, in the case of a Japanese corpus, we will need even more time because we have to deal with word boundaries as well as POS tags. Therefore, it would be desirable to avoid or reduce POS annotations while minimizing the loss of parsing performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dropping POS Tags", "sec_num": "4" }, { "text": "To examine the effect of dropping POS tags, we built the following four sets of features and measured parsing performance with each of them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.1" }, { "text": "Standard Features. By the \"standard features\" we mean the feature set commonly used in [4, 10, 12, 9, 7] . We employ the features below for each bunsetsu:", "cite_spans": [ { "start": 92, "end": 95, "text": "[4,", "ref_id": "BIBREF3" }, { "start": 96, "end": 99, "text": "10,", "ref_id": "BIBREF9" }, { "start": 100, "end": 103, "text": "12,", "ref_id": "BIBREF11" }, { "start": 104, "end": 106, "text": "9,", "ref_id": "BIBREF8" }, { "start": 107, "end": 109, "text": "7]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.1" }, { "text": "1. Rightmost Content Word -major POS, minor POS, conjugation type, conjugation form, surface form (lexicalized form) 2. Rightmost Function Word -major POS, minor POS, conjugation type, conjugation form, surface form (lexicalized form) 3. Punctuation (periods and commas) 4. Open and close parentheses 5. Location -at the beginning or at the end of the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.1" }, { "text": "In addition, features describing the gap between the two bunsetsus are used: distance, particles, parentheses, and punctuation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.1" }, { "text": "Words-Only Features. If POS tags are not available, we have to use only the tokens (words) as features. Moreover, we cannot easily identify the content words and function words in a bunsetsu. Therefore, we chose the simplest form of feature set: a bag of the words in each bunsetsu. For example, assume that there are three words in a bunsetsu: keisan (computational), gengogaku (linguistics), no (of). In this case we get {keisan, gengogaku, no} as features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.1" }, { "text": "Character N-Gram Features. Next we constructed a feature set that uses neither word boundaries nor POS tags; only the character string of a bunsetsu is available. At first glance such a feature set looks naive, and it seems that a corpus without POS tags cannot yield a good parser, because no explicit syntactic information is given.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.1" }, { "text": "Can we extract good features from a string? We found useful ideas in Sato and Kawase's papers [13, 14] . They define a similarity score between two sentences in Japanese and use it for ranking translation examples. Their similarity score is based on character subsequence matching: just raw character strings are used, and neither morphological analysis, POS tagging, nor parsing is applied. Although no advanced analysis was applied, their results were good enough for translation aid. 
In [13] , DP-matching-based scores are investigated, and in [14] the number of character 2-grams and 3-grams shared by two sentences is incorporated into a similarity score.", "cite_spans": [ { "start": 94, "end": 98, "text": "[13,", "ref_id": "BIBREF12" }, { "start": 99, "end": 102, "text": "14]", "ref_id": "BIBREF13" }, { "start": 487, "end": 491, "text": "[13]", "ref_id": "BIBREF12" }, { "start": 544, "end": 548, "text": "[14]", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.1" }, { "text": "In our experiments we use blended n-grams, i.e., both 1-grams and 2-grams. All the 1-grams and 2-grams from the character string of a bunsetsu are extracted as features. For example, suppose we have a bunsetsu whose string is a sequence of three characters, kano-jo-no, where '-' represents a boundary between Japanese characters (the string is actually written with three characters in Japanese). The following features are extracted from the string: kano, jo, no, $-kano, kano-jo, jo-no, no-$, where '$' represents a bunsetsu boundary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.1" }, { "text": "The fourth feature set that we investigated is a combination of the \"standard features\" and the character n-grams described in the previous subsections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combination of \"Standard Features\" and Character N-grams.", "sec_num": null }, { "text": "Performance of parsers trained with these feature sets on the development set and the test set is shown in Table 1 . For comparison with previous work we use the standard measures for the Kyoto University Corpus: dependency accuracy and sentence accuracy. Dependency accuracy is the percentage of correct dependencies, and sentence accuracy is the percentage of sentences in which all the dependencies are analyzed correctly. To our surprise, the parser with the feature set based on character n-grams achieved an accuracy of 87.38%. Although this is worse than the accuracy of the \"standard\" feature set, the performance is still surprising: we had considered POS tags essential for parsing. Why is this approach so successful?", "cite_spans": [], "ref_spans": [ { "start": 107, "end": 114, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4.2" }, { "text": "The reason lies in the writing system of Japanese and its usage. In modern Japanese text, five different scripts are mainly used: kanji, hiragana, katakana, Arabic numerals, and Latin letters. The usage of these scripts implicitly indicates the grammatical role of a word. For example, kanji is mainly used to represent nouns or the stems of verbs and adjectives. It is never used for particles, which are always written in hiragana. Essential morphological and syntactic categories are also often indicated in hiragana: conjugation forms of verbs and adjectives are represented with one or two hiragana characters. Syntactic roles of a bunsetsu are often indicated by its rightmost morpheme, and most such morphemes are endings of verbs or adjectives, or particles. In other words, the rightmost characters of a bunsetsu are expected to indicate its syntactic role. Bunsetsu Chunking. After observing the results of the parsing experiments, a new question arose: can we also chunk tokens into bunsetsus without POS tags? 
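Before answering this question, a small Python sketch of ours makes the two character-level devices used in this paper precise: the blended 1-gram/2-gram features of Sect. 4.1 and the character-type (script) feature that the chunking experiments below rely on. The function names and the restriction to the basic Unicode blocks are our choices:

def char_ngram_features(bunsetsu):
    # Blended 1-grams and 2-grams; '$' marks the bunsetsu boundary, so for
    # kano-jo-no this yields kano, jo, no and $-kano, kano-jo, jo-no, no-$
    # as in the example above.
    padded = '$' + bunsetsu + '$'
    return list(bunsetsu) + [padded[i:i + 2] for i in range(len(padded) - 1)]

print(char_ngram_features('ABC'))   # ['A', 'B', 'C', '$A', 'AB', 'BC', 'C$']

def char_type(ch):
    # Map a character to its script via Unicode ranges (simplified).
    code = ord(ch)
    if 0x3040 <= code <= 0x309F:
        return 'hiragana'
    if 0x30A0 <= code <= 0x30FF:
        return 'katakana'
    if 0x4E00 <= code <= 0x9FFF:
        return 'kanji'
    if ch.isdigit():
        return 'numeral'
    if ch.isascii() and ch.isalpha():
        return 'latin'
    return 'other'

With these two helpers, every character-based feature used below can be computed from raw text alone. 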
To answer this question, we carried out additional experiments on bunsetsu chunking. Following [15] , we encode bunsetsu chunking as a tagging problem with the chunk tag set {B, I}, where B marks the first word of a bunsetsu and I marks a word inside a bunsetsu. In these experiments, we estimated the chunk tag of each word with an SVM, using five words and their derived attributes: the word to be tagged and the two words on each side of it. For each word, features are extracted from the following: the word (token) itself, major POS, minor POS, conjugation type, conjugation form, the leftmost character, the character type of the leftmost character, the rightmost character, and the character type of the rightmost character. A character type indicates a script: kanji, hiragana, katakana, Arabic numerals, or Latin letters.", "cite_spans": [ { "start": 1125, "end": 1129, "text": "[15]", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4.2" }, { "text": "We conducted experiments with four sets of features. Performance on the development set and the test set is shown in Table 2 . We used the same performance measures as in [16] . Precision (p) is defined as the percentage of words correctly marked B among all the words that the system marked B. Recall (r) is defined as the percentage of words correctly marked B among all the words marked B in the gold standard data. F-measure is defined as 2pr/(p + r). The bunsetsu chunker with surface forms only yielded worse performance than the one with grammatical tags, i.e., major/minor POS and conjugation type/form. However, the chunker with character features achieved good performance even though grammatical tags were not available. In addition, the feature set that uses all the available features gives the best results among those we tested. Again we found that character-based features compensate for the performance deterioration caused by the absence of grammatical tags.", "cite_spans": [ { "start": 171, "end": 175, "text": "[16]", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 117, "end": 124, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4.2" }, { "text": "We have found that both a practical parser and a practical bunsetsu chunker can be constructed from a corpus that has no POS information. This means we can build a parser for Japanese that is less dependent on a morphological analyzer, which is useful for improving the modularity of an analysis system for Japanese.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4.2" }, { "text": "As previous work [4, 17] reports, approximately 65% of bunsetsus modify the bunsetsu on their immediate right. Based on this observation, we simplify the dependency annotations: each bunsetsu is given either the tag D or the tag O, where bunsetsus marked D modify the bunsetsu on their immediate right and bunsetsus marked O do not. Figure 2 shows a sample sentence with such annotations. This encoding scheme represents some portion of the dependency structure of a sentence, and annotating under it is easier than selecting the head of each bunsetsu. 
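As a concrete illustration (a sketch of ours, with names of our own), the D/O tags translate directly into classifier training examples over adjacent bunsetsu pairs:

def adjacent_training_examples(bunsetsus, tags):
    # tags[j] is 'D' or 'O' for every bunsetsu except the rightmost one;
    # each tag yields one (modifier, head-candidate) example, and longer
    # dependencies are simply left unannotated.
    return [((bunsetsus[j], bunsetsus[j + 1]), 1 if tag == 'D' else 0)
            for j, tag in enumerate(tags)]

# For 'Ken-ga kanojo-ni hon-wo age-ta' only hon-wo -> age-ta is an adjacent
# dependency, so the tags are O, O, D:
print(adjacent_training_examples(
    ['Ken-ga', 'kanojo-ni', 'hon-wo', 'age-ta'], ['O', 'O', 'D']))
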
We examined the usefulness of this type of partially annotated corpus, following the encoding scheme above.", "cite_spans": [ { "start": 17, "end": 20, "text": "[4,", "ref_id": "BIBREF3" }, { "start": 21, "end": 24, "text": "17]", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 329, "end": 337, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Dropping Longer Dependency Annotations", "sec_num": "5" }, { "text": "The SDA algorithm, which we employ in our experiments, can work with a partially annotated corpus when parsing a sentence in Japanese 2 . In training, we first construct a training set only from the dependency annotations between adjacent bunsetsus; relations between bunsetsus in a longer dependency are ignored. We then train a classifier for parsing on this set. In testing, we use the classifier both for adjacent bunsetsus and for other pairs of bunsetsus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Partial Dependency Annotations", "sec_num": "5.1" }, { "text": "Performance on the development set and the test set is shown in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 65, "end": 72, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5.2" }, { "text": "The parser trained with the partially annotated corpus yielded good performance. However, its accuracy is considerably worse than that of the parser trained with the fully annotated corpus, and this tendency is clearer in terms of sentence accuracy. To quantify the difference, we plot the learning curves for the two corpora in Fig. 3 . How many partially annotated sentences do we need in order to achieve the accuracy obtained with a given number of fully annotated sentences? We find that we need 8-17 times as many sentences when using the partially annotated corpus instead of the fully annotated one. If hiring linguistic experts for annotation is much more expensive than hiring non-experts, or if it is difficult to find a large enough number of experts, this type of partially annotated corpus could be useful.", "cite_spans": [], "ref_spans": [ { "start": 357, "end": 363, "text": "Fig. 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5.2" }, { "text": "The naive approach we examined was not very effective in light of the number of sentences required. However, we should note that it is easier to maintain the consistency of annotations in a partially annotated corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5.2" }, { "text": "In this section we briefly review related work from three points of view: parsing performance, the use of partially annotated corpora, and the use of character n-grams. Parsing Performance. Although improving parsing performance is not a primary concern of this paper, comparison with other results indicates how practical our parsers are. Table 4 summarizes such a comparison. Our parsers demonstrated good performance, although they did not outperform the best. It is notable that the parser that does not use any explicit grammatical tags outperforms that of [4] , which employs a maximum entropy model with full grammatical features given by a morphological analyzer. Use of Partially Annotated Corpora. Several papers address the use of partially annotated corpora. 
Pereira and Schabes [19] proposed an algorithm for inferring a stochastic context-free grammar from a partially bracketed corpus. Riezler et al. [20] presented a method for discriminatively estimating an exponential model on LFG parses from partially labeled data. Our study differs in that we focus more on avoiding expensive types of annotation while minimizing the loss of parsing performance.", "cite_spans": [ { "start": 617, "end": 620, "text": "[4]", "ref_id": "BIBREF3" }, { "start": 846, "end": 850, "text": "[19]", "ref_id": "BIBREF18" }, { "start": 970, "end": 974, "text": "[20]", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 367, "end": 374, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Use of Character N-grams. Character n-grams are often used for POS tagging of unknown words, unsupervised POS tagging, and measures of string similarity. The number of common n-grams between two sentences is used as a similarity measure in [14] . This usage is essentially the same as in the spectrum kernel [21] , which is a kind of string kernel [22] .", "cite_spans": [ { "start": 241, "end": 245, "text": "[14]", "ref_id": "BIBREF13" }, { "start": 309, "end": 313, "text": "[21]", "ref_id": "BIBREF20" }, { "start": 347, "end": 351, "text": "[22]", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "We have explored the use of a partially annotated corpus for building a dependency parser for Japanese. We have examined two types of partially annotated corpora. It is found that a parser trained with a corpus that has no grammatical tags for words can achieve an accuracy of 87.38%, which is comparable to the current state-of-the-art accuracy. In contrast, a parser trained with a corpus that has dependency annotations only between adjacent bunsetsus shows moderate performance. Nonetheless, it is notable that features based on character n-grams prove very useful for a dependency parser for Japanese.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "The word 'bunsetsu' in Japanese is composed of two Chinese characters, i.e., 'bun' and 'setsu.' 'Bun' means a sentence and 'setsu' means a segment. A 'bunsetsu' is considered to be a small syntactic segment in a sentence. An eojeol in Korean [5] is almost the same concept as a bunsetsu. Chunks as defined in [6] for English are also very similar to bunsetsus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The Cascaded Chunking Model [9] can also be used with a partially annotated corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Unsupervised word sense disambiguation rivaling supervised methods", "authors": [ { "first": "D", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1995, "venue": "Proc. of ACL-1995", "volume": "", "issue": "", "pages": "189--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yarowsky, D.: Unsupervised word sense disambiguation rivaling supervised methods. In: Proc. of ACL-1995. 
(1995) 189-196", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Active learning for natural language parsing and information extraction", "authors": [ { "first": "C", "middle": [ "A" ], "last": "Thompson", "suffix": "" }, { "first": "M", "middle": [ "L" ], "last": "Califf", "suffix": "" }, { "first": "R", "middle": [ "J" ], "last": "Mooney", "suffix": "" } ], "year": 1999, "venue": "Proc. of the Sixteenth International Conference on Machine Learning", "volume": "", "issue": "", "pages": "406--414", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thompson, C.A., Califf, M.L., Mooney, R.J.: Active learning for natural language parsing and information extraction. In: Proc. of the Sixteenth International Conference on Machine Learning. (1999) 406-414", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Active learning for statistical natural language parsing", "authors": [ { "first": "M", "middle": [], "last": "Tang", "suffix": "" }, { "first": "X", "middle": [], "last": "Luo", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" } ], "year": 2002, "venue": "Proc. of ACL-2002", "volume": "", "issue": "", "pages": "120--127", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tang, M., Luo, X., Roukos, S.: Active learning for statistical natural language parsing. In: Proc. of ACL-2002. (2002) 120-127", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Japanese dependency structure analysis based on maximum entropy models", "authors": [ { "first": "K", "middle": [], "last": "Uchimoto", "suffix": "" }, { "first": "S", "middle": [], "last": "Sekine", "suffix": "" }, { "first": "H", "middle": [], "last": "Isahara", "suffix": "" } ], "year": 1999, "venue": "Proc. of EACL-99", "volume": "", "issue": "", "pages": "196--203", "other_ids": {}, "num": null, "urls": [], "raw_text": "Uchimoto, K., Sekine, S., Isahara, H.: Japanese dependency structure analysis based on maximum entropy models. In: Proc. of EACL-99. (1999) 196-203", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Three types of chunking in Korean and dependency analysis based on lexical association", "authors": [ { "first": "J", "middle": [], "last": "Yoon", "suffix": "" }, { "first": "K", "middle": [], "last": "Choi", "suffix": "" }, { "first": "M", "middle": [], "last": "Song", "suffix": "" } ], "year": 1999, "venue": "Proc. of the 18th Int. Conf. on Computer Processing of Oriental Languages", "volume": "", "issue": "", "pages": "59--65", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoon, J., Choi, K., Song, M.: Three types of chunking in Korean and dependency analysis based on lexical association. In: Proc. of the 18th Int. Conf. on Computer Processing of Oriental Languages. (1999) 59-65", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Parsing by chunks", "authors": [ { "first": "S", "middle": [ "P" ], "last": "Abney", "suffix": "" } ], "year": 1991, "venue": "Principle-Based Parsing: Computation and Psycholinguistics", "volume": "", "issue": "", "pages": "257--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abney, S.P.: Parsing by chunks. In Berwick, R.C., Abney, S.P., Tenny, C., eds.: Principle- Based Parsing: Computation and Psycholinguistics. Kluwer Academic Publishers (1991) 257-278", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Linear-time dependency analysis for Japanese", "authors": [ { "first": "M", "middle": [], "last": "Sassano", "suffix": "" } ], "year": 2004, "venue": "Proc. 
of COLING", "volume": "", "issue": "", "pages": "8--14", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sassano, M.: Linear-time dependency analysis for Japanese. In: Proc. of COLING 2004. (2004) 8-14", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Building a Japanese parsed corpus while improving the parsing system", "authors": [ { "first": "S", "middle": [], "last": "Kurohashi", "suffix": "" }, { "first": "M", "middle": [], "last": "Nagao", "suffix": "" } ], "year": 1998, "venue": "Proc. of the 1st LREC", "volume": "", "issue": "", "pages": "719--724", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kurohashi, S., Nagao, M.: Building a Japanese parsed corpus while improving the parsing system. In: Proc. of the 1st LREC. (1998) 719-724", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Japanese dependency analysis using cascaded chunking", "authors": [ { "first": "T", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "Y", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2002, "venue": "Proc. of CoNLL-2002", "volume": "", "issue": "", "pages": "63--69", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kudo, T., Matsumoto, Y.: Japanese dependency analysis using cascaded chunking. In: Proc. of CoNLL-2002. (2002) 63-69", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Backward beam search algorithm for dependency analysis of Japanese", "authors": [ { "first": "S", "middle": [], "last": "Sekine", "suffix": "" }, { "first": "K", "middle": [], "last": "Uchimoto", "suffix": "" }, { "first": "H", "middle": [], "last": "Isahara", "suffix": "" } ], "year": 2000, "venue": "Proc. of COLING-00", "volume": "", "issue": "", "pages": "754--760", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sekine, S., Uchimoto, K., Isahara, H.: Backward beam search algorithm for dependency analysis of Japanese. In: Proc. of COLING-00. (2000) 754-760", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The Nature of Statistical Learning Theory", "authors": [ { "first": "V", "middle": [ "N" ], "last": "Vapnik", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vapnik, V.N.: The Nature of Statistical Learning Theory. Springer-Verlag (1995)", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Japanese dependency structure analysis based on support vector machines", "authors": [ { "first": "T", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "Y", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2000, "venue": "Proc. of EMNLP/VLC 2000", "volume": "", "issue": "", "pages": "18--25", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kudo, T., Matsumoto, Y.: Japanese dependency structure analysis based on support vector machines. In: Proc. of EMNLP/VLC 2000. (2000) 18-25", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "CTM: An example-based translation aid system", "authors": [ { "first": "S", "middle": [], "last": "Sato", "suffix": "" } ], "year": 1992, "venue": "Proc. of COLING-92", "volume": "", "issue": "", "pages": "1259--1263", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sato, S.: CTM: An example-based translation aid system. In: Proc. of COLING-92. 
(1992) 1259-1263", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A high-speed best match retrieval method for Japanese text", "authors": [ { "first": "S", "middle": [], "last": "Sato", "suffix": "" }, { "first": "T", "middle": [], "last": "Kawase", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sato, S., Kawase, T.: A high-speed best match retrieval method for Japanese text. Technical Report IS-RR-94-9I, Japan Advanced Institute of Science and Technology, Hokuriku (1994)", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Text chunking using transformation-based learning", "authors": [ { "first": "L", "middle": [ "A" ], "last": "Ramshaw", "suffix": "" }, { "first": "M", "middle": [ "P" ], "last": "Marcus", "suffix": "" } ], "year": 1995, "venue": "Proc. of VLC 1995", "volume": "", "issue": "", "pages": "82--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramshaw, L.A., Marcus, M.P.: Text chunking using transformation-based learning. In: Proc. of VLC 1995. (1995) 82-94", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Bunsetsu identification using categoryexclusive rules", "authors": [ { "first": "M", "middle": [], "last": "Murata", "suffix": "" }, { "first": "K", "middle": [], "last": "Uchimoto", "suffix": "" }, { "first": "Q", "middle": [], "last": "Ma", "suffix": "" }, { "first": "H", "middle": [], "last": "Isahara", "suffix": "" } ], "year": 2000, "venue": "Proc. of COLING-00", "volume": "", "issue": "", "pages": "565--571", "other_ids": {}, "num": null, "urls": [], "raw_text": "Murata, M., Uchimoto, K., Ma, Q., Isahara, H.: Bunsetsu identification using category- exclusive rules. In: Proc. of COLING-00. (2000) 565-571", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A statistical property of Japanese phrase-to-phrase modifications", "authors": [ { "first": "H", "middle": [], "last": "Maruyama", "suffix": "" }, { "first": "S", "middle": [], "last": "Ogino", "suffix": "" } ], "year": 1992, "venue": "Mathematical Linguistics", "volume": "18", "issue": "", "pages": "348--352", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maruyama, H., Ogino, S.: A statistical property of Japanese phrase-to-phrase modifications. Mathematical Linguistics 18 (1992) 348-352", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Japanese dependency analysis using a deterministic finite state transducer", "authors": [ { "first": "S", "middle": [], "last": "Sekine", "suffix": "" } ], "year": 2000, "venue": "Proc. of COLING-00", "volume": "", "issue": "", "pages": "761--767", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sekine, S.: Japanese dependency analysis using a deterministic finite state transducer. In: Proc. of COLING-00. (2000) 761-767", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Inside-outside reestimation from partially bracketed corpora", "authors": [ { "first": "F", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Y", "middle": [], "last": "Schabes", "suffix": "" } ], "year": 1992, "venue": "Proc. of ACL-92", "volume": "", "issue": "", "pages": "128--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pereira, F., Schabes, Y.: Inside-outside reestimation from partially bracketed corpora. In: Proc. of ACL-92. 
(1992) 128-135", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Parsing the Wall Street Journal using a lexical-functional grammar and discriminative estimation techniques", "authors": [ { "first": "S", "middle": [], "last": "Riezler", "suffix": "" }, { "first": "T", "middle": [ "H" ], "last": "King", "suffix": "" }, { "first": "R", "middle": [ "M" ], "last": "Kaplan", "suffix": "" }, { "first": "R", "middle": [], "last": "Crouch", "suffix": "" }, { "first": "J", "middle": [ "T M" ], "last": "Iii", "suffix": "" }, { "first": "M", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2002, "venue": "Proc. of ACL-2002", "volume": "", "issue": "", "pages": "271--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Riezler, S., King, T.H., Kaplan, R.M., Crouch, R., III, J.T.M., Johnson, M.: Parsing the Wall Street Journal using a lexical-functional grammar and discriminative estimation techniques. In: Proc. of ACL-2002. (2002) 271-278", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The spectrum kernel: A string kernel for SVM protein classification", "authors": [ { "first": "C", "middle": [], "last": "Leslie", "suffix": "" }, { "first": "E", "middle": [], "last": "Eskin", "suffix": "" }, { "first": "W", "middle": [ "S" ], "last": "Noble", "suffix": "" } ], "year": 2002, "venue": "Proc. of the 7th Pacific Symposium on Biocomputing", "volume": "", "issue": "", "pages": "564--575", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leslie, C., Eskin, E., Noble, W.S.: The spectrum kernel: A string kernel for SVM protein classification. In: Proc. of the 7th Pacific Symposium on Biocomputing. (2002) 564-575", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Text classification using string kernels", "authors": [ { "first": "H", "middle": [], "last": "Lodhi", "suffix": "" }, { "first": "C", "middle": [], "last": "Saunders", "suffix": "" }, { "first": "J", "middle": [], "last": "Shawe-Tayor", "suffix": "" }, { "first": "N", "middle": [], "last": "Cristianini", "suffix": "" }, { "first": "C", "middle": [], "last": "Watkins", "suffix": "" } ], "year": 2002, "venue": "Journal of Machine Learning Research", "volume": "2", "issue": "", "pages": "419--444", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lodhi, H., Saunders, C., Shawe-Tayor, J., Cristianini, N., Watkins, C.: Text classification using string kernels. Journal of Machine Learning Research 2 (2002) 419-444", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Input: N: the number of bunsetsus in a sentence. // w[]: an array that keeps a sequence of bunsetsus in the sentence. // // Output: outdep[]: an integer array that stores an analysis result, // i.e., dependencies between the bunsetsus. For example, the // head of w[j] is outdep[j]. // // stack: a stack that holds IDs of modifier bunsetsus // in the sentence. If it is empty, the pop method // returns EMPTY (\u22121). // // function estimate dependency(j, i, w[]): // a function that returns non-zero when the j-th // bunsetsu should modify the i-th bunsetsu. // Otherwise returns zero. // procedure analyze(w[], N, outdep[]) // Push 0 on the stack. stack.push(0); // Variable i for a head and j for a modifier. for (int i = 1; i < N; i++) { // Pop a value off the stack. int j = stack.pop(); while (j != EMPTY && (i == N \u2212 1 || estimate dependency(j, i, w))) { // The j-th bunsetsu modifies the i-th bunsetsu. outdep[j] = i; // Pop a value off the stack to update j. 
j = stack.pop(); } if (j != EMPTY)stack.push(j); stack.push(i); } Pseudo code of the Stack Dependency Analysis algorithm. Note that \"i == N \u2212 1\" means the i-th bunsetsu is the rightmost one in the sentence. Any classifiers can be used in estimate dependency().", "uris": null, "type_str": "figure", "num": null }, "FIGREF1": { "text": "Learning curves of parsers trained with the partially annotated corpus and the fully annotated corpus", "uris": null, "type_str": "figure", "num": null }, "TABREF0": { "html": null, "num": null, "text": "Performance on Development Set and Test Set", "content": "
Feature Set                    | Dev. Set Dep. Acc. | Dev. Set Sent. Acc. | Test Set Dep. Acc. | Test Set Sent. Acc.
"Standard"                     | 88.97 | 46.18 | 88.72 | 45.28
Bag of Words (Words Only)      | 85.22 | 35.02 | 84.43 | 34.95
Character N-Grams              | 87.79 | 42.66 | 87.38 | 40.84
"Standard" + Character N-Grams | 89.72 | 47.04 | 89.07 | 46.89
", "type_str": "table" }, "TABREF1": { "html": null, "num": null, "text": "Bunsetsu Chunking Performance on Development Set and Test Set. Grammatical tags include POS tags and conjugation types/forms.", "content": "
Feature Set                                         | Dev. Set (F) | Test Set (F)
Surface Form + Grammatical Tags                     | 99.58 | 99.57
Surface Form Only                                   | 97.65 | 97.02
Surface Form + Char. Features (No Grammatical Tags) | 99.09 | 99.07
Mixed                                               | 99.64 | 99.64
", "type_str": "table" }, "TABREF3": { "html": null, "num": null, "text": "Performance of parsers trained with the fully annotated corpus and the partially anno-", "content": "
tated corpus
Training Set              | # of Training Examples | Dev. Set Dep. Acc. | Dev. Set Sent. Acc. | Test Set Dep. Acc. | Test Set Sent. Acc.
Full                      | 98,689 | 88.97 | 46.18 | 88.72 | 45.28
Adjacent Annotations Only | 61,899 | 85.65 | 38.00 | 85.50 | 38.58
", "type_str": "table" }, "TABREF4": { "html": null, "num": null, "text": "Comparison to related work on parsing accuracy. KM02 = Kudo and", "content": "
Matsumoto 2002
", "type_str": "table" } } } }