{ "paper_id": "I05-1042", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:26:32.770378Z" }, "title": "Empirical Study of Utilizing Morph-Syntactic Information in SMT", "authors": [ { "first": "Young-Sook", "middle": [], "last": "Hwang", "suffix": "", "affiliation": { "laboratory": "ATR SLT Research Labs", "institution": "", "location": { "addrLine": "2-2-2 Hikaridai Seika-cho, Soraku-gun Kyoto", "postCode": "619-0288", "country": "Japan" } }, "email": "youngsook.hwang@atr.jp" }, { "first": "Taro", "middle": [], "last": "Watanabe", "suffix": "", "affiliation": { "laboratory": "ATR SLT Research Labs", "institution": "", "location": { "addrLine": "2-2-2 Hikaridai Seika-cho, Soraku-gun Kyoto", "postCode": "619-0288", "country": "Japan" } }, "email": "taro.watanabe@atr.jp" }, { "first": "Yutaka", "middle": [], "last": "Sasaki", "suffix": "", "affiliation": { "laboratory": "ATR SLT Research Labs", "institution": "", "location": { "addrLine": "2-2-2 Hikaridai Seika-cho, Soraku-gun Kyoto", "postCode": "619-0288", "country": "Japan" } }, "email": "yutaka.sasaki@atr.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we present an empirical study that utilizes morph-syntactical information to improve translation quality. With three kinds of language pairs matched according to morph-syntactical similarity or difference, we investigate the effects of various morpho-syntactical information, such as base form, part-of-speech, and the relative positional information of a word in a statistical machine translation framework. We learn not only translation models but also word-based/class-based language models by manipulating morphological and relative positional information. And we integrate the models into a log-linear model. Experiments on multilingual translations showed that such morphological information as part-of-speech and base form are effective for improving performance in morphologically rich language pairs and that the relative positional features in a word group are useful for reordering the local word orders. Moreover, the use of a class-based n-gram language model improves performance by alleviating the data sparseness problem in a word-based language model.", "pdf_parse": { "paper_id": "I05-1042", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we present an empirical study that utilizes morph-syntactical information to improve translation quality. With three kinds of language pairs matched according to morph-syntactical similarity or difference, we investigate the effects of various morpho-syntactical information, such as base form, part-of-speech, and the relative positional information of a word in a statistical machine translation framework. We learn not only translation models but also word-based/class-based language models by manipulating morphological and relative positional information. And we integrate the models into a log-linear model. Experiments on multilingual translations showed that such morphological information as part-of-speech and base form are effective for improving performance in morphologically rich language pairs and that the relative positional features in a word group are useful for reordering the local word orders. 
Moreover, the use of a class-based n-gram language model improves performance by alleviating the data sparseness problem of a word-based language model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "For decades, many research efforts have contributed to the advance of statistical machine translation, and this approach has proven successful in various comparative evaluations. Recently, various works have improved the quality of statistical machine translation systems by using phrase translation [1, 2, 3, 4] or by using morpho-syntactic information [6, 8] . However, most statistical machine translation systems still consider only surface forms and rarely use linguistic knowledge about the structure of the languages involved [8] . In this paper, we address the question of the effectiveness of morpho-syntactic features, such as parts-of-speech, base forms, and relative positions in a chunk or an agglutinated word, for improving the quality of statistical machine translation.", "cite_spans": [ { "start": 322, "end": 325, "text": "[1,", "ref_id": "BIBREF0" }, { "start": 326, "end": 328, "text": "2,", "ref_id": "BIBREF1" }, { "start": 329, "end": 331, "text": "3,", "ref_id": "BIBREF2" }, { "start": 332, "end": 334, "text": "4]", "ref_id": "BIBREF3" }, { "start": 373, "end": 376, "text": "[6,", "ref_id": "BIBREF5" }, { "start": 377, "end": 379, "text": "8]", "ref_id": "BIBREF7" }, { "start": 542, "end": 545, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Basically, we take a statistical machine translation model based on an IBM model that consists of a language model and a separate translation model [5] : $\hat{e}_1^I = \arg\max_{e_1^I} \Pr(f_1^J \mid e_1^I)\,\Pr(e_1^I)$", "cite_spans": [ { "start": 148, "end": 151, "text": "[5]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The translation model links the source language sentence to the target language sentence. The target language model describes the well-formedness of the target language sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One of the main problems in statistical machine translation is to learn less ambiguous correspondences between the words in the source and target languages from the bilingual training data. When translating a source language (which may be inflectional or non-inflectional) into a morphologically rich language such as Japanese or Korean, the bilingual training data can be exploited better by explicitly taking into account the interdependencies of related inflected or agglutinated forms. In this study, we represent a word with its morphological features on both the source and the target side in order to learn less ambiguous correspondences between source and target words or phrases. In addition, we utilize the relative positional information of a word within its word group to account for the word order inside an agglutinated word or a chunk.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Another problem is to produce a correct target sentence. 
To produce a more correct target sentence, we must address the following problems: word reordering for language pairs with different word orders, and the production of correctly inflected and agglutinated words when the target language is inflectional or agglutinative. In this study, we tackle these problems with language models. To learn a language model that can handle both the morphological and the word-order problem, we represent a word with its morphological and positional information. However, a word-based language model over such enriched words is likely to suffer from a severe data sparseness problem. To alleviate this problem, we interpolate the word-based language model with a class-based n-gram model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the next section, we briefly discuss related work. We then describe the method that utilizes the morpho-syntactic information under consideration to improve translation quality. Finally, we report the experimental results with some analysis and conclude our study.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Few papers deal with the integration of linguistic information into the process of statistical machine translation. [8] introduced hierarchical lexicon models including base-form and POS information for translation from German into English. Information in the German entries that was irrelevant for generating the English translation was omitted, and the lexicon model was trained using maximum entropy. [6] enriched English with knowledge to help select the correct full form in morphologically richer languages such as Spanish and Catalan. Specifically, they introduced a splicing operation that merged pronouns/modals with verbs to treat differences in verbal expressions. To handle the unknown lexicon entries resulting from the splicing operation, they trained the lexicon model using maximum entropy. However, they used linguistic knowledge only on the source language side, not in the target language, and used full-form words during training.", "cite_spans": [ { "start": 116, "end": 119, "text": "[8]", "ref_id": "BIBREF7" }, { "start": 410, "end": 413, "text": "[6]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In addition, [6] and [8] proposed re-ordering operations to make the word orders of the source and target language sentences similar. For interrogative phrases, whose word order differs from that of declarative sentences, they introduced question-inversion techniques and removed unnecessary auxiliary verbs. However, such inversion techniques require additional heuristic preprocessing.", "cite_spans": [ { "start": 13, "end": 16, "text": "[6]", "ref_id": "BIBREF5" }, { "start": 21, "end": 24, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Unlike these approaches, we investigate methods for utilizing linguistic knowledge in both the source and the target language at the morpheme level. To generate a correct full-form word in the target language, we consider not only the surface and base forms of a morpheme but also its relative position within a full-form word, and we make heavy use of the combined features in language modeling. 
By training alignments and language models with morphological and positional features at the morpheme level, the severe data sparseness problem can be alleviated through the combined linguistic features, and the correspondence ambiguities between source and target words can be reduced.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Generally, the probabilistic lexicon resulting from training a translation model contains all word forms occurring in the training corpus as separate entries, without taking into account whether they are inflected forms. A language model is likewise composed of the words in the training corpus. However, the use of full-form words may cause a severe data sparseness problem, which is especially relevant for inflectional/agglutinative languages like Japanese and Korean. One alternative is to utilize the results of morphological analysis, such as base form, part-of-speech and other information at the morpheme level. We address the usefulness of morphological information for improving the quality of statistical machine translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utilization of Morpho-Syntactic Information in SMT", "sec_num": "3" }, { "text": "A prerequisite for methods that improve the quality of statistical machine translation is the availability of various kinds of morphological and syntactic information. In this section, we examine the morpho-syntactic information available from the morphological analyzers of Korean, Japanese, English and Chinese and describe a method of utilizing this information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Available Morpho-Syntactic Information", "sec_num": "3.1" }, { "text": "Japanese and Korean are highly inflectional and agglutinative languages, while in English inflection plays only a marginal role; Chinese is usually regarded as an isolating language, since it has almost no inflectional morphology. Since the syntactic role of each word within Japanese and Korean sentences is often explicitly marked, word order plays a smaller role in characterizing the syntactic function of a word than it does in English or Chinese sentences. Thus, Korean and Japanese sentences have a relatively free word order, whereas words within Chinese and English sentences adhere to a rigid order. The treatment of inflection, not word order, plays the most important role in processing Japanese and Korean, while word order has a central role in Chinese and English. Figure 1 shows some examples of morphological information produced by the Chinese, Japanese, English and Korean morphological analyzers, and Figure 2 shows the correspondences among the words. Note that Korean and Japanese are very similar: both are highly inflected and agglutinated. One difference of Korean from Japanese is that a Korean sentence consists of spacing units, eojeols, 1 while there are no spaces in a Japanese sentence. In particular, a spacing unit (i.e., an eojeol) in Korean often forms a base phrase that carries syntactic information such as the subject, the object, or the mood/tense of a verb in a given sentence. The treatment of such a Korean spacing unit may contribute to the improvement of translation quality because a morpheme can be represented with its relative positional information within an eojeol. The relative positional information is obtained by calculating the distance between the beginning syllable of a given eojeol and the beginning of each morpheme within the eojeol, as in the sketch below. 
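As a minimal illustration (our own sketch under assumed analyzer output, not the authors' code), the following Python computes these syllable offsets for the morphemes of one eojeol and attaches them, together with base-form and part-of-speech fields, to each morpheme; the tag names and the '/'-delimited token format are hypothetical:

```python
def annotate_eojeol(morphemes):
    """Attach relative positional information (begin:end syllable
    indexes within the eojeol) to each analyzed morpheme.

    `morphemes` is assumed morphological-analyzer output for one
    eojeol: a list of (surface, base, pos) tuples whose surface
    forms concatenate to the eojeol itself.
    """
    enriched = []
    offset = 0  # distance in syllables from the start of the eojeol
    for surface, base, pos in morphemes:
        begin, end = offset, offset + len(surface) - 1
        # SBPL-style token: surface + base form + POS + relative position
        enriched.append(f"{surface}/{base}/{pos}/{begin}:{end}")
        offset = end + 1
    return enriched

# e.g. the eojeol "hakgyo-e" ("to school"), with hypothetical POS tags:
print(annotate_eojeol([("학교", "학교", "NNG"), ("에", "에", "JKB")]))
# -> ['학교/학교/NNG/0:1', '에/에/JKB/2:2']
```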
These relative positions are represented with the indexes of the beginning and the ending syllables (see Figure 1 ).", "cite_spans": [], "ref_spans": [ { "start": 792, "end": 800, "text": "Figure 1", "ref_id": null }, { "start": 919, "end": 927, "text": "Figure 2", "ref_id": null }, { "start": 1878, "end": 1886, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Available Morpho-Syntactic Information", "sec_num": "3.1" }, { "text": "A word (i.e., a morpheme) is represented by a combination of the information provided by a morphological analyzer, including the surface form, base form, part-of-speech, and other information such as the relative position within an eojeol. A word enriched by a combination of morpho-syntactic information must always include the surface form of the given word, so that the target sentence can be generated directly without any post-processing. The other morphological information is combined according to representation models such as surface plus base form (SB), surface plus part-of-speech (SP), surface plus relative position (SL), and so on. Table 1 shows the word representation of each language with all available morphological information. We are not limited to these representations, however: many other word representations are possible by removing some morphological information or inserting additional morpho-syntactic information, as mentioned previously. In order to develop the best translation systems, we select the best word representation models for the source and the target language through empirical experiments.", "cite_spans": [], "ref_spans": [ { "start": 632, "end": 639, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Word Representation", "sec_num": "3.2" }, { "text": "The information inherent in the original word forms is thus augmented by a morphological analyzer. Of course, this results in an enlarged vocabulary, although it provides useful disambiguation clues. However, since we regard a morpheme as a word in a corpus (henceforth, we call a morpheme a word), the enlarged vocabulary does not create a more severe data sparseness problem than using the inflected or agglutinated full forms. By taking the approach of morpheme-level alignment, we may obtain more accurate correspondences among words, as illustrated in Figure 2 . Moreover, by learning the language model with rich morpho-syntactic information, we can generate more syntactically fluent and correct sentences.", "cite_spans": [], "ref_spans": [ { "start": 526, "end": 534, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Word Representation", "sec_num": "3.2" }, { "text": "In order to improve translation quality, we evaluate the translation candidates by using the relevant features in a log-linear model framework [11] . 
The log-linear model used in our statistical translation process, $\Pr(e_1^I \mid f_1^J)$, is:", "cite_spans": [ { "start": 143, "end": 147, "text": "[11]", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Log-Linear Model for Statistical Machine Translation", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\Pr(e_1^I \mid f_1^J) = \frac{\exp(\sum_m \lambda_m h_m(e_1^I, f_1^J, a_1^J))}{\sum_{e_1^I, a_1^J} \exp(\sum_m \lambda_m h_m(e_1^I, f_1^J, a_1^J))}", "eq_num": "(2)" } ], "section": "Log-Linear Model for Statistical Machine Translation", "sec_num": "3.3" }, { "text": "where $h_m(e_1^I, f_1^J, a_1^J)$ is the logarithm of the m-th feature value and $\lambda_m$ is the weight of the m-th feature. Integrating different features into the equation results in different models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-Linear Model for Statistical Machine Translation", "sec_num": "3.3" }, { "text": "The statistical machine translation process in IBM models is as follows: a given source string $f_1^J = f_1 \cdots f_J$ is to be translated into a target string $e_1^I = e_1 \cdots e_I$. According to Bayes' decision rule, we choose the optimal translation for the given string $f_1^J$ as the one that maximizes the product of the target language model $\Pr(e_1^I)$ and the translation model $\Pr(f_1^J \mid e_1^I)$: $\hat{e}_1^I = \arg\max_{e_1^I} \Pr(f_1^J \mid e_1^I)\,\Pr(e_1^I)$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-Linear Model for Statistical Machine Translation", "sec_num": "3.3" }, { "text": "In IBM model 4, the translation model $\Pr(f_1^J \mid e_1^I)$ is further decomposed into four submodels:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-Linear Model for Statistical Machine Translation", "sec_num": "3.3" }, { "text": "-Lexicon model, $t(f|e)$: probability of word $f$ in the source language being translated into word $e$ in the target language. -Fertility model, $n(\phi|e)$: probability of target language word $e$ generating $\phi$ words. -Distortion model $d$: probability of distortion, which is decomposed into the distortion probabilities of head words and non-head words. -NULL translation model $p_1$: a fixed probability of inserting a NULL word after determining each target word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-Linear Model for Statistical Machine Translation", "sec_num": "3.3" }, { "text": "In addition to the five features ($\Pr(e_1^I)$, $t(f|e)$, $n(\phi|e)$, $d$, $p_1$) from IBM model 4, we incorporate the following features into the log-linear translation model:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-Linear Model for Statistical Machine Translation", "sec_num": "3.3" }, { "text": "-Class-based n-gram model $\Pr(e_1^I) = \prod_i \Pr(e_i \mid c_i)\,\Pr(c_i \mid c_1^{i-1})$: Grouping of words into C classes is done according to the statistical similarity of their surroundings. Target word $e_i$ is mapped into its class, $c_i$, which is one of the C classes [13] .", "cite_spans": [ { "start": 178, "end": 182, "text": "[13]", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Log-Linear Model for Statistical Machine Translation", "sec_num": "3.3" }, { "text": "-Length model $\Pr(l \mid e_1^I, f_1^J)$: $l$ is the length (number of words) of a translated target sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-Linear Model for Statistical Machine Translation", "sec_num": "3.3" }, { "text": "-Example matching score: The translated target sentence is matched against phrase translation examples, and a score is derived based on the number of matches [10] . 
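As a concrete illustration of this combination (our own sketch; the feature names, values, and weights below are invented for illustration, not taken from the paper), translation candidates can be ranked by the weighted sum of feature log-values, since the denominator of Eq. (2) is constant for a fixed source sentence:

```python
import math

def loglinear_score(features, weights):
    """Combine feature log-values h_m with weights lambda_m.

    `features` maps feature names to log-scaled scores and `weights`
    maps the same names to their lambda values.  Because the
    normalization term of Eq. (2) does not depend on the candidate,
    candidates can be ranked by this unnormalized weighted sum.
    """
    return sum(weights[name] * h for name, h in features.items())

# Illustrative feature values for one translation candidate.
candidate = {
    "lm_word": math.log(1e-9),    # word-based n-gram language model
    "lm_class": math.log(1e-7),   # class-based n-gram language model
    "lexicon": math.log(1e-6),    # t(f|e) lexicon model
    "length": math.log(0.2),      # length model
    "example_match": 3.0,         # phrase-example matching score
}
weights = {"lm_word": 0.4, "lm_class": 0.2, "lexicon": 1.0,
           "length": 0.3, "example_match": 0.1}
print(loglinear_score(candidate, weights))
```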
To extract phrase translation examples, we compute the intersection of the word alignments of both translation directions and also derive their union. We then extract the phrase translation pairs that contain at least one word alignment from the intersection and possibly further word alignments from the union [1] .", "cite_spans": [ { "start": 151, "end": 155, "text": "[10]", "ref_id": "BIBREF9" }, { "start": 413, "end": 416, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Log-Linear Model for Statistical Machine Translation", "sec_num": "3.3" }, { "text": "Under the framework of log-linear models, we investigate the effects of morpho-syntactic information with different word representations. The overall training and testing process with morphological and positional information is depicted in Figure 3 . In the training step, we train the word- and class-based language models with various word representation methods [12] . We also obtain word alignments by learning IBM models with the GIZA++ toolkit [3] : we train the translation model up to IBM model 4, initiating translation iterations from IBM model 1 with intermediate HMM model iterations. We then extract example phrases and translation model features from the alignment results. In the test step, we perform morphological analysis of a given sentence to obtain a word representation corresponding to that of the training corpus. We decode the best translation of a given test sentence by generating word graphs and searching for the best hypothesis in the log-linear model [7] . ", "cite_spans": [ { "start": 352, "end": 356, "text": "[12]", "ref_id": "BIBREF11" }, { "start": 448, "end": 451, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 979, "end": 982, "text": "[7]", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 228, "end": 236, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Log-Linear Model for Statistical Machine Translation", "sec_num": "3.3" }, { "text": "The corpus for the experiments was extracted from the Basic Travel Expression Corpus (BTEC), a collection of conversational travel phrases for Chinese, English, Japanese and Korean [15] . The entire corpus was split into three parts: 152,169 sentences in parallel for training, 10,150 sentences for testing, and the remaining 10,148 sentences for parameter tuning, e.g., setting termination criteria for training iterations and tuning decoder parameters. For the reconstruction of each corpus with morphological information, we used in-house morphological analyzers for the four languages: a Chinese morphological analyzer with 31 part-of-speech tags, an English analyzer with 34 tags, a Japanese analyzer with 34 tags, and a Korean analyzer with 49 tags. The accuracies of the Chinese, English, Japanese and Korean morphological analyzers, including segmentation and POS tagging, are 95.82%, 99.25%, 98.95%, and 98.5%, respectively. Table 2 summarizes the morpho-syntactic statistics of Chinese, English, Japanese, and Korean.", "cite_spans": [ { "start": 180, "end": 184, "text": "[15]", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 952, "end": 959, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experimental Environments", "sec_num": "4.1" }, { "text": "For the four languages, word-based and class-based n-gram language models were trained on the training set by using the SRILM toolkit [12] . 
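As a rough sketch of the word/class interpolation described in Section 1 (our own code, not the SRILM API; the probability callables, the class map, and the value of lam are placeholders), a sentence can be scored as follows:

```python
import math

def interpolated_logprob(sentence, p_word, p_emit, p_class,
                         word2class, lam=0.7):
    """Score a sentence with a linear interpolation of a word-based
    n-gram model and a class-based n-gram model:

        P(w_i | h) = lam * P_word(w_i | h)
                   + (1 - lam) * P(w_i | c_i) * P(c_i | class history)

    p_word, p_emit and p_class are assumed to be callables backed by
    trained models (e.g. counts estimated with an SRILM-style toolkit);
    word2class maps each word to one of C statistically derived classes.
    """
    total, history = 0.0, []
    for w in sentence:
        c = word2class[w]
        class_history = [word2class[h] for h in history]
        p = (lam * p_word(w, history)
             + (1 - lam) * p_emit(w, c) * p_class(c, class_history))
        total += math.log(p)
        history.append(w)
    return total
```

Because many enriched words occur only once in the training data (see Table 2), the class-based term supplies a usable probability exactly where the word-based model is sparse.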
The perplexity of each language model is shown in Table 3 .", "cite_spans": [ { "start": 130, "end": 134, "text": "[12]", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 187, "end": 194, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Experimental Environments", "sec_num": "4.1" }, { "text": "From the four languages, we chose three kinds of language pairs according to the linguistic characteristics of morphology and word order: Chinese-Korean, Japanese-Korean, and English-Korean. In total, 42 translation models based on the word representation methods (S, SB, SP, SBP, SBL, SPL, SBPL) were trained by using GIZA++ [3] .", "cite_spans": [ { "start": 309, "end": 312, "text": "[3]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Environments", "sec_num": "4.1" }, { "text": "Translation evaluations were carried out on 510 sentences selected randomly from the test set. The evaluation metrics are as follows: mWER (multi-reference Word Error Rate), which is based on the minimum edit distance between the target sentence and the sentences in the reference set [9] ; BLEU, which is the precision of the n-grams in the translation results that are found in the reference translations, with a penalty for sentences that are too short [14] ; and NIST, which is a weighted n-gram precision combined with a penalty for sentences that are too short.", "cite_spans": [ { "start": 293, "end": 296, "text": "[9]", "ref_id": "BIBREF8" }, { "start": 441, "end": 445, "text": "[14]", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.2" }, { "text": "For this evaluation, we prepared 16 reference translations per sentence and computed all of the above criteria with respect to these multiple references. Tables 4, 5 and 6 show the evaluation results for the three kinds of language pairs. They show the effects of morpho-syntactic information and class-based n-gram language models on multilingual machine translation: the combined morphological information was useful for improving translation quality in the NIST, BLEU and mWER evaluations, and the class-based n-gram language models were effective in the BLEU and mWER scores. In detail, Table 4 shows the effects of the morphological and relative positional information on Japanese-to-Korean and Korean-to-Japanese translation. In almost all of the evaluation metrics, the SP model, in which a word is represented by a combination of its surface form and part-of-speech, showed the best performance. The SBL model, utilizing the base form and relative positional information only in Korean, showed the second-best performance. In Korean-to-Japanese translation, the SBPL model showed the best scores in BLEU and mWER. In this pair of highly inflectional and agglutinative languages, the part-of-speech information combined with the surface form was the most effective in improving performance, while the base form and relative positional information were less effective than part-of-speech. This can be explained by several factors: Japanese and Korean are very similar SOV languages, and the ambiguity of the translation correspondences in both directions converged toward 1.0 when the distinctive morphological information was combined with the surface form. The vocabulary size of the SP model in Table 2 makes this clearer. The Japanese-to-Korean translation outperforms the Korean-to-Japanese. 
This might be closely related to the language model: according to our corpus statistics, the perplexity of the Korean language model is lower than that of the Japanese one. Table 5 shows the performance of English-to-Korean and Korean-to-English translation: a pair consisting of a highly inflectional and agglutinative language with partially free word order and an inflectional language with rigid word order. In this language pair, the combined word representation models improved translation performance, with significantly higher BLEU and mWER scores in both directions. The part-of-speech and base form information were the distinctive features. Comparing the performance of the SP, SB and SL models, part-of-speech might be more effective than base form or relative positional information, and the relative positional information in Korean might play a role not only in controlling word order in the language models but also in discriminating word correspondences during alignment.", "cite_spans": [], "ref_spans": [ { "start": 144, "end": 151, "text": "Table 4", "ref_id": null }, { "start": 591, "end": 598, "text": "Table 4", "ref_id": null }, { "start": 1726, "end": 1734, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 1986, "end": 1993, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "4.2" }, { "text": "When the target language was Korean, we obtained higher BLEU scores with all the morpho-syntactic models but lower NIST scores. In other words, we took advantage of generating more accurate full-form eojeols with positional information, i.e., better local word ordering. Table 6 shows the performance of Chinese-to-Korean and Korean-to-Chinese translation: a pair of a highly inflectional and agglutinative language with partially free word order and a non-inflectional language with rigid word order. This language pair is quite different morpho-syntactically. When the non-inflectional language was the target (i.e., Korean-to-Chinese translation), the performance was the worst among all language pairs and directions in BLEU and mWER. On the other hand, the performance of Chinese-to-Korean was much better than that of Korean-to-Chinese, meaning that it is easier to generate Korean sentences from Chinese, just as in Japanese-to-Korean and English-to-Korean. In this language pair, we obtained gradual improvements with the use of combined morpho-syntactic information, but there was no significant difference from the use of the surface form alone. Chinese morphological information such as part-of-speech contributed little. On the other hand, Korean morpho-syntactic information was advantageous in Chinese-to-Korean translation, i.e., both the language and the translation models benefited from morpho-syntactic information.", "cite_spans": [], "ref_spans": [ { "start": 255, "end": 262, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "4.2" }, { "text": "In this paper, we described an empirical study of utilizing morpho-syntactic information in a statistical machine translation framework. 
We empirically investigated the effects of morphological information on several language pairs: Japanese-Korean, a pair with the same word order and rich inflection/agglutination; English-Korean, a pair of a highly inflecting and agglutinating language with partially free word order and an inflecting language with rigid word order; and Chinese-Korean, a pair of a highly inflecting and agglutinating language with partially free word order and a non-inflectional language with rigid word order. As a result of the experiments, we found that combined morphological information is useful for improving translation quality in the BLEU and mWER evaluations. Depending on the language pair and the translation direction, different combinations of morpho-syntactic information were the best for improving translation quality: SP (surface form and part-of-speech) for translating J-to-K or K-to-J, SBP (surface form, base form and part-of-speech) for E-to-K or K-to-E, and SPL (surface form, part-of-speech and relative position) for C-to-K. The utilization of morpho-syntactic information in the target language was the most effective. Language models based on morpho-syntactic information were very effective for performance improvement, and the class-based n-gram models improved performance through their smoothing effect on the statistical language model. However, when translating an inflectional language (Korean) into a non-inflectional language with quite different word order (Chinese), we found very little advantage in using morphological information. One of the main reasons might be the relatively low performance of the Chinese morphological analyzer; another might be the linguistic difference itself. For the latter, we need to adopt approaches that reflect the structural characteristics, such as using a chunker/parser or context-dependent translation modeling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Works", "sec_num": "5" }, { "text": "An eojeol is composed of one or more morphemes according to the agglutination principle.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The research reported here was supported in part by a contract with the National Institute of Information and Communications Technology entitled \"A study of speech dialogue translation technology based on a large corpus\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Statistical Phrase-Based Translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "D", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proc. 
of the Human Language Technology Conference(HLT/NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koehn P., Och F.J., and Marcu D.: Statistical Phrase-Based Translation, Proc. of the Human Language Technology Conference(HLT/NAACL) (2003)", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Improved alignment models for statistical machine translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "C", "middle": [], "last": "Tillmann", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 1999, "venue": "Proc. of EMNLP/WVLC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Och F. J., Tillmann C., Ney H.: Improved alignment models for statistical machine translation, Proc. of EMNLP/WVLC (1999).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Improved Statistical Alignment Models", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2000, "venue": "Proc. of the 38th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "440--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "Och F.J. and Ney H. Improved Statistical Alignment Models, Proc. of the 38th Annual Meeting of the Association for Computational Linguistics (2000) pp. 440- 447.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Improvements in Phrase-Based Statistical Machine Translation", "authors": [ { "first": "R", "middle": [], "last": "Zens", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Proc. of the Human Language Technology Conference (HLT-NAACL", "volume": "", "issue": "", "pages": "257--264", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zens R. and Ney H.: Improvements in Phrase-Based Statistical Machine Transla- tion, Proc. of the Human Language Technology Conference (HLT-NAACL) (2004) pp. 257-264", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The mathematics of statistical machine translation: Parameter estimation", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "S", "middle": [ "A" ], "last": "Della Pietra", "suffix": "" }, { "first": "Della", "middle": [], "last": "Pietra", "suffix": "" }, { "first": "V", "middle": [ "J" ], "last": "Mercer", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brown P. F., Della Pietra S. A., Della Pietra V. J., and Mercer R. L.: The math- ematics of statistical machine translation: Parameter estimation, Computational Linguistics, (1993) 19(2):263-311", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Using POS Information for Statistical Machine Translation into Morphologically Rich Languages", "authors": [ { "first": "N", "middle": [], "last": "Ueffing", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Proc. 
10th Conference of the European Chapter of the Association for Computational Linguistics (EACL)", "volume": "", "issue": "", "pages": "347--354", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ueffing N., Ney H.: Using POS Information for Statistical Machine Translation into Morphologically Rich Languages, In Proc. 10th Conference of the European Chapter of the Association for Computational Linguistics (EACL), (2003) pp. 347-354", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Generation of Word Graphs in Statistical Machine Translation In Proc. Conference on Empirical Methods for Natural Language Processing", "authors": [ { "first": "N", "middle": [], "last": "Ueffing", "suffix": "" }, { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "156--163", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ueffing N., Och F.J., Ney H.: Generation of Word Graphs in Statistical Machine Translation, In Proc. Conference on Empirical Methods for Natural Language Processing, (2002) pp. 156-163", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Statistical Machine Translation with Scarce Resources using Morpho-syntactic Information", "authors": [ { "first": "S", "middle": [], "last": "Niesen", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Computational Linguistics", "volume": "30", "issue": "2", "pages": "181--204", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niesen S., Ney H.: Statistical Machine Translation with Scarce Resources using Morpho-syntactic Information, Computational Linguistics, (2004) 30(2):181-204", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "An Evaluation Tool for Machine Translation: Fast Evaluation for MT Research", "authors": [ { "first": "S", "middle": [], "last": "Niesen", "suffix": "" }, { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "G", "middle": [], "last": "Leusch", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2000, "venue": "Proc. of the 2nd International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "39--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niesen S., Och F.J., Leusch G., Ney H.: An Evaluation Tool for Machine Translation: Fast Evaluation for MT Research, Proc. of the 2nd International Conference on Language Resources and Evaluation, (2000) pp. 39-45", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Example-based Decoding for Statistical Machine Translation", "authors": [ { "first": "T", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "E", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2003, "venue": "Proc. of MT Summit IX", "volume": "", "issue": "", "pages": "410--417", "other_ids": {}, "num": null, "urls": [], "raw_text": "Watanabe T. and Sumita E.: Example-based Decoding for Statistical Machine Translation, Proc. of MT Summit IX (2003) pp. 410-417", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Discriminative Training and Maximum Entropy Models for Statistical Machine Translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2002, "venue": "Proc. 
of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Och F. J. Och and Ney H.: Discriminative Training and Maximum Entropy Models for Statistical Machine Translation, Proc. of ACL (2002)", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "SRILM -an extensible language modeling toolkit", "authors": [ { "first": "A", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2002, "venue": "Proc. Intl. Conf. Spoken Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stolcke, A.: SRILM -an extensible language modeling toolkit. In Proc. Intl. Conf. Spoken Language Processing, (2002) Denver.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Class-Based n-gram Models of Natural Language", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "Della", "middle": [], "last": "Pietra", "suffix": "" }, { "first": "V", "middle": [ "J" ], "last": "", "suffix": "" }, { "first": "P", "middle": [ "V" ], "last": "Lai", "suffix": "" }, { "first": "J", "middle": [ "C" ], "last": "Mercer", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "", "suffix": "" } ], "year": 1992, "venue": "Computational Linguistics", "volume": "18", "issue": "4", "pages": "467--479", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brown P. F., Della Pietra V. J. and deSouza P. V. and Lai J. C. and Mercer R.L.: Class-Based n-gram Models of Natural Language, Computational Linguistics (1992) 18(4) pp. 467-479", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "K", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "T", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Zhu W.-J", "middle": [], "last": "", "suffix": "" } ], "year": 2001, "venue": "IBM Research Report", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Papineni K., Roukos S., Ward T., and Zhu W.-J.: Bleu: a method for automatic evaluation of machine translation, IBM Research Report,(2001) RC22176.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Toward a broad-coverage bilingual corpus for speech translation of travel conversations in the real world", "authors": [ { "first": "T", "middle": [], "last": "Takezawa", "suffix": "" }, { "first": "E", "middle": [], "last": "Sumita", "suffix": "" }, { "first": "F", "middle": [], "last": "Sugaya", "suffix": "" }, { "first": "H", "middle": [], "last": "Yamamoto", "suffix": "" }, { "first": "Yamamoto", "middle": [ "S" ], "last": "", "suffix": "" } ], "year": 2002, "venue": "Proc. of LREC", "volume": "", "issue": "", "pages": "147--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Takezawa T., Sumita E., Sugaya F., Yamamoto H., and Yamamoto S.: Toward a broad-coverage bilingual corpus for speech translation of travel conversations in the real world, Proc. of LREC (2002), pp. 147-152.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Examples of linguistic information from Chinese, Japanese, English, and Korean morphological analyzers Correspondences among the words in parallel sentences", "num": null, "uris": null }, "FIGREF1": { "type_str": "figure", "text": "Model, t(f |e): probability of word f in the source language being translated into word e in the target language. 
-Fertility model, n(\u03c6|e): probability of target language word e generating \u03c6 words. -Distortion model d: probability of distortion, which is decomposed into the distortion probabilities of head words and non-head words. -NULL translation model p 1 : a fixed probability of inserting a NULL word after determining each target word.", "num": null, "uris": null }, "FIGREF2": { "type_str": "figure", "text": "Overview of training and test of statistical machine translation system with linguistic information", "num": null, "uris": null }, "TABREF0": { "type_str": "table", "num": null, "html": null, "text": "Word Representation According to Morpho-Syntactic Characteristics (S: surface form, B:base form, P:part-of-speech, L:RelativePosition)", "content": "
| | Chinese | English | Japanese | Korean |
| Morpho-Syntactic Characteristics | no inflection | Inflectional | Inflectional, Agglutinative | Inflectional, Agglutinative |
| Spacing Unit (Word-Order) | Rigid | Rigid | Partial Free | Partial Free |
| Word Representation | S P | S B P, S B, S P | S B P, S B, S P | S B P L, S B P, S B L, S P L, S B, S P, S L |
" }, "TABREF2": { "type_str": "table", "num": null, "html": null, "text": "Statistics of Basic Travel Expression Corpus", "content": "
| | Chinese | English | Japanese | Korean |
| # of sentences | 167,163 | 167,163 | 167,163 | 167,163 |
| # of words (morphemes) | 1,006,838 | 1,128,151 | 1,226,774 | 1,313,407 |
| Vocabulary size (S) | 17,472 | 11,737 | 19,485 | 17,600 |
| Vocabulary size (B) | 17,472 | 9,172 | 15,939 | 15,410 |
| Vocabulary size (SB) | 17,472 | 13,385 | 20,197 | 18,259 |
| Vocabulary size (SP) | 18,505 | 13,467 | 20,118 | 20,249 |
| Vocabulary size (SBP(L)) | 18,505 | 14,408 | 20,444 | 20,369 (26,668) |
| # of singletons (S) | 7,137 | 4,046 | 8,107 | 7,045 |
| # of singletons (B) | 7,137 | 3,025 | 6,497 | 6,303 |
| # of singletons (SB) | 7,137 | 4,802 | 9,453 | 7,262 |
| # of singletons (SP) | 7,601 | 4,693 | 8,343 | 7,921 |
| # of singletons (SBP(L)) | 7,601 | 5,140 | 8,525 | 7,983 (11,319) |
" }, "TABREF3": { "type_str": "table", "num": null, "html": null, "text": "Perplexities of tri-gram language model trained on the training corpora with S, SB, SP SBP, SBL, and SBPL morpho-syntactic representation: word-based 3-gram/class-based 5-gram Korean 15.54/12.42 15.41/12.09 16.04/11.89 16.03/11.88 16.48/12.24 17.13/11.99", "content": "
SSBSPSBPSBLSBPL
Chinese 31.57/24.09N/S35.83/26.28N/AN/AN/A
English 22.35/18.82 22.19/18.54 22.24/18.12 22.08/18.03N/AN/A
Japanese 17.89/ 13.44 17.92/13.29 17.82/13.13 17.83/13.06N/AN/A
" }, "TABREF4": { "type_str": "table", "num": null, "html": null, "text": "Evaluation results of Japanese to Korean and Korean to Japaneses translations(with class-based n-gram/word-based n-gram language model) .46/8.64 0.694/0.682 26.33/26.73 8.21/8.39 0.666/0.649 25.00/25.81 SB 8.05/8.32 0.705/0.695 26.82/26.97 7.67/8.17 0.690/0.672 23.77/24.68 SP 9.15/9.25 0.755/0.747 21.71/22.22 9.02/9.13 0.720/0.703 21.94/23.50 SL 8.37/8.47 0.699/0.667 25.49/27.76 8.48/8.74 0.671/0.629 25.14/27.88 SBL 8.92/9.12 0.748/0.730 22.66/23.36 8.85/8.92 0.712/0.691 21.88/23.37 SBP 8.19/8.57 0.713/0.696 26.17/27.09 8.21/8.39 0.698/0.669 22.94/24.88 SBPL 8.41/8.85 0.772/0.757 22.30/21.74 7.77/7.83 0.626/0.619 25.19/25.57 Evaluation results of English to Korean and Korean to .19 0.552/0.502 37.63/42.34 8.01/8.46 0.512/0.460 35.13/40.91 SL 6.66/6.96 0.546/0.516 38.20/40.67 7.71/8.02 0.484/0.436 36.79/42.88 SPL 6.16/7.01 0.542/0.519 38.21/39.85 7.83/8.22 0.482/0.443 37.52/41.63 SBL 6.52/6.93 0.547/0.504 37.76/42.23 7.64/8.08 0.479/0.439 37.10/42.30 SBP 7.42/7.60 0.612/0.573 32.17/35.96 8.86/9.05 0.551/0.523 33.13/37.07 SBPL 6.29/6.59 0.580/0.561 36.73/38.36 8.08/8.36 0.528/0.515 36.46/38.21 Evaluation results of Chinese to Korean and Korean to Chinese translations(with class-based n-gram/word-based n-gram language model)", "content": "
J to KK to J
NISTBLEUWERNISTBLEUWER
S 8English transla-
tions(with class-based n-gram/word-based n-gram language model)
E to KK to E
NISTBLEUWERNISTBLEUWER
S 5.12/5.79 0.353/0.301 51.12/58.52 5.76/6.05 0.300/0.255 52.54/61.23
SB 6.71/6.87 0.533/0.474 39.10/47.18 7.72/8.15 0.482/0.446 37.86/42.71
SP 6.88/7C to KK to C
NISTBLEUWERNISTBLEUWER
S 7.62/7.82 0.640/0.606 30.01/32.79 7.85/7.69 0.380/0.365 53.65/58.46
SB 7.73/7.98 0.643/0.632 29.26/30.08 7.68/7.50 0.366/0.349 54.48/60.49
SP 7.71/7.98 0.651/0.643 28.26/28.60 8.00/7.77 0.383/0.362 54.15/58.30
SL 7.64/7.97 0.656/0.635 28.94/30.33 7.84/7.65 0.373/0.350 54.53/58.38
SPL 7.69/7.93 0.665/0.659 28.43/28.88 7.78/7.62 0.373/0.351 56.14/59.54
SBL 7.65/7.94 0.659/0.635 28.76/30.87 7.85/7.64 0.377/0.354 55.01/58.39
SBP 7.81/7.98 0.660/0.643 28.85/29.61 7.94/7.68 0.386/0.360 53.99/58.94
SBPL 7.64/7.90 0.652/0.634 29.54/30.46 7.82/7.66 0.376/0.358 55.64/58.79
" } } } }