{ "paper_id": "2005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:21:23.335539Z" }, "title": "Integrated Chinese Word Segmentation in Statistical Machine Translation", "authors": [ { "first": "Jia", "middle": [], "last": "Xu", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "postCode": "D-52056", "settlement": "Aachen", "country": "Germany" } }, "email": "xujia@cs.rwth-aachen.de" }, { "first": "Evgeny", "middle": [], "last": "Matusov", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "postCode": "D-52056", "settlement": "Aachen", "country": "Germany" } }, "email": "matusov@cs.rwth-aachen.de" }, { "first": "Richard", "middle": [], "last": "Zens", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "postCode": "D-52056", "settlement": "Aachen", "country": "Germany" } }, "email": "zens@cs.rwth-aachen.de" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "postCode": "D-52056", "settlement": "Aachen", "country": "Germany" } }, "email": "ney@cs.rwth-aachen.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "A Chinese sentence is represented as a sequence of characters, and words are not separated from each other. In statistical machine translation, the conventional approach is to segment the Chinese character sequence into words during the pre-processing. The training and translation are performed afterwards. However, this method is not optimal for two reasons: 1. The segmentations may be erroneous. 2. For a given character sequence, the best segmentation depends on its context and translation. In order to minimize the translation errors, we take different segmentation alternatives instead of a single segmentation into account and integrate the segmentation process with the search for the best translation. The segmentation decision is only taken during the generation of the translation. With this method we are able to translate Chinese text at the character level. The experiments on the IWSLT 2005 task showed improvements in the translation performance using two translation systems: a phrase-based system and a finite state transducer based system. For the phrase-based system, the improvement of the BLEU score is 1.5% absolute.", "pdf_parse": { "paper_id": "2005", "_pdf_hash": "", "abstract": [ { "text": "A Chinese sentence is represented as a sequence of characters, and words are not separated from each other. In statistical machine translation, the conventional approach is to segment the Chinese character sequence into words during the pre-processing. The training and translation are performed afterwards. However, this method is not optimal for two reasons: 1. The segmentations may be erroneous. 2. For a given character sequence, the best segmentation depends on its context and translation. In order to minimize the translation errors, we take different segmentation alternatives instead of a single segmentation into account and integrate the segmentation process with the search for the best translation. The segmentation decision is only taken during the generation of the translation. With this method we are able to translate Chinese text at the character level. 
The experiments on the IWSLT 2005 task showed improvements in the translation performance using two translation systems: a phrase-based system and a finite-state transducer based system. For the phrase-based system, the improvement of the BLEU score is 1.5% absolute.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In Chinese texts, words composed of single or multiple characters are not separated by white space, which is different from most of the European languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In statistical machine translation, the conventional way is to segment the Chinese character sequence into Chinese words before the training and translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "We compared different segmentation methods in [1] . The training and test texts can be segmented into words or used at the character level. In the experiments in [1] , the translation results with the former method outperformed the results with the latter one.", "cite_spans": [ { "start": 46, "end": 49, "text": "[1]", "ref_id": "BIBREF0" }, { "start": 162, "end": 165, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Here we continued the investigation of translating text at the character level and developed a new method that yields better translation results than translation at the word level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "This method handles all the segmentation alternatives instead of only the single-best segmentation. The single-best one may contain errors or may not be optimal with respect to the training corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Instead of reading a single best segmented sentence, our system handles all the segmentation alternatives by reading a segmentation lattice. Similar approaches were applied in speech translation, e.g. [2] , where the speech recognition and text translation are combined by using the recognition lattices. We also weight the different segmentations with a language model trained on the Chinese corpus at the word level. Weighting the word segmentation by language model cost was introduced in [3] .", "cite_spans": [ { "start": 205, "end": 208, "text": "[2]", "ref_id": "BIBREF1" }, { "start": 496, "end": 499, "text": "[3]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "To verify the improvements with the integrated segmentation method, we experimented with two translation systems: translation with weighted finite-state transducers and translation with the phrase-based approach. On the IWSLT 2005 task [4] , using a phrase-based translation system, the improvement of the BLEU score reached 1.5% absolute.", "cite_spans": [ { "start": 238, "end": 241, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "This paper is structured as follows: first we will briefly review the baseline statistical machine translation system in Section 2. 
In Section 3 we will discuss the idea and the theory of the integrated segmentation approach, as well as its generation process, in comparison with the conventional approach. The experimental results for the IWSLT 2005 task [4] will be presented in Section 4.", "cite_spans": [ { "start": 345, "end": 348, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In statistical machine translation, we are given a source language sentence f J 1 = f 1 . . . f j . . . f J , which is to be translated into a target language sentence e I 1 = e 1 . . . e i . . . e I . Among all possible target language sentences, we will choose the sentence with the highest probability:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bayes decision rule", "sec_num": "2.1." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\hat{e}_1^{\\hat{I}} = \\operatorname*{argmax}_{e_1^I, I} Pr(e_1^I \\mid f_1^J)", "eq_num": "(1)" } ], "section": "Bayes decision rule", "sec_num": "2.1." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "= \\operatorname*{argmax}_{e_1^I, I} \\left\\{ Pr(e_1^I) \\cdot Pr(f_1^J \\mid e_1^I) \\right\\}", "eq_num": "(2)" } ], "section": "Bayes decision rule", "sec_num": "2.1." }, { "text": "The decomposition into two knowledge sources in Equation 2 is known as the source-channel approach to statistical machine translation [5] . It allows an independent modeling of the target language model P r(e I 1 ) and the translation model P r(f J 1 |e I 1 ) (see footnote 1). In our system, the translation model is trained on a bilingual corpus using GIZA++ [6] , and the language model is trained with the SRILM toolkit [7] .", "cite_spans": [ { "start": 134, "end": 137, "text": "[5]", "ref_id": "BIBREF4" }, { "start": 363, "end": 366, "text": "[6]", "ref_id": "BIBREF5" }, { "start": 426, "end": 429, "text": "[7]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Bayes decision rule", "sec_num": "2.1." }, { "text": "[Figure 1: Segmentation methods. Single-best segmentation: text \u2192 segmentation (Equation 3) \u2192 decision (Equation 5) \u2192 translation. Segmentation lattice: text \u2192 global decision (Equation 6) \u2192 translation.]", "cite_spans": [], "ref_spans": [ { "start": 1, "end": 9, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Bayes decision rule", "sec_num": "2.1." }, { "text": "We use the weighted finite-state tool by [8] . A weighted finite-state transducer", "cite_spans": [ { "start": 41, "end": 44, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Weighted finite-state transducer-based translation", "sec_num": "2.2." }, { "text": "(Q, \u03a3 \u222a {\u03b5}, \u2126 \u222a {\u03b5}, K, E, i, F, \u03bb, \u03c1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weighted finite-state transducer-based translation", "sec_num": "2.2." }, { "text": "is a structure with a set of states Q, an alphabet of input symbols \u03a3, an alphabet of output symbols \u2126, a weight semiring K, a set of arcs E, a single initial state i with weight \u03bb and a set of final states F weighted by the function \u03c1 : F \u2192 K. 
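As a concrete illustration, the following is a minimal Python sketch of such a transducer represented as a set of arcs, together with the composition operation that is defined formally in the next paragraph. This is a sketch under simplifying assumptions (tropical semiring, i.e. weights are costs combined by addition; no epsilon transitions), not the FSA toolkit [8] actually used in this work; the names Arc and compose are hypothetical:

    from collections import namedtuple

    # One arc of a weighted transducer: source state, input symbol,
    # output symbol, weight (a cost in the tropical semiring), target state.
    Arc = namedtuple("Arc", "src inp out weight dst")

    def compose(arcs1, arcs2):
        # Epsilon-free composition: pair the states (q1, q2) and match the
        # output symbols of T1 against the input symbols of T2; weights are
        # added. Pruning of unreachable pair-states is omitted for brevity.
        composed = []
        for a in arcs1:
            for b in arcs2:
                if a.out == b.inp:
                    composed.append(Arc((a.src, b.src), a.inp, b.out,
                                        a.weight + b.weight, (a.dst, b.dst)))
        return composed
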
A weighted finite-state acceptor is a weighted finite-state transducer without the output alphabet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weighted finite-state transducer-based translation", "sec_num": "2.2." }, { "text": "Composition is defined as follows: let T 1 : \u03a3 * \u00d7 \u2126 * \u2192 K and T 2 : \u2126 * \u00d7 \u0393 * \u2192 K be two transducers defined over the same semiring K.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weighted finite-state transducer-based translation", "sec_num": "2.2." }, { "text": "Their composition T 1 \u2022T 2 realizes the function T : \u03a3 * \u00d7 \u0393 * \u2192 K.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weighted finite-state transducer-based translation", "sec_num": "2.2." }, { "text": "By using the structure of the weighted finite-state transducers, the translation model is simply estimated as the language model on a bilanguage of source phrase/target phrase tuples, see [9] .", "cite_spans": [ { "start": 188, "end": 191, "text": "[9]", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Weighted finite-state transducer-based translation", "sec_num": "2.2." }, { "text": "The phrase-based translation model is described in [10] . A phrase is a contiguous sequence of words. The pairs of source and target phrases are extracted from the training corpus and used in the translation.", "cite_spans": [ { "start": 51, "end": 55, "text": "[10]", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Phrase-based translation", "sec_num": "2.3." }, { "text": "The phrase translation probability P r(e I 1 |f J 1 ) is modeled directly using a weighted log-linear combination of a trigram language model and various translation models: a phrase translation model and a word-based lexicon model. These translation models are used for both directions: p(f |e) and p(e|f ). Additionally, we use a word penalty and a phrase penalty. The model scaling factors are optimized with respect to some evaluation criterion [11] . (Footnote 1: The notational convention will be as follows: we use the symbol P r(\u2022) to denote general probability distributions with (nearly) no specific assumptions. In contrast, for model-based probability distributions, we use the generic symbol p(\u2022).)", "cite_spans": [ { "start": 449, "end": 453, "text": "[11]", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Phrase-based translation", "sec_num": "2.3." }, { "text": "In this section, we give a short overview of the current Chinese word segmentation methods in statistical machine translation. Most of these methods can be classified into three categories:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conventional segmentation methods", "sec_num": "3.1." }, { "text": "\u2022 The training and test texts are segmented with an automatic segmentation tool.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conventional segmentation methods", "sec_num": "3.1." }, { "text": "Many segmentation tools use dynamic programming to find the word boundaries which maximize the product of the word frequencies. 
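As an illustration of this idea, here is a minimal sketch of such a unigram dynamic-programming segmenter in Python; the dictionary freq and the floor value for unknown single characters are assumptions of this sketch, not details of any particular segmentation tool:

    import math

    def segment(chars, freq, max_len=4, floor=1e-8):
        # freq: relative frequency of each known word; unknown single
        # characters receive a small floor probability so that a
        # segmentation always exists.
        n = len(chars)
        best = [float("-inf")] * (n + 1)  # best[i]: log-prob of the best split of chars[:i]
        back = [0] * (n + 1)              # back[i]: start of the last word in that split
        best[0] = 0.0
        for i in range(1, n + 1):
            for j in range(max(0, i - max_len), i):
                word = chars[j:i]
                p = freq.get(word, floor if i - j == 1 else 0.0)
                if p > 0.0 and best[j] + math.log(p) > best[i]:
                    best[i] = best[j] + math.log(p)
                    back[i] = j
        words, i = [], n
        while i > 0:  # follow the back-pointers to recover the best word sequence
            words.append(chars[back[i]:i])
            i = back[i]
        return list(reversed(words))
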
But the segmentation may contain some errors, and we also found that a much more accurate word segmentation does not always lead to a large improvement in the translation performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conventional segmentation methods", "sec_num": "3.1." }, { "text": "\u2022 The training and test texts are segmented manually.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conventional segmentation methods", "sec_num": "3.1." }, { "text": "Manual segmentation avoids segmentation errors but requires human effort. Moreover, even a correct segmentation will not yield the best translation result if the segmentations of the test and training sets are inconsistent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conventional segmentation methods", "sec_num": "3.1." }, { "text": "\u2022 Each Chinese character is treated as a word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conventional segmentation methods", "sec_num": "3.1." }, { "text": "Training and translation at the Chinese character level do not require an additional tool or human effort. But [1] showed that the translation results are not as good as the results obtained when translation is at the word level.", "cite_spans": [ { "start": 108, "end": 111, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Conventional segmentation methods", "sec_num": "3.1." }, { "text": "To minimize the number of lexicon entries and to ensure the consistency of the segmentations in the training and in the translation, we developed a new segmentation method, which uses the training text at the word level and translates the test text at the character level. Figure 1 shows the translation procedures. With the conventional method, only a single-best word segmentation is transferred to the search for the best translation. This approach is not ideal because the segmentation may not be optimal for the translation. Taking hard decisions in word segmentation may lead to a loss of the correct Chinese words.", "cite_spans": [], "ref_spans": [ { "start": 272, "end": 280, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Conventional segmentation methods", "sec_num": "3.1." }, { "text": "With the integrated segmentation method in Figure 1 , for one input sentence, we take different segmentation alternatives into account and represent them as a lattice. The input to the translation system is then a set of lattices instead of the segmented text. The search decision of the word segmentation is therefore combined with the translation decision, and the best segmentation of a sentence is selected only while the translation is generated.", "cite_spans": [], "ref_spans": [ { "start": 43, "end": 51, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Idea", "sec_num": "3.2." }, { "text": "In this section, we will explain the methods in Figure 1 in detail. First, we will describe a general word segmentation model and then how it is used as a single-best segmentation or as a segmentation lattice.", "cite_spans": [], "ref_spans": [ { "start": 48, "end": 56, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Theory", "sec_num": "3.3." }, { "text": "A Chinese input sentence is denoted here as c K 1 at the character level and as f J 1 at the word level, where c 1 . . . c k . . . c K are the succeeding characters and f 1 . . . f j . . . f J are the succeeding words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theory", "sec_num": "3.3." 
}, { "text": "The best segmented Chinese sentencef\u0134 1 with\u0134 words can be represented as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word segmentation model", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f\u0134 1 = argmax f J 1 ,J P r(f J 1 |c K 1 ) = argmax f J 1 ,J P r(c K 1 |f J 1 ) \u2022 P r(f J 1 ) ,", "eq_num": "(3)" } ], "section": "Word segmentation model", "sec_num": null }, { "text": "which suggests a decomposition into two sub-models:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word segmentation model", "sec_num": null }, { "text": "1. Correspondence of the word sequence f J 1 and the character sequence c K", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word segmentation model", "sec_num": null }, { "text": "For one Chinese word sequence, its character sequence is unique. Hence, we can define the probability as one, if the character sequence of a word sequence is the same as the input, and as zero otherwise:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1", "sec_num": null }, { "text": "P r(c K 1 |f J 1 ) = 0 : C(f J 1 ) = c K 1 1 : C(f J 1 ) = c K 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1", "sec_num": null }, { "text": "Here, C denotes the separation of a word sequence into characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1", "sec_num": null }, { "text": "2. The source language model at the word level:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P r(f J 1 ) = J j=1 P r(f j |f j\u22121 1 ) \u223c = J j=1 p(f j |f j\u22121 j\u2212n+1 )", "eq_num": "(4)" } ], "section": "1", "sec_num": null }, { "text": "In practice, we use an n-gram language model as shown in the Equation 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1", "sec_num": null }, { "text": "In the conventional approach, only the best segmentationf\u0134 1 is translated into the target sentence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Single-best segmentation", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "e\u00ce 1 = argmax e I 1 ,I P r(e I 1 |f\u0134 1 )", "eq_num": "(5)" } ], "section": "Single-best segmentation", "sec_num": null }, { "text": "In the transfer of the single-best segmentation from Equation 3 to Equation 5, some segmentations which are potentially optimal for the translation may be lost. Therefore, we combine the two steps. 
The search is then rewritten as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Segmentation lattice", "sec_num": null }, { "text": "\\hat{e}_1^{\\hat{I}} = \\operatorname*{argmax}_{I, e_1^I} Pr(e_1^I \\mid c_1^K) \\quad (6) \\\\ = \\operatorname*{argmax}_{I, e_1^I} \\Big\\{ \\sum_{f_1^J} Pr(f_1^J, e_1^I \\mid c_1^K) \\Big\\} \\\\ = \\operatorname*{argmax}_{I, e_1^I} \\Big\\{ \\sum_{f_1^J} Pr(f_1^J \\mid c_1^K) \\cdot Pr(e_1^I \\mid f_1^J, c_1^K) \\Big\\} \\\\ \\cong \\operatorname*{argmax}_{I, e_1^I} \\max_{f_1^J} \\Big\\{ Pr(f_1^J \\mid c_1^K) \\cdot Pr(e_1^I \\mid f_1^J) \\Big\\}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Segmentation lattice", "sec_num": null }, { "text": "Because our translation model in Equation 1 is based on the words, here we make the approximation that the target sentence e I 1 depends only on the word-based source sentence f J 1 , but not on the character-based one c K 1 . We also use the maximum instead of the sum over the segmentations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Segmentation lattice", "sec_num": null }, { "text": "In this way, the segmentation model and the translation model are combined into a model for the global decision. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Segmentation lattice", "sec_num": null }, { "text": "Now we will take a short sentence as an example and simulate the segmentation process. The Chinese sentence is selected from the development corpus CStar'03 of the IWSLT 2005 task [4] ; its Pinyin form is written in Table 1 . The sentence consists of eight characters, including a punctuation mark. After the manual segmentation, it contains six words.", "cite_spans": [ { "start": 180, "end": 183, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 217, "end": 224, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Computational steps", "sec_num": "3.4." }, { "text": "Only the manually segmented sentence is translated. In this case, if any of the six words does not appear in the training corpus, its translation would be missing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Single-best segmentation", "sec_num": null }, { "text": "The input sentence is at the character level as mentioned before. We generate the segmentation lattice with the following steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Segmentation lattice", "sec_num": null }, { "text": "1. We make a word list from the vocabulary of the manually segmented Chinese training corpus. Each word in the list is mapped to its characters as shown in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 156, "end": 163, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "\u2022 Segmentation lattice", "sec_num": null }, { "text": "To avoid the problem of the unknown characters from the unsegmented corpus, the additional characters from the test corpus are also added to the word list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Segmentation lattice", "sec_num": null }, { "text": "2. We convert the mapping in Table 2 into a finite-state transducer for segmentation, as shown in Figure 5 . Here the input labels are the characters from the test corpus, and the output labels will be concatenated with the Chinese training words in the translation system. 
The epsilon word is denoted as \"eps\", and the state 0 is the start and end state.", "cite_spans": [], "ref_spans": [ { "start": 29, "end": 36, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 97, "end": 105, "text": "Figure 5", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "\u2022 Segmentation lattice", "sec_num": null }, { "text": "3. Inside the translation systems, the input character sequence is represented as a linear acceptor, as shown in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 113, "end": 121, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "\u2022 Segmentation lattice", "sec_num": null }, { "text": "4. The linear automaton in Figure 2 is composed with the segmentation transducer in Figure 5 . The result is a lattice which represents all possible segmentations of this sentence, as shown in Figure 3 .", "cite_spans": [], "ref_spans": [ { "start": 27, "end": 35, "text": "Figure 2", "ref_id": null }, { "start": 84, "end": 92, "text": "Figure 5", "ref_id": "FIGREF3" }, { "start": 193, "end": 201, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "\u2022 Segmentation lattice", "sec_num": null }, { "text": "Note that the alphabet in Figure 2 is a subset of the input alphabet in Figure 5 , because the unknown characters are added to the word list as single words.", "cite_spans": [], "ref_spans": [ { "start": 26, "end": 34, "text": "Figure 2", "ref_id": null }, { "start": 72, "end": 80, "text": "Figure 5", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "\u2022 Segmentation lattice", "sec_num": null }, { "text": "5. With these steps, we get a new finite-state acceptor representing all the alternatives of different word segmentations. To have an integrated word segmentation in the translation, we only need to read the segmentation lattice in Figure 3 instead of the manually segmented sentence. ", "cite_spans": [], "ref_spans": [ { "start": 228, "end": 236, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "\u2022 Segmentation lattice", "sec_num": null }, { "text": "A problem of translating with the lattice in Figure 3 is that shorter paths are usually preferred, because the search algorithm during translation finds the path with the smallest translation costs.", "cite_spans": [], "ref_spans": [ { "start": 45, "end": 53, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Weighting with language model costs", "sec_num": "3.5." }, { "text": "Therefore, we add word segmentation costs to a lattice. A word segmentation model represents the fluency of a Chinese word sequence and can be built as an n-gram language model of the word-based text. We trained the language model on the Chinese training corpus with the SRILM toolkit [7] and used the modified Kneser-Ney discounting.", "cite_spans": [ { "start": 285, "end": 288, "text": "[7]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Weighting with language model costs", "sec_num": "3.5." }, { "text": "To combine the segmentation lattice and the word-based language model, we simply transform the language model into a finite-state transducer and compose the lattice with it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weighting with language model costs", "sec_num": "3.5." }, { "text": "After inserting the weights, the number of nodes and arcs in a lattice may increase because of the language model histories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weighting with language model costs", "sec_num": "3.5." 
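}, { "text": "To make steps 1-5 and the weighting concrete, here is a self-contained Python sketch; it is an illustration only, not the FSA toolkit [8] used in this work. It builds the segmentation lattice directly as arcs over character positions and weights each arc with a word language model cost; the names vocab, lm_prob and max_len are assumptions of the sketch, and lm_prob is assumed to return a smoothed, strictly positive probability:

    import math

    def build_lattice(chars, vocab, max_len=4):
        # One node per character position 0..len(chars); one arc per known
        # word (unknown characters enter the lattice as single-character words).
        arcs = []
        n = len(chars)
        for i in range(n):
            for j in range(i + 1, min(i + max_len, n) + 1):
                word = chars[i:j]
                if word in vocab or j - i == 1:
                    arcs.append((i, j, word))
        return arcs

    def weight_lattice(arcs, lm_prob):
        # Attach negative log-probabilities as arc costs; with a higher-order
        # LM the states would additionally be expanded by the LM history.
        return [(i, j, w, -math.log(lm_prob(w))) for (i, j, w) in arcs]

    def density(arcs, num_chars):
        # Lattice density as used in Section 4.4: arcs per character.
        return len(arcs) / num_chars

", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weighting with language model costs", "sec_num": "3.5."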
}, { "text": "The translation experiments were carried out on the Basic Travel Expression Corpus (BTEC), a multilingual speech corpus which contains tourism-related sentences usually found in travel phrase books. We tested our system on the Chinese-to-English Supplied Task. The corpus was provided during the International Workshop on Spoken Language Translation [4] . The corpus statistics for the BTEC corpus are given in Table 3 .", "cite_spans": [ { "start": 350, "end": 353, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 411, "end": 418, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Translation experiments 4.1. Task and corpus statistics", "sec_num": "4." }, { "text": "We used 19851 sentence pairs instead of 20000 due to corpus filtering. The Chinese texts in words are segmented manually. The evaluation data is the CStar'03 data set, whose Chinese text in words is the input to the single-best segmentation and the text in characters is the input to the segmentation lattice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation experiments 4.1. Task and corpus statistics", "sec_num": "4." }, { "text": "So far, in machine translation research, a single generally accepted criterion for the evaluation of the experimental results does not exist. Therefore, we used different criteria: WER (word error rate), PER (position-independent word error rate), BLEU [12] and NIST [13] . For the evaluation corpus, we have sixteen references available. The four criteria are computed with respect to multiple references. The evaluation was case-insensitive. The BLEU and NIST scores measure accuracy, i.e. larger scores are better.", "cite_spans": [ { "start": 253, "end": 257, "text": "[12]", "ref_id": "BIBREF11" }, { "start": 267, "end": 271, "text": "[13]", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation criteria", "sec_num": "4.2." }, { "text": "We present the translation results on the IWSLT 2005 task [4] described in Section 4.1.", "cite_spans": [ { "start": 58, "end": 61, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation results", "sec_num": "4.3." }, { "text": "The experiments are based on two translation systems:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation results", "sec_num": "4.3." }, { "text": "\u2022 Finite-state transducer-based translation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation results", "sec_num": "4.3." }, { "text": "In the finite-state transducer-based system we only use a monotone search because of the technical limitations of reordering with lattice input. Table 4 shows the results of the finite-state transducer based translations.", "cite_spans": [], "ref_spans": [ { "start": 145, "end": 152, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Evaluation results", "sec_num": "4.3." }, { "text": "Here, the translation using the single-best segmentation with a manually segmented input text has a BLEU score of 28.5%. By using the integrated segmentation, the BLEU score is increased by 0.5% absolute, and the NIST score by about 25% relative.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation results", "sec_num": "4.3." }, { "text": "\u2022 Phrase-based translation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation results", "sec_num": "4.3." 
}, { "text": "The baseline results with the phrase-based translation have higher precision but also higher error rates as the results with the finite-state based translation. The reason is that many sentences translated by the finitestate transducer system are very short. There are only 2321 words in the translation hypothesis instead of 2521 words on average in the references. The phrasebased translation covered this shortcoming by including more feature functions as described in 2.3, especially the word penalty which can penalize shorter sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation results", "sec_num": "4.3." }, { "text": "The baseline translation results of the phrase-based translation system have a BLEU score of 38.9%, as shown in Table 5 . In our experiments, the reordering was taken at the phrase level and the model scaling factors were optimized on the evaluation data with respect to the combination of all the criteria. Here, using the segmentation lattice with a bi-gram source language model, the improvement in the BLEU score is 1.5% absolute compared to the baseline, and the WER and PER are reduced by 11.9% and 13.2%, respectively.", "cite_spans": [], "ref_spans": [ { "start": 112, "end": 119, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Evaluation results", "sec_num": "4.3." }, { "text": "We use the lattice density to measure the size of a segmentation lattice, which is defined as the number of arcs in the lattice divided by the number of characters in the sentence. For the 506 sentences in the evaluation set, on average, the density of the lattices without weights is 1.5, and it is 3.9 with bi-gram language model weights. The memory requirements with different segmentation methods for translation of the CStar'03 data set are as following: with the single-best segmentation, it is 54.2 MB, and with the segmentation lattice not using a source language model, it is 56.9 MB. If we use a bi-gram source language model the requirement increases to 65.8 MB.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computational Requirement", "sec_num": "4.4." }, { "text": "The translation speed using the segmentation with lattice is 0.266 second per sentence, it is almost as fast as the translation using the single-best segmentation, i.e. 0.262 second per sentence. By using a bi-gram source language model, the speed slows down to 0.820 second per sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computational Requirement", "sec_num": "4.4." }, { "text": "We have successfully developed a new Chinese word segmentation method for statistical machine translation. The method combines the segmentation decisions directly in the search for the translations, which has two major advantages:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and future work", "sec_num": "5." }, { "text": "1. The Chinese input text is on character level. There is no need to segment the text during pre-processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and future work", "sec_num": "5." }, { "text": "2. The translation system with the integrated segmentation outperforms the one that uses single-best (manual) segmentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and future work", "sec_num": "5." 
}, { "text": "In the experiments on the IWSLT task 2005 [4] , the integrated segmentation approach outperforms the single-best segmentation using both the finite-state transducer based and phrase-based systems. With the phrase-based system, the BLEU score is increased by 1.5% absolute. Although these are promising results, so far the changes in word segmentation are only carried out in the translation process. As we mentioned in Section 3.1, to minimize the number of lexicon entries, we can try to perform a better segmentation in training. [14] suggested a way to perform the phrase segmentation and alignment in one step. By refining our model, we expect a further improvement with the integrated word segmentation method.", "cite_spans": [ { "start": 42, "end": 45, "text": "[4]", "ref_id": "BIBREF3" }, { "start": 532, "end": 536, "text": "[14]", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and future work", "sec_num": "5." } ], "back_matter": [ { "text": "This work was partly funded by the DFG (Deutsche Forschungsgemeinschaft) under the grant NE572/5-1, project \"Statistische Text\u00fcbersetzung\" and the European Union under the integrated project TC-Star (Technology and Corpora for Speech to Speech Translation, IST-2002-FP6-506738, http://www.tc-star.org).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": "6." } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Do we need Chinese word segmentation for statistical machine translation", "authors": [ { "first": "J", "middle": [], "last": "Xu", "suffix": "" }, { "first": "R", "middle": [], "last": "Zens", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Proc. of the Third SIGHAN Workshop on Chinese Language Learning", "volume": "", "issue": "", "pages": "122--128", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Xu, R. Zens, and H. Ney, \"Do we need Chinese word segmentation for statistical machine translation?\" in Proc. of the Third SIGHAN Workshop on Chinese Lan- guage Learning, Barcelona, Spain, July 2004, pp. 122- 128.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Speech translation: Coupling of recognition and translation", "authors": [ { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 1999, "venue": "Proc. of IEEE Intl. Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "1149--1152", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Ney, \"Speech translation: Coupling of recognition and translation,\" in Proc. of IEEE Intl. Conference on Acoustics, Speech and Signal Processing, Phoenix, AZ, March 1999, pp. 1149-1152.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "An iterative algorithm to build Chinese language models", "authors": [ { "first": "X", "middle": [], "last": "Luo", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" } ], "year": 1996, "venue": "Proc. of the 34th annual meeting of the Associaton for Computational Linguistics", "volume": "", "issue": "", "pages": "139--143", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Luo and S. Roukos, \"An iterative algorithm to build Chinese language models,\" in Proc. of the 34th annual meeting of the Associaton for Computational Linguis- tics, Santa Cruz, California, June 1996, pp. 139-143.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Intl. 
workshop on spoken language translation home page", "authors": [], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "IWSLT, \"Intl. workshop on spoken lan- guage translation home page,\" 2005, http://www.is.cs.cmu.edu/iwslt2005/CFP.html.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A statistical approach to machine translation", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "J", "middle": [], "last": "Cocke", "suffix": "" }, { "first": "S", "middle": [ "A" ], "last": "Della Pietra", "suffix": "" }, { "first": "V", "middle": [ "J" ], "last": "Della Pietra", "suffix": "" }, { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "J", "middle": [ "D" ], "last": "Lafferty", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mercer", "suffix": "" }, { "first": "P", "middle": [ "S" ], "last": "Roossin", "suffix": "" } ], "year": 1990, "venue": "Computational Linguistics", "volume": "16", "issue": "2", "pages": "79--85", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. F. Brown, J. Cocke, S. A. Della Pietra, V. J. Della Pietra, F. Jelinek, J. D. Lafferty, R. L. Mercer, and P. S. Roossin, \"A statistical approach to machine trans- lation,\" Computational Linguistics, vol. 16, no. 2, pp. 79-85, June 1990.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. Och and H. Ney, \"A systematic comparison of var- ious statistical alignment models,\" Computational Lin- guistics, vol. 29, no. 1, pp. 19-51, March 2003.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "SRILM -an extensible language modeling toolkit", "authors": [ { "first": "A", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2002, "venue": "Proc. of Intl. Conference on Spoken Language Processing", "volume": "", "issue": "", "pages": "901--904", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Stolcke, \"SRILM -an extensible language modeling toolkit.\" in Proc. of Intl. Conference on Spoken Lan- guage Processing, Denver, Colorado, September 2002, pp. 901-904.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "FSA: An efficient and flexible C++ toolkit for finite state automata using ondemand computation", "authors": [ { "first": "S", "middle": [], "last": "Kanthak", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Proc. of the 42nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "510--517", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Kanthak and H. Ney, \"FSA: An efficient and flex- ible C++ toolkit for finite state automata using on- demand computation,\" in Proc. of the 42nd Annual Meeting of the Association for Computational Linguis- tics, Barcelona, Spain, July 2004, pp. 
510-517.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Speech-to-speech translation based on finite-state transducer", "authors": [ { "first": "F", "middle": [], "last": "Casacuberta", "suffix": "" }, { "first": "D", "middle": [], "last": "Llorens", "suffix": "" }, { "first": "C", "middle": [], "last": "Martinez", "suffix": "" }, { "first": "S", "middle": [], "last": "Molau", "suffix": "" }, { "first": "F", "middle": [], "last": "Nevado", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" }, { "first": "M", "middle": [], "last": "Pasto", "suffix": "" }, { "first": "D", "middle": [], "last": "Pico", "suffix": "" }, { "first": "A", "middle": [], "last": "Sanchis", "suffix": "" }, { "first": "E", "middle": [], "last": "Vilar", "suffix": "" }, { "first": "J", "middle": [], "last": "Vilar", "suffix": "" } ], "year": 2001, "venue": "Proc. of IEEE Intl. Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "613--616", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Casacuberta, D. Llorens, C. Martinez, S. Molau, F. Nevado, H. Ney, M. Pasto, D. Pico, A. Sanchis, E. Vi- lar, and J. Vilar, \"Speech-to-speech translation based on finite-state transducer,\" in Proc. of IEEE Intl. Confer- ence on Acoustics, Speech and Signal Processing, Salt Lake City, Utah, May 2001, pp. 613-616.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Improvements in phrase-based statistical machine translation", "authors": [ { "first": "R", "middle": [], "last": "Zens", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Proc. of the Human Language Technology Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Zens and H. Ney, \"Improvements in phrase-based statistical machine translation,\" in Proc. of the Human Language Technology Conference, Boston, MA, May 2004.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Minimum error rate training in statistical machine translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "Proc. of the 41th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. Och, \"Minimum error rate training in statistical machine translation,\" in Proc. of the 41th Annual Meet- ing of the Association for Computational Linguistics (ACL), Sapporo, Japan, July 2003, pp. 160-167.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "K", "middle": [ "A" ], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "T", "middle": [], "last": "Ward", "suffix": "" }, { "first": "W.-J", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proc. of the 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. A. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, \"Bleu: a method for automatic evaluation of machine translation,\" in Proc. of the 40th Annual Meeting of the Association for Computational Linguistics, Philadel- phia, July 2002, pp. 
311-318.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Automatic evaluation of machine translation quality using n-gram co-occurrence statistics", "authors": [ { "first": "G", "middle": [], "last": "Doddington", "suffix": "" } ], "year": 2002, "venue": "Proc. of Human Language Technology", "volume": "", "issue": "", "pages": "128--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Doddington, \"Automatic evaluation of machine translation quality using n-gram co-occurrence statis- tics,\" in Proc. of Human Language Technology, San Diego, California, March 2002, pp. 128-132.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Integrated phrase segmentation and alignment algorithm for statistical machine translation", "authors": [ { "first": "Y", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "S", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "A", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 2003, "venue": "Proc. of Intl. Conference on Natural Language Processing and Knowledge Engineering (NLP-KE'01)", "volume": "", "issue": "", "pages": "567--573", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Zhang, S. Vogel, and A. Waibel, \"Integrated phrase segmentation and alignment algorithm for statistical machine translation,\" in Proc. of Intl. Conference on Natural Language Processing and Knowledge Engi- neering (NLP-KE'01), Beijing, China, October 2003, pp. 567-573.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "at the character level and f J 1 at the word level, where c 1 . . . c k . . . c K are the succeeding characters and f 1 . . . f j . . . f J are the succeeding words.", "num": null }, "FIGREF1": { "uris": null, "type_str": "figure", "text": "Segmentation lattice: input sentence at the character level as a linear automaton. Segmentation lattice without weights.", "num": null }, "FIGREF2": { "uris": null, "type_str": "figure", "text": "Segmentation lattice with language model weights.", "num": null }, "FIGREF3": { "uris": null, "type_str": "figure", "text": "Segmentation transducer.", "num": null }, "TABREF0": { "text": "Example of a sentence and its translations.Source sentence in characters:zai na li ban li deng ji shou xu ? Manually segmented source sentence:zai nali banli dengji shouxu ? Translation by single-best segmentation: where to go through boarding formalities ?", "content": "
Translation by segmentation lattice: where do i make my boarding arrangements ?
One reference: where do i complete boarding procedures ?
", "type_str": "table", "num": null, "html": null }, "TABREF2": { "text": "Word mapping from characters", "content": "
Characters Words
zai zai
... ...
na li nali
ban li banli
deng ji dengji
shou xu shouxu
", "type_str": "table", "num": null, "html": null }, "TABREF3": { "text": "Corpus statistics", "content": "
Chinese English
", "type_str": "table", "num": null, "html": null }, "TABREF4": { "text": "Translation performance with monotone finite-state transducer based translation for different segmentation methods.", "content": "
Segmentation methods WER [%] PER [%] NIST BLEU [%]
Single-best (manual) segmentation 51.3 43.1 3.60 28.5
Segmentation lattice without weights 51.6 42.2 4.69 29.0
", "type_str": "table", "num": null, "html": null }, "TABREF5": { "text": "Translation performance with phrase-based translation for different segmentation methods.", "content": "
Segmentation methods WER [%] PER [%] NIST BLEU [%]
Single-best (manual) segmentation 53.6 43.8 8.18 38.9
Segmentation lattice without weights 47.0 38.1 8.09 40.2
Segmentation lattice with bi-gram LM 47.2 38.0 8.18 40.4
", "type_str": "table", "num": null, "html": null } } } }