{ "paper_id": "2004", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:17:10.271695Z" }, "title": "The ISL Statistical Translation System for Spoken Language Translation", "authors": [ { "first": "Stephan", "middle": [], "last": "Vogel", "suffix": "", "affiliation": {}, "email": "vogel@cs.cmu.edu" }, { "first": "Sanjika", "middle": [], "last": "Hewavitharana", "suffix": "", "affiliation": {}, "email": "sanjika@cs.cmu.edu" }, { "first": "Muntsin", "middle": [], "last": "Kolss", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Karlsruhe", "location": { "settlement": "Karlsruhe", "country": "Germany" } }, "email": "muntsin@ira.uka.de" }, { "first": "Alex", "middle": [], "last": "Waibel", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Karlsruhe", "location": { "settlement": "Karlsruhe", "country": "Germany" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper we describe the components of our statistical machine translation system used for the spoken language translation evaluation campaign. This system is based on phrase-to-phrase translations extracted from a bilingual corpus. A new phrase alignment approach will be introduced, which finds the target phrase by optimizing the overall word-to-word alignment for the sentence pair under the constraint that words within the source phrase are only aligned to words within the target phrase. The system will be used for Chinese-to-English translations under the small, additional and unlimited data conditions, and for the small Japanese-to-English translation track.", "pdf_parse": { "paper_id": "2004", "_pdf_hash": "", "abstract": [ { "text": "In this paper we describe the components of our statistical machine translation system used for the spoken language translation evaluation campaign. This system is based on phrase-to-phrase translations extracted from a bilingual corpus. 
A new phrase alignment approach will be introduced, which finds the target phrase by optimizing the overall word-to-word alignment for the sentence pair under the constraint that words within the source phrase are only aligned to words within the target phrase. The system will be used for Chinese-to-English translations under the small, additional and unlimited data conditions, and for the small Japanese-to-English translation track.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Statistical machine translation (SMT) is currently the most promising approach, especially for large vocabulary text translation. In the spirit of the Candide system developed in the early 90s at IBM [Brown et al. 1993 ], a number of statistical machine translation systems have been presented in the last few years [Wang and Waibel 1998 ], [Och and Ney 2000] , [Yamada and Knight 2000] . These systems share the basic underlying principles of applying a translation model to capture the lexical and word reordering relationships between two languages, complemented by a target language model to drive the search process through translation model hypotheses. Their primary differences lie in the structure and source of their translation models. Whereas the original IBM system was based on purely word-based translation models, current SMT systems try to incorporate more complex structure.", "cite_spans": [ { "start": 193, "end": 211, "text": "[Brown et al. 1993", "ref_id": null }, { "start": 309, "end": 330, "text": "[Wang and Waibel 1998", "ref_id": null }, { "start": 334, "end": 352, "text": "[Och and Ney 2000]", "ref_id": "BIBREF1" }, { "start": 355, "end": 379, "text": "[Yamada and Knight 2000]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "The statistical machine translation system developed in the Interactive Systems Laboratories (ISL) uses phrase-to-phrase translations as the primary building blocks to capture local context information, leading to better lexical choice and more reliable local reordering. A new approach to extract phrase translation pairs from bilingual data has been developed, which does not use the Viterbi alignment, but is based on optimizing a constrained word-to-word alignment for the entire sentence pair. This is described in Section 2.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Finding good phrase translation pairs is very important. But as a source phrase can have alternative translations, it is also necessary to assign meaningful probabilities to those alternatives. Typically, longer phrases are seen only a few times. Probabilities estimated from relative frequencies are therefore not reliable. We therefore calculate phrase translation probabilities based on the word-to-word translation probabilities, as described in Section 2.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Section 3 outlines the architecture of the decoder that combines the translation and language model to generate complete translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The BTEC corpus is a very limited domain corpus and therefore many test sentences are close to one or several sentences seen in the training data. We implemented and tested a simple translation memory component, which will be described in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Finally, in Section 5 we present a series of experiments in the Chinese-to-English and Japanese-to-English translation tasks. 
The Basic Travel Expression Corpus (BTEC) is used as domain-specific data [Takezawa et al. 2002] . Different data conditions are explored: small in-domain data only, using additional out-of-domain data, and using a larger in-domain corpus.", "cite_spans": [ { "start": 200, "end": 222, "text": "[Takezawa et al. 2002]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The ISL translation system uses word-to-word and phrase-to-phrase translations, extracted from the bilingual corpus. Different phrase alignment methods have been explored in the past, like extracting phrase translation pairs from the Viterbi path of a word alignment, or simultaneously splitting source and target sentence into phrases and aligning them in an integrated way [Zhang 2003 ]. For the experiments reported in this paper a new phrase alignment method was explored.", "cite_spans": [ { "start": 375, "end": 386, "text": "[Zhang 2003", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Phrase Alignment", "sec_num": "2.1." }, { "text": "Assume we are searching for a good translation for one source phrase f = f 1 ...f k , and that we find a sentence in the bilingual corpus, which contains this phrase. We are now interested in finding a sequence of words \u1ebd = e 1 ...e l in the target sentence, which is an optimal translation of the source phrase. Any sequence of words in the target sentence is a translation candidate, but most of them will not be considered translations of the source phrase at all, whereas some can be considered as partially correct translations, and a small number of candidates will be considered acceptable or good translations. We want to find these good candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Alignment via Constrained Sentence Alignment", "sec_num": "2.2." 
}, { "text": "The IBM1 word alignment model aligns each source word to all target words with varying probabilities. Typically, only one or two words will have a high alignment probability, which for the IBM1 model is just the lexicon probability. We now modify the IBM1 alignment model by not summing the lexicon probabilities of all target words, but by restricting this summation in the following way:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Alignment via Constrained Sentence Alignment", "sec_num": "2.2." }, { "text": "\u2022 for words inside the source phrase we sum only over the probabilities for words inside the target phrase candidate, and for words outside of the source phrase we sum only over the probabilities for the words outside the target phrase candidate;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Alignment via Constrained Sentence Alignment", "sec_num": "2.2." }, { "text": "\u2022 the position alignment probability, which for the standard IBM1 alignment is 1/I, where I is the number of words in the target sentence, is modified to 1/l inside the source phrase and to 1/(I \u2212 l) outside the source phrase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Alignment via Constrained Sentence Alignment", "sec_num": "2.2." }, { "text": "More formally, we calculate the constrained alignment probability:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Alignment via Constrained Sentence Alignment", "sec_num": "2.2." }, { "text": "p_{i_1,i_2}(f|e) = \\prod_{j=1}^{j_1-1} \\sum_{i \\notin (i_1..i_2)} p(f_j|e_i) \\times \\prod_{j=j_1}^{j_2} \\sum_{i=i_1}^{i_2} p(f_j|e_i) \\times \\prod_{j=j_2+1}^{J} \\sum_{i \\notin (i_1..i_2)} p(f_j|e_i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Alignment via Constrained Sentence Alignment", "sec_num": "2.2." 
}, { "text": "and optimize over the target side boundaries i 1 and i 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Alignment via Constrained Sentence Alignment", "sec_num": "2.2." }, { "text": "(i_1, i_2) = \\operatorname{argmax}_{i_1,i_2} \\{ p_{i_1,i_2}(f|e) \\}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Alignment via Constrained Sentence Alignment", "sec_num": "2.2." }, { "text": "It is well known that 'looking from both sides' is better than calculating the alignment only in one direction, as the word alignment models are asymmetric with respect to aligning one to many words. Similar to p_{i_1,i_2}(f|e) we can calculate p_{i_1,i_2}(e|f), now summing over the source words and multiplying along the target words:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Alignment via Constrained Sentence Alignment", "sec_num": "2.2." }, { "text": "p_{i_1,i_2}(e|f) = \\prod_{i=1}^{i_1-1} \\sum_{j \\notin (j_1..j_2)} p(e_i|f_j) \\times \\prod_{i=i_1}^{i_2} \\sum_{j=j_1}^{j_2} p(e_i|f_j) \\times \\prod_{i=i_2+1}^{I} \\sum_{j \\notin (j_1..j_2)} p(e_i|f_j)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Alignment via Constrained Sentence Alignment", "sec_num": "2.2." }, { "text": "To find the optimal target phrase we interpolate both alignment probabilities and take the pair (i 1 , i 2 ) which gives the highest probability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Alignment via Constrained Sentence Alignment", "sec_num": "2.2." }, { "text": "(i_1, i_2) = \\operatorname{argmax}_{i_1,i_2} \\{ (1-c)\\, p_{i_1,i_2}(f|e) + c\\, p_{i_1,i_2}(e|f) \\}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Alignment via Constrained Sentence Alignment", "sec_num": "2.2." }, { "text": "Actually, we take not only the best translation candidate, but all candidates which are within a given margin of the best one. All candidates are then used in the decoder, where the language model is also available to score the translations. 
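The constrained alignment score and the search over target boundaries described above can be sketched as follows. This is an illustrative toy implementation for one direction, p(f|e); the dictionary-style lexicon, the function names, and the small smoothing constant are our assumptions, not part of the ISL system.

```python
def constrained_score(src, tgt, j1, j2, i1, i2, lex, eps=1e-10):
    """Constrained IBM1 score p_{i1,i2}(f|e): for source words inside
    [j1, j2], sum lexicon probabilities only over target words inside
    [i1, i2]; for source words outside, sum only over target words
    outside the candidate phrase. Position probabilities are 1/l inside
    and 1/(I-l) outside, as in the modified model."""
    I = len(tgt)
    l = i2 - i1 + 1
    score = 1.0
    for j, f in enumerate(src):
        if j1 <= j <= j2:
            positions = list(range(i1, i2 + 1))
            pos_prob = 1.0 / l
        else:
            positions = [i for i in range(I) if not (i1 <= i <= i2)]
            pos_prob = 1.0 / max(I - l, 1)
        score *= pos_prob * sum(lex.get((f, tgt[i]), eps) for i in positions)
    return score

def best_target_phrase(src, tgt, j1, j2, lex):
    """Optimize over all candidate target boundaries (i1, i2)."""
    I = len(tgt)
    candidates = [(i1, i2) for i1 in range(I) for i2 in range(i1, I)]
    return max(candidates,
               key=lambda c: constrained_score(src, tgt, j1, j2, c[0], c[1], lex))

# Toy lexicon and sentence pair (illustrative values only).
lex = {("das", "the"): 0.9, ("haus", "house"): 0.9,
       ("ist", "is"): 0.9, ("klein", "small"): 0.9}
src = ["das", "haus", "ist", "klein"]
tgt = ["the", "house", "is", "small"]
```

The reverse direction p(e|f) can be computed analogously by swapping the roles of source and target, and the two scores interpolated as in the formula above.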
The phrase pairs can be either extracted from the bilingual corpus at decoding time or stored and reused during system tuning. It should also be mentioned that single source words are treated in the same way, i.e. just as phrases of length 1. The target translation can then be one or several words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Alignment via Constrained Sentence Alignment", "sec_num": "2.2." }, { "text": "Most phrase pairs (f ,\u1ebd) = (f j1 ...f j2 , e i1 ...e i2 ) are seen only a few times, even in very large corpora. Therefore, probabilities based on occurrence counts have little discriminative power. In our system we calculate phrase translation probabilities based on a statistical lexicon, i.e. on the word translation probabilities p(f_j|e_i):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Translation Probabilities", "sec_num": "2.3." }, { "text": "p(f |\u1ebd) = \\prod_j \\sum_i p(f_j|e_i).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase Translation Probabilities", "sec_num": "2.3." }, { "text": "The language model used in the decoder is a standard 3-gram language model. We use the SRI language model toolkit [SRI-LM Toolkit] to build language models of different sizes, using the target side of the bilingual data only or using additional monolingual data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Language Model", "sec_num": "2.4." }, { "text": "Different languages have different word order. In the standard word alignment models this is captured by word position models, e.g. absolute positions p(i|j, I, J) in the IBM2 alignment model or relative positions p(i|i prev , I) in the HMM alignment model [Vogel et al. 1996] . We use a simplified relative position model in our SMT decoder.", "cite_spans": [ { "start": 253, "end": 272, "text": "[Vogel et al. 
1996]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Position Alignment Model", "sec_num": "2.5." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(i|i_{prev}, I) = e^{-|i - i_{prev}|/c}", "eq_num": "(1)" } ], "section": "Position Alignment Model", "sec_num": "2.5." }, { "text": "with a suitably chosen constant c. This constant is essentially a scaling factor for the model when combining it with the other models in the decoder.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Position Alignment Model", "sec_num": "2.5." }, { "text": "Source sentence and target sentence are typically of different length. However, when using a large bilingual corpus to collect the sentence length statistics, it becomes clear that the probability distribution p(I|J), where J is the number of words in the source sentence and I is the number of words in the target sentence, is rather flat and therefore does not seem to be very helpful. On the other hand, we observe that the language model typically prefers shorter translations. To compensate for this we use a simple sentence length model, which gives a constant bonus for each word generated. Putting a higher weight on the sentence length model contribution to the overall translation score results in translations which are, on average, longer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Length Model", "sec_num": "2.6." }, { "text": "Statistical machine translation is based on the noisy channel approach:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\hat{e} = \\operatorname{argmax}_e p(e|f) = \\operatorname{argmax}_e p(f|e) p(e)", "eq_num": "(2)" } ], "section": "Decoding", "sec_num": "3." 
}, { "text": "The components are the language model p(e), for which we use a trigram language model, and the translation model p(f |e), which in our case is composed of the word and phrase translations. The argmax denotes the search algorithm, which finds the best target sentence given those models. Applying the language model requires that the previous words are known. This leads to a search organization which constructs the target sentence in a sequential way. However, to incorporate the different word order of different languages, the words in the source sentence have to be covered non-sequentially while the translation is generated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "3." }, { "text": "In the current implementation we allow for phrase-to-phrase translation. Decoding proceeds essentially along the source sentence. At each step, however, the next word or phrase to be translated may be selected starting from all positions within a given look-ahead window from the current position. The decoding process works in two stages: First, the word-to-word and phrase-to-phrase translations and, if available, other specific information like named entity translation tables are used to build a translation lattice. This lattice contains all partial translations as building blocks, from which the complete translation has to be generated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "3." }, { "text": "A standard n-gram language model is then applied to find the best path in this lattice. It is during this search that reordering has to be taken into account, by jumping ahead a few positions, filling in the gap later on. To ensure full coverage of the source sentence each partial translation carries information about the source words already translated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "3." 
}, { "text": "Standard pruning strategies are employed to keep decoding time within reasonable bounds. The ISL decoder allows for flexible pruning, as the language model history, the translated position and the number of generated words can be used individually and in combination in pruning. Details have been described in [Vogel 2003 ].", "cite_spans": [ { "start": 329, "end": 340, "text": "[Vogel 2003", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "3." }, { "text": "The BTEC data consists of typical phrases used in the tourism and medical domain. The sentences are usually short, on average only 6-7 words, and many have similar patterns. Given a test sentence we will often find the same or a very similar sentence in the training corpus. For the 506 sentences in the Chinese-English development test set, 5% of the test sentences were identical to a sentence in the training corpus, 20% of the sentences could be matched with one insertion, deletion, or substitution error only, and another 24% matched with 2 errors. For the close matching sentences the idea is to start from the given translation and to make some simple corrections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Translation Memory Component", "sec_num": "4." }, { "text": "The translation memory works as follows: For each test sentence S f = f 1 ...f J we find the closest matching source sentence S f' = f' 1 ...f' J' in the training corpus. The similarity is measured in terms of edit distance. The translation of S f' , which is S e' = e' 1 ...e' I' , is also extracted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Translation Memory Component", "sec_num": "4." 
}, { "text": "If there is an exact match, we output S e' as the desired translation of S f . For those sentences with one error, we decide what type of operation (substitution, deletion or insertion) is required to produce the correct translation. We also identify the word f in S f and the word f' in S f' that have to be altered. The repair operations allow for multi-word substitutions, deletions, and insertions on the target side.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Translation Memory Component", "sec_num": "4." }, { "text": "Depending on the type of the operation needed, one of the following operations is performed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Translation Memory Component", "sec_num": "4." }, { "text": "i. Find all possible phrase alignments e' in S e' for the word f'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Substitution of f with f :", "sec_num": "1." }, { "text": "ii. Find all possible translations e of word f.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Substitution of f with f :", "sec_num": "1." }, { "text": "iii. Replace e' with e to produce S e . iv. Score the resulting translation (S f , S e ) with the translation and language model. 2. Deletion of word f' from S f' : i. Find all possible phrase alignments e' in S e' for the word f'. ii. Remove e' from S e' to produce S e .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Substitution of f with f :", "sec_num": "1." }, { "text": "iii. Score the resulting translation (S f , S e ) with the translation and language model. iv. Iterate over all e' and choose the best S e as the desired translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Substitution of f with f :", "sec_num": "1." }, { "text": "3. Insertion of word f into S f :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Substitution of f with f :", "sec_num": "1." }, { "text": "i. 
Find all possible translations e for word f.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Substitution of f with f :", "sec_num": "1." }, { "text": "ii. Insert e into a position i in S e to produce S e .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Substitution of f with f :", "sec_num": "1." }, { "text": "iii. Score the resulting translation (S f , S e ) with the translation and language model. iv. Iterate over all translations e and all word positions i in S e and choose the best S e as the desired translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Substitution of f with f :", "sec_num": "1." }, { "text": "To find the target phrase which needs to be repaired, or candidate translations used in the repair operations, the phrase alignment method described in Section 2.2 was used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Substitution of f with f :", "sec_num": "1." }, { "text": "To integrate the results from SMT and the Translation Memory we simply replaced the SMT translation of close matching sentences with the translation produced by the translation memory approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Substitution of f with f :", "sec_num": "1." }, { "text": "Experiments were performed to study the effect of different training data conditions. As in-domain data the BTEC corpus was used, a corpus created at ATR [Takezawa et al. 2002] and extended with translations into different languages by the CSTAR partners. In the small data track, only a part of the BTEC corpus was used. The so-called additional data track allowed for bilingual and monolingual data available from LDC. In the unrestricted data track the full BTEC corpus could be used.", "cite_spans": [ { "start": 155, "end": 177, "text": "[Takezawa et al. 2002]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5." 
}, { "text": "We report translation results using the well-known Bleu [Papineni 2001] and NIST mteval [MTeval 2002] scores. The NIST mteval script version 11a was used to calculate both the NIST and the Bleu score. One peculiar feature of the Bleu metric implementation in the NIST mteval v011a script is the calculation of the reference length, which is used to calculate the length penalty. Whereas the original implementation sums the length of the reference translation which is closest to the length of the system translation, the NIST implementation sums over the length of the shortest reference translation. This leads to very different length penalties in the two metrics. For the Chinese data the reference length for NIST is 3601.7 words, whereas the reference length for Bleu is 2429 words, i.e. about one third shorter. This has, of course, a big effect on the tuning of the system: translations scoring high on the Bleu metric will be much shorter than translations getting high NIST scores.", "cite_spans": [ { "start": 56, "end": 71, "text": "[Papineni 2001]", "ref_id": null }, { "start": 88, "end": 101, "text": "[MTeval 2002]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5.1." }, { "text": "Results are reported for Chinese-to-English and Japanese-to-English translation tasks. Two test sets were used for each language: one development test set (Dev), which was used to tune the parameters of the translation system, and a test set (Test), which was translated using the optimal parameter settings. All test sets were provided by ATR with word segmentation. For evaluation 16 reference translations were used, whereby not all references were created as genuine translations; some are paraphrases. Table 1 gives the details for all four test sets. 
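The effect of the two reference-length conventions on the brevity penalty can be made concrete with a small sketch. The toy lengths and function names below are ours; only the two selection rules (closest reference for original Bleu, shortest reference for the NIST mteval implementation) come from the description above.

```python
import math

def brevity_penalty(sys_len, ref_len):
    # Standard Bleu brevity penalty: no penalty if the system output is at
    # least as long as the reference length, exp(1 - r/c) otherwise.
    return 1.0 if sys_len >= ref_len else math.exp(1.0 - ref_len / sys_len)

def ref_length(ref_lens, sys_len, scheme):
    if scheme == "closest":    # original Bleu: reference closest in length
        return min(ref_lens, key=lambda r: (abs(r - sys_len), r))
    if scheme == "shortest":   # NIST mteval v11a: shortest reference
        return min(ref_lens)
    raise ValueError(scheme)

refs = [5, 8]   # toy reference lengths
c = 7           # toy system translation length
bp_closest = brevity_penalty(c, ref_length(refs, c, "closest"))    # r = 8, penalized
bp_shortest = brevity_penalty(c, ref_length(refs, c, "shortest"))  # r = 5, no penalty
```

With the shortest-reference convention the 7-word output escapes the penalty entirely, which is why tuning against this Bleu variant drifts towards shorter translations.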
The number of unknown words differs depending on the training data and is given in each case below.", "cite_spans": [], "ref_spans": [ { "start": 504, "end": 511, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "The Test Data", "sec_num": "5.2." }, { "text": "The Chinese small data track uses 20,000 sentence pairs, where the Chinese sentences are already word segmented. It has to be assumed that the word segmentation of the training data matches the word segmentation of the test data. In the next sub-section we will see that word segmentation makes a difference and that higher translation quality can be achieved by re-segmenting both training and test data. Table 2 gives the details for the data used in the Chinese small data track evaluation.", "cite_spans": [], "ref_spans": [ { "start": 408, "end": 415, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Chinese Small Data Track", "sec_num": "5.3." }, { "text": "Different setups for the translation system were tested. Results are given in Table 3 . First, the IBM1 lexicons p(f j |e i ) and p(e i |f j ) were used in the phrase alignment step, but the translation probability for the phrase pairs was estimated from the relative frequencies. Next, the phrase translation probability was calculated using the IBM1 lexicon and the HMM lexicon respectively. Each time we see an improvement in translation quality, both when tuned towards high Bleu scores and when tuned towards high NIST scores. Finally, n-best list rescoring with the HMM lexicon gave a small improvement in Bleu score, but none in NIST score. An improvement of about 1.8 in Bleu score and 0.24 in NIST score is statistically significant at the 95% level. That is to say, the improvements from using relative frequencies to using the IBM1 lexicon for scoring the phrase translations, and then again to using the HMM lexicon, lead to statistically significant improvements in Bleu score. 
For NIST score the step from using the IBM1 lexicon to using the HMM lexicon is statistically significant. We tested the translation memory component for sentences which matched exactly or had only one error. There are 130 sentences in the development set for which this condition holds. The parameter setting for the SMT system was chosen to generate translations which were somewhat balanced with respect to NIST and Bleu score, leaning somewhat more towards a high NIST score. Replacing the 130 sentences, which were translated by the translation memory module, did not improve Bleu and NIST scores, as can be seen in Table 4 . There is a small, but not significant drop in both scores. But when the translations of the two methods are compared, in many instances the translation memory (TM) produced translations of better quality. For the unseen test data, translations with parameter settings for high Bleu, high NIST, and a more balanced version were generated and evaluated. Results are given in Table 5 . It turned out that the more balanced parameter setting gave a slightly higher NIST score than the parameter setting which gave the highest NIST score on the development test set, and at the same time a much higher Bleu score. It can be assumed that the length ratio between source sentences and reference translations is somewhat different between the development and the test set. ", "cite_spans": [], "ref_spans": [ { "start": 78, "end": 85, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 1614, "end": 1621, "text": "Table 4", "ref_id": "TABREF4" }, { "start": 1998, "end": 2005, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Chinese Small Data Track", "sec_num": "5.3." }, { "text": "In this data track additional data could be used to improve translation quality. However, this additional data was restricted to corpora which are distributed through LDC. 
All Chinese-English bilingual data available was therefore news data, which is to say, out-of-domain data. The question therefore is whether this data will improve translation quality, or rather harm it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese Additional Data Track", "sec_num": "5.4." }, { "text": "To use the additional data, first of all a re-segmentation of the BTEC training corpus and also of the test data was necessary. Word segmentation is typically based on a word list and perhaps additional word frequency information. It is clear that using the vocabulary of the small BTEC corpus would not be helpful, as this word list is rather small and would not help to find an adequate segmentation of the news corpora. We therefore applied the same word segmentation to the BTEC training and test data, which was also used to preprocess the additional LDC data. The word list used contains about 45,000 words. The statistics for the resulting corpus are shown in Table 6 . It is interesting to notice that after re-segmenting the BTEC data the number of unknown words was reduced significantly, from 160 to 89 for the development set and from 104 to 88 for the test set. To further reduce the number of unknown words, we can use the additional data. Adding just a large out-of-domain corpus will usually not help, but rather result in a degradation in translation quality. We therefore select from the large bilingual Chinese-English corpus only those sentences which contain words and phrases occurring in the test data. More specifically, for each n-gram in the test data which occurs there k times, we select up to 10 * k sentences in the training corpus containing this n-gram. For the development and test set used in the experiments this resulted in a small corpus (NEWS) of about 1 million words, with a vocabulary of about 24K Chinese and 30K English words, respectively. This data was then added to the in-domain data and used to train the translation and language models. 
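The n-gram based sentence selection described above can be sketched as follows. This is an illustrative toy implementation under our own assumptions: whitespace tokenization, n-grams up to length 3, and a per-sentence occurrence count; the function names are ours, not from the original system.

```python
from collections import Counter

def ngrams(tokens, max_n=3):
    """All n-grams of a token list up to length max_n (as a set)."""
    return {tuple(tokens[i:i + n]) for n in range(1, max_n + 1)
            for i in range(len(tokens) - n + 1)}

def select_sentences(test_sentences, corpus, per_occurrence=10):
    """Keep a training sentence pair if it still has quota for at least one
    test-set n-gram: an n-gram seen k times in the test data admits up to
    per_occurrence * k matching sentence pairs."""
    counts = Counter(g for s in test_sentences for g in ngrams(s.split()))
    quota = {g: per_occurrence * k for g, k in counts.items()}
    selected = []
    for src, tgt in corpus:
        hits = [g for g in ngrams(src.split()) if quota.get(g, 0) > 0]
        if hits:
            selected.append((src, tgt))
            for g in hits:
                quota[g] -= 1
    return selected

# Toy example: only sentences sharing n-grams with the test data survive.
test_sents = ["we want a room"]
corpus = [("we want a room", "t1"),
          ("the stock fell sharply", "t2"),
          ("a room please", "t3")]
selected = select_sentences(test_sents, corpus)
```

Out-of-domain sentences with no lexical overlap with the test data ("t2" above) are filtered out, which is the intended bias towards the travel domain.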
To bias more towards the in-domain data we also trained models on a corpus to which the small BTEC corpus was added 3 times, the NEWS corpus only once.", "cite_spans": [], "ref_spans": [ { "start": 658, "end": 665, "text": "Table 6", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Chinese Additional Data Track", "sec_num": "5.4." }, { "text": "The LM 3-gram perplexity for the 1+1 combination was 106.7, whereas for the 3+1 combination it was 100.5, compared to 68.6 when using only the in-domain data for building the language model. This increase in perplexity shows that adding additional data cuts both ways: reducing the number of unknown words, but also increasing the perplexity of the models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese Additional Data Track", "sec_num": "5.4." }, { "text": "Translation results are shown in Table 7 . The re-segmentation alone already gave higher Bleu and NIST scores. However, when adding the out-of-domain data the scores went down, indicating worse translation quality. Only after biasing the models more towards the in-domain data could a small, yet statistically significant improvement be achieved over using the in-domain data alone.", "cite_spans": [], "ref_spans": [ { "start": 33, "end": 40, "text": "Table 7", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Chinese Additional Data Track", "sec_num": "5.4." }, { "text": "Again, three parameter settings were used to translate the unseen test sentences with the system trained on the biased data combination; results are given in Table 8 . When we compare these results with the small data track scores, we see that both the high Bleu score and the high NIST score are higher when adding the out-of-domain data. Again, these improvements are statistically significant. ", "cite_spans": [], "ref_spans": [ { "start": 110, "end": 117, "text": "Table 8", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Chinese Additional Data Track", "sec_num": "5.4." 
}, { "text": "This data condition imposes no restrictions on which data to use for training the translation and language models. The most valuable data is, of course, in domain data. As the BTEC corpus contains more than 160k sentence pairs, we can compare the effect of additional in-domain data to using the additional out-of-domain data. The corpus statistics for the BTEC corpus used in this experiment is given in Table 9 . The interesting numbers here are that the full BTEC corpus leads to fewer unknown words, but when adding the sampled news data, the number of unknown words is the same as in the additional data track.", "cite_spans": [], "ref_spans": [ { "start": 405, "end": 412, "text": "Table 9", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Chinese Unrestricted Data Track", "sec_num": "5.5." }, { "text": "The LM perplexity for the reference translations is, on average, higher than when using only the 20,000 sentences to build the LM, increasing from 68.6 to 72.0, despite eight time as many data. This again indicates that these reference translations have are more varied then when generating genuine translations. For the combined corpus the perplexity is now lower, as the larger BTEC corpus gives a stronger bias towards in-domain data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese Unrestricted Data Track", "sec_num": "5.5." }, { "text": "Here, we see first of all that more in-domain data boosts translation quality. The Bleu score increased by 5 points, i.e. a 10% relative improvement, and the NIST score increased by 0.9, also a 10% relative improvement. An the other side, additional out-of-domain data did not help to improve translation quality. The benefit of having fewer unknown words is lost by moving out-of-domain with the translation and language model. Perhaps reducing the additional corpus to just those few sentences, which contain words not seen in the in-domain training data could help. 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese Unrestricted Data Track", "sec_num": "5.5." }, { "text": "For the Japanese small data track the essential question was how fast good translation could be generated, given that a system for Chinese-to-English, which had similar characteristics in terms of corpus and vocabulary size, had already been build and tuned. So, the two IBM1 lexicons were trained and the language model from the 20k English sentences was built. The data could be used without additional preprocessing. Training the models is a matter of minutes. Therefore, the overall effort was rather small; formatting the reference translations for automatic evaluation was probably the most time consuming part. The first translation runs used the parameter setting which gave highest Bleu and NIST scores for the Chinese small data track situation, when using the IBM1 lexicons for phrase pair extraction and phrase pair scoring. Additional tuning was then performed to see how close the ini-tial translation was already to optimal performance. The results are given in Table 12 . We see that the first translation gave already close to optimal results. Overall the effort to train and tune the Japanese-English translation system was less then half a day. In Table ? ? the results for the unseen test set are given. Results are somewhat lower than the scores obtained on the development data. ", "cite_spans": [], "ref_spans": [ { "start": 977, "end": 985, "text": "Table 12", "ref_id": "TABREF1" }, { "start": 1167, "end": 1174, "text": "Table ?", "ref_id": null } ], "eq_spans": [], "section": "Japanese Small Data Track -An Exercise in Language Portability", "sec_num": "5.6." 
}, { "text": "A new phrase alignment approach has be developed, which is based on finding for a given source phrase the target phrase by optimizing the alignment probability for the entire sentence pair under the restriction that words inside the source phrase align only to word inside the target phrase and words outside of the source phrase align only to words outside of the target phrase. Comparison with previously developed phrase alignment methods has shown that this new approach leads to comparable and even better results, and yet is very simple. A major advantage of this method is that with even using only an IBM1 lexicon, i.e. using only the simplest alignment model, which has the shortest training time, competitive results are possible. It seems likely that other co-occurrence statistics like Dice coefficient, Chi-square or mutual information might lead to similar results. On the other side, however, better lexicons do lead to better phrase alignment and thereby to better translation results. A further advantage is that phrases up to any length can be found when applying the phrase search and alignment during decoding time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary and Future Work", "sec_num": "6." }, { "text": "Future extension will include using higher order word alignment models, like the HMM alignment model or the IBM4 alignment model in the phrase alignment step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary and Future Work", "sec_num": "6." }, { "text": "The translation memory component used in this study was rather simple. There are a number of possibilities how this work could be extended: 1. Allowing more than one mismatch between test sentence and sentence in the training corpus, esp. for longer sentences. 2. Instead of selecting only one of the most similar sentences, selecting the n-best matches and iterate over all of them. 3. 
Using additional information, like parts of speech, for more discriminative matching between sentences. 5. Integrating SMT and translation memory results using better criteria than just the number of errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary and Future Work", "sec_num": "6." }, { "text": "The experiments presented in this paper have shown that out-of-domain data can be used to improve translation quality when only a small domain-specific corpus is available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary and Future Work", "sec_num": "6." }, { "text": "A major problem became apparent in the evaluation with using multiple reference translations, which are not original translations but, at least in part, paraphrases of original translations. This makes the reference translations less typical, as shown by the increased language model perplexity when training the language model on the full BTEC corpus. Also, the wide variability in length of the multiple reference translations, together with the different calculation of the length penalty in the Bleu and NIST scores, results in rather low correlation between these two metrics, and thereby also in low correlation with human evaluation. As typical behavior we observed that the higher the Bleu score, the lower the NIST score, and vice versa.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary and Future Work", "sec_num": "6." }, { "text": "This difference in the implementation of the length penalty calculation has been pointed out to Mark Przybocki, the implementor of the current mteval version, and also to a number of researchers using this script, but it was not considered to be a significant problem. It is clear that this problem arises only with several reference translations and is especially 
severe when the test sentences and therefore the reference translations are very short, as is the case with the BTEC data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "Improved Statistical Alignment Models", "authors": [ { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2000, "venue": "Proceedings of ACL-00", "volume": "", "issue": "", "pages": "440--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Och and Ney 2000] Franz Josef Och and Hermann Ney. Improved Statistical Alignment Models. Proceedings of ACL-00, pp. 440-447, Hong Kong, China. [Papineni 2001] Kishore Papineni, Salim Roukos, Todd Ward and Wei-Jing Zhu. Bleu: a Method for Automatic Evaluation of Machine Translation. Technical Report RC22176 (W0109-022), IBM Research Division, T. J. Watson Research Center.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Toward a broad-coverage bilingual corpus for speech translations of travel conversations in the real world", "authors": [ { "first": "", "middle": [], "last": "Takezawa", "suffix": "" } ], "year": 2002, "venue": "LM Toolkit] SRILM -The SRI Language Modeling Toolkit. SRI Speech Technology and Research Laboratory", "volume": "", "issue": "", "pages": "147--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "[SRI-LM Toolkit] SRILM -The SRI Language Modeling Toolkit. SRI Speech Technology and Research Laboratory. http://www.speech.sri.com/ projects/srilm/ [Takezawa et al. 2002] Toshiyuki Takezawa, Eiichiro Sumita, Fumiaki Sugaya, Hirofumi Yamamoto. Toward a broad-coverage bilingual corpus for speech translations of travel conversations in the real world. Proc. of Third Int. Conf.
on Language Resources and Evaluation (LREC), pp. 147-152, Las Palmas, Spain, May 2002.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "HMM-based Word Alignment in Statistical Translation", "authors": [ { "first": "", "middle": [], "last": "Vogel", "suffix": "" } ], "year": 1996, "venue": "COLING '96: The 16th Int. Conf. on Computational Linguistics", "volume": "", "issue": "", "pages": "836--841", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Vogel et al. 1996] Stephan Vogel, Hermann Ney, and Christoph Tillmann. HMM-based Word Alignment in Statistical Translation. In COLING '96: The 16th Int. Conf. on Computational Linguistics, pp. 836-841, Copenhagen, August 1996.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The CMU Statistical Translation System", "authors": [ { "first": "", "middle": [], "last": "Vogel", "suffix": "" } ], "year": 1998, "venue": "SMT Decoder Dissected: Word Reordering Proc. of International Conference on Natural Language Processing and Knowledge Engineering", "volume": "6", "issue": "", "pages": "2775--2778", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Vogel et al. 2003] Stephan Vogel, Ying Zhang, Fei Huang, Alicia Tribble, Ashish Venugopal, Bing Zhao, Alex Waibel. The CMU Statistical Translation System. Proceedings of MT Summit IX, New Orleans, LA, U.S.A., September 2003. [Vogel 2003] Stephan Vogel. SMT Decoder Dissected: Word Reordering. Proc. of International Conference on Natural Language Processing and Knowledge Engineering (NLP-KE), 2003, Beijing, China. [Wang and Waibel 1998] Yeyi Wang and Alex Waibel. Fast Decoding for Statistical Machine Translation. Proc. ICSLP 98, Vol. 6, pp.
2775-2778, Sydney, Australia, 1998.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A Syntax-based Statistical Translation Model", "authors": [ { "first": "Kenji", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2000, "venue": "Proc. of the 39th Annual Meeting of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Yamada and Knight 2000] Kenji Yamada and Kevin Knight. A Syntax-based Statistical Translation Model. In Proc. of the 39th Annual Meeting of ACL, Nancy, France, 2000.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Integrated Phrase Segmentation and Alignment Model for Statistical Machine Translation", "authors": [ { "first": "Ying", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Waibel", "suffix": "" } ], "year": 2003, "venue": "Proc. of International Conference on Natural Language Processing and Knowledge Engineering", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Zhang 2003] Ying Zhang, Stephan Vogel and Alex Waibel. Integrated Phrase Segmentation and Alignment Model for Statistical Machine Translation. Proc. of International Conference on Natural Language Processing and Knowledge Engineering (NLP-KE), 2003, Beijing, China.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "v. Iterate over all e and e and choose the best S e as the desired translation." }, "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "2. Deletion of f in S f : i. Find the possible phrase alignments e in S e for the word f ." }, "TABREF0": { "num": null, "type_str": "table", "content": "", "html": null, "text": "qu\u00e9 tipo de trabajo est\u00e1s interesado ?
what kind of job are you interested in ? en qu\u00e9 tipo de cosas est\u00e1s interesado ? what kind of things are you interested in ? en qu\u00e9 tipo de excursiones est\u00e1s interesado ? what kind of tour are you interested in ?" }, "TABREF1": { "num": null, "type_str": "table", "content": "
             Chinese        Japanese
             Dev    Test    Dev    Test
Sentences    506    500     506    500
Words        3515   3794    4108   4370
Vocabulary   870    893     954    979
", "html": null, "text": "Translation results for the Chinese small data track." }, "TABREF2": { "num": null, "type_str": "table", "content": "
             CH        EN
Sentences    20,000
Words        182,902   188,935
Vocabulary   7,645     7,181
LM PP        -         68.6
Unk in Dev   160       -
Unk in Test  104       -
", "html": null, "text": "Training and test data statistics Chinese small data track." }, "TABREF3": { "num": null, "type_str": "table", "content": "
                      Opt. Bleu       Opt. NIST
                      Bleu    NIST    Bleu    NIST
IBM1 Lex, Rel Freq    41.6    4.69    36.5    7.58
IBM1 Lex, IBM1 Lex    43.5    6.07    39.5    7.67
HMM Lex, HMM Lex      46.0    5.77    36.8    7.94
- n-best rescoring    46.7    4.87    -       -
", "html": null, "text": "Translation results for the Chinese small data track." }, "TABREF4": { "num": null, "type_str": "table", "content": "
            Bleu   NIST
SMT alone   39.1   7.90
With TM     38.8   7.84
", "html": null, "text": "Effect of using the translation memory component for the Chinese small data track." }, "TABREF6": { "num": null, "type_str": "table", "content": "
            Bleu   NIST
High Bleu   44.6   7.31
High NIST   37.9   8.31
Balanced    41.4   8.34
With TM     36.7   8.16
", "html": null, "text": "Translation results for the Chinese small data track on unseen test data." }, "TABREF7": { "num": null, "type_str": "table", "content": "
             BTEC                 3*BTEC+NEWS
             CH        EN         CH        EN
Sentences    20,000               129,209
Words        175,284   188,935    1.50m     1.65m
Vocabulary   7,617     7,181      25,961    32,658
LM PP        -         68.6       -         100.5
Unk in Dev   89        -          5         -
Unk in Test  88        -          13        -
", "html": null, "text": "Training and test data statistics Chinese additional data track." }, "TABREF8": { "num": null, "type_str": "table", "content": "
                     Opt. Bleu       Opt. NIST
                     Bleu    NIST    Bleu    NIST
Re-segmented         48.7    5.42    38.2    8.16
BTEC + 1m NEWS       44.7    5.06    41.1    6.88
3*BTEC + 1m NEWS     51.0    5.09    39.9    8.33
", "html": null, "text": "Translation results for the development test set in the Chinese additional data track." }, "TABREF9": { "num": null, "type_str": "table", "content": "
            Bleu   NIST
High Bleu   48.5   5.85
High NIST   40.1   8.82
Balanced    43.0   8.22
", "html": null, "text": "Translation results for the unseen test data in the additional data track." }, "TABREF10": { "num": null, "type_str": "table", "content": "
             BTEC                3*BTEC+NEWS
             CH       EN         CH        EN
Sentences    161,307             553,130
Words        1.13m    1.21m      4.36m     4.70m
Vocabulary   12,619   13,358     27,978    36,075
LM PP        -        72.0       -         95.1
Unk in Dev   48       -          5         -
Unk in Test  52       -          13        -
", "html": null, "text": "Training and test data statistics Chinese unrestricted data track." }, "TABREF11": { "num": null, "type_str": "table", "content": "
               Opt. Bleu       Opt. NIST
               Bleu    NIST    Bleu    NIST
BTEC           53.8    6.35    47.2    9.09
3*BTEC+NEWS    53.3    6.63    45.9    9.10
", "html": null, "text": "Translation results for the development test set in the Chinese unrestricted data track." }, "TABREF12": { "num": null, "type_str": "table", "content": "
            Bleu   NIST
High Bleu   57.1   7.60
High NIST   48.6   9.66
Balanced    52.5   9.56
With TM     51.3   9.29
", "html": null, "text": "Translation results for the unseen test data in the unrestricted data track ." }, "TABREF13": { "num": null, "type_str": "table", "content": "
                      Opt. Bleu       Opt. NIST
                      Bleu    NIST    Bleu    NIST
With CE Parameters    48.8    7.07    45.4    9.27
Additional Tuning     50.2    7.38    45.8    9.29
", "html": null, "text": "Translation results for the Japanese small data track development test set, using parameters from optimal Chinese-English translation, and further optimizing for Japanese-English." }, "TABREF14": { "num": null, "type_str": "table", "content": "
            Bleu   NIST
High Bleu   46.3   6.73
High NIST   41.5   8.84
Balanced    43.0   8.06
", "html": null, "text": "Translation results for Japanese-English small data track on unseen test data." } } } }