{ "paper_id": "2005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:22:18.694303Z" }, "title": "The RWTH Phrase-based Statistical Machine Translation System", "authors": [ { "first": "Richard", "middle": [], "last": "Zens", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "postCode": "D-52056", "settlement": "Aachen", "country": "Germany" } }, "email": "zens@cs.rwth-aachen.de" }, { "first": "Oliver", "middle": [], "last": "Bender", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "postCode": "D-52056", "settlement": "Aachen", "country": "Germany" } }, "email": "bender@cs.rwth-aachen.de" }, { "first": "Sa\u0161a", "middle": [], "last": "Hasan", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "postCode": "D-52056", "settlement": "Aachen", "country": "Germany" } }, "email": "hasan@cs.rwth-aachen.de" }, { "first": "Shahram", "middle": [], "last": "Khadivi", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "postCode": "D-52056", "settlement": "Aachen", "country": "Germany" } }, "email": "khadivi@cs.rwth-aachen.de" }, { "first": "Evgeny", "middle": [], "last": "Matusov", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "postCode": "D-52056", "settlement": "Aachen", "country": "Germany" } }, "email": "matusov@cs.rwth-aachen.de" }, { "first": "Jia", "middle": [], "last": "Xu", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "postCode": "D-52056", "settlement": "Aachen", "country": "Germany" } }, "email": "xujia@cs.rwth-aachen.de" }, { "first": "Yuqi", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "postCode": "D-52056", 
"settlement": "Aachen", "country": "Germany" } }, "email": "yzhang@cs.rwth-aachen.de" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "postCode": "D-52056", "settlement": "Aachen", "country": "Germany" } }, "email": "ney@cs.rwth-aachen.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We give an overview of the RWTH phrase-based statistical machine translation system that was used in the evaluation campaign of the International Workshop on Spoken Language Translation 2005. We use a two pass approach. In the first pass, we generate a list of the N best translation candidates. The second pass consists of rescoring and reranking this N-best list. We will give a description of the search algorithm as well as the models that are used in each pass. We participated in the supplied data tracks for manual transcriptions for the following translation directions: Arabic-English, Chinese-English, English-Chinese and Japanese-English. For Japanese-English, we also participated in the C-Star track. In addition, we performed translations of automatic speech recognition output for Chinese-English and Japanese-English. For both language pairs, we translated the single-best ASR hypotheses. Additionally, we translated Chinese ASR lattices.", "pdf_parse": { "paper_id": "2005", "_pdf_hash": "", "abstract": [ { "text": "We give an overview of the RWTH phrase-based statistical machine translation system that was used in the evaluation campaign of the International Workshop on Spoken Language Translation 2005. We use a two pass approach. In the first pass, we generate a list of the N best translation candidates. The second pass consists of rescoring and reranking this N-best list. We will give a description of the search algorithm as well as the models that are used in each pass. 
We participated in the supplied data tracks for manual transcriptions for the following translation directions: Arabic-English, Chinese-English, English-Chinese and Japanese-English. For Japanese-English, we also participated in the C-Star track. In addition, we performed translations of automatic speech recognition output for Chinese-English and Japanese-English. For both language pairs, we translated the single-best ASR hypotheses. Additionally, we translated Chinese ASR lattices.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "We give an overview of the RWTH phrase-based statistical machine translation system that was used in the evaluation campaign of the International Workshop on Spoken Language Translation (IWSLT) 2005.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "We use a two-pass approach. First, we generate a word graph and extract a list of the N best translation candidates. Then, we apply additional models in a rescoring/reranking approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "This work is structured as follows: first, we review the statistical approach to machine translation and introduce the notation that we will use in the later sections. Then, we describe the models and algorithms that are used for generating the N-best lists, i.e., the first pass. In Section 4, we describe the models that are used to rescore and rerank this N-best list, i.e., the second pass. Afterward, we give an overview of the tasks and discuss the experimental results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In statistical machine translation, we are given a source language sentence f_1^J = f_1 ... f_j ... 
f_J, which is to be translated into a target language sentence e_1^I = e_1 ... e_i ... e_I. Among all possible target language sentences, we choose the sentence with the highest probability: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-channel approach to SMT", "sec_num": "1.1." }, { "text": "\u00ea_1^\u00ce = argmax_{I, e_1^I} Pr(e_1^I | f_1^J) (1) = argmax_{I, e_1^I} Pr(e_1^I) \u00b7 Pr(f_1^J | e_1^I) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source-channel approach to SMT", "sec_num": "1.1." }, { "text": "This decomposition into two knowledge sources is known as the source-channel approach to statistical machine translation [1] . It allows an independent modeling of the target language model Pr(e_1^I) and the translation model Pr(f_1^J | e_1^I). The target language model describes the well-formedness of the target language sentence. The translation model links the source language sentence to the target language sentence. The argmax operation denotes the search problem, i.e., the generation of the output sentence in the target language.", "cite_spans": [ { "start": 121, "end": 124, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Source-channel approach to SMT", "sec_num": "1.1." }, { "text": "An alternative to the classical source-channel approach is the direct modeling of the posterior probability Pr(e_1^I | f_1^J). Using a log-linear model [2] , we obtain:", "cite_spans": [ { "start": 153, "end": 156, "text": "[2]", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Log-linear model", "sec_num": "1.2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Pr(e_1^I | f_1^J) = exp( sum_{m=1}^{M} \u03bb_m h_m(e_1^I, f_1^J) ) / sum_{e'_1^{I'}} exp( sum_{m=1}^{M} \u03bb_m h_m(e'_1^{I'}, f_1^J) )", "eq_num": "(3)" } ], "section": "Log-linear model", "sec_num": "1.2." 
}, { "text": "The denominator represents a normalization factor that depends only on the source sentence f J 1 . Therefore, we can omit it during the search process. As a decision rule, we obtain:\u00ea\u00ce", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-linear model", "sec_num": "1.2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 = argmax I,e I 1 M m=1 \u03bb m h m (e I 1 , f J 1 )", "eq_num": "(4)" } ], "section": "Log-linear model", "sec_num": "1.2." }, { "text": "This approach is a generalization of the source-channel approach. It has the advantage that additional models h(\u2022) can be easily integrated into the overall system. The model scaling factors \u03bb M 1 are trained according to the maximum entropy principle, e.g., using the GIS algorithm. Alternatively, one can train them with respect to the final translation quality measured by an error criterion [3] . For the IWSLT evaluation campaign, we optimized the scaling factors with respect to a linear interpolation of WER, PER, BLEU and NIST using the Downhill Simplex algorithm from [4] .", "cite_spans": [ { "start": 395, "end": 398, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 577, "end": 580, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Log-linear model", "sec_num": "1.2." }, { "text": "The basic idea of phrase-based translation is to segment the given source sentence into phrases, then translate each phrase and finally compose the target sentence from these phrase translations. This idea is illustrated in Figure 1 . Formally, we define a segmentation of a given sentence pair (f J", "cite_spans": [], "ref_spans": [ { "start": 224, "end": 232, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Phrase-based approach", "sec_num": "1.3." 
}, { "text": "1 , e I 1 ) into K blocks:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based approach", "sec_num": "1.3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "k \u2192 s k := (i k ; b k , j k ), for k = 1 . . . K.", "eq_num": "(5)" } ], "section": "Phrase-based approach", "sec_num": "1.3." }, { "text": "Here, i k denotes the last position of the k th target phrase; we set i 0 := 0. The pair (b k , j k ) denotes the start and end positions of the source phrase that is aligned to the k th target phrase; we set j 0 := 0. Phrases are defined as nonempty contiguous sequences of words. We constrain the segmentations so that all words in the source and the target sentence are covered by exactly one phrase. Thus, there are no gaps and there is no overlap. For a given sentence pair (f J 1 , e I 1 ) and a given segmentation s K 1 , we define the bilingual phrases as: e k := e i k\u22121 +1 . . . e i k (6) f", "cite_spans": [ { "start": 523, "end": 524, "text": "K", "ref_id": null }, { "start": 595, "end": 598, "text": "(6)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Phrase-based approach", "sec_num": "1.3." }, { "text": "k := f b k . . . f j k (7) i 3 b 2 j 2 b 1 j 1 b 3 j 3 b 4 j 4 = J i 1 i 2 0 = j 0 0 = i 0 I = i 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based approach", "sec_num": "1.3." }, { "text": "source positions target positions Note that the segmentation s K 1 contains the information on the phrase-level reordering. The segmentation s K 1 is introduced as a hidden variable in the translation model. Therefore, it would be theoretically correct to sum over all possible segmentations. In practice, we use the maximum approximation for this sum. 
As a result, the models h(\u2022) depend not only on the sentence pair (f_1^J, e_1^I), but also on the segmentation s_1^K, i.e., we have models h(f_1^J, e_1^I, s_1^K).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based approach", "sec_num": "1.3." }, { "text": "The RWTH phrase-based system supports two alternative search strategies that will be described in this section. Translating a source language word graph. The first search strategy that our system supports takes a source language word graph as input and translates this graph in a monotone way [5] . The input graph can represent different reorderings of the input sentence so that the overall search can generate nonmonotone translations. Using this approach, it is very simple to experiment with various reordering constraints, e.g., the constraints proposed in [6] .", "cite_spans": [ { "start": 293, "end": 296, "text": "[5]", "ref_id": "BIBREF4" }, { "start": 563, "end": 566, "text": "[6]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Search algorithms", "sec_num": "2." }, { "text": "Alternatively, we can use ASR lattices as input and translate them without changing the search algorithm, cf. [7] . A disadvantage when translating lattices with this method is that the search is monotone. To overcome this problem, we extended the monotone search algorithm from [5, 7] so that it is possible to reorder the target phrases. 
We implemented the following idea: while traversing the input graph, a phrase can be skipped and processed later.", "cite_spans": [ { "start": 110, "end": 113, "text": "[7]", "ref_id": "BIBREF6" }, { "start": 279, "end": 282, "text": "[5,", "ref_id": "BIBREF4" }, { "start": 283, "end": 285, "text": "7]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Search algorithms", "sec_num": "2." }, { "text": "Source cardinality synchronous search. For singleword based models, this search strategy is described in [8] . The idea is that the search proceeds synchronously with the cardinality of the already translated source positions. Here, we use a phrase-based version of this idea. To make the search problem feasible, the reorderings are constrained as in [9] .", "cite_spans": [ { "start": 105, "end": 108, "text": "[8]", "ref_id": "BIBREF7" }, { "start": 352, "end": 355, "text": "[9]", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Search algorithms", "sec_num": "2." }, { "text": "Word graphs and N -best lists. The two described search algorithms generate a word graph containing the most likely translation hypotheses. Out of this word graph we extract N -best lists. For more details on word graphs and Nbest list extraction, see [10, 11] .", "cite_spans": [ { "start": 252, "end": 256, "text": "[10,", "ref_id": "BIBREF9" }, { "start": 257, "end": 260, "text": "11]", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Search algorithms", "sec_num": "2." }, { "text": "We use a log-linear combination of several models (also called feature functions). In this section, we will describe the models that are used in the first pass, i.e., during search. This is an improved version of the system described in [12] . 
More specifically, the models are: a phrase translation model, a word-based translation model, a deletion model, word and phrase penalties, a target language model and a reordering model.", "cite_spans": [ { "start": 237, "end": 241, "text": "[12]", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Models used during search", "sec_num": "3." }, { "text": "The phrase-based translation model is the main component of our translation system. The hypotheses are generated by concatenating target language phrases. The pairs of source and corresponding target phrases are extracted from the word-aligned bilingual training corpus. The phrase extraction algorithm is described in detail in [5] . The main idea is to extract phrase pairs that are consistent with the word alignment. Thus, the words of the source phrase are aligned only to words in the target phrase and vice versa. This criterion is identical to the alignment template criterion described in [13] .", "cite_spans": [ { "start": 328, "end": 331, "text": "[5]", "ref_id": "BIBREF4" }, { "start": 597, "end": 601, "text": "[13]", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Phrase-based model", "sec_num": "3.1." }, { "text": "We use relative frequencies to estimate the phrase translation probabilities:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based model", "sec_num": "3.1." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(f\u0303 | \u1ebd) = N(f\u0303, \u1ebd) / N(\u1ebd)", "eq_num": "(8)" } ], "section": "Phrase-based model", "sec_num": "3.1." }, { "text": "Here, the number of co-occurrences of a phrase pair (f\u0303, \u1ebd) that are consistent with the word alignment is denoted as N(f\u0303, \u1ebd). If one occurrence of a target phrase \u1ebd has N > 1 possible translations, each of them contributes to N(f\u0303, \u1ebd) with 1/N. 
The marginal count N(\u1ebd) is the number of occurrences of the target phrase \u1ebd in the training corpus. The resulting feature function is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based model", "sec_num": "3.1." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h_Phr(f_1^J, e_1^I, s_1^K) = log prod_{k=1}^{K} p(f\u0303_k | \u1ebd_k)", "eq_num": "(9)" } ], "section": "Phrase-based model", "sec_num": "3.1." }, { "text": "To obtain a more symmetric model, we use the phrase-based model in both directions p(f\u0303 | \u1ebd) and p(\u1ebd | f\u0303).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based model", "sec_num": "3.1." }, { "text": "We use relative frequencies to estimate the phrase translation probabilities. Most of the longer phrases occur only once in the training corpus. Therefore, pure relative frequencies overestimate the probability of those phrases. To overcome this problem, we use a word-based lexicon model to smooth the phrase translation probabilities. The score of a phrase pair is computed similarly to the IBM model 1, but here, we sum only within a phrase pair and not over the whole target language sentence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-based lexicon model", "sec_num": "3.2." }, { "text": "h_Lex(f_1^J, e_1^I, s_1^K) = log prod_{k=1}^{K} prod_{j=b_k}^{j_k} sum_{i=i_{k-1}+1}^{i_k} p(f_j | e_i) (10)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-based lexicon model", "sec_num": "3.2." }, { "text": "The word translation probabilities p(f|e) are estimated as relative frequencies from the word-aligned training corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-based lexicon model", "sec_num": "3.2." 
}, { "text": "The word-based lexicon model is also used in both directions p(f |e) and p(e|f ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-based lexicon model", "sec_num": "3.2." }, { "text": "The deletion model [14] is designed to penalize hypotheses that miss the translation of a word. For each source word, we check if a target word with a probability higher than a given threshold \u03c4 exists. If not, this word is considered a deletion. The feature simply counts the number of deletions. Last year [15] , we used this model during rescoring only, whereas this year, we integrated a within-phrase variant of the deletion model into the search:", "cite_spans": [ { "start": 19, "end": 23, "text": "[14]", "ref_id": "BIBREF13" }, { "start": 308, "end": 312, "text": "[15]", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Deletion model", "sec_num": "3.3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h Del (f J 1 , e I 1 , s K 1 ) = K k=1 j k j=b k i k i=i k\u22121 +1 [ p(f j |e i ) < \u03c4 ]", "eq_num": "(11)" } ], "section": "Deletion model", "sec_num": "3.3." }, { "text": "The word translation probabilities p(f |e) are the same as for the word-based lexicon model. We use [\u2022] to denote a true or false statement [16] , i.e., the result is 1 if the statement is true, and 0 otherwise. In general, we use the following convention:", "cite_spans": [ { "start": 140, "end": 144, "text": "[16]", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Deletion model", "sec_num": "3.3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "[ C ] = 1, if condition C is true 0, if condition C is false", "eq_num": "(12)" } ], "section": "Deletion model", "sec_num": "3.3." 
}, { "text": "In addition, we use two simple heuristics, namely word penalty and phrase penalty:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word and phrase penalty model", "sec_num": "3.4." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h WP (f J 1 , e I 1 , s K 1 ) = I (13) h PP (f J 1 , e I 1 , s K 1 ) = K", "eq_num": "(14)" } ], "section": "Word and phrase penalty model", "sec_num": "3.4." }, { "text": "These two models affect the average sentence and phrase lengths. The model scaling factors can be adjusted to prefer longer sentences and longer phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word and phrase penalty model", "sec_num": "3.4." }, { "text": "We use the SRI language modeling toolkit [17] to train a standard n-gram language model. The smoothing technique we apply is the modified Kneser-Ney discounting with interpolation. The order of the language model depends on the translation direction. For most tasks, we use a trigram model, except for Chinese-English, where we use a fivegram language model. The resulting feature function is:", "cite_spans": [ { "start": 41, "end": 45, "text": "[17]", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Target language model", "sec_num": "3.5." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h LM (f J 1 , e I 1 , s K 1 ) = log I i=1 p(e i |e i\u22121 i\u2212n+1 )", "eq_num": "(15)" } ], "section": "Target language model", "sec_num": "3.5." }, { "text": "We use a very simple reordering model that is also used in, for instance, [13, 15] . 
It assigns costs based on the jump width:", "cite_spans": [ { "start": 74, "end": 78, "text": "[13,", "ref_id": "BIBREF12" }, { "start": 79, "end": 82, "text": "15]", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Reordering model", "sec_num": "3.6." }, { "text": "h_RM(f_1^J, e_1^I, s_1^K) = sum_{k=1}^{K} |b_k - j_{k-1} - 1| + (J - j_K) (16)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reordering model", "sec_num": "3.6." }, { "text": "The usage of N-best lists in machine translation has several advantages. It alleviates the effects of the huge search space represented in word graphs by using a compact excerpt: the N best hypotheses generated by the system. Especially for small tasks, such as the IWSLT supplied data track, rather small N-best lists are already sufficient to obtain good oracle error rates, i.e., the error rate of the best hypothesis with respect to an error measure (such as WER or BLEU). N-best lists are suitable for easily applying several rescoring techniques because the hypotheses are already fully generated. In comparison, word graph rescoring techniques need specialized tools which traverse the graph appropriately. Additionally, because a node within a word graph allows for many histories, one can apply only local rescoring techniques, whereas for N-best lists, techniques can be used that consider properties of the whole target sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rescoring models", "sec_num": "4." }, { "text": "In the next sections, we will present several rescoring techniques.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rescoring models", "sec_num": "4." }, { "text": "One of the first ideas in rescoring is to use additional language models that were not used in the generation procedure. In our system, we use clustered language models based on regular expressions [18] . 
Each hypothesis is classified by matching it to regular expressions that identify the type of the sentence. Then, a cluster-specific (or sentence-type-specific) language model is interpolated into a global language model to compute the score of the sentence:", "cite_spans": [ { "start": 198, "end": 202, "text": "[18]", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Clustered language models", "sec_num": "4.1." }, { "text": "h_CLM(f_1^J, e_1^I) = log sum_c R_c(e_1^I) [ \u03b1_c p_c(e_1^I) + (1 - \u03b1_c) p_g(e_1^I) ] (17)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustered language models", "sec_num": "4.1." }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustered language models", "sec_num": "4.1." }, { "text": "p_g(e_1^I)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustered language models", "sec_num": "4.1." }, { "text": "is the global language model, p_c(e_1^I) the cluster-specific language model, and R_c(e_1^I) denotes the true-or-false statement (cf. Equation 12) which is 1 if the c-th regular expression R_c(\u2022) matches the target sentence e_1^I and 0 otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustered language models", "sec_num": "4.1." }, { "text": "IBM model 1 rescoring rates the quality of a sentence by using the probabilities of one of the simplest single-word based translation models:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "IBM model 1", "sec_num": "4.2." }, { "text": "h_IBM1(f_1^J, e_1^I) = log ( 1 / (I + 1)^J \u00b7 prod_{j=1}^{J} sum_{i=0}^{I} p(f_j | e_i) ) (18)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "IBM model 1", "sec_num": "4.2." 
}, { "text": "Despite its simplicity, this model achieves good improvements [14] .", "cite_spans": [ { "start": 62, "end": 66, "text": "[14]", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "IBM model 1", "sec_num": "4.2." }, { "text": "During the IBM model 1 rescoring step, we make use of another rescoring technique that benefits from the IBM model 1 lexical probabilities:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "IBM1 deletion model", "sec_num": "4.3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h Del (f J 1 , e I 1 ) = J j=1 I i=0 [ p(f j |e i ) < \u03c4 ]", "eq_num": "(19)" } ], "section": "IBM1 deletion model", "sec_num": "4.3." }, { "text": "We call this the IBM1 deletion model. It counts all source words whose lexical probability given each target word is below a threshold \u03c4 . In the experiments, \u03c4 was chosen between 10 \u22121 and 10 \u22124 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "IBM1 deletion model", "sec_num": "4.3." }, { "text": "The next step after IBM model 1 rescoring is HMM rescoring. We use the HMM to compute the log-likelihood of a 2 The clusters are disjunct, thus only one regular expression matches.", "cite_spans": [ { "start": 110, "end": 111, "text": "2", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Hidden Markov alignment model", "sec_num": "4.4." }, { "text": "sentence pair (f J 1 , e I 1 ):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hidden Markov alignment model", "sec_num": "4.4." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h HMM (f J 1 , e I 1 ) = log a J 1 J j=1 p(a j |a j\u22121 , I) \u2022 p(f j |e a j )", "eq_num": "(20)" } ], "section": "Hidden Markov alignment model", "sec_num": "4.4." 
}, { "text": "In our experiments, we use a refined alignment probability p(a j \u2212 a j\u22121 |G(e aj ), I) that conditions the jump widths of the alignment positions a j \u2212 a j\u22121 on the word class G(e aj ). This is the so-called homogeneous HMM [19] .", "cite_spans": [ { "start": 224, "end": 228, "text": "[19]", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Hidden Markov alignment model", "sec_num": "4.4." }, { "text": "Several word penalties are used in the rescoring step:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word penalties", "sec_num": "4.5." }, { "text": "h WP (f J 1 , e I 1 ) = \uf8f1 \uf8f2 \uf8f3 I (a) I/J (b) 2|I \u2212 J|/(I + J) (c) (21)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word penalties", "sec_num": "4.5." }, { "text": "The word penalties are heuristics that affect the generated hypothesis length. In general, sentences that are too short should be avoided.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word penalties", "sec_num": "4.5." }, { "text": "In the experiments on coupling speech recognition and machine translation, we used the phrase-based MT system described in Section 2 to translate ASR lattices. In addition to the models described in Section 3, we use the acoustic model and the source language model of the ASR system in the loglinear model. These models are integrated into the search and the scaling factors are also optimized.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrating ASR and MT", "sec_num": "5." }, { "text": "A significant obstacle for integrating speech recognition and translation is the mismatch between the vocabularies of the ASR and MT system. For the Chinese-English task, the number of out-of-vocabulary (OOV) words was rather high. Ideally, the vocabulary of the recognition system should be a subset of the translation system source vocabulary. 
In the IWSLT evaluation, we had no control over the recognition experiments. For this reason, the reported improvements might have been larger with a proper handling of the vocabularies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrating ASR and MT", "sec_num": "5." }, { "text": "The experiments were carried out on the Basic Travel Expression Corpus (BTEC) task [20] . This is a multilingual speech corpus which contains tourism-related sentences similar to those that are found in phrase books. The corpus statistics are shown in Table 1 . For the supplied data track, a training corpus of 20 000 sentences and two test sets (C-Star'03 and IWSLT'04) were made available for each language pair. As additional training resources for the C-Star track, we used the full BTEC for Japanese-English and the Spoken Language DataBase (SLDB) [21] , which consists of transcriptions of spoken dialogs in the domain of hotel reservations 3 . For the Japanese-English supplied data track, the number of OOVs in the IWSLT'05 test set is rather high, both in comparison with the C-Star'03 and IWSLT'04 test sets and in comparison with the number of OOVs for the other language pairs. As for any data-driven approach, the performance of our system deteriorates due to the high number of OOVs. Using the additional corpora in the C-Star track, we are able to reduce the number of OOVs to a noncritical number.", "cite_spans": [ { "start": 83, "end": 87, "text": "[20]", "ref_id": "BIBREF19" }, { "start": 549, "end": 553, "text": "[21]", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 252, "end": 259, "text": "Table 1", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Tasks and corpora", "sec_num": "6." }, { "text": "As the BTEC is a rather clean corpus, the preprocessing consisted mainly of tokenization, i.e., separating punctuation marks from words. Additionally, we replaced contractions such as it's or I'm in the English corpus and we removed the case information. 
For Arabic, we removed the diacritics and we split common prefixes: Al, w, f, b, l. There was no special preprocessing for the Chinese and the Japanese training corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tasks and corpora", "sec_num": "6." }, { "text": "We used the C-Star'03 corpus as development set to optimize the system, for instance, the model scaling factors and the GIZA++ [19] parameter settings. The IWSLT'04 test set was used as a blind test corpus. After the optimization, we added the C-Star'03 and the IWSLT'04 test sets to the training corpus and retrained the whole system.", "cite_spans": [ { "start": 127, "end": 131, "text": "[19]", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Tasks and corpora", "sec_num": "6." }, { "text": "We performed speech translation experiments on the Chinese-English and Japanese-English supplied data tracks. For Japanese-English we translated the single-best ASR hypotheses only, whereas for Chinese-English we also translated ASR lattices. The preprocessing and postprocessing steps are the same as for text translation. Table 2 contains the Chinese ASR word lattice statistics for the three test sets. The ASR WER and the graph error rate (GER) were measured at the word level (and not at the character level). The GER is the minimum WER among all paths through the lattice.", "cite_spans": [], "ref_spans": [ { "start": 324, "end": 331, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Tasks and corpora", "sec_num": "6." }, { "text": "The automatic evaluation criteria are computed using the IWSLT 2005 evaluation server. For all the experiments, we report the two accuracy measures BLEU [22] and NIST [23] as well as the two error rates WER and PER. For the primary submissions, we also report the two accuracy measures Meteor [24] and GTM [25] . 
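The two word-level error rates reported throughout this section can be sketched as follows. This is an illustrative implementation only: WER as the Levenshtein distance divided by the reference length, and PER in one common bag-of-words formulation; the exact computation of the evaluation server may differ.

```python
# Sketch of the two error rates: WER (word-level edit distance over the
# reference length) and PER (its position-independent counterpart).
from collections import Counter

def wer(ref, hyp):
    """Word error rate: word-level edit distance(ref, hyp) / len(ref)."""
    d = list(range(len(hyp) + 1))          # row for the empty ref prefix
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = min(d[j] + 1,            # deletion
                      d[j - 1] + 1,        # insertion
                      prev + (r != h))     # substitution / match
            prev, d[j] = d[j], cur
    return d[-1] / len(ref)

def per(ref, hyp):
    """Position-independent error rate: ignores word order entirely."""
    matches = sum((Counter(ref) & Counter(hyp)).values())
    return (max(len(ref), len(hyp)) - matches) / len(ref)

ref = "would you like coffee or tea".split()
hyp = "would you like tea or coffee".split()
print(wer(ref, hyp))  # 2/6: two words are in the wrong position
print(per(ref, hyp))  # 0.0: the bags of words match exactly
```

The example shows why both are reported: a reordering error is penalized by WER but invisible to PER.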
All those criteria are computed with respect to multiple references (with the exception of English-Chinese where only one reference is available).", "cite_spans": [ { "start": 153, "end": 157, "text": "[22]", "ref_id": "BIBREF21" }, { "start": 167, "end": 171, "text": "[23]", "ref_id": "BIBREF22" }, { "start": 293, "end": 297, "text": "[24]", "ref_id": "BIBREF23" }, { "start": 306, "end": 310, "text": "[25]", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental results", "sec_num": "7." }, { "text": "Research Laboratories, Kyoto, Japan. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental results", "sec_num": "7." }, { "text": "The translation results of the RWTH primary submissions are summarized in Table 3 . Note that for English-Chinese, only one reference was used. Therefore the scores are in a different range.", "cite_spans": [], "ref_spans": [ { "start": 74, "end": 81, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Primary submissions", "sec_num": "7.1." }, { "text": "In Table 4 , we compare the translation performance of the RWTH 2004 system [15] and our current system. The evaluation is done on the IWSLT'04 test set for the supplied data track using the IWSLT 2005 evaluation server. Note that the reported numbers for the 2004 system differ slightly from the numbers in [15] due to a somewhat different computation. We observe significant improvements for all evaluation criteria and for both language pairs. For the Chinese-English system, for instance, the BLEU score increases by 4.9% and the WER decreases by 5%. Similar improvements are obtained for the Japanese-English system. In Table 5 , we present some translation examples for Japanese-English. As already mentioned in the previous section, our data-driven approach suffers from the high number of OOVs for the supplied data track. This becomes apparent when looking at the translation hypotheses. 
Furthermore, the incorporation of additional training data improves the translation quality significantly, not only in terms of the official results (cf. Table 3 ) but also when considering the examples in Table 5 . In all three examples, the C-Star data track system is able to produce one of the reference translations. On the other hand, the output of the supplied data track system is of much lower quality. In the first example, we see the effect of a single unknown word. In the second example, the word choice is more or less correct, but the fluency of the output is very poor. The translation in the final example is entirely incomprehensible for the supplied data track system.", "cite_spans": [ { "start": 76, "end": 80, "text": "[15]", "ref_id": "BIBREF14" }, { "start": 308, "end": 312, "text": "[15]", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 4", "ref_id": "TABREF2" }, { "start": 625, "end": 632, "text": "Table 5", "ref_id": "TABREF5" }, { "start": 1051, "end": 1058, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 1103, "end": 1110, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results for text input", "sec_num": "7.2." }, { "text": "The effects of the N -best list rescoring for the IWSLT'04 test set are summarized in Table 6 . On the development set (C-Star'03), which was used to optimize the model scaling factors, all models gradually help to enhance the overall performance of the system, e.g., BLEU is improved from 45.5% to 47.4%. 
For the IWSLT'04 blind test set, the results are not as smooth, but the overall system (using all models described in Section 4) still achieves improvements in all evaluation criteria. In Table 7 , we show some examples where the impact of the rescoring models can be seen.", "cite_spans": [], "ref_spans": [ { "start": 86, "end": 93, "text": "Table 6", "ref_id": "TABREF6" }, { "start": 476, "end": 483, "text": "Table 7", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Results for text input", "sec_num": "7.2." }, { "text": "The translation results for the IWSLT'05 test set for ASR input in the Chinese-English supplied data track are summarized in Table 8 .", "cite_spans": [], "ref_spans": [ { "start": 118, "end": 125, "text": "Table 8", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Results for ASR input", "sec_num": "7.3." }, { "text": "We report the results for the two search strategies described in Section 2. Using the first strategy (Graph), we are able to translate ASR lattices. We observe significant improvements in translation quality over the translations of the single-best (1-Best) recognition results. This is true for the monotone search (Mon) as well as for the version which allows for reordering of target phrases (Skip). The improvements are consistent among all evaluation criteria. Using the second search strategy (SCSS), we are limited to the single-best ASR hypotheses as input. This is the same system that is used to translate the manual transcriptions. Despite the limitation to the single-best hypotheses, this system performs best in terms of the automatic evaluation measures (except for the NIST score).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results for ASR input", "sec_num": "7.3." }, { "text": "The RWTH Chinese-English primary systems for ASR did not include rescoring. After the evaluation, we applied the rescoring techniques (described in Section 4) to the primary system. 
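The rescoring pass can be sketched as a log-linear reranking of the N-best list. The sentences, model scores, and weights below are invented for illustration; only the three word-penalty features follow Eq. (21), with target length I and source length J:

```python
# Sketch: second-pass N-best rescoring with log-linearly combined features.
# Feature values and weights are illustrative, not tuned system parameters.

def word_penalties(I, J):
    """The three heuristic word-penalty features of Eq. (21)."""
    return [float(I), I / J, 2.0 * abs(I - J) / (I + J)]

def rescore(nbest, weights):
    """Return the hypothesis with the highest weighted feature sum."""
    return max(nbest,
               key=lambda hyp: sum(w * f for w, f in zip(weights, hyp[1])))[0]

J = 4  # length of the (hypothetical) source sentence
nbest = []
for model_logprob, sentence in [(-2.0, "your coffee or tea"),
                                (-2.5, "would you like coffee or tea")]:
    I = len(sentence.split())
    nbest.append((sentence, [model_logprob] + word_penalties(I, J)))

# A positive weight on the plain word penalty I rewards longer hypotheses,
# counteracting the bias towards overly short output.
weights = [1.0, 0.3, 0.0, 0.0]
print(rescore(nbest, weights))  # would you like coffee or tea
```

With the length reward switched off (weights [1.0, 0.0, 0.0, 0.0]), the shorter first-pass candidate would win instead.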
The improvements from rescoring are similar to those for the text system, e.g., 1.9% for the BLEU score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results for ASR input", "sec_num": "7.3." }, { "text": "Even though our primary system did not use lattices, a subjective comparison of the two systems showed that translating lattices had positive effects for a large number of sentences. Recognition errors that occur in the single-best ASR hypotheses are often corrected when lattices are used. Some translation examples for improvements with lattices are shown in Table 9 . ", "cite_spans": [], "ref_spans": [ { "start": 353, "end": 360, "text": "Table 9", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Results for ASR input", "sec_num": "7.3." }, { "text": "We have described the RWTH phrase-based statistical machine translation system that was used in the evaluation campaign of the IWSLT 2005. We use a two-pass approach. In the first pass, we use a dynamic programming beam search algorithm to generate an N -best list. The second pass consists of rescoring and reranking this N -best list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8." }, { "text": "One important advantage of our data-driven machine translation systems is that virtually the same system can be used for the different translation directions. Only a marginal portion of the overall performance can be attributed to language-specific methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8." }, { "text": "We have shown significant improvements compared to the RWTH system of 2004 [15] .", "cite_spans": [ { "start": 75, "end": 79, "text": "[15]", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8."
}, { "text": "We have shown that the translation of ASR lattices can yield significant improvements over the translation of the ASR single-best hypotheses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8." }, { "text": "The notational convention will be as follows: we use the symbol P r(\u2022) to denote general probability distributions with (nearly) no specific assumptions. In contrast, for model-based probability distributions, we use the generic symbol p(\u2022).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The Japanese-English training corpora (BTEC, SLDB) that we used in the C-Star track were kindly provided by ATR Spoken Language Translation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was partly funded by the DFG (Deutsche Forschungsgemeinschaft) under the grant NE572/5-1, project \"Statistische Text\u00fcbersetzung\" and by the European Union under the integrated project TC-Star (Technology and Corpora for Speech to Speech Translation, IST-2002-FP6-506738, http://www.tc-star.org).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": "9." 
} ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A statistical approach to machine translation", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "J", "middle": [], "last": "Cocke", "suffix": "" }, { "first": "S", "middle": [ "A" ], "last": "Della Pietra", "suffix": "" }, { "first": "V", "middle": [ "J" ], "last": "Della Pietra", "suffix": "" }, { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "J", "middle": [ "D" ], "last": "Lafferty", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mercer", "suffix": "" }, { "first": "P", "middle": [ "S" ], "last": "Roossin", "suffix": "" } ], "year": 1990, "venue": "Computational Linguistics", "volume": "16", "issue": "2", "pages": "79--85", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. F. Brown, J. Cocke, S. A. Della Pietra, V. J. Della Pietra, F. Jelinek, J. D. Lafferty, R. L. Mercer, and P. S. Roossin, \"A statistical approach to machine translation,\" Computational Linguistics, vol. 16, no. 2, pp. 79-85, June 1990.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Discriminative training and maximum entropy models for statistical machine translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2002, "venue": "Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "295--302", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. Och and H. Ney, \"Discriminative training and maximum entropy models for statistical machine translation,\" in Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), Philadelphia, PA, July 2002, pp. 
295-302.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Minimum error rate training in statistical machine translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "Proc. of the 41th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. Och, \"Minimum error rate training in statistical machine translation,\" in Proc. of the 41th Annual Meeting of the Asso- ciation for Computational Linguistics (ACL), Sapporo, Japan, July 2003, pp. 160-167.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Flannery, Numerical Recipes in C++", "authors": [ { "first": "W", "middle": [ "H" ], "last": "Press", "suffix": "" }, { "first": "S", "middle": [ "A" ], "last": "Teukolsky", "suffix": "" }, { "first": "W", "middle": [ "T" ], "last": "Vetterling", "suffix": "" }, { "first": "B", "middle": [ "P" ], "last": "", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flan- nery, Numerical Recipes in C++. Cambridge, UK: Cam- bridge University Press, 2002.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Phrase-based statistical machine translation", "authors": [ { "first": "R", "middle": [], "last": "Zens", "suffix": "" }, { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2002, "venue": "25th German Conf. on Artificial Intelligence (KI2002), ser. Lecture Notes in Artificial Intelligence (LNAI), M. Jarke", "volume": "2479", "issue": "", "pages": "18--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Zens, F. J. Och, and H. Ney, \"Phrase-based statistical ma- chine translation,\" in 25th German Conf. 
on Artificial Intel- ligence (KI2002), ser. Lecture Notes in Artificial Intelligence (LNAI), M. Jarke, J. Koehler, and G. Lakemeyer, Eds., vol. 2479. Aachen, Germany: Springer Verlag, September 2002, pp. 18-32.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Novel reordering approaches in phrase-based statistical machine translation", "authors": [ { "first": "S", "middle": [], "last": "Kanthak", "suffix": "" }, { "first": "D", "middle": [], "last": "Vilar", "suffix": "" }, { "first": "E", "middle": [], "last": "Matusov", "suffix": "" }, { "first": "R", "middle": [], "last": "Zens", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2005, "venue": "43rd Annual Meeting of the Assoc. for Computational Linguistics: Proc. Workshop on Building and Using Parallel Texts: Data-Driven Machine Translation and Beyond", "volume": "", "issue": "", "pages": "167--174", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Kanthak, D. Vilar, E. Matusov, R. Zens, and H. Ney, \"Novel reordering approaches in phrase-based statistical ma- chine translation,\" in 43rd Annual Meeting of the Assoc. for Computational Linguistics: Proc. Workshop on Building and Using Parallel Texts: Data-Driven Machine Translation and Beyond, Ann Arbor, MI, June 2005, pp. 167-174.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Phrase-based translation of speech recognizer word lattices using loglinear model combination", "authors": [ { "first": "E", "middle": [], "last": "Matusov", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2005, "venue": "Proc. IEEE Automatic Speech Recognition and Understanding Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Matusov and H. Ney, \"Phrase-based translation of speech recognizer word lattices using loglinear model combination,\" in Proc. 
IEEE Automatic Speech Recognition and Under- standing Workshop, Cancun, Mexico, Nov/Dec 2005, to ap- pear.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Word reordering and a dynamic programming beam search algorithm for statistical machine translation", "authors": [ { "first": "C", "middle": [], "last": "Tillmann", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "97--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Tillmann and H. Ney, \"Word reordering and a dynamic programming beam search algorithm for statistical machine translation,\" Computational Linguistics, vol. 29, no. 1, pp. 97- 133, March 2003.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Reordering constraints for phrase-based statistical machine translation", "authors": [ { "first": "R", "middle": [], "last": "Zens", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" }, { "first": "T", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "E", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2004, "venue": "COLING '04: The 20th Int. Conf. on Computational Linguistics", "volume": "", "issue": "", "pages": "205--211", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Zens, H. Ney, T. Watanabe, and E. Sumita, \"Reordering constraints for phrase-based statistical machine translation,\" in COLING '04: The 20th Int. Conf. on Computational Lin- guistics, Geneva, Switzerland, August 2004, pp. 205-211.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Word graphs for statistical machine translation", "authors": [ { "first": "R", "middle": [], "last": "Zens", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2005, "venue": "43rd Annual Meeting of the Assoc. for Computational Linguistics: Proc. 
Workshop on Building and Using Parallel Texts: Data-Driven Machine Translation and Beyond", "volume": "", "issue": "", "pages": "191--198", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Zens and H. Ney, \"Word graphs for statistical machine translation,\" in 43rd Annual Meeting of the Assoc. for Com- putational Linguistics: Proc. Workshop on Building and Us- ing Parallel Texts: Data-Driven Machine Translation and Be- yond, Ann Arbor, MI, June 2005, pp. 191-198.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Generation of word graphs in statistical machine translation", "authors": [ { "first": "N", "middle": [], "last": "Ueffing", "suffix": "" }, { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2002, "venue": "Proc. of the Conf. on Empirical Methods for Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "156--163", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Ueffing, F. J. Och, and H. Ney, \"Generation of word graphs in statistical machine translation,\" in Proc. of the Conf. on Em- pirical Methods for Natural Language Processing (EMNLP), Philadelphia, PA, July 2002, pp. 156-163.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Improvements in phrase-based statistical machine translation", "authors": [ { "first": "R", "middle": [], "last": "Zens", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Proc. of the Human Language Technology Conf. (HLT-NAACL)", "volume": "", "issue": "", "pages": "257--264", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Zens and H. Ney, \"Improvements in phrase-based statis- tical machine translation,\" in Proc. of the Human Language Technology Conf. (HLT-NAACL), Boston, MA, May 2004, pp. 
257-264.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Improved alignment models for statistical machine translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "C", "middle": [], "last": "Tillmann", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 1999, "venue": "Proc. Joint SIG-DAT Conf. on Empirical Methods in Natural Language Processing and Very Large Corpora", "volume": "", "issue": "", "pages": "20--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. Och, C. Tillmann, and H. Ney, \"Improved alignment models for statistical machine translation,\" in Proc. Joint SIG- DAT Conf. on Empirical Methods in Natural Language Pro- cessing and Very Large Corpora, University of Maryland, College Park, MD, June 1999, pp. 20-28.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Syntax for statistical machine translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "D", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "S", "middle": [], "last": "Khudanpur", "suffix": "" }, { "first": "A", "middle": [], "last": "Sarkar", "suffix": "" }, { "first": "K", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "A", "middle": [], "last": "Fraser", "suffix": "" }, { "first": "S", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "L", "middle": [], "last": "Shen", "suffix": "" }, { "first": "D", "middle": [], "last": "Smith", "suffix": "" }, { "first": "K", "middle": [], "last": "Eng", "suffix": "" }, { "first": "V", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Z", "middle": [], "last": "Jin", "suffix": "" }, { "first": "D", "middle": [], "last": "Radev", "suffix": "" } ], "year": 2003, "venue": "Johns Hopkins University 2003 Summer Workshop on Language Engineering", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. Och, D. 
Gildea, S. Khudanpur, A. Sarkar, K. Yamada, A. Fraser, S. Kumar, L. Shen, D. Smith, K. Eng, V. Jain, Z. Jin, and D. Radev, \"Syntax for statistical machine trans- lation,\" Johns Hopkins University 2003 Summer Workshop on Language Engineering, Center for Language and Speech Processing, Baltimore, MD, Tech. Rep., August 2003.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Alignment Templates: the RWTH SMT System", "authors": [ { "first": "O", "middle": [], "last": "Bender", "suffix": "" }, { "first": "R", "middle": [], "last": "Zens", "suffix": "" }, { "first": "E", "middle": [], "last": "Matusov", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Proc. of the Int. Workshop on Spoken Language Translation (IWSLT)", "volume": "", "issue": "", "pages": "79--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "O. Bender, R. Zens, E. Matusov, and H. Ney, \"Alignment Tem- plates: the RWTH SMT System,\" in Proc. of the Int. Work- shop on Spoken Language Translation (IWSLT), Kyoto, Japan, September 2004, pp. 79-84.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Concrete Mathematics", "authors": [ { "first": "R", "middle": [ "L" ], "last": "Graham", "suffix": "" }, { "first": "D", "middle": [ "E" ], "last": "Knuth", "suffix": "" }, { "first": "O", "middle": [], "last": "Patashnik", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. L. Graham, D. E. Knuth, and O. Patashnik, Concrete Math- ematics, 2nd ed. Reading, Mass.: Addison-Wesley Publish- ing Company, 1994.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "SRILM -an extensible language modeling toolkit", "authors": [ { "first": "A", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2002, "venue": "Proc. Int. Conf. 
on Spoken Language Processing", "volume": "2", "issue": "", "pages": "901--904", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Stolcke, \"SRILM -an extensible language modeling toolkit,\" in Proc. Int. Conf. on Spoken Language Processing, vol. 2, Denver, CO, 2002, pp. 901-904.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Clustered language models based on regular expressions for SMT", "authors": [ { "first": "S", "middle": [], "last": "Hasan", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2005, "venue": "Proc. of the 10th Annual Conf. of the European Association for Machine Translation (EAMT)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Hasan and H. Ney, \"Clustered language models based on regular expressions for SMT,\" in Proc. of the 10th Annual Conf. of the European Association for Machine Translation (EAMT), Budapest, Hungary, May 2005.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. Och and H. Ney, \"A systematic comparison of vari- ous statistical alignment models,\" Computational Linguistics, vol. 29, no. 1, pp. 
19-51, March 2003.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Toward a broad-coverage bilingual corpus for speech translation of travel conversations in the real world", "authors": [ { "first": "T", "middle": [], "last": "Takezawa", "suffix": "" }, { "first": "E", "middle": [], "last": "Sumita", "suffix": "" }, { "first": "F", "middle": [], "last": "Sugaya", "suffix": "" }, { "first": "H", "middle": [], "last": "Yamamoto", "suffix": "" }, { "first": "S", "middle": [], "last": "Yamamoto", "suffix": "" } ], "year": 2002, "venue": "Proc. of the Third Int. Conf. on Language Resources and Evaluation (LREC)", "volume": "", "issue": "", "pages": "147--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Takezawa, E. Sumita, F. Sugaya, H. Yamamoto, and S. Yamamoto, \"Toward a broad-coverage bilingual corpus for speech translation of travel conversations in the real world,\" in Proc. of the Third Int. Conf. on Language Resources and Evaluation (LREC), Las Palmas, Spain, May 2002, pp. 147- 152.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A speech and language database for speech translation research", "authors": [ { "first": "T", "middle": [], "last": "Morimoto", "suffix": "" }, { "first": "N", "middle": [], "last": "Uratani", "suffix": "" }, { "first": "T", "middle": [], "last": "Takezawa", "suffix": "" }, { "first": "O", "middle": [], "last": "Furuse", "suffix": "" }, { "first": "Y", "middle": [], "last": "Sobashima", "suffix": "" }, { "first": "H", "middle": [], "last": "Iida", "suffix": "" }, { "first": "A", "middle": [], "last": "Nakamura", "suffix": "" }, { "first": "Y", "middle": [], "last": "Sagisaka", "suffix": "" }, { "first": "N", "middle": [], "last": "Higuchi", "suffix": "" }, { "first": "Y", "middle": [], "last": "Yamazaki", "suffix": "" } ], "year": 1994, "venue": "Proc. of the 3rd Int. Conf. 
on Spoken Language Processing (ICSLP'94)", "volume": "", "issue": "", "pages": "1791--1794", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Morimoto, N. Uratani, T. Takezawa, O. Furuse, Y. Sobashima, H. Iida, A. Nakamura, Y. Sagisaka, N. Higuchi, and Y. Yamazaki, \"A speech and language database for speech translation research,\" in Proc. of the 3rd Int. Conf. on Spo- ken Language Processing (ICSLP'94), Yokohama, Japan, September 1994, pp. 1791-1794.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "K", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "T", "middle": [], "last": "Ward", "suffix": "" }, { "first": "W.-J", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, \"Bleu: a method for automatic evaluation of machine translation,\" in Proc. of the 40th Annual Meeting of the Association for Com- putational Linguistics (ACL), Philadelphia, PA, July 2002, pp. 311-318.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Automatic evaluation of machine translation quality using n-gram co-occurrence statistics", "authors": [ { "first": "G", "middle": [], "last": "Doddington", "suffix": "" } ], "year": 2002, "venue": "Proc. ARPA Workshop on Human Language Technology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Doddington, \"Automatic evaluation of machine translation quality using n-gram co-occurrence statistics,\" in Proc. 
ARPA Workshop on Human Language Technology, 2002.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments", "authors": [ { "first": "S", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "A", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2005, "venue": "43rd Annual Meeting of the Assoc. for Computational Linguistics: Proc. Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Banerjee and A. Lavie, \"METEOR: An automatic met- ric for MT evaluation with improved correlation with human judgments,\" in 43rd Annual Meeting of the Assoc. for Compu- tational Linguistics: Proc. Workshop on Intrinsic and Extrin- sic Evaluation Measures for MT and/or Summarization, Ann Arbor, MI, June 2005.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Evaluation of machine translation and its evaluation", "authors": [ { "first": "J", "middle": [ "P" ], "last": "Turian", "suffix": "" }, { "first": "L", "middle": [], "last": "Shen", "suffix": "" }, { "first": "I", "middle": [ "D" ], "last": "Melamed", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. P. Turian, L. Shen, and I. D. Melamed, \"Evaluation of ma- chine translation and its evaluation,\" Computer Science De- partment, New York University, Tech. Rep. Proteus technical report 03-005, 2003.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "Illustration of the phrase segmentation." }, "TABREF1": { "html": null, "type_str": "table", "content": "
Test SetWER [%] GER [%] Density
C-Star'0341.416.913
IWSLT'0444.520.213
IWSLT'0542.018.214
", "text": "Statistics for the Chinese ASR lattices of the three test sets.", "num": null }, "TABREF2": { "html": null, "type_str": "table", "content": "
Translation System BLEU NIST WER PER
Direction[%][%][%]
Chin.-Engl. 200440.48.5952.4 42.2
200546.38.7347.4 39.7
Jap.-Engl.200444.89.4150.0 37.7
200549.89.5246.5 36.8
", "text": "Progress over time: comparison of the RWTH systems of the years 2004 and 2005 for the supplied data track on the IWSLT'04 test set.", "num": null }, "TABREF3": { "html": null, "type_str": "table", "content": "
 | | Supplied Data Track | | | | C-Star Track |
 | | Arabic | Chinese | Japanese | English | Japanese | English
Train | Sentences | 20 000 | | | | 240 672 |
 | Running Words | 180 075 | 176 199 | 198 453 | 189 927 | 1 951 311 | 1 775 213
 | Vocabulary | 15 371 | 8 687 | 9 277 | 6 870 | 26 036 | 14 120
 | Singletons | 8 319 | 4 006 | 4 431 | 2 888 | 8 975 | 3 538
C-Star'03 | Sentences | 506 | | | | |
 | Running Words | 3 552 | 3 630 | 4 130 | 3 823 | 4 130 | 3 823
 | OOVs (Running Words) | 133 | 114 | 61 | 65 | 34 | -
IWSLT'04 | Sentences | 500 | | | | |
 | Running Words | 3 597 | 3 681 | 4 131 | 3 837 | 4 131 | 3 837
 | OOVs (Running Words) | 142 | 83 | 71 | 58 | 36 | -
IWSLT'05 | Sentences | 506 | | | | |
 | Running Words | 3 562 | 3 918 | 4 226 | 3 909 | 4 226 | 3 909
 | OOVs (Running Words) | 146 | 90 | 29 | 36 | 9 | 10
", "text": "Corpus statistics after preprocessing.", "num": null }, "TABREF4": { "html": null, "type_str": "table", "content": "
Data Track | Input | Translation Direction | BLEU [%] | NIST | Meteor [%] | GTM [%] | WER [%] | PER [%]
Supplied | Manual | Arabic-English | 54.7 | 9.78 | 70.8 | 65.6 | 37.1 | 31.9
Supplied | Manual | Chinese-English | 51.1 | 9.57 | 66.5 | 60.1 | 42.8 | 35.8
Supplied | Manual | English-Chinese | 20.0 | 5.09 | 12.6 | 55.2 | 61.2 | 52.7
Supplied | Manual | Japanese-English | 40.8 | 7.86 | 58.6 | 48.6 | 53.6 | 44.4
Supplied | ASR | Chinese-English | 38.3 | 7.39 | 54.0 | 48.8 | 56.5 | 47.2
Supplied | ASR | Japanese-English | 42.7 | 8.53 | 62.0 | 49.6 | 51.2 | 41.2
C-Star | Manual | Japanese-English | 77.6 | 12.91 | 85.4 | 78.7 | 24.3 | 18.6
", "text": "Official results for the RWTH primary submissions on the IWSLT'05 test set.", "num": null }, "TABREF5": { "html": null, "type_str": "table", "content": "
Translation examples for the Japanese-English supplied and C-Star data tracks.
Data Track | Translation
Supplied | What would you like
C-Star | What would you like for the main course
Reference | What would you like for the main course
Supplied | Is that flight two seats available
C-Star | Are there two seats available on that flight
Reference | Are there two seats available on that flight
Supplied | Have a good I anything new
C-Star | I prefer something different
Reference | I prefer something different
", "text": "", "num": null }, "TABREF6": { "html": null, "type_str": "table", "content": "
Rescoring: effect of successively adding models for the Chinese-English IWSLT'04 test set.
System | BLEU [%] | NIST | WER [%] | PER [%]
Baseline | 45.1 | 8.56 | 48.9 | 40.1
+CLM | 45.9 | 8.24 | 48.6 | 40.7
+IBM1 | 45.9 | 8.48 | 47.8 | 39.7
+WP | 45.4 | 8.91 | 47.8 | 39.4
+Del | 46.0 | 8.71 | 47.8 | 39.6
+HMM | 46.3 | 8.73 | 47.4 | 39.7
", "text": "", "num": null }, "TABREF7": { "html": null, "type_str": "table", "content": "
System | Translation
Baseline | Your coffee or tea
+Rescoring | Would you like coffee or tea
Reference | Would you like coffee or tea
Baseline | A room with a bath
+Rescoring | I would like a twin room with a bath
Reference | A twin room with bath
Baseline | How much is that will be that room
+Rescoring | How much is that room including tax
Reference | How much is the room including tax
Baseline | Onions
+Rescoring | I would like onion
Reference | I would like onions please
", "text": "Translation examples for the Chinese-English supplied data track: effect of rescoring.", "num": null }, "TABREF8": { "html": null, "type_str": "table", "content": "
", "text": "Translation results for ASR input in the Chinese-English supplied data track on the IWSLT'05 test set (", "num": null }, "TABREF9": { "html": null, "type_str": "table", "content": "
Input | Translation
1-Best | Is there a pair of room with a bath
Lattice | I would like a twin room with a bath
Reference | A double room including a bath
1-Best | Please take a picture of our
Lattice | May I take a picture here
Reference | Am I permitted to take photos here
1-Best | I'm in a does the interesting
Lattice | I'm in an interesting movie
Reference | A good movie is on
", "text": "Translation examples for ASR input in the Chinese-English supplied data track.", "num": null } } } }