{ "paper_id": "2005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:22:42.570837Z" }, "title": "Tuning a phrase-based statistical translation system for the IWSLT 2005 Chinese to English and Arabic to English tasks", "authors": [ { "first": "Jos\u00e9", "middle": [ "A R" ], "last": "Fonollosa", "suffix": "", "affiliation": { "laboratory": "", "institution": "TALP Research Center Universitat Polit\u00e8cnica de Catalunya", "location": { "settlement": "Barcelona" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Nowadays, most of the statistical translation systems are based on phrases (i.e. groups of words). We describe a phrase-based system using a modified method for the phrase extraction which deals with larger phrases while keeping a reasonable number of phrases. Also, different alignments to extract phrases are allowed and additional features are used which lead to a clear improvement in the performance of translation. Finally, the system manages to do reordering. We report results in terms of translation accuracy by using the BTEC corpus in the tasks of Chinese to English and Arabic to English, in the framework of IWSLT'05 evaluation.", "pdf_parse": { "paper_id": "2005", "_pdf_hash": "", "abstract": [ { "text": "Nowadays, most of the statistical translation systems are based on phrases (i.e. groups of words). We describe a phrase-based system using a modified method for the phrase extraction which deals with larger phrases while keeping a reasonable number of phrases. Also, different alignments to extract phrases are allowed and additional features are used which lead to a clear improvement in the performance of translation. Finally, the system manages to do reordering. We report results in terms of translation accuracy by using the BTEC corpus in the tasks of Chinese to English and Arabic to English, in the framework of IWSLT'05 evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "From the initial word-based translation models [3] , research on statistical machine translation has been strongly improved. At the end of the last decade the use of context in the translation model (phrase-based approach) supposed a clear improvement in translation quality ( [17] , [16] , [8] ).", "cite_spans": [ { "start": 47, "end": 50, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 277, "end": 281, "text": "[17]", "ref_id": "BIBREF16" }, { "start": 284, "end": 288, "text": "[16]", "ref_id": "BIBREF15" }, { "start": 291, "end": 294, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Statistical Machine Translation (SMT) is based on the assumption that every sentence e in the target language is a possible translation of a given sentence f in the source language. The main difference between two possible translations of a given sentence is a probability assigned to each, which has to be learned from a bilingual text corpus. Thus, the translation of a source sentence f can be formulated as the search of the target sentence e that maximizes the translation probability P (e|f ), e = argmax e P (e|f )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "If we use Bayes rule to reformulate the translation probability, we obtain, e = argmax e P (f |e)P (e)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "This translation model is known as the source-channel approach [2] and it consists on a language model P (e) and a separate translation model P (f |e) [6] .", "cite_spans": [ { "start": 63, "end": 66, "text": "[2]", "ref_id": "BIBREF1" }, { "start": 151, "end": 154, "text": "[6]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In the last few years, new systems tend to use sequences of words, commonly called phrases [7] , aiming at introducing word context in the translation model. As alternative to the source-channel approach the decision rule can be modeled through a log-linear maximum entropy framework.", "cite_spans": [ { "start": 91, "end": 94, "text": "[7]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "e = argmax e M m=1 \u03bb m h m (e, f )", "eq_num": "(3)" } ], "section": "Introduction", "sec_num": "1." }, { "text": "The features functions, h m , are the system models (translation model, language model and others) and weights, \u03bb i , are typically optimized to maximize a scoring function [12] . It is derived from the Maximum Entropy approach as shown in [1] and has the advantage that additional features functions can be easily integrated in the overall system. This paper addresses a modification of the phraseextraction algorithm in [13] and results in Chinese to English and Arabic to English tasks are reported. It also combines several alignments before extracting phrases and interesting features. It is organized as follows. Section 2 explains the SMT system: the phrase extraction, its modification and shows the different features which have been taken into account and, briefly, the decoding; section 3 presents the evaluation framework and the results in Chinese to English and Arabic to English tasks are reported; and the final section shows some conclusions on the experiments and in the evaluation of IWSLT'05.", "cite_spans": [ { "start": 173, "end": 177, "text": "[12]", "ref_id": "BIBREF11" }, { "start": 240, "end": 243, "text": "[1]", "ref_id": "BIBREF0" }, { "start": 422, "end": 426, "text": "[13]", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "As explained in the introduction, the SMT system which is presented is modeled through a log-linear maximum entropy framework. In this section, we explain the models, the feature functions and the decoding that build this system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SMT system", "sec_num": "2." }, { "text": "The Translation Model is based on bilingual phrase (or phrases). A bilingual unit consists of two monolingual fragments, where each one is supposed to be the translation of its counterpart. During training, the system learns a dictionary of these bilingual fragments, the actual core of the translation systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SMT system", "sec_num": "2." 
}, { "text": "The basic idea of phrase-based translation is to segment the given source sentence into phrases, then translate each phrase and finally compose the target sentence from these phrase translations [18] .", "cite_spans": [ { "start": 195, "end": 199, "text": "[18]", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Phrase-based Translation Model", "sec_num": "2.1." }, { "text": "Given a sentence pair, we use GIZA++ [10] to align each of them word-to-word. We can train in both translation directions and we obtain: (1) the alignment in the source to target direction (s2t); and (2) the alignment in the target to source direction. If we compose the union of both alignments (sU t), we get a higher recall and a lower precision of the combined alignment.", "cite_spans": [ { "start": 37, "end": 41, "text": "[10]", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Word alignment", "sec_num": "2.1.1." }, { "text": "Phrases are extracted from sentence pairs and theirs corespondents word alignments following the criterion in [13] and the modification in phrase length in [4] . A phrase is any pair of m source words and n target words that satisfies two basic constraints: It is unfeasible to build a dictionary with all the phrases. That is why we limit the maximum size of any given phrase. Also, the huge increase in computational and storage cost of including longer phrases does not provide a significant improve in quality [7] as the probability of reappearance of larger phrases decreases.", "cite_spans": [ { "start": 110, "end": 114, "text": "[13]", "ref_id": "BIBREF12" }, { "start": 156, "end": 159, "text": "[4]", "ref_id": "BIBREF3" }, { "start": 514, "end": 517, "text": "[7]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Phrase-extraction", "sec_num": "2.1.2." }, { "text": "In our system we considered two length limits.The length of a monolingual phrase is defined as its number of words. The length of a phrase is the greatest of the lengths of its monolingual phrases. We first extract all the phrases of length X or less. Then, we also add phrases up to length Y (Y greater than X) if they cannot be generated by smaller phrases. Basically, we select additional phrases with source words that otherwise would be missed because of cross or long alignments [4] .", "cite_spans": [ { "start": 485, "end": 488, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Phrase-extraction", "sec_num": "2.1.2." }, { "text": "Given the collected phrase pairs, we estimate the phrase translation probability distribution by relative frequency.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-extraction", "sec_num": "2.1.2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (f |e) = N (f, e) N (e)", "eq_num": "(4)" } ], "section": "Phrase-extraction", "sec_num": "2.1.2." }, { "text": "where N(f,e) means the number of times the phrase f is translated by e. If a phrase e has N > 1 possible translations, then each one contributes as 1/N [18] .", "cite_spans": [ { "start": 152, "end": 156, "text": "[18]", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Phrase-extraction", "sec_num": "2.1.2." }, { "text": "\u2022 Firstly, we consider the target language model. 
}, { "text": "\u2022 Firstly, we consider the target language model. It consists of an n-gram model, in which the probability of a translation hypothesis is approximated by the product of word n-gram probabilities:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional features", "sec_num": "2.2." }, { "text": "p(T_k) \u2248 prod_{n=1..k} p(w_n | w_{n-3}, w_{n-2}, w_{n-1}) (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional features", "sec_num": "2.2." }, { "text": "where T_k refers to the partial translation hypothesis and w_n to the n-th word in it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional features", "sec_num": "2.2." }, { "text": "\u2022 As translation model we use the conditional probability. Note that no smoothing is performed, which may cause the probability of rare phrases to be overestimated. This is especially harmful for a bilingual phrase whose source part appears very frequently but whose target part appears rarely. That is why we also use the posterior phrase probability: we compute the relative frequency again, but replacing the count of the target phrase with the count of the source phrase [11] .", "cite_spans": [ { "start": 481, "end": 485, "text": "[11]", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Additional features", "sec_num": "2.2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(e|f) = N(f, e) / N(f)", "eq_num": "(6)" } ], "section": "Additional features", "sec_num": "2.2." }, { "text": "where N(f, e) is the number of times the phrase e is translated by f. If a phrase f has N > 1 possible translations, then each one contributes as 1/N.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional features", "sec_num": "2.2." }, { "text": "By adding this feature function, we reduce the number of cases in which the overall probability is overestimated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional features", "sec_num": "2.2." }, { "text": "\u2022 The following two feature functions correspond to forward and backward lexicon models. These models provide lexical translation probabilities for each tuple based on the word-to-word IBM model 1 probabilities [11] . These lexicon models are computed according to the following equation:", "cite_spans": [ { "start": 214, "end": 218, "text": "[11]", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Additional features", "sec_num": "2.2." }, { "text": "p((t, s)_n) = 1/(I+1)^J prod_{j=1..J} sum_{i=0..I} p_IBM1(t_n^i | s_n^j) (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional features", "sec_num": "2.2." }, { "text": "where s_n^j and t_n^i are the j-th and i-th words in the source and target sides of tuple (t, s)_n, and J and I are the corresponding total numbers of words on each side.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional features", "sec_num": "2.2." }, { "text": "For computing the forward lexicon model, IBM model 1 probabilities from the GIZA++ source-to-target alignments are used. In the case of the backward lexicon model, the GIZA++ target-to-source alignments are used instead.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional features", "sec_num": "2.2."
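}, { "text": "A minimal Python sketch of Equation (7) follows (our own illustration). It assumes a dictionary p_ibm1 of word-to-word probabilities p(t|s), which in our setting would be read from the GIZA++ lexical translation tables; the smoothing floor is an illustrative choice.

def lexicon_score(src_words, tgt_words, p_ibm1):
    # Eq. (7): 1/(I+1)^J * prod_j sum_i p_IBM1(t_i | s_j),
    # where position i = 0 stands for the NULL word
    I, J = len(tgt_words), len(src_words)
    score = 1.0
    for s in src_words:
        score *= sum(p_ibm1.get((t, s), 1e-10) for t in ['NULL'] + list(tgt_words))
    return score / (I + 1) ** J", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional features", "sec_num": "2.2."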
}, { "text": "\u2022 We consider the widely used word penalty model. This feature introduces a sentence length penalization in order to compensate for the system's preference for short output sentences. The penalization depends on the total number of words contained in the partial translation hypothesis, and it is computed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional features", "sec_num": "2.2." }, { "text": "wp(T_k) = exp(number of words in T_k) (8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional features", "sec_num": "2.2." }, { "text": "where, again, T_k refers to the partial translation hypothesis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional features", "sec_num": "2.2." }, { "text": "\u2022 Finally, the last feature is the phrase penalty [18] , which is a constant cost per produced phrase.", "cite_spans": [ { "start": 50, "end": 54, "text": "[18]", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Additional features", "sec_num": "2.2." }, { "text": "Here, a negative weight, which reduces the cost per phrase, results in a preference for adding more phrases. Conversely, with a positive scaling factor the system will favor fewer phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional features", "sec_num": "2.2." }, { "text": "In SMT decoding, translated sentences are built incrementally from left to right in the form of hypotheses, allowing for discontinuities in the source sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "2.3." }, { "text": "A beam search algorithm with pruning is used to find the best path. The search builds partial translations (hypotheses), which are stored in several lists. These lists are pruned according to the accumulated probabilities of their hypotheses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "2.3." }, { "text": "The hypotheses with the lowest probabilities are discarded to make the search feasible. The decoder also allows reordering. Reordering strategies involve a necessary trade-off between translation quality and efficiency. That is why two reordering constraints are used (a sketch follows at the end of this subsection):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "2.3." }, { "text": "\u2022 A distortion limit (m). A source word (phrase or tuple) is only allowed to be reordered if it does not exceed a distortion limit, measured in words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "2.3." }, { "text": "\u2022 A reordering limit (j). Any translation path is allowed to perform at most j reordering jumps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "2.3." }, { "text": "See [5] for further details.", "cite_spans": [ { "start": 4, "end": 7, "text": "[5]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "2.3."
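}, { "text": "As an illustrative sketch (our own; the actual decoder [5] additionally organizes and prunes hypothesis lists), the following Python function enumerates the source positions that a partial hypothesis may cover next under both reordering constraints.

def allowed_extensions(covered, n_src, m, j, jumps_used):
    # covered: set of already-translated source positions
    first_gap = min(i for i in range(n_src) if i not in covered)
    for i in range(n_src):
        if i in covered:
            continue
        if i - first_gap > m:
            continue  # would exceed the distortion limit m, measured in words
        if i != first_gap and jumps_used >= j:
            continue  # would require more than j reordering jumps
        yield i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "2.3."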
}, { "text": "Experiments have been carried out on two tasks of the IWSLT'05 evaluation (www.slt.atr.jp/IWSLT2005): Chinese to English (BTEC Corpus [15] ) and Arabic to English. The BTEC is a small-corpus translation task. Table 1 shows the main statistics of the data used, namely the number of sentences, words, vocabulary size, and mean sentence length for each language. Likewise, Table 2 shows the same statistics for the Arabic to English task. The Arabic' column, also shown in the statistics, corresponds to preprocessed Arabic. The preprocessing stage was performed only on the Arabic side of the corpus and, apart from separating standard punctuation marks, it aims at splitting off prefixes (such as the article) that greatly increase the vocabulary size. In detail, we produce a hard separation of all words starting with the article prefix, in order to separate articles from words. Note that this process is neither informed (it does not use any tagging software) nor complete (several other Arabic particles are usually attached to words). However, it already produces a significant vocabulary reduction, leading to improved performance.", "cite_spans": [ { "start": 110, "end": 114, "text": "[15]", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 185, "end": 192, "text": "Table 1", "ref_id": null }, { "start": 349, "end": 356, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Corpus Statistics", "sec_num": "3.1." }, { "text": "We used GIZA++ to perform the word alignment of the whole training corpus. We use the union alignment and, as an improvement, we add the source-to-target alignment to the union alignment (hereinafter, sAt), which seems to achieve better translation accuracy (as we will see in the following subsection). In fact, the phrase-extraction algorithm obtains a larger phrase vocabulary when using the source-to-target alignment (see Table 3 ), as there are fewer crossed links and more phrases satisfy the constraint of having no aligned words outside the phrase.", "cite_spans": [], "ref_spans": [ { "start": 441, "end": 448, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Units", "sec_num": "3.2." }, { "text": "In the Chinese to English task, we experiment with the phrase lengths, as seen in Table 4 . We compare them by building the baseline with each set of phrases. The models in the baseline are: translation model, language model, word penalty, phrase penalty, IBM-1 in both directions and reordering (using m = and j = 3). We reach the best BLEU result when extracting phrases up to length 4 (X) plus, in addition, those phrases up to length 7 (Y) which cannot be generated from smaller phrases.", "cite_spans": [], "ref_spans": [ { "start": 82, "end": 89, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Units", "sec_num": "3.2." }, { "text": "We observe that the number of phrases when using both lengths (X and Y) does not grow as quickly as when using a single length. In fact, it stays close to the size obtained with the smaller length (X) alone, while the translation accuracy improves.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Units", "sec_num": "3.2." }, { "text": "In the case of Arabic to English, Table 5 shows the equivalent comparison. Here, we extract phrases up to length 5 (X) and, in addition, the phrases up to length 7 (Y) which cannot be generated from smaller phrases.", "cite_spans": [], "ref_spans": [ { "start": 34, "end": 41, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Units", "sec_num": "3.2." }, { "text": "As the default language model feature, we use a standard word-based 4-gram language model with Kneser-Ney smoothing and interpolation of higher- and lower-order n-grams (built with SRILM [14] ).", "cite_spans": [ { "start": 188, "end": 192, "text": "[14]", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Units", "sec_num": "3.2."
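}, { "text": "For reference, such a model can be built with the SRILM toolkit [14] ; the following Python sketch shows one possible invocation (the file names are hypothetical).

import subprocess

subprocess.run([
    'ngram-count',
    '-order', '4',         # 4-gram model
    '-kndiscount',         # Kneser-Ney smoothing
    '-interpolate',        # interpolate higher- and lower-order estimates
    '-text', 'train.en',   # target-side training text
    '-lm', 'en.4gram.lm',  # output language model in ARPA format
], check=True)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Units", "sec_num": "3.2."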
}, { "text": "The evaluation in the BTEC task has been carried out using references and translations in lowercase and without punctuation marks. We applied the widely used algorithm SIMPLEX to optimize the different weights (using the development set) [9] . Results in the test set with 16 references are reported.", "cite_spans": [ { "start": 238, "end": 241, "text": "[9]", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3.3." }, { "text": "The experiments in Table 6 correspond to the Chinese to English translation task under the phrase-based SMT system. The baseline considers the models and the phrase lengths mentioned in the subsection above. The improved system considers both the phrases extracted from the source to target alignment and the union alignment, and, also, adds the posterior probability feature. Here, the posterior probability seems not to add anything to the system with only sAt.", "cite_spans": [], "ref_spans": [ { "start": 19, "end": 26, "text": "Table 6", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Experiments", "sec_num": "3.3." }, { "text": "The experiments in Table 7 correspond to the Arabic to English translation task under the phrase-based SMT system. The baseline considers again the models and the phrase lengths mentioned in the subsection above. Note that in this case the posterior probability feature function combined with the inclusion of the phrases from the additional alignment, makes the translation more accurate. The inclusion of posterior probability provides a significant increase in performance in this case because the P (f |e) tends to be more overestimated in phrases that come from the source to target alignment.", "cite_spans": [], "ref_spans": [ { "start": 19, "end": 26, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "3.3." }, { "text": "We reported a phrase-based system. The translation model is set in the log-linear maximum entropy framework, and uses several features functions. Finally, the decoder which is based on a beam search allows for distortion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "4." }, { "text": "This phrase-based system has been improved in different ways: the alignment (sAt) used outperforms the union alignment when using the additional feature of posterior probability; and the variation in phrase length allows better results while keeping reasonable the number of phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "4." }, { "text": "As future work, we will analyze the difference in behaviors between both tasks in order to propose a more accurate optimizer and a more complex combination of features functions (instead of the linearity). Table 7 : Results for the Arabic to English translation task using the phrase-based translation model and different features. The last row shows the best system", "cite_spans": [], "ref_spans": [ { "start": 206, "end": 213, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Conclusions", "sec_num": "4." 
} ], "back_matter": [ { "text": "This work has been partially funded by the European Union under the integrated project TC-STAR -Technology and Corpora for Speech to Speech Translation -(IST-2002-FP6-506738, http://www.tc-star.org), the Spanish government, under grant TIC-2002-04447-C02 (Aliado Project), Universitat Polit\u00e8cnica de Catalunya and the TALP Research Center under TALP-UPC-RECERCA grant.The authors want to thank Josep M. Crego, Jos\u00e9 B. Mari\u00f1o, Adri\u00e0 de Gispert, Patrik Lambert and Rafael E. Banchs (members of the TALP Research Center) for their contribution to this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": "5." } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A maximum entropy approach to natural language processing", "authors": [ { "first": "A", "middle": [], "last": "Berger", "suffix": "" }, { "first": "S", "middle": [ "Della" ], "last": "Pietra", "suffix": "" }, { "first": "V", "middle": [ "Della" ], "last": "Pietra", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "1", "pages": "39--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Berger, S. Della Pietra, and V. Della Pietra. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39- 72, March 1996.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A statistical approach to machine translation", "authors": [ { "first": "P", "middle": [], "last": "Brown", "suffix": "" }, { "first": "J", "middle": [], "last": "Cocke", "suffix": "" }, { "first": "S", "middle": [ "Della" ], "last": "Pietra", "suffix": "" }, { "first": "V", "middle": [ "Della" ], "last": "Pietra", "suffix": "" }, { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "J", "middle": [ "D" ], "last": "Lafferty", "suffix": "" }, { "first": "R", "middle": [], "last": "Mercer", "suffix": "" }, { "first": "P", "middle": [ "S" ], "last": "Roossin", "suffix": "" } ], "year": 1990, "venue": "Computational Linguistics", "volume": "16", "issue": "2", "pages": "79--85", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Brown, J. Cocke, S. Della Pietra, V. Della Pietra, F. Jelinek, J.D. Lafferty, R. Mercer, and P.S. Roossin. A statistical approach to machine trans- lation. Computational Linguistics, 16(2):79-85, 1990.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The mathematics of statistical machine translation", "authors": [ { "first": "P", "middle": [], "last": "Brown", "suffix": "" }, { "first": "S", "middle": [ "Della" ], "last": "Pietra", "suffix": "" }, { "first": "V", "middle": [ "Della" ], "last": "Pietra", "suffix": "" }, { "first": "R", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Brown, S. Della Pietra, V. Della Pietra, and R. Mercer. The mathematics of statistical machine translation. 
Computational Linguistics, 19(2):263-311, 1993.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Improving the phrase-based statistical translation by modifying phrase extraction and including new features", "authors": [ { "first": "M", "middle": [ "R" ], "last": "Costa-Juss\u00e0", "suffix": "" }, { "first": "J", "middle": [ "A" ], "last": "Rodriguez Fonollosa", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL Workshop on Building and Using Parallel Texts: Data-Driven Machine Translation and Beyond", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. R. Costa-juss\u00e0 and J.A. Rodriguez Fonollosa. Improving the phrase-based statistical translation by modifying phrase extraction and including new features. Proceedings of the ACL Workshop on Building and Using Parallel Texts: Data-Driven Machine Translation and Beyond, June 2005.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "An ngram-based statistical machine translation decoder", "authors": [ { "first": "J", "middle": [ "M" ], "last": "Crego", "suffix": "" }, { "first": "J", "middle": [], "last": "Mari\u00f1o", "suffix": "" }, { "first": "A", "middle": [], "last": "De Gispert", "suffix": "" } ], "year": 2005, "venue": "EUROSPEECH 05", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. M. Crego, J. Mari\u00f1o, and A. de Gispert. An ngram-based statistical machine translation decoder. EUROSPEECH 05, September 2005.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Traducci\u00f3n autom\u00e1tica estad\u00edstica: modelos de traducci\u00f3n basados en m\u00e1xima entrop\u00eda y algoritmos de b\u00fasqueda", "authors": [ { "first": "I", "middle": [], "last": "", "suffix": "" }, { "first": "Garc\u00eda", "middle": [], "last": "Varea", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Garc\u00eda Varea. Traducci\u00f3n autom\u00e1tica estad\u00edstica: modelos de traducci\u00f3n basados en m\u00e1xima entrop\u00eda y algoritmos de b\u00fasqueda. PhD Thesis in Informatics, Dep. de Sistemes Inform\u00e0tics i Computaci\u00f3, Universitat Polit\u00e8cnica de Val\u00e8ncia, 2003.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Statistical phrase-based translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "D", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proc. of the Human Language Technology Conference, HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Koehn, F.J. Och, and D. Marcu. Statistical phrase-based translation. Proc. of the Human Language Technology Conference, HLT-NAACL'2003, May 2003.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A phrase-based, joint probability model for statistical machine translation", "authors": [ { "first": "D", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "W", "middle": [], "last": "Wong", "suffix": "" } ], "year": 2002, "venue": "Proc. of the Conf. on Empirical Methods in Natural Language Processing, EMNLP'02", "volume": "", "issue": "", "pages": "133--139", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Marcu and W. Wong. A phrase-based, joint probability model for statistical machine translation. Proc. of the Conf. 
on Empirical Methods in Natural Language Processing, EMNLP'02, pages 133-139, July 2002.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A simplex method for function minimization", "authors": [ { "first": "J", "middle": [ "A" ], "last": "Nelder", "suffix": "" }, { "first": "R", "middle": [], "last": "Mead", "suffix": "" } ], "year": 1965, "venue": "The Computer Journal", "volume": "7", "issue": "", "pages": "308--313", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.A. Nelder and R. Mead. A simplex method for function minimization. The Computer Journal, 7:308-313, 1965.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Giza++ software", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F.J. Och. Giza++ software. http://www-i6.informatik.rwth-aachen.de/~och/software/giza++.html. 2003.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A smorgasbord of features for statistical machine translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "D", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "S", "middle": [], "last": "Khudanpur", "suffix": "" }, { "first": "A", "middle": [], "last": "Sarkar", "suffix": "" }, { "first": "K", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "A", "middle": [], "last": "Fraser", "suffix": "" }, { "first": "S", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "L", "middle": [], "last": "Shen", "suffix": "" }, { "first": "D", "middle": [], "last": "Smith", "suffix": "" }, { "first": "K", "middle": [], "last": "Eng", "suffix": "" }, { "first": "V", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Z", "middle": [], "last": "Jin", "suffix": "" }, { "first": "D", "middle": [], "last": "Radev", "suffix": "" } ], "year": 2004, "venue": "Proc. of the Human Language Technology Conference, HLT-NAACL", "volume": "", "issue": "", "pages": "161--168", "other_ids": {}, "num": null, "urls": [], "raw_text": "F.J. Och, D. Gildea, S. Khudanpur, A. Sarkar, K. Yamada, A. Fraser, S. Kumar, L. Shen, D. Smith, K. Eng, V. Jain, Z. Jin, and D. Radev. A smorgasbord of features for statistical machine translation. Proc. of the Human Language Technology Conference, HLT-NAACL'2004, pages 161-168, May 2004.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Discriminative training and maximum entropy models for statistical machine translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2002, "venue": "40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "295--302", "other_ids": {}, "num": null, "urls": [], "raw_text": "F.J. Och and H. Ney. Discriminative training and maximum entropy models for statistical machine translation. 
40th Annual Meeting of the Association for Computational Linguistics, pages 295-302, July 2002.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The alignment template approach to statistical machine translation", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Computational Linguistics", "volume": "30", "issue": "4", "pages": "417--449", "other_ids": {}, "num": null, "urls": [], "raw_text": "F.J. Och and H. Ney. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417-449, December 2004.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "SRILM - an extensible language modeling toolkit", "authors": [ { "first": "A", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2002, "venue": "Proc. of the 7th Int. Conf. on Spoken Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Stolcke. SRILM - an extensible language modeling toolkit. Proc. of the 7th Int. Conf. on Spoken Language Processing, ICSLP'02, September 2002.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Toward a broad-coverage bilingual corpus for speech translation of travel conversations in the real world", "authors": [ { "first": "T", "middle": [], "last": "Takezawa", "suffix": "" }, { "first": "E", "middle": [], "last": "Sumita", "suffix": "" }, { "first": "F", "middle": [], "last": "Sugaya", "suffix": "" }, { "first": "H", "middle": [], "last": "Yamamoto", "suffix": "" }, { "first": "S", "middle": [], "last": "Yamamoto", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "147--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Takezawa, E. Sumita, F. Sugaya, H. Yamamoto, and S. Yamamoto. Toward a broad-coverage bilingual corpus for speech translation of travel conversations in the real world. LREC 2002, pages 147-152, May 2002.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A syntax-based statistical translation model", "authors": [ { "first": "K", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2001, "venue": "39th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "523--530", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Yamada and K. Knight. A syntax-based statistical translation model. 39th Annual Meeting of the Association for Computational Linguistics, pages 523-530, July 2001.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Phrase-based statistical machine translation", "authors": [ { "first": "R", "middle": [], "last": "Zens", "suffix": "" }, { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2002, "venue": "Advances in artificial intelligence", "volume": "2479", "issue": "", "pages": "18--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Zens, F.J. Och, and H. Ney. Phrase-based statistical machine translation. In M. Jarke, J. Koehler, and G. Lakemeyer, editors, KI 2002: Advances in artificial intelligence, volume LNAI 2479, pages 18-32. 
Springer Verlag, September 2002.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Improvements in phrase-based statistical machine translation", "authors": [ { "first": "R", "middle": [], "last": "Zens", "suffix": "" }, { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Proc. of the Human Language Technology Conference, HLT-NAACL", "volume": "", "issue": "", "pages": "257--264", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Zens, F.J. Och, and H. Ney. Improvements in phrase-based statistical machine translation. Proc. of the Human Language Technology Conference, HLT-NAACL'2004, pages 257-264, May 2004.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "The two phrase constraints: words are consecutive along both sides of the bilingual phrase, and no word on either side of the phrase is aligned to a word outside the phrase." }, "TABREF0": { "content": "
BTEC	Chinese	English
Training Sentences	20 k	20 k
Words	176.2 k	182.3 k
Vocabulary	8.7 k	7.3 k
Development Sentences	1006	1006
Words	7.3 k	6 k
Vocabulary	1.4 k	1.3 k
Test Sentences	506	506
Words	3.7 k	-
Vocabulary	963	-
", "html": null, "text": "Chinese to English task. BTEC Corpus: Training, Development and Test data sets. The Development data set has 16 references (k stands for thousands).", "num": null, "type_str": "table" }, "TABREF1": { "content": "
BTEC	Arabic	Arabic'	English
Training Sentences	20 k	20 k	20 k
Words	131.7 k	180.5 k	182.3 k
Vocabulary	25.2 k	16 k	7.3 k
Development Sentences	1006	1006	1006
Words	5.3 k	7.2 k	6 k
Vocabulary	2.4 k	1.9 k	1.3 k
Test Sentences	506	506	506
Words	2.6 k	3.6 k	-
Vocabulary	1.4 k	1.2 k	-
", "html": null, "text": "Arabic to English task. BTEC Corpus: Training, Development and Test data sets. Arabic' is the preprocessed Arabic side (k stands for thousands).", "num": null, "type_str": "table" }, "TABREF3": { "content": "
", "html": null, "text": "Vocabulary of phrases for each alignment (source to target, union and the addition of both) and for each task. The phrases parameters are X=4 and Y=7 for Chinese phrases and X=5 and Y=7 for Arabic sentences (this parameters are studied in next subsection).", "num": null, "type_str": "table" }, "TABREF4": { "content": "
X Y	SIZE	mWER	BLEU	NIST	PER
3 3	220.7 k	48.11	43.26	8.312	38.65
4 4	268.7 k	47.88	43.46	8.337	39
5 5	309 k	48.11	43.51	8.491	39.16
4 7	275.6 k	47.75	43.47	8.356	38.89
", "html": null, "text": "Analysis of the phrase length parameters (X, Y) in the Chinese to English task using the union alignment. Each option shows its size (number of phrases extracted) and its BLEU score, optimized and evaluated on the development set (using the baseline models).", "num": null, "type_str": "table" }, "TABREF5": { "content": "
X Y	SIZE	mWER	BLEU	NIST	PER
4 4	285.7 k	38.05	52.95	9.093	33.02
5 5	337.4 k	38.01	53.04	9.124	32.96
6 6	381.6 k	37.83	53.46	9.154	32.93
5 7	340 k	37.86	53.61	9.098	32.75
", "html": null, "text": "Analysis of the phrase length parameters (X, Y) in the Arabic to English task using the union alignment. Each option shows its size (number of phrases extracted) and its BLEU score, optimized and evaluated on the development set (using the baseline models).", "num": null, "type_str": "table" }, "TABREF6": { "content": "
Phrase-based	mWER	BLEU	NIST	PER
Baseline (X=4, Y=7)	47.75	43.47	8.356	38.89
Baseline (X=4, Y=7) + P(e|f)	46.73	44.22	7.6602	37.80
Baseline (X=4, Y=7) + sAt	45.69	45.68	7.9603	37.88
Baseline (X=4, Y=7) + (P(e|f) + sAt)	45.91	45.23	7.974	37.96
", "html": null, "text": "Results for the Chinese to English translation task using different features. The last row shows the best system.", "num": null, "type_str": "table" }, "TABREF7": { "content": "
Phrase-based	mWER	BLEU	NIST	PER
Baseline (X=5, Y=7)	37.86	53.61	9.098	32.75
Baseline (X=5, Y=7) + P(e|f)	38.04	53.58	9.1601	32.33
Baseline (X=5, Y=7) + sAt	36.64	55.87	9.5561	30.62
Baseline (X=5, Y=7) + (P(e|f) + sAt)	35.0	57.26	9.331	30.30
", "html": null, "text": "Results for the Arabic to English translation task using the phrase-based translation model and different features. The last row shows the best system.", "num": null, "type_str": "table" } } } }