{
"paper_id": "2005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:21:05.095259Z"
},
"title": "The TALP Ngram-based SMT System for IWSLT'05",
"authors": [
{
"first": "Josep",
"middle": [
"M"
],
"last": "Crego",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "TALP Research Center Universitat Polit\u00e8cnica de Catalunya",
"location": {
"settlement": "Barcelona"
}
},
"email": ""
},
{
"first": "Adri\u00e0",
"middle": [],
"last": "De Gispert",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "TALP Research Center Universitat Polit\u00e8cnica de Catalunya",
"location": {
"settlement": "Barcelona"
}
},
"email": ""
},
{
"first": "Jos\u00e9",
"middle": [
"B"
],
"last": "Mari\u00f1o",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "TALP Research Center Universitat Polit\u00e8cnica de Catalunya",
"location": {
"settlement": "Barcelona"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper provides a description of TALP-Ngram, the tuple-based statistical machine translation system developed at the TALP Research Center of the UPC (Universitat Polit\u00e8cnica de Catalunya). Briefly, the system performs a log-linear combination of a translation model and additional feature functions. The translation model is estimated as an N-gram of bilingual units called tuples, and the feature functions include a target language model, a word penalty, and lexical features, depending on the language pair and task. The paper describes the participation of the system in the second international workshop on spoken language translation (IWSLT) held in Pittsburgh, October 2005. Results on Chinese-to-English and Arabic-to-English tracks using supplied data are reported.",
"pdf_parse": {
"paper_id": "2005",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper provides a description of TALP-Ngram, the tuple-based statistical machine translation system developed at the TALP Research Center of the UPC (Universitat Polit\u00e8cnica de Catalunya). Briefly, the system performs a log-linear combination of a translation model and additional feature functions. The translation model is estimated as an N-gram of bilingual units called tuples, and the feature functions include a target language model, a word penalty, and lexical features, depending on the language pair and task. The paper describes the participation of the system in the second international workshop on spoken language translation (IWSLT) held in Pittsburgh, October 2005. Results on Chinese-to-English and Arabic-to-English tracks using supplied data are reported.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "During the last several years, statistical machine translation (SMT) has gained much attention within the research community. This is mainly due to its relatively easy development in terms of human effort, its robustness in face of non-grammatical input data (such as recognised speech), and its good results against rule-based and transfer-based approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and overview of the system",
"sec_num": "1."
},
{
"text": "The statistical approach to machine translation is based on the assumption that every sentence t in the target language is a possible translation of a given sentence s in the source language, and the main difference between two translation hypotheses is a probability assigned to each, which is to be learned from a bilingual corpus. The first SMT systems were based on the noisy channel approach on a word-based basis, modeling the translation of a target language sentence t given a source language sentence t as a translation model probability p(s|t) times a target language model probability p(t) [1] .",
"cite_spans": [
{
"start": 601,
"end": 604,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and overview of the system",
"sec_num": "1."
},
{
"text": "Recently, word-based translation models have been replaced by phrase-based translation models [2, 3] , which are estimated from aligned bilingual corpora by using relative frequencies.",
"cite_spans": [
{
"start": 94,
"end": 97,
"text": "[2,",
"ref_id": "BIBREF1"
},
{
"start": 98,
"end": 100,
"text": "3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and overview of the system",
"sec_num": "1."
},
{
"text": "On the other hand, according to the maximum entropy framework [4] , we can define the translation hypothesis t given a source sentence s, as the target sen-tence maximizing a log-linear combination of feature functions, as described in the following equation:",
"cite_spans": [
{
"start": 62,
"end": 65,
"text": "[4]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and overview of the system",
"sec_num": "1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t I 1 = arg max t I 1 M m=1 \u03bb m h m (s J 1 , t I 1 )",
"eq_num": "(1)"
}
],
"section": "Introduction and overview of the system",
"sec_num": "1."
},
{
"text": "where \u03bb m correspond to the weighting coefficients of the log-linear combination, and the feature functions h m (s, t) to a logarithmic scaling of the probabilities of each model. Following this approach, the translation system described in this paper implements a log-linear combination of one translation model and four additional feature models. In contrast with standard phrase-based approaches, our translation model is expressed in tuples as bilingual units. Given a word alignment, tuples define a unique and monotonic segmentation of each bilingual sentence, building up a much smaller set of units than with phrases and allowing N-gram estimation to account for the history of the translation process [5, 6] . This approach has its origins in SMT by using finite state transducers [7, 8, 9] .",
"cite_spans": [
{
"start": 710,
"end": 713,
"text": "[5,",
"ref_id": "BIBREF4"
},
{
"start": 714,
"end": 716,
"text": "6]",
"ref_id": "BIBREF5"
},
{
"start": 790,
"end": 793,
"text": "[7,",
"ref_id": "BIBREF6"
},
{
"start": 794,
"end": 796,
"text": "8,",
"ref_id": "BIBREF7"
},
{
"start": 797,
"end": 799,
"text": "9]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and overview of the system",
"sec_num": "1."
},
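The log-linear combination of Eq. (1) can be illustrated with a minimal Python sketch. This is not from the paper; the hypotheses, feature values, and weights below are invented for illustration only.

```python
def loglinear_score(weights, features):
    """Weighted sum of log-scaled feature scores: sum_m lambda_m * h_m(s, t)."""
    return sum(lam * h for lam, h in zip(weights, features))

def best_hypothesis(weights, hypotheses):
    """Pick the target hypothesis maximizing the log-linear score (Eq. 1)."""
    return max(hypotheses, key=lambda hyp: loglinear_score(weights, hyp[1]))

# Toy hypotheses: (translation, (translation model, target LM, word penalty)),
# all values hypothetical log-scores.
hyps = [
    ("the house", (-2.0, -1.5, 2.0)),
    ("house",     (-1.8, -2.5, 1.0)),
]
weights = (1.0, 0.8, 0.2)
best = best_hypothesis(weights, hyps)
```

In practice the weights lambda_m are tuned on a development set, as done in section 5 with the simplex method.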
{
"text": "The organization of the paper is as follows. Section 2 describes in detail the tuple n-gram translation model, while section 3 introduces the additional features used in the system. Section 4 provides a brief overview of the decoding tool and search strategy used. Next, sections 5 and 6 report and discuss results on IWSLT'05 Chineseto-English and Arabic-to-English tracks, respectively. Finally, Section 7 concludes and outlines future research lines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction and overview of the system",
"sec_num": "1."
},
{
"text": "The tuple N-gram translation model is a language model of a particular language composed by bilingual units which are referred to as tuples. This model approximates the joint probability between source and target languages by using N-grams as described by the following equation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Tuple N-gram translation model",
"sec_num": "2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(s J 1 , t I 1 ) = \u2022 \u2022 \u2022 = (2) K i=1 p((s, t) i |(s, t) i\u2212N +1 , ..., (s, t) i\u22121 )",
"eq_num": "(3)"
}
],
"section": "The Tuple N-gram translation model",
"sec_num": "2."
},
{
"text": "where (s, t) i refers to the i th tuple of a given bilingual sentence pair, which is segmented into K tuples. It is important to notice that, since both languages are linked up in tuples, the context information provided by this translation model is bilingual.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Tuple N-gram translation model",
"sec_num": "2."
},
{
"text": "Tuples are extracted from a word-to-word aligned corpus according to the following constraints [10] :",
"cite_spans": [
{
"start": 95,
"end": 99,
"text": "[10]",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Tuple N-gram translation model",
"sec_num": "2."
},
{
"text": "\u2022 a monotonic segmentation of each bilingual sentence pair is produced",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Tuple N-gram translation model",
"sec_num": "2."
},
{
"text": "\u2022 no word inside the tuple is aligned to words outside the tuple",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Tuple N-gram translation model",
"sec_num": "2."
},
{
"text": "\u2022 no smaller tuples can be extracted without violating the previous constraints",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Tuple N-gram translation model",
"sec_num": "2."
},
{
"text": "As a consequence of these constraints, only one segmentation is possible for a given parallel sentence pair and a word alignment. Usually, automatic word-to-word alignments are generated in both source-to-target and target-to-source directions by using GIZA++ [11] , and tuples are usually extracted from the union set of alignments. However, in section 5 results are also reported when extracting tuples with the alignment from sourceto-target direction. Figure 1 presents a simple example illustrating the tuple extraction process. Once tuples have been extracted, the tuple vocabulary can be pruned by using histogram counts, thus keeping the N most frequent tuples sharing the same source side. Given the reduced size of the supplied IWSLT data, this pruning was not found necessary. Then, the tuple N-gram model can be trained by using any Language Modeling toolkit.",
"cite_spans": [
{
"start": 260,
"end": 264,
"text": "[11]",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 456,
"end": 464,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The Tuple N-gram translation model",
"sec_num": "2."
},
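The three constraints above can be turned into a small extraction routine. The following is an illustrative sketch (not the authors' code): scanning left to right, each tuple is grown until no alignment link crosses its borders, which yields the unique minimal monotonic segmentation. The sentences and alignment links are invented examples.

```python
def extract_tuples(src, tgt, links):
    """Minimal monotonic bilingual segmentation (a sketch of tuple extraction).

    links: iterable of (src_index, tgt_index) word-alignment pairs.
    Returns a list of (source_words, target_words) tuples."""
    tuples, s_start, t_start = [], 0, 0
    while t_start < len(tgt) or s_start < len(src):
        s_end = s_start + 1 if s_start < len(src) else s_start
        t_end = t_start + 1 if t_start < len(tgt) else t_start
        changed = True
        while changed:
            changed = False
            for i, j in links:
                # Grow the tuple until no link crosses its borders
                # (no word inside aligned to a word outside).
                if s_start <= i < s_end and not (t_start <= j < t_end):
                    t_end = max(t_end, j + 1); changed = True
                if t_start <= j < t_end and not (s_start <= i < s_end):
                    s_end = max(s_end, i + 1); changed = True
        tuples.append((tuple(src[s_start:s_end]), tuple(tgt[t_start:t_end])))
        s_start, t_start = s_end, t_end
    return tuples
```

Words aligned to NULL surface here as tuples with an empty source or target side; the NULL-source case is handled separately, as section 2.1 explains.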
{
"text": "An important issue regarding tuple definition and extraction is the fact that some words linked to NULL end up producing tuples with NULL source sides, as with tuple t 3 from figure 1. Since no NULL is actually expected to occur in translation inputs, this kind of tuple cannot be allowed. Therefore, the target side of the tuple is attached to either the previous or the next tuple in the tuple sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuples with NULL in source",
"sec_num": "2.1."
},
{
"text": "In order to decide to which tuple is attached the 'source-nulled' tuple, as a baseline option we link the tuple to the following tuple. However, an improved technique has been developed which incorporates IBM-1 probabilities, deciding for the segmentation with higher probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuples with NULL in source",
"sec_num": "2.1."
},
{
"text": "This technique incorporates both segmentations (where the target word of the source-nulled tuple is attached to the previous and the next tuple).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuples with NULL in source",
"sec_num": "2.1."
},
{
"text": "In order to score each segmentation, both tuples (next and previous) are taken into account, computing the sum of an IBM-1 weight for each tuple. This weight is computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuples with NULL in source",
"sec_num": "2.1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 I J j=1 I i=0 p IBM 1 (t i |s j )p IBM 1 (t i |s j )",
"eq_num": "(4)"
}
],
"section": "Tuples with NULL in source",
"sec_num": "2.1."
},
{
"text": "where s and t are the source and target tuple sides, I and J their length in words and IBM 1 stands for the reversed IBM model 1. Finally, the sum with the best score defines the best segmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuples with NULL in source",
"sec_num": "2.1."
},
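This attachment decision can be sketched as follows, under the assumption that the reversed IBM-1 table is available as a plain dictionary. All names, words, and probabilities below are hypothetical.

```python
def ibm1_weight(src_words, tgt_words, p):
    """Eq. (4)-style IBM-1 weight for one tuple: (1/I) * sum_j sum_i p(t_i|s_j).

    p maps (target_word, source_word) -> probability; a tiny floor value
    stands in for unseen pairs."""
    if not tgt_words:
        return 0.0
    total = sum(p.get((t, s), 1e-9) for s in src_words for t in tgt_words)
    return total / len(tgt_words)

def attach_null_tuple(prev, nulled_tgt, nxt, p):
    """Attach the target side of a source-NULL tuple to the previous or the
    next tuple, keeping the segmentation whose two tuples score higher."""
    prev_opt = ((prev[0], prev[1] + nulled_tgt), nxt)
    next_opt = (prev, (nxt[0], nulled_tgt + nxt[1]))
    def score(seg):
        return sum(ibm1_weight(s, t, p) for s, t in seg)
    return prev_opt if score(prev_opt) >= score(next_opt) else next_opt
```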
{
"text": "Another important issue regarding the tuple-based translation model is the existence of embedded words. Given the constraints and the sequentiality defining the tuples, it may happen that a certain amount of single-word translation probabilities are left out of the model. This occurs for those words always appearing embedded into tuples containing two or more words. Consider for example the word \"ice-cream\" from figure 1. As seen from the figure, \"ice-cream\" appears embedded into tuple t 6 . If a similar situation is encountered for all occurrences of \"icecream\" in the training corpus then no translation probability for an independent occurrence of such word will exist.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedded words",
"sec_num": "2.2."
},
{
"text": "To overcome this problem, the tuple N-gram model is enhanced by incorporating 1-gram translation probabilities for all the embedded words detected during the tuple extraction step [9] . These 1-gram translation probabilities are computed from the intersection of both sourceto-target and target-to-source alignments.",
"cite_spans": [
{
"start": 180,
"end": 183,
"text": "[9]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embedded words",
"sec_num": "2.2."
},
{
"text": "When dealing with pairs of languages with very nonmonotonic alignments, such as Chinese and English, the sequentiality contraint may lead to an unpractical tuple length and excessive amount of embedded words. In this case, it is more reasonable to allow for a certain reordering in the training data. This means that the tuples are broken into smaller tuples, and these are sequenced in the order of the target words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuple unfolding",
"sec_num": "2.3."
},
{
"text": "In order not to lose the information on the correct order, the decoder performs then a reordered search, which is guided by the N-gram model of the unfolded tuples and the additional feature models. On the other hand, the tuple unfolding process highly reduces the effect of embedded words [12] . Figure 2 shows an example of tuple unfolding compared to the monotonic extraction. The unfolding technique produces a different bilingual N-gram language model with reordered source words. ",
"cite_spans": [
{
"start": 290,
"end": 294,
"text": "[12]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 297,
"end": 305,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Tuple unfolding",
"sec_num": "2.3."
},
{
"text": "As additional feature functions to better guide the translation process, TALP incorporates the following models:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional feature models",
"sec_num": "3."
},
{
"text": "\u2022 a target language model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional feature models",
"sec_num": "3."
},
{
"text": "\u2022 a word penalty model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional feature models",
"sec_num": "3."
},
{
"text": "\u2022 a source-to-target lexicon model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional feature models",
"sec_num": "3."
},
{
"text": "\u2022 a target-to-source lexicon model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional feature models",
"sec_num": "3."
},
{
"text": "The first of these feature functions is a standard target language model, estimated as an N-gram over the target words, as expressed by this equation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Target language model",
"sec_num": "3.1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p LM (t k ) \u2248 k n=1 p(w n |w n\u22122 , w n\u22121 )",
"eq_num": "(5)"
}
],
"section": "Target language model",
"sec_num": "3.1."
},
{
"text": "where t k refers to the partial translation hypothesis and w n to the n th word in it. Although this model could be trained from a larger monolingual data set, this has not been done for IWSLT'05 experiments, which use as target text the same amount of data used as parallel text. As with the tuple translation model, the SRI Language Modeling toolkit was used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Target language model",
"sec_num": "3.1."
},
{
"text": "Usually, this feature function is accompanied by a word penalty model. This model introduces a sentence length penalty in order to compensate the system's preference for short target sentences, caused by the presence of the previous target language model. This penalization depends on the total number of words contained in the partial translation hypothesis, and it is computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Target language model",
"sec_num": "3.1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p W P (t k ) = exp(number of words in t k )",
"eq_num": "(6)"
}
],
"section": "Target language model",
"sec_num": "3.1."
},
{
"text": "where, again, t k refers to the partial translation hypothesis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Target language model",
"sec_num": "3.1."
},
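Equations (5) and (6) can be sketched as follows. The trigram table is a hypothetical stand-in for an SRILM-trained model, and both features are kept in log space, as they enter the log-linear combination.

```python
import math

def lm_logprob(words, trigram_probs, backoff=1e-4):
    """Trigram target LM of Eq. (5): sum of log p(w_n | w_{n-2}, w_{n-1}),
    padded with <s>. trigram_probs maps (w_{n-2}, w_{n-1}, w_n) -> prob;
    unseen trigrams fall back to a fixed floor (a simplification of real
    backoff smoothing)."""
    padded = ["<s>", "<s>"] + list(words)
    return sum(math.log(trigram_probs.get(tuple(padded[n - 2:n + 1]), backoff))
               for n in range(2, len(padded)))

def word_penalty_log(words):
    """Log of Eq. (6): log exp(#words) = #words, rewarding longer outputs
    to counter the LM's preference for short sentences."""
    return float(len(words))
```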
{
"text": "Finally, the third and fourth feature functions correspond to source-to-target and target-to-source lexicon models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon models",
"sec_num": "3.2."
},
{
"text": "These models use IBM model 1 translation probabilities to compute a lexical weight for each tuple, which accounts for the statistical consistency of the pairs of words inside the tuple. These lexicon models are computed according to the following equation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon models",
"sec_num": "3.2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p IBM 1 ((t, s) n ) = 1 (I + 1) J J j=1 I i=0 p(t i n |s j n )",
"eq_num": "(7)"
}
],
"section": "Lexicon models",
"sec_num": "3.2."
},
{
"text": "where s j n and t i n are the j th and i th words in the source and target sides of tuple (t, s) n , being J and I the corresponding total number words in each side of it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon models",
"sec_num": "3.2."
},
{
"text": "To compute the forward lexicon model, IBM model 1 lexical parameters from GIZA++ source-to-target alignments are used. In the case of the backward lexicon model, GIZA++ target-to-source alignments are used instead.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon models",
"sec_num": "3.2."
},
{
"text": "For decoding given the combination of models presented above, we used MARIE, a decoder implemeting a beam search strategy with distortion (or reordering) capabilities developed at the TALP Research Center [13] . For efficient pruning of the search space, several pruning techniques are used, such as:",
"cite_spans": [
{
"start": 205,
"end": 209,
"text": "[13]",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram based Decoding",
"sec_num": "4."
},
{
"text": "\u2022 Threshold pruning: Hypotheses with lower scores than a certain threshold are eliminated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram based Decoding",
"sec_num": "4."
},
{
"text": "\u2022 Histogram pruning: Only the K-best ranked hypotheses are kept at each search list of states (covering the same words of the input sentence).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram based Decoding",
"sec_num": "4."
},
{
"text": "\u2022 Hypothesis recombination: At each step of the search, two or more hypotheses are recombined if they agree in both the present tuple and the tuple N-gram history.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram based Decoding",
"sec_num": "4."
},
{
"text": "When allowing for reordering, the pruning strategies are not enough to reduce the combinatory explosion without an important loss in translation performance. For this purpose, two reordering strategies are used:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram based Decoding",
"sec_num": "4."
},
{
"text": "\u2022 A distortion limit (m): Any source word (phrase or tuple) is only allowed to be reordered if it does not exceed a distortion limit, measured in words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram based Decoding",
"sec_num": "4."
},
{
"text": "\u2022 A reordering limit (j): Any translation path is only allowed to perform j reordering jumps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram based Decoding",
"sec_num": "4."
},
{
"text": "The use of reordering strategies implies a necessary trade-off between quality and efficiency. Further details of these reordering strategies are given in the experiments reported in section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram based Decoding",
"sec_num": "4."
},
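Histogram pruning and hypothesis recombination can be sketched as follows. This is illustrative only: hypotheses are plain dictionaries, and the recombination key is the last N-1 tuples, i.e. the N-gram history that two hypotheses must share (plus the present tuple, folded into that history here) for the search to treat them as equivalent.

```python
from heapq import nlargest

def histogram_prune(hyps, k):
    """Keep only the K best-scoring hypotheses in one list of states."""
    return nlargest(k, hyps, key=lambda h: h["score"])

def recombine(hyps, ngram_order=3):
    """Merge hypotheses agreeing on the last N-1 tuples: since all future
    model scores depend only on that history, only the best hypothesis of
    each equivalence class needs to be expanded further."""
    best = {}
    for h in hyps:
        key = tuple(h["tuples"][-(ngram_order - 1):])
        if key not in best or h["score"] > best[key]["score"]:
            best[key] = h
    return list(best.values())
```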
{
"text": "The presented system has been evaluated in the framework of the second International Workshop on Spoken Language Translation (IWSLT'05). In the workshop, an Evaluation Campaign has been conducted for five translation directions. Moreover, four different tracks per direction have been proposed, namely using only the supplied corpus (supplied) and allowing the use of NLP tools, additional public data and additional proprietary data, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IWSLT'05 Experiments",
"sec_num": "5."
},
{
"text": "TALP has participated in the Chinese-to-English and Arabic-to-English supplied tracks. Next, details on these experiments are presented.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IWSLT'05 Experiments",
"sec_num": "5."
},
{
"text": "Preprocessing is an optional and language-dependent stage, according to the availability of resources. A minor preprocessing step was carried out in both translation tasks. As evaluation is performed without punctuation marks, we experimented with training without punctuation, but this was discarded as results were equal to or worse than leaving punctuation until a final output postprocessing. Tables 1 and 2 show the main statistics of the supplied data, namely number of sentences, words, vocabulary, and maximum and average sentence lengths for each language, respectively. A development set of 1006 sentences was also supplied, together with 16 reference English translations (CSTAR03 plus IWSLT04 test sets). Note that Arabic refers to the statistics of the re-tokenized Arabic corpus as explained in Section 5.2.",
"cite_spans": [],
"ref_spans": [
{
"start": 397,
"end": 411,
"text": "Tables 1 and 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Corpus and preprocessing",
"sec_num": "5.1."
},
{
"text": "The training of the system is comprised of several stages with the objective of building the four models used by the system. The histograms in Figure 3 show the number of tuples found in the corpus over the tuple size for both translation tasks. The preprocessing stage was only performed on the Arabic side of the corpus, and apart from standard punctuation marks, it aims at separating prefixes (such as the article) that would highly increase the vocabulary size if considered as parts of words. In detail, we produce a hard separation of all words starting with and (as + ), in order to separate articles from words. Note that this process is neither guided by tagging information (it does not use any tagging software) nor complete (several other Arabic particles are usually attached to words). However, it already produces a significative vocabulary reduction, leading to improved performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 143,
"end": 151,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Training details",
"sec_num": "5.2."
},
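The hard prefix separation can be sketched as below. Since the Arabic characters are not legible in this parse, the prefix list here is a hypothetical Buckwalter-style transliteration stand-in (e.g. the article 'Al'), not the paper's actual list.

```python
def split_prefixes(token, prefixes=("Al", "w")):
    """Hard separation of clitic prefixes, marked with a trailing '+'.

    The prefix inventory is hypothetical (Buckwalter-style); a prefix is
    split off only if a non-empty stem remains, so bare prefixes and short
    words are left untouched."""
    for p in prefixes:
        if token.startswith(p) and len(token) > len(p):
            return [p + "+", token[len(p):]]
    return [token]
```

Applied over the whole corpus, this shrinks the vocabulary because, for example, a noun and its article-prefixed form no longer count as two distinct word types.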
{
"text": "During word alignment, IBM model 1 tables are used directly to compute the lexicon feature. Finally, in order to learn the target and the tuples language models we used SRILM [14] . All models were learnt using interpolation of higher and lower order n-grams with Knesser-Ney [15] smoothing.",
"cite_spans": [
{
"start": 175,
"end": 179,
"text": "[14]",
"ref_id": "BIBREF13"
},
{
"start": 276,
"end": 280,
"text": "[15]",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training details",
"sec_num": "5.2."
},
{
"text": "Several configurations were tested on the development set optimizing BLEU, namely baseline and three alternatives. Results are shown in table 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development work",
"sec_num": "5.3."
},
{
"text": "The baseline configuration system is built using:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development work",
"sec_num": "5.3."
},
{
"text": "\u2022 The union alignment [11] to extract unfolded tuples and the intersection to solve embedded words.",
"cite_spans": [
{
"start": 22,
"end": 26,
"text": "[11]",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Development work",
"sec_num": "5.3."
},
{
"text": "\u2022 All source-nulled tuples are linked the the target word of the next tuple.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development work",
"sec_num": "5.3."
},
{
"text": "\u2022 The order of the target and the translation Ngram language models is set to 4 and 3, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development work",
"sec_num": "5.3."
},
{
"text": "\u2022 The reordering parameters of the decoder are fixed to m = 5 and j = 3 for the Chinese-to-English task, and m = 3, j = 3 for the Arabic-to-English task. This settings suppose a necessary trade-off between quality and efficiency. As reordering is not so critical in the Arabic task and does not produce any big improvement in quality, a smaller distortion distance limit is used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development work",
"sec_num": "5.3."
},
{
"text": "\u2022 Three alternative configurations have been studied. In 4grBM the order of the translation Ngram language model is increased to 4. In NULLibm, the 4grBM configuration is improved by solving source-nulled tuples following the method described in 2.1. Finally, the NULLibm configuration is further extended in sAt, where the source-to-target alignment is also used for tuple extraction (together with the union). This way, the tuple language model is learnt from the concatenation of those tuples extracted from the union alignment and those from the source-to-target alignment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development work",
"sec_num": "5.3."
},
{
"text": "Even though the use of 4grBM does not seem to produce any change in quality, we decided to include this in our experiments based on previous development work with a different BLEU score implementation (used in IWSLT'04), where significant improvements were obtained when compared to the baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development work",
"sec_num": "5.3."
},
{
"text": "In the Chinese-to-English task (zh2en), the best BLEU results are obtained when using the sAt configuration, which is built using all the additional features (4-grams in the bilingual LM, solving source-nulled tuples using the IBM-1 lexicon model, and making use of the additional source-to-target alignment).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development work",
"sec_num": "5.3."
},
{
"text": "On the contrary, the Arabic-to-English task (ar2en) does not seem to take advantage from any of the additional features except for the introduction of the sourceto-target alignment in sAt.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development work",
"sec_num": "5.3."
},
{
"text": "We observe a clear contradiction regarding BLEU and the other scores when adding the source-to-target alignemnt sAt in both translation tasks (see the increase in mWER for zh2en and decrease in NIST for ar2en). Trying to understand this situation, we performed a mWER optimization using two configurations, sAt and NULLibm. Results When optimizing mWER (see Table 4 ), the Chineseto-English task shows a clear improvement when using sAt (measured in mWER and BLEU) at the cost of a lower NIST scores. While in the Arabic-to-English task, a very slight improvement is achieved (measured in mWER) while worst scores are obtained for both BLEU and NIST.",
"cite_spans": [
{
"start": 324,
"end": 331,
"text": "Results",
"ref_id": null
}
],
"ref_spans": [
{
"start": 358,
"end": 365,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Development work",
"sec_num": "5.3."
},
{
"text": "To outline this contradiction, four different test set runs were submitted for each language pair, namely the optimizations of BLEU and mWER for both the NULLibm and sAt configurations. As primary submission, we selected the sAt configuration with weights op-timized maximizing BLEU. The secondary submission consists of the NULLibm configuration with weights optimized minimizing mWER.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development work",
"sec_num": "5.3."
},
{
"text": "The optimizations were performed using an in-house developed tool based on the simplex method [16] .",
"cite_spans": [
{
"start": 94,
"end": 98,
"text": "[16]",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Development work",
"sec_num": "5.3."
},
{
"text": "The evaluation scores of the TALP-Ngram system (primary and secondary submissions), obtained in both translation tasks are shown in table 5. In Table 6 As it can be observed, the BLEU and NIST scores are correlated for both dev and test sets in the zh2en task, both improving in the primary run. However, they are incorrelated for both dev and test sets in the ar2en task.",
"cite_spans": [],
"ref_spans": [
{
"start": 144,
"end": 151,
"text": "Table 6",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Test set results",
"sec_num": "5.4."
},
{
"text": "When studying the test results, we can note that the Chinese test set seems to be 'easier' to translate than the development (obtaining higher scores), whereas the effect is opposite in the case of Arabic. This behaviour could easily be explained by the nature of the data. However, when comparing the two TALP systems which competed in the same tracks and under the same conditions (TALP-Ngram and TALP-Phrase [17] ), a surprisingly different behaviour between development and test can be found. Regarding development results, the TALP-Ngram system improves the performance of the TALP-Phrase system (table 6) in the Chinese-to-English task (0.384 > 0.373), while it achieves the same score in the Arabic-to-English task (0.573 \u2248 0.572), both measured in BLEU. However, regarding the test set, the TALP-Ngram system is clearly beaten by the TALP-Phrase system in both tasks (0.444 < 0.452 in Chinese-to-English, and 0.533 < 0.573 in Arabic-to-English). Experiments have been conducted in order to find out the reason explaining this different behaviour.",
"cite_spans": [
{
"start": 411,
"end": 415,
"text": "[17]",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6."
},
{
"text": "The results obtained by both systems are shown (primary submissions are only discussed) in table 6 .",
"cite_spans": [
{
"start": 97,
"end": 98,
"text": "6",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6."
},
{
"text": "The same decoder (MARIE [13] ), optimization tool [16] 1 lexicon model, reordering model, word penalty) were used in both systems. Furthermore, the same additional tokenization was performed on the Arabic source side of the corpus. Differences are found on the bilingual units used (tuples versus phrases), their translation models (Ngram LM versus relative frequencies), and two additional models used by the TALP-Phrase system (a phrase penalty and a relative frequency translation model computed from target to source).",
"cite_spans": [
{
"start": 24,
"end": 28,
"text": "[13]",
"ref_id": "BIBREF12"
},
{
"start": 50,
"end": 54,
"text": "[16]",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6."
},
{
"text": "Comparing the model weights obtained by both systems after the optimization (shown in table 7), we can see how the TALP-Ngram system does not make use of any of the IBM-1 lexicon models in the Arabic-to-English translation task. The percentage of these tuples which are unigrams (uncontextual, and usually leading to errors) is also similar. Therefore, no conclusion can be drawn. Arabic test, being the figures approximately the double as with the dev set. Additionally, Table 11 presents the number of output words produced by the TALP-Ngram and TALP-Phrase systems, as our experience is that differences in length may produce differences in BLEU score. However, whereas the TALP-Phrase always produces shorter outputs in Chinese-to-English, the behaviour is opposite in Arabic, without inconsistencies between dev and test.",
"cite_spans": [],
"ref_spans": [
{
"start": 472,
"end": 480,
"text": "Table 11",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6."
},
{
"text": "Therefore, we have not yet found a reason to explain the difference in performance regarding development and test sets (perhaps just an artifact of the corpora?). Further research should be conducted to explain such a behaviour.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6."
},
{
"text": "Another point of discussion is the optimization procedure. It seems to be a weak point of current SMT systems. The use of optimization algorithms like simplex [16] showed to be effective when applied over spaces with two or three dimensions. Current SMT systems are built using more than four additional models which have to be optimized at the same time.",
"cite_spans": [
{
"start": 159,
"end": 163,
"text": "[16]",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6."
},
{
"text": "Optimization over spaces with many dimensions conveys a lot of local maxima, which are typically solved through a limited number of restarts. This situation makes the final optimization highly dependent of the initial point, which is very often chosen almost randomly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6."
},
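The dependence on the initial point can be illustrated with a toy example. The sketch below is hypothetical code, not the in-house simplex tool of [16]: it maximizes a two-bump surface over two "model weights" with a greedy local search (standing in for Nelder-Mead) plus random restarts. A single start may stall on the lower local maximum, while restarts make reaching the global one far more likely.

```python
import random

def objective(w):
    # Toy multimodal surface over two model weights: global maximum near
    # (2, 2) with height ~1.0, local maximum near (-2, -2) with height ~0.6.
    x, y = w
    def bump(cx, cy, h):
        return h / (1.0 + (x - cx) ** 2 + (y - cy) ** 2)
    return bump(2.0, 2.0, 1.0) + bump(-2.0, -2.0, 0.6)

def hill_climb(start, step=0.5, iters=200):
    """Greedy local search: move to a better axis neighbour; refine step when stuck."""
    best, best_f = list(start), objective(start)
    for _ in range(iters):
        moved = False
        for dx, dy in ((step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)):
            cand = [best[0] + dx, best[1] + dy]
            f = objective(cand)
            if f > best_f:
                best, best_f, moved = cand, f, True
        if not moved:
            step *= 0.5          # refine the step instead of giving up
            if step < 1e-4:
                break
    return best, best_f

def optimize_with_restarts(n_restarts, seed=0):
    rng = random.Random(seed)
    runs = [hill_climb([rng.uniform(-4.0, 4.0), rng.uniform(-4.0, 4.0)])
            for _ in range(n_restarts)]
    return max(runs, key=lambda r: r[1])  # keep the best local maximum found

w1, f1 = optimize_with_restarts(1)     # a single start: outcome depends on luck
w20, f20 = optimize_with_restarts(20)  # restarts cover the space far better
```

With a fixed seed the run is deterministic; in real weight tuning each restart costs a full decoding plus evaluation, which is why the number of restarts is kept small in practice.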
{
"text": "In this paper we have presented the TALP Ngram-based statistical machine translation system (TALP-Ngram). Description and training details have been shown for the IWSLT'05 evaluation workshop, consisting of a Chinese-to-English and an Arabic-to-English translation tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and further work",
"sec_num": "7."
},
{
"text": "Two configurations have been submitted for each translation task in order to outline the contradiction between BLEU and mWER, and the contradiction between BLEU and NIST (clearly unexpected as both account for a weighted match of word Ngrams).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and further work",
"sec_num": "7."
},
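The shared n-gram core of the two metrics can be made concrete. The sketch below is illustrative, not the official IWSLT scoring tools: it computes the modified (clipped) n-gram precision that BLEU geometrically averages over n = 1..4; NIST instead weights each matched n-gram by its information gain, which is one way the two scores can diverge.

```python
from collections import Counter

def ngrams(tokens, n):
    # Multiset of all n-grams in the token sequence.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def modified_precision(hyp, ref, n):
    # Each hypothesis n-gram is credited at most as often as it appears in
    # the reference ("clipping"), then divided by the hypothesis n-gram count.
    h, r = ngrams(hyp, n), ngrams(ref, n)
    overlap = sum(min(count, r[g]) for g, count in h.items())
    total = sum(h.values())
    return overlap / total if total else 0.0

# Clipping penalizes degenerate repetition: "the the the the" vs "the cat sat"
p1 = modified_precision("the the the the".split(), "the cat sat".split(), 1)  # 0.25
```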
{
"text": "Results have been presented, highlighting the strong differences in behaviour found between the development and test sets, when compared to another participating system (TALP-Phrase).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and further work",
"sec_num": "7."
},
{
"text": "Future work is necessary to overcome problems such as the occurrence of NULL words in the translation units and the optimization process with high dimensional spaces.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and further work",
"sec_num": "7."
}
],
"back_matter": [
{
"text": "This work has been partially funded by the European Union under the integrated project TC-STAR -Technology and Corpora for Speech to Speech Translation -(IST-2002-FP6-506738, http://www.tc-star.org), by the Spanish Government under grant TIC2002-04447-C02 (ALIADO project), by the Dep.of Universities, Research and Information Society (Generalitat de Catalunya) and by the Universitat Polit\u00e8cnica de Catalunya under grant UPC-RECERCA.The authors want to thank Marta Ruiz Costa-juss\u00e0 (member of the TALP Research Center) for her valuable contribution to the comparison with the TALP-Phrase system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "8."
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The mathematics of statistical machine translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Brown, S. Della Pietra, V. Della Pietra, and R. Mercer, \"The mathematics of statistical machine translation,\" Computational Linguistics, vol. 19, no. 2, pp. 263-311, 1993.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Phrase-based statistical machine translation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2002,
"venue": "KI -2002: Advances in artificial intelligence",
"volume": "2479",
"issue": "",
"pages": "18--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Zens, F. Och, and H. Ney, \"Phrase-based statis- tical machine translation,\" in KI -2002: Advances in artificial intelligence, M. Jarke, J. Koehler, and G. Lakemeyer, Eds. Springer Verlag, September 2002, vol. LNAI 2479, pp. 18-32.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Statistical phrasebased translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of the Human Language Technology Conference, HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Koehn, F. Och, and D. Marcu, \"Statistical phrase- based translation,\" Proc. of the Human Language Technology Conference, HLT-NAACL'2003, May 2003.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A maximum entropy approach to natural language processing",
"authors": [
{
"first": "A",
"middle": [],
"last": "Berger",
"suffix": ""
},
{
"first": "S",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "1",
"pages": "39--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Berger, S. Della Pietra, and V. Della Pietra, \"A maximum entropy approach to natural language processing,\" Computational Linguistics, vol. 22, no. 1, pp. 39-72, March 1996.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Statistical machine translation of euparl data by using bilingual n-grams",
"authors": [
{
"first": "R",
"middle": [
"E"
],
"last": "Banchs",
"suffix": ""
},
{
"first": "J",
"middle": [
"M"
],
"last": "Crego",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gispert",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Lambert",
"suffix": ""
},
{
"first": "J",
"middle": [
"B"
],
"last": "Mari\u00f1o",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of the ACL Workshop on Building and Using Parallel Texts (ACL'05/Wkshp)",
"volume": "",
"issue": "",
"pages": "67--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. E. Banchs, J. M. Crego, A. de Gispert, P. Lam- bert, and J. B. Mari\u00f1o, \"Statistical machine trans- lation of euparl data by using bilingual n-grams,\" Proc. of the ACL Workshop on Building and Us- ing Parallel Texts (ACL'05/Wkshp), pp. 67-72, June 2005.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bilingual n-gram statistical machine translation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Mari\u00f1o",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Banchs",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Crego",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gispert",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Lambert",
"suffix": ""
},
{
"first": "M",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Fonollosa",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of the MT Summit X",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Mari\u00f1o, R. Banchs, J. Crego, A. de Gispert, P. Lambert, M. R. Costa-juss\u00e0, and J. Fonollosa, \"Bilingual n-gram statistical machine translation,\" Proc. of the MT Summit X, September 2005.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Finite-state speech-to-speech translation",
"authors": [
{
"first": "E",
"middle": [],
"last": "Vidal",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. of 1997 IEEE Int. Conf. on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "111--114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Vidal, \"Finite-state speech-to-speech transla- tion,\" Proc. of 1997 IEEE Int. Conf. on Acoustics, Speech and Signal Processing, pp. 111-114, 1997.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Using X-grams for speech-to-speech translation",
"authors": [
{
"first": "A",
"middle": [],
"last": "De Gispert",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mari\u00f1o",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of the 7th Int. Conf. on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. de Gispert and J. Mari\u00f1o, \"Using X-grams for speech-to-speech translation,\" Proc. of the 7th Int. Conf. on Spoken Language Processing, ICSLP'02, September 2002.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Xgram-based spoken language translation system",
"authors": [
{
"first": "--",
"middle": [],
"last": "Talp",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of the Int. Workshop on Spoken Language Translation",
"volume": "",
"issue": "",
"pages": "85--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "--, \"Talp: Xgram-based spoken language trans- lation system,\" Proc. of the Int. Workshop on Spoken Language Translation, IWSLT'04, pp. 85-90, Octo- ber 2004.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Finitestate-based and phrase-based statistical machine translation",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Crego",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mari\u00f1o",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "De Gispert",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of the 8th Int. Conf. on Spoken Language Processing, ICSLP'04",
"volume": "",
"issue": "",
"pages": "37--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. M. Crego, J. Mari\u00f1o, and A. de Gispert, \"Finite- state-based and phrase-based statistical machine translation,\" Proc. of the 8th Int. Conf. on Spoken Language Processing, ICSLP'04, pp. 37-40, Octo- ber 2004.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Improved statistical alignment models",
"authors": [
{
"first": "F",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2000,
"venue": "38th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "440--447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Och and H. Ney, \"Improved statistical alignment models,\" 38th Annual Meeting of the Association for Computational Linguistics, pp. 440-447, Octo- ber 2000.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Reordered search and tuple unfolding for ngram-based smt",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Crego",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mari\u00f1o",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gispert",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of the MT Summit X",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. M. Crego, J. Mari\u00f1o, and A. Gispert, \"Reordered search and tuple unfolding for ngram-based smt,\" Proc. of the MT Summit X, September 2005.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "An ngram-based statistical machine translation decoder",
"authors": [],
"year": 2005,
"venue": "Proc. of the 9th European Conference on Speech Communication and Technology, Interspeech'05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "--, \"An ngram-based statistical machine transla- tion decoder,\" Proc. of the 9th European Conference on Speech Communication and Technology, Inter- speech'05, September 2005.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Srilm -an extensible language modeling toolkit",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of the 7th Int. Conf. on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Stolcke, \"Srilm -an extensible language model- ing toolkit,\" Proc. of the 7th Int. Conf. on Spoken Language Processing, ICSLP'02, September 2002.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "An Empirical Study of Smoothing techniques for Language Modeling",
"authors": [
{
"first": "S",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of 34th ACL",
"volume": "",
"issue": "",
"pages": "310--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Chen and J. Goodman, \"An Empirical Study of Smoothing techniques for Language Modeling,\" in Proceedings of 34th ACL, San Francisco, July 1996, pp. 310-318.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A simplex method for function minimization",
"authors": [
{
"first": "J",
"middle": [],
"last": "Nelder",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mead",
"suffix": ""
}
],
"year": 1965,
"venue": "The Computer Journal",
"volume": "7",
"issue": "",
"pages": "308--313",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Nelder and R. Mead, \"A simplex method for func- tion minimization,\" The Computer Journal, vol. 7, pp. 308-313, 1965.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Tuning a phrase-based statistical translation system for the iwslt 2005 chinese to english and arabic to english tasks",
"authors": [
{
"first": "M",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Fonollosa",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. R. Costa-juss\u00e0 and J. Fonollosa, \"Tuning a phrase-based statistical translation system for the iwslt 2005 chinese to english and arabic to english tasks,\" IWSLT05, October 2005.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Example of tuple extraction from an aligned bilingual sentence pair.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Example of tuple and unfolded (targetreordered) tuple extraction.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Number of tuples found in training over the tuple size for each translation task.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"content": "<table><tr><td>supplied</td><td>sent.</td><td>words</td><td colspan=\"3\">voc. Lmax Lavg</td></tr><tr><td>Train set</td><td/><td/><td/><td/><td/></tr><tr><td>Chinese English</td><td colspan=\"3\">176,199 8,687 20,000 182,257 7,316</td><td>68 75</td><td>8.81 9.11</td></tr><tr><td colspan=\"2\">Development set</td><td/><td/><td/><td/></tr><tr><td>Chinese</td><td>1006</td><td>7,309</td><td>1,384</td><td>62</td><td>7.27</td></tr><tr><td>Test set</td><td/><td/><td/><td/><td/></tr><tr><td>Chinese</td><td>506</td><td>3,743</td><td>963</td><td>56</td><td>7.4</td></tr><tr><td>supplied</td><td>sent.</td><td>words</td><td>voc.</td><td colspan=\"2\">Lmax Lavg</td></tr><tr><td>Train set</td><td/><td/><td/><td/><td/></tr><tr><td>Arabic</td><td/><td colspan=\"2\">131,712 25,186</td><td>50</td><td>6.59</td></tr><tr><td colspan=\"4\">Arabic' 20,000 180,477 15,956</td><td>70</td><td>9.02</td></tr><tr><td>English</td><td/><td colspan=\"2\">182,257 7,316</td><td>75</td><td>9.11</td></tr><tr><td colspan=\"2\">Development set</td><td/><td/><td/><td/></tr><tr><td>Arabic Arabic'</td><td>1006</td><td>5,291 7,217</td><td>2,353 1,884</td><td>50 68</td><td>5.26 7.17</td></tr><tr><td>Test set</td><td/><td/><td/><td/><td/></tr><tr><td>Arabic Arabic'</td><td>506</td><td>2,607 3,632</td><td>1,387 1,179</td><td>46 57</td><td>5.13 7.15</td></tr></table>",
"text": "Chi-Eng supplied corpus statistics. There are 257 and 155 unseen words in the dev and test sets.",
"type_str": "table",
"html": null,
"num": null
},
"TABREF2": {
"content": "<table/>",
"text": "",
"type_str": "table",
"html": null,
"num": null
},
"TABREF4": {
"content": "<table/>",
"text": "Evaluation results (development set) when optimizing BLEU in both translation tasks.",
"type_str": "table",
"html": null,
"num": null
},
"TABREF6": {
"content": "<table/>",
"text": "Evaluation results (development set) when optimizing mWER in both translation tasks.",
"type_str": "table",
"html": null,
"num": null
},
"TABREF8": {
"content": "<table><tr><td>obtained by the TALP-Ngram system in</td></tr><tr><td>both translation tasks. Two runs were submitted for each</td></tr><tr><td>task.</td></tr></table>",
"text": "Results",
"type_str": "table",
"html": null,
"num": null
},
"TABREF10": {
"content": "<table><tr><td>obtained by the two TALP systems par-</td></tr><tr><td>ticipating in IWSLT'05 (on dev and test sets) in both</td></tr><tr><td>translation tasks. Note that mWER and PER scores</td></tr><tr><td>are computed using different implementations in devel-</td></tr><tr><td>opment and test. Test scores are all computed using the</td></tr><tr><td>IWSLT'05 official scores.</td></tr></table>",
"text": "Results",
"type_str": "table",
"html": null,
"num": null
},
"TABREF12": {
"content": "<table><tr><td>weights used by the TALP Phrase and</td></tr><tr><td>Ngram systems in primary runs. Bilingual model weights</td></tr><tr><td>are always set to 1, and the rest of weights are (from top</td></tr><tr><td>to bottom): target LM, word penalty, reordering model,</td></tr><tr><td>IBM-1 lexicon models (source-to-target and target-to-</td></tr><tr><td>source), target-to-source bilingual model (computed us-</td></tr><tr><td>ing relative frequencies) and phrase penalty.</td></tr></table>",
"text": "ModelFor cross-validation, the development was divided into two subsets (dev1, ie. 500 CSTAR'03 sentences and dev2, ie. 506 IWSLT'04 sentences), and optimiza-tions were performed with each dev subset, evaluating on the test set. Results are shown in table 8. As it can be seen, the tendency remains the same when optimizing with dev1, dev2 or fulldev, leading to a surprising decrease in Arabic-to-English performance in the test set.",
"type_str": "table",
"html": null,
"num": null
},
"TABREF13": {
"content": "<table><tr><td colspan=\"3\">zh2en tpl2NULL %1gr</td></tr><tr><td>dev</td><td>1396</td><td>38.9</td></tr><tr><td>test</td><td>643</td><td>36.4</td></tr><tr><td colspan=\"3\">ar2en tpl2NULL %1gr</td></tr><tr><td>dev</td><td>1554</td><td>38.4</td></tr><tr><td>test</td><td>833</td><td>36.9</td></tr></table>",
"text": "BLEU score computed over different sets optimizing with different dev sets.A further comparison was performed in terms of the units used when translating development and test sets. The experience of the authors is that in many cases, translation errors are related to tuples with NULL in the target side. Therefore, table 9 studies the number of these units used in translating the dev and test sets. However, no relevant difference can be observed. As the development size is approximately double the test size, the same happens with tuples to NULL.",
"type_str": "table",
"html": null,
"num": null
},
"TABREF14": {
"content": "<table/>",
"text": "Number of translation units used with NULL in the target side, and the percentage of these units translated as 1grams.",
"type_str": "table",
"html": null,
"num": null
},
"TABREF15": {
"content": "<table><tr><td>shows the number of tuples used as 1-</td></tr><tr><td>grams, 2-grams, 3-grams and 4-grams by the TALP-</td></tr><tr><td>Ngram in both translation tasks regarding development</td></tr><tr><td>and test sets. Again, no special difference is found in the</td></tr></table>",
"text": "",
"type_str": "table",
"html": null,
"num": null
},
"TABREF16": {
"content": "<table><tr><td colspan=\"3\">used by the TALP-Ngram system in</td></tr><tr><td colspan=\"3\">both translation tasks when translating the development</td></tr><tr><td>and test sets.</td><td/><td/></tr><tr><td/><td colspan=\"2\">zh2en ar2en</td></tr><tr><td>system set</td><td colspan=\"2\">words words</td></tr><tr><td>Ngram dev</td><td>5581</td><td>4983</td></tr><tr><td>Phrase dev</td><td>5325</td><td>5647</td></tr><tr><td>Ngram test</td><td>2913</td><td>2421</td></tr><tr><td>Phrase test</td><td>2810</td><td>2750</td></tr></table>",
"text": "Ngrams",
"type_str": "table",
"html": null,
"num": null
},
"TABREF17": {
"content": "<table/>",
"text": "Number of words output by the TALP-Ngram and TALP-Phrase systems in both translation tasks when translating the development and test sets.",
"type_str": "table",
"html": null,
"num": null
}
}
}
}