{
"paper_id": "2005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:20:08.735172Z"
},
"title": "Using Multiple Recognition Hypotheses to Improve Speech Translation",
"authors": [
{
"first": "Ruiqiang",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "ATR Spoken Language Translation Research Laboratories",
"institution": "",
"location": {
"addrLine": "2-2 Hikaridai, Seiika-cho, Soraku-gun",
"postCode": "619-0288",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "ruiqiang.zhang@atr.jp"
},
{
"first": "Genichiro",
"middle": [],
"last": "Kikui",
"suffix": "",
"affiliation": {
"laboratory": "ATR Spoken Language Translation Research Laboratories",
"institution": "",
"location": {
"addrLine": "2-2 Hikaridai, Seiika-cho, Soraku-gun",
"postCode": "619-0288",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "genichiro.kikui@atr.jp"
},
{
"first": "Hirofumi",
"middle": [],
"last": "Yamamoto",
"suffix": "",
"affiliation": {
"laboratory": "ATR Spoken Language Translation Research Laboratories",
"institution": "",
"location": {
"addrLine": "2-2 Hikaridai, Seiika-cho, Soraku-gun",
"postCode": "619-0288",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "hirofumi.yamamoto@atr.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes our recent work on integrating speech recognition and machine translation for improving speech translation performance. Two approaches are applied and their performance are evaluated in the workshop of IWSLT 2005. The first is direct N-best hypothesis translation, and the second, a pseudo-lattice decoding algorithm for translating word lattice, can dramatically reduce computation cost incurred by the first approach. We found in the experiments that both of these approaches could improve speech translation significantly.",
"pdf_parse": {
"paper_id": "2005",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes our recent work on integrating speech recognition and machine translation for improving speech translation performance. Two approaches are applied and their performance are evaluated in the workshop of IWSLT 2005. The first is direct N-best hypothesis translation, and the second, a pseudo-lattice decoding algorithm for translating word lattice, can dramatically reduce computation cost incurred by the first approach. We found in the experiments that both of these approaches could improve speech translation significantly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "At least two components are involved in speech to speech translation: automatic speech recognizer and machine translation. Unlike plain text translation, the performance of speech translation may be degraded due to the speech recognition errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Several approaches have been proposed to compensate for the loss of recognition accuracy. [1] proposed N -best recognition hypothesis translation, which translates all the top N hypotheses and then outputs the highest scored translations by ways of weighing all the translations using a log-linear model. [2] used word lattices to improve translations. [3] used finite state transducers (FST) to convey the features from acoustic analysis and source target translation models. All these approaches realized an integration between speech recognition modules and machine translation modules so that information from speech recognition, such as acoustic model score and language model score, can be exploited in the translation module to achieve the maximum performance over the single-best translation.",
"cite_spans": [
{
"start": 90,
"end": 93,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 305,
"end": 308,
"text": "[2]",
"ref_id": "BIBREF1"
},
{
"start": 353,
"end": 356,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In the field of machine translation, the phrase-based statistical machine translation approach is widely accepted at present. The related literature can be found in [4] [5] . But previously, word-based statistical machine translation, pioneered by IBM Models 1 to 5 [6] , were used widely. In the evaluation, we used both the word-based and phrase-based systems. However, the purpose of this work is not to compare performance of word-based with phrase-based translation. We used two system for different translations. The phrase-based SMT is used in Chinese-English translation while the word-based SMT is used in Japanese-English translation.",
"cite_spans": [
{
"start": 165,
"end": 168,
"text": "[4]",
"ref_id": "BIBREF3"
},
{
"start": 169,
"end": 172,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 266,
"end": 269,
"text": "[6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In this paper we describe two speech translation structures. The first is a direct N-best hypothesis translation system that uses a text-based machine translation engine to translate each of the hypotheses, and the results are rescored by a log-linear model. The second is a pseudolattice translation system, merging the N -best hypotheses into a compact pseudo-lattice which serves as an input to our proposed decoding algorithm for lattice translation. This algorithm runs much faster than the first approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In the following, Section 2 describes the direct N-best hypothesis translation. Section 3 describes the pseudolattice translation. Section 4 introduces the experimental process and translation results in the evaluation of IWSLT2005. Section 5 presents our conclusions concerning the techniques, and some final remarks are given.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The structure of the direct N-best hypothesis translation is illustrated in Fig. 1 , where there are three modules, an automatic speech recognizer(ASR), a statistical machine translation(SMT), and a log-linear model rescore(Rescore). This structure is used in Chinese to English translation in the evaluation.",
"cite_spans": [],
"ref_spans": [
{
"start": 76,
"end": 82,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Direct N-best hypothesis translation",
"sec_num": "2."
},
{
"text": "ASR functions as a decoder to retrieve the source transcript from input speech. The input is a speech signal, X. The output is a source sentence, J. The mechanism of ASR is based on HMM pattern recognition. The acoustic models and language models of the source language are required in the decoding. Because speech recognition errors are unavoidable, ASR outputs multiple hypotheses, the top N-best, to increase the accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ASR: automatic speech recognition",
"sec_num": "2.1."
},
{
"text": "The SMT module is to translate the source language, J, into target language, E. A phrase-based statistical machine translation decoder was used in the evaluation. The decoding process is carried out in three steps: First, a word graph is created by beam-search where phrase translation models and trigram models are used to extend beams. Second, A* search is used to find the top N-best paths in the word graph. Finally, long-range(> 3) language models are used to rescore the N-best candidates and output the best one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SMT: statistical machine translation",
"sec_num": "2.2."
},
{
"text": "In order to collect source-target translation pairs, we used GIZA++ to do bi-directional alignment, similar to [5] . In one direction alignment, one source word is aligned to multiple target words; In the other direction, one target word is aligned to multiple source words. Finally, the bidirectional alignment are merged and the phrase pairs are extracted from the overlapping alignments.",
"cite_spans": [
{
"start": 111,
"end": 114,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SMT: statistical machine translation",
"sec_num": "2.2."
},
{
"text": "The translation probability of translation pairs were computed by relative frequency, counting the co-occurrences of the pairs in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SMT: statistical machine translation",
"sec_num": "2.2."
},
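{
"text": "For illustration only, the following Python sketch shows how the phrase translation probabilities described above could be estimated by relative frequency from a list of extracted phrase pairs; the input format and function names are hypothetical, not the actual system's data structures.\n\nfrom collections import Counter, defaultdict\n\ndef relative_frequency(phrase_pairs):\n    # phrase_pairs: list of (source_phrase, target_phrase) tuples extracted\n    # from the merged bidirectional alignments (illustrative input format)\n    pair_counts = Counter(phrase_pairs)\n    source_counts = Counter(src for src, _ in phrase_pairs)\n    # p(target | source) = count(source, target) / count(source)\n    table = defaultdict(dict)\n    for (src, tgt), c in pair_counts.items():\n        table[src][tgt] = c / source_counts[src]\n    return table\n\n# toy usage\npairs = [('wo xiang', 'i would like'), ('wo xiang', 'i want'), ('wo xiang', 'i would like')]\nprint(relative_frequency(pairs)['wo xiang'])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SMT: statistical machine translation",
"sec_num": "2.2."
},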
{
"text": "Loglinear models are applied to rescore the translations which are produced by SMT. The model integrates features from both ASR and SMT. We used three features from ASR and 10 features from SMT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rescoring: log-linear model rescoring",
"sec_num": "2.3."
},
{
"text": "The log-linear model used in our speech translation process, P (E|X), is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rescoring: log-linear model rescoring",
"sec_num": "2.3."
},
{
"text": "P \u039b (E|X) = exp( M i=1 \u03bb i f i (X, E)) E exp( M i=1 \u03bb i f i (X, E )) \u039b = {\u03bb M 1 }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rescoring: log-linear model rescoring",
"sec_num": "2.3."
},
{
"text": "(1) Features from ASR include acoustic model score, source language model score, and posterior probability calculated as below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rescoring: log-linear model rescoring",
"sec_num": "2.3."
},
{
"text": "P (X|J k )P (J k ) Ji P (X|J i )P (J i ) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rescoring: log-linear model rescoring",
"sec_num": "2.3."
},
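{
"text": "As an illustrative sketch only (the variable names and the use of log-domain scores are our assumptions, not the paper's implementation), the ASR posterior probability of Eq. 2 can be computed from per-hypothesis acoustic and language model log scores with a numerically stable normalization:\n\nimport math\n\ndef asr_posteriors(log_acoustic, log_lm):\n    # log_acoustic[i] and log_lm[i] stand for log P(X|J_i) and log P(J_i)\n    joint = [a + l for a, l in zip(log_acoustic, log_lm)]\n    m = max(joint)  # log-sum-exp trick for numerical stability\n    denom = m + math.log(sum(math.exp(j - m) for j in joint))\n    return [math.exp(j - denom) for j in joint]\n\nprint(asr_posteriors([-120.0, -123.5, -125.0], [-30.0, -29.0, -31.0]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rescoring: log-linear model rescoring",
"sec_num": "2.3."
},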
{
"text": "Features from SMT include target word language model score, class language model score, target phrase language model, phrase translation model, distortion model, length model (defined as the number of words in the target), deletion model (defined as the NULL word alignment), lexicon model (obtained from GIZA++), and size model (representing the size of jump between two phrases.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rescoring: log-linear model rescoring",
"sec_num": "2.3."
},
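{
"text": "A minimal sketch of the rescoring in Eq. 1, assuming each N-best translation candidate carries a feature vector that concatenates the ASR and SMT features listed above; the candidate strings and feature values are hypothetical.\n\ndef rescore_nbest(candidates, weights):\n    # candidates: list of (translation, features), where features holds the\n    # log-domain feature values f_i(X, E); weights are the lambda_i of Eq. 1.\n    # The denominator of Eq. 1 is constant over candidates, so ranking only\n    # needs the weighted feature sum.\n    def score(features):\n        return sum(w * f for w, f in zip(weights, features))\n    return max(candidates, key=lambda c: score(c[1]))\n\nbest = rescore_nbest(\n    [('i would like a room', [-120.0, -30.0, 0.6, -14.2]),\n     ('i want a room', [-123.5, -29.0, 0.3, -13.8])],\n    [0.8, 1.0, 2.5, 1.0])\nprint(best[0])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rescoring: log-linear model rescoring",
"sec_num": "2.3."
},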
{
"text": "For the optimal value of \u03bb, our goal is to minimize the translation \"distortion\" between the reference translations, R, and the translated sentences, E.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rescoring: log-linear model rescoring",
"sec_num": "2.3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03bb M 1 = optimize D( E, R)",
"eq_num": "(3)"
}
],
"section": "Rescoring: log-linear model rescoring",
"sec_num": "2.3."
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rescoring: log-linear model rescoring",
"sec_num": "2.3."
},
{
"text": "E = { E 1 , \u2022 \u2022 \u2022 , E L } is a set of translations of all utterances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rescoring: log-linear model rescoring",
"sec_num": "2.3."
},
{
"text": "The translation E l of the l-th utterance is produced by Eq. 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rescoring: log-linear model rescoring",
"sec_num": "2.3."
},
{
"text": "Let R = {R 1 , \u2022 \u2022 \u2022 , R L }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rescoring: log-linear model rescoring",
"sec_num": "2.3."
},
{
"text": "be the set of translation references for all utterances. Human translators paraphrased 16 reference sentences for each utterance, i.e., R l contains 16 reference candidates for the l-th utterance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rescoring: log-linear model rescoring",
"sec_num": "2.3."
},
{
"text": "D( E, R) is a translation \"distortion\", that is, an objective translation assessment. A basket of automatic evaluation metrics can be used, such as BLEU, NIST, mWER, mPER and GTM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rescoring: log-linear model rescoring",
"sec_num": "2.3."
},
{
"text": "Because the distortion function, D( E, R), is not a smoothed function, we used Powell's search method to find a solution [7] .",
"cite_spans": [
{
"start": 121,
"end": 124,
"text": "[7]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rescoring: log-linear model rescoring",
"sec_num": "2.3."
},
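{
"text": "The following sketch illustrates how the weight optimization of Eq. 3 might be carried out with Powell's method; the error metric (exact-match sentence error instead of BLEU), the data layout and the use of scipy are our assumptions, not necessarily what was used here.\n\nimport numpy as np\nfrom scipy.optimize import minimize\n\ndef distortion(weights, dev_nbest, references):\n    # Crude stand-in for a translation distortion such as 1 - BLEU: the\n    # fraction of utterances whose rescored best candidate matches none of\n    # its references. dev_nbest[l] is a list of (translation, features).\n    errors = 0\n    for cands, refs in zip(dev_nbest, references):\n        best = max(cands, key=lambda c: sum(w * f for w, f in zip(weights, c[1])))\n        errors += best[0] not in refs\n    return errors / len(dev_nbest)\n\ndef optimize_weights(dev_nbest, references, num_features):\n    result = minimize(distortion,\n                      x0=np.ones(num_features),  # start from uniform weights\n                      args=(dev_nbest, references),\n                      method='Powell')  # derivative-free, tolerates the non-smooth objective\n    return result.x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rescoring: log-linear model rescoring",
"sec_num": "2.3."
},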
{
"text": "The experimental results in [1] have shown that minimizing the translation distortion in development data is an effective method to improve translation qualities of test data.",
"cite_spans": [
{
"start": 28,
"end": 31,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rescoring: log-linear model rescoring",
"sec_num": "2.3."
},
{
"text": "The N -best hypothesis translation improved speech translation significantly, as found in [1] . However, the approach is inefficient, computationally expensive and time consuming.",
"cite_spans": [
{
"start": 90,
"end": 93,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo-lattice translation",
"sec_num": "3."
},
{
"text": "We proposed a new decoding algorithm, pseudo-lattice decoding, to improve on the direct N-best translation. This approach can also translate the N-best hypotheses, and the processing time is shorten dramastically because the same word IDs appearing in the N-best hypotheses are translated fewer times than the direct N-best translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo-lattice translation",
"sec_num": "3."
},
{
"text": "We start from the word lattice minimization produced by ASR to describe the approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo-lattice translation",
"sec_num": "3."
},
{
"text": "Because we use HMM-based ASR to generate the raw source word lattice(SWL), the same word ID can be recognized repeatedly in slightly different frames. As a result, the same word ID may appear in multiple edges in the SWL. Hence, when N -best hypotheses are generated from the word lattice, the same word ids may appear in multiple hypotheses. Fig. 2 shows an example of lattice downsizing. The word IDs are shown in the parentheses. We use the following steps to minimize the raw SWL by removing the repeated edges. First, from the raw SWL we generate Nbest hypotheses as a sequence of edge numbers. We list the word IDs of all the edges in the hypotheses, remove the duplicate words, and index the remainders with new edge IDs. The number of new edges is fewer than that in the raw SWL. Next, we replace the edge sequence in each hypothesis with a new edge ID. If more than one edge shares the same word ID in one hypothesis, we add a new edge ID for the word again and replace the edge with the new ID. Finally, we generate a new word lattice with a new word list as its edges, consisting of the N -best hypotheses only. The raw SWL becomes the downsized SWL, which is much smaller than the raw SWL. On av-erage, the word lattice is reduced by 50% in our experiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 343,
"end": 349,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Minimizing the source word lattice(SWL)",
"sec_num": "3.1."
},
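{
"text": "A simplified sketch of the downsizing step, assuming each N-best hypothesis is given as a list of (edge_id, word_id) pairs from the raw SWL; the data structures are illustrative only.\n\nfrom collections import defaultdict\n\ndef downsize(nbest_hypotheses):\n    # nbest_hypotheses: list of hypotheses, each a list of (edge_id, word_id)\n    # pairs taken from the raw source word lattice (illustrative format)\n    new_edges = {}  # (word_id, occurrence within a hypothesis) -> new edge ID\n    downsized = []\n    for hyp in nbest_hypotheses:\n        seen = defaultdict(int)  # per-hypothesis occurrence counter\n        new_hyp = []\n        for _, word_id in hyp:\n            key = (word_id, seen[word_id])\n            seen[word_id] += 1\n            # reuse the new edge if this word occurrence was indexed before,\n            # otherwise allocate a fresh edge ID\n            if key not in new_edges:\n                new_edges[key] = len(new_edges)\n            new_hyp.append(new_edges[key])\n        if new_hyp not in downsized:  # identical hypotheses collapse into one\n            downsized.append(new_hyp)\n    return new_edges, downsized",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Minimizing the source word lattice(SWL)",
"sec_num": "3.1."
},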
{
"text": "As shown in Fig. 2 , one hypothesis is removed after minimization.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 18,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Minimizing the source word lattice(SWL)",
"sec_num": "3.1."
},
{
"text": "Sometimes the downsized SWL cannot form a lattice, but the N -best ASR hypotheses with newly assigned edge IDs. So we denote the downsized SWL as a pseudolattice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Minimizing the source word lattice(SWL)",
"sec_num": "3.1."
},
{
"text": "We use beam search followed by a A* search in pseudolattice translation. This approach has been used in text translation by [8] . We extend the approach to speech translation in this work. It is a two-pass decoding process. The first pass uses a simple model to generate a word graph to save the most likely hypotheses. It amounts to converting the pseudo word lattice into a target language word graph (TWG). Edges in the SWL are aligned to the edges in the TWG. Although the SWL is a faked lattice, the generated TWG has a true graph structure. The second pass uses a complicated model to output the best hypothesis by traversing the target word graph.",
"cite_spans": [
{
"start": 124,
"end": 127,
"text": "[8]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo-lattice decoding algorithm",
"sec_num": "3.2."
},
{
"text": "We describe the two-pass WLT algorithm in the following two sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo-lattice decoding algorithm",
"sec_num": "3.2."
},
{
"text": "The bottom of Fig. 3 shows an example of a translation word graph, which corresponds to the recognition word lattice in the top. Each edge in the TWG is a target language word which is a translation of a source word in the SWL. The edges that have the same structure(including alignment and target context) are merged into a node. The node has one element indicating the source word coverage up to the current node. The coverage is a binary vector with size equal to the number of edges in the SWL, indicating the number of translated source edges. If the j-th source word was translated, the j-th element is set to 1; otherwise it equals 0. If a node covers all the edges of a full path in the SWL, it connects to the last node, the terminal node, in the TWG.",
"cite_spans": [],
"ref_spans": [
{
"start": 14,
"end": 20,
"text": "Fig. 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "First pass -from SWL to TWG",
"sec_num": "3.2.1."
},
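{
"text": "For illustration only (the node representation is our assumption, not the authors' data structure), the coverage vector and the merging criterion described above can be encoded as a bitmask plus the target N-gram context:\n\nfrom dataclasses import dataclass\n\n@dataclass(frozen=True)\nclass TWGNode:\n    coverage: int  # bitmask over SWL edges; bit j is set once edge j is translated\n    context: tuple  # last N-1 target words, i.e. the N-gram context\n    # frozen dataclass: nodes with identical coverage and context compare\n    # equal and can be merged in the graph buffer\n\ndef direct(node, j, word, n=3):\n    # DIRECT step: translate the uncovered SWL edge j into one target word\n    return TWGNode(node.coverage | (1 << j), (node.context + (word,))[-(n - 1):])\n\nstart = TWGNode(0, ())\nprint(direct(direct(start, 0, 'i'), 2, 'want'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "First pass -from SWL to TWG",
"sec_num": "3.2.1."
},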
{
"text": "There are two main operations in expanding a node into edges: DIRECT and ALIGN. DIRECT extends the hypothesis with a target word by translating an uncovered source word. The target word is chosen based on current target N -gram context and possible translations of the uncovered source word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "First pass -from SWL to TWG",
"sec_num": "3.2.1."
},
{
"text": "ALIGN extends the hypothesis by aligning one more uncovered source word to the current node to increase fertilities of target word, where the target word is a translation of multiple source words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "First pass -from SWL to TWG",
"sec_num": "3.2.1."
},
{
"text": "The edge is not extended if the resulted hypothesis does not align to any hypothesis in the SWL. If the node has covered a full path in the SWL, this node is connected to the end node. When there is no nodes available for possible extension, the conversion is completed. A simple example of conversion algorithm is shown in Algorithm 1. The whole process equals to growing a graph. The graph can be indexed in time slices because the new nodes are created based on the old nodes of the last nearest time slice. New nodes are created by DIRECT or ALIGN to cover the uncovered source edge and connect to the old nodes. The new generated nodes are sorted in the graph buffer and merged if they share the same structure: the same coverage, the same translations, and the same N -gram sequence. If the node covers a full hypothesis in the SWL, the node connects to the terminal node. If no nodes need to be expanded, the conversion terminates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "First pass -from SWL to TWG",
"sec_num": "3.2.1."
},
{
"text": "In the first pass, we incorporate a simpler translation model into the log-linear model: only the lexical model, IBM model 1. The ASR posterior probabilities P pp are calculated by partial hypothesis from the start to the current node. P pp uses the highest value among all the ASR hypotheses under the current context. The first pass serves to keep the most likely hypotheses in the translation word graph, and leave the job of finding the optimal translation to the second pass.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "First pass -from SWL to TWG",
"sec_num": "3.2.1."
},
{
"text": "An A* search traverses the TWG generated in the last section -the best first approach. All partial hypotheses generated are pushed into a priority queue with the top hypothesis popping first out of the queue for the next extension.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Second pass -by an A* search to find the best output from the TWG",
"sec_num": "3.3."
},
{
"text": "To execute the A* search, the hypothesis score, D(h, n), of a node n is evaluated in two parts: the forward score, F (h, n), and the heuristic estimation, H(h, n), D(h, n) = F (h, n) + H(h, n). The calculation of F (h, n) begins from the start node and accumulates all nodes' scores belonging to the hypothesis until the current node, n. The H(h, n) is defined as the accumulated maximum probability of the models from the end node to the current node In the second pass we incorporated the features of IBM Model 4 into the log-linear model. However, we cannot use IBM Model 4 directly because the calculations of the two models, P (\u03a6 0 |E) and D(E, J), require the source sentence, but in fact this is unknown. Hence, the probability of P (\u03a6 0 |E) and D(E, J) cannot be calculated precisely in decoding. Our method to resolve this problem is to use the maximum over all possible hypotheses. For the above two models, we calculated the scores for all the possible ASR hypotheses under the current context. The maximum value was used as the model's probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Second pass -by an A* search to find the best output from the TWG",
"sec_num": "3.3."
},
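{
"text": "A schematic A* traversal of the TWG, assuming integer node IDs, a successor list per node and a precomputed admissible heuristic giving the best achievable log score from each node to the terminal; the graph representation is hypothetical.\n\nimport heapq\n\ndef astar_best_path(successors, heuristic, start, terminal):\n    # successors[n]: list of (next_node, target_word, log_score) edges\n    # heuristic[n]:  optimistic estimate H of the best log score from n to the terminal\n    # The queue is keyed on -(F + H) so the best partial hypothesis pops first.\n    queue = [(-heuristic[start], 0.0, start, [])]\n    while queue:\n        priority, f, node, words = heapq.heappop(queue)\n        if node == terminal:\n            return words, f  # with an admissible H, the first completed path is optimal\n        for nxt, word, score in successors[node]:\n            f_new = f + score\n            heapq.heappush(queue, (-(f_new + heuristic[nxt]), f_new, nxt, words + [word]))\n    return None, float('-inf')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Second pass -by an A* search to find the best output from the TWG",
"sec_num": "3.3."
},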
{
"text": "There are five languages involved in the evaluation: English, Chinese, Japanese, Korean and Arabic. The available translation directions are: Chinese to English, English to Chinese, Japanese to English, Korean to English, and Arabic to English. Of these choices we participated in two tasks: Chinese to English translation and Japanese to English translation. Regarding the data for training the translation engine, the participants must conform to four data tracks: supplied data provided by the organizer; supplied data+tools, which allows the participant to make word segmentation and morphlogical analysis of the supplied data; unrestricted data, any public data from public sources like LDC or webs; C-STAR data, with no restraints on the data, including the full BTEC corpus and proprietary data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments in the IWSLT2005 evaluation",
"sec_num": "4."
},
{
"text": "In this evaluation, we took part in two data tracks: supplied data+tools and the C-STAR track.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments in the IWSLT2005 evaluation",
"sec_num": "4."
},
{
"text": "In the first track, we used our in-house part-of-speech tagging tool to make a morphological analysis of the supplied data. In the second track, we used the BTEC corpus; but, for Chinese to English translation, we used only the BTEC1 data. For Japanese to English translation we used the BTEC1-BETC4 data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments in the IWSLT2005 evaluation",
"sec_num": "4."
},
{
"text": "As described in the previous sections, we used the phrase-based statistical machine translation for Chineseto-English translation and word-based SMT for Japaneseto-English translation. We trained the phrase-based translation model by carrying out bi-directional alignment first and then extracted the phrase translation pairs. The phrase translation probability was calculated by counting the phrase pair co-occurrences. Additional models used in the phrasebased approach consist of N-gram language models, distortion models and lexicon models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments in the IWSLT2005 evaluation",
"sec_num": "4."
},
{
"text": "For training the word-based pseudo-lattice translation models, we used GIZA++ to train an IBM Model1 and Model4. The IBM Model1 is used in the first pass of pseudo-lattice decoding and IBM Model4 used in the A* search. In addition, some models such as language models, jump size models, and target length models are inte-grated with the IBM Model4 log-linearly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments in the IWSLT2005 evaluation",
"sec_num": "4."
},
{
"text": "Some statistical properties of the experimental data and models are shown in Table 1 , where language pair indicates Chinese to English translation (C/E) and Japanese to English translation (J/E). \"Data size\" shows the sentence numbers in the training pairs. \"t-table\" shows the size of source and target pairs in the translation model. Phrase-based and word-based translation models were used for Chinese-to-English and Japanese-to-English translation respectively. \"Ngram\" shows the number of consequent words in English language model, extracted from the training data. \"perplexity\" shows the source language model's perplexity in the test set and target language model's perplexity in the development data.",
"cite_spans": [],
"ref_spans": [
{
"start": 77,
"end": 84,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiments in the IWSLT2005 evaluation",
"sec_num": "4."
},
{
"text": "Shown in table 2 and 3 are the results of development data and test data, respectively. \"direct N-best\" and \"pseudolattice\" mean that the speech translation are made by a direct N-best translation approach or pseudo-lattice translation approach. The development data results are of development set2, IWSLT2004, containing 500 sentences while the test data contain 506 sentences. For the Chinese ASR translation task, the organizer provides three sets of ASR output. The translations of ASR output presented in table 2 were made using the third set, the word accuracy ratio from 87.3%, single-best , to 94.5%, N-best.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation results of development data and test data",
"sec_num": "4.1."
},
{
"text": "After analyzing the experimental results, we can make the following conclusions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation results of development data and test data",
"sec_num": "4.1."
},
{
"text": "\u2022 Undoubtedly, the translations in the C-star track are better than those of the supplied data track ,regardless of C/E or J/E, because more training data are used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation results of development data and test data",
"sec_num": "4.1."
},
{
"text": "\u2022 Comparing the translation results of manual transcription, N-best, pseudo-lattice, and single-best, we found that ASR word error worsen the translations greatly because the single-best's results are much worse than the plain text's. However, using N-best hypotheses can counteract ASR word errors. N-best hypothesis translation improves singlebest translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation results of development data and test data",
"sec_num": "4.1."
},
{
"text": "\u2022 In most cases, N-best translations are better than the single-best translations. The improvement by N-best translations is significant for C-star track.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation results of development data and test data",
"sec_num": "4.1."
},
{
"text": "\u2022 There are some inconsistence to the above analysis. The NIST score of manual transcription in the J/E supplied track is worse than the single-best's. We guess that this is because our log-linear model was optimized on the BLEU score, therefore, the NIST score was not improved. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation results of development data and test data",
"sec_num": "4.1."
},
{
"text": "This section highlights the comparison of pseudo-lattice translation and direct N-best translation. As shown in Table 3 , we found in the testset evaluation both direct N-best translation and pseudo-lattice translation improved on the single-best translation. The pseudo-lattice translation is slightly worse than the direct N-best translation. A twin paper [9] describes the details of our lattice decoding algorithm. We used confidence measure to filter the ASR hypotheses with low confidence. We used the same decoding parameters as the direct N-best translation, such as beam size and threshold for pruning. And also, we applied model approximations in lattice decoding. While all these methods resulted in the improvement of the singlebest translation, they made the lattice translation worse than the direct N-best translation. However, the pseudolattice translation is much faster. The total running time for lattice translation is only 20% of that in the direct Nbest translation for the results shown in Table 3 . We will continue to improve pseudo-lattice translation in future work.",
"cite_spans": [
{
"start": 358,
"end": 361,
"text": "[9]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 112,
"end": 119,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 1013,
"end": 1020,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Comparison of pseudo-lattice translation and direct N-best translation",
"sec_num": "4.2."
},
{
"text": "Integration of speech recognition and machine translation is a promising research theme in speech translation. In addition to our approaches, finite state transducers (FST) was used in [3] . However, the speech translation performance produced by FST integration structure was reported lower than that by the single-best serial structure. A latest work in FST integration [10] carried out an Italian-English speech translation task, where a significant improvement was observed for grammartically closed languages. Our main purpose in taking part in this year's evaluation is to verify our work in speech translation, seeking an effective solution for integrating speech recognition and machine translation. In this work we proposed two approach: direct N-best hypothesis translation and pseudolattice translation. Both approaches achieved satisfactory improvement over single-best translation. In some cases the improvement can reach 50% of that achieved with correct manual transcription translation.",
"cite_spans": [
{
"start": 185,
"end": 188,
"text": "[3]",
"ref_id": "BIBREF2"
},
{
"start": 372,
"end": 376,
"text": "[10]",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5."
}
],
"back_matter": [
{
"text": "We would like to thank those who have gave us their sincere assistance in this work, especially, Dr.Michael Paul, Dr.Wai-kit Lo, Dr.Xinhui Hu and Mr.Teruaki Hayashi.The research reported here was supported in part by a contract with the National Institute of Information and Communications Technology of Japan entitled \"A study of speech dialogue translation technology based on a large corpus\".We also thank the reviewers for the comments and editorial corrections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "6."
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A unified approach in speech-to-speech translation: Integrating features of speech recognition and machine translation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Kikui",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Yamamoto",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Soong",
"suffix": ""
},
{
"first": "W",
"middle": [
"K"
],
"last": "Lo",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of Coling",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Zhang, G. Kikui, H. Yamamoto, T. Watanabe, F. Soong, and W. K. Lo, \"A unified approach in speech-to-speech translation: Integrating features of speech recognition and machine translation,\" in Proc. of Coling 2004, Geneva, 2004.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Using word lattice information for a tighter coupling in speech translation systems",
"authors": [
{
"first": "S",
"middle": [],
"last": "Saleem",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Jou",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Schultz",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of IC-SLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Saleem, S. chen Jou, S. Vogel, and T. Schultz, \"Using word lattice information for a tighter cou- pling in speech translation systems,\" in Proc. of IC- SLP 2004, Jeju, Korea, 2004.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Some approaches to statistical and finite-state speech-to-speech translation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Casacuberta",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Vidal",
"suffix": ""
},
{
"first": "J",
"middle": [
"M"
],
"last": "Vilar",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Barrachina",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Garcia-Varea",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Llorens",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Martinez",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Molau",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Nevada",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Pastor",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Pico",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Sanchis",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Tillmann",
"suffix": ""
}
],
"year": 2004,
"venue": "Computer Speech and Language",
"volume": "",
"issue": "",
"pages": "25--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Casacuberta, H.Ney, F.J.Och, E.Vidal, J.M.Vilar, S.Barrachina, I.Garcia-Varea, D.Llorens, C.Martinez, S.Molau, F.Nevada, M.Pastor, D.Pico, A.Sanchis, and C.Tillmann, \"Some approaches to statistical and finite-state speech-to-speech translation,\" in Computer Speech and Language, 2004, pp. 25-47.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A phrase-based, joint probability model for statistical machine translation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of EMNLP-2002",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Marcu and W. Wong, \"A phrase-based, joint probability model for statistical machine transla- tion,\" in Proc. of EMNLP-2002, Philadelphia, PA, July 2002.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "HLT/NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Koehn, F. J. Och, and D. Marcu, \"Statistical phrase-based translation,\" in HLT/NAACL, Edmon- ton, Canada, 2003.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "V",
"middle": [
"J D"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "S",
"middle": [
"A D"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. F. Brown, V. J. D. Pietra, S. A. D. Pietra, and R. L. Mercer, \"The mathematics of statistical ma- chine translation: Parameter estimation,\" Computa- tional Linguistics, vol. 19, no. 2, pp. 263-311, 1993.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Generation of word graphs in statistical machine translation",
"authors": [
{
"first": "N",
"middle": [],
"last": "Ueffing",
"suffix": ""
},
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of the Conference on Empirical Methods for Natural Language Processing (EMNLP02)",
"volume": "",
"issue": "",
"pages": "156--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Ueffing, F. J. Och, and H. Ney, \"Generation of word graphs in statistical machine translation,\" in Proc. of the Conference on Empirical Meth- ods for Natural Language Processing (EMNLP02), Philadelphia, PA, July 2002, pp. 156-163.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A decoding algorithm for word lattice translation in speech translation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Kikui",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Yamamoto",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Lo",
"suffix": ""
}
],
"year": 2005,
"venue": "IWSLT'2005",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R.Zhang, G.Kikui, H.Yamamoto, and W.Lo, \"A decoding algorithm for word lattice translation in speech translation,\" in IWSLT'2005, Pittsburgh, PA, 2005.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "On the integration of speech recognition and statistical machine translation",
"authors": [
{
"first": "E",
"middle": [],
"last": "Matusov",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Kanthak",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2005,
"venue": "Eurospeech'2005",
"volume": "",
"issue": "",
"pages": "3177--3181",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E.Matusov, S.Kanthak, and H.Ney, \"On the integra- tion of speech recognition and statistical machine translation,\" in Eurospeech'2005, Lisbon, Portugal, 2005, pp. 3177-3181.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "N-best hypothesis translation",
"uris": null,
"num": null
},
"FIGREF1": {
"type_str": "figure",
"text": "An example of word lattice reduction Source language word lattice (top) and target language word graph (bottom) n.",
"uris": null,
"num": null
},
"TABREF0": {
"text": "FOR EACH node n=0,1,..., #(G[t]) DO 4:IF (n cover A FULL PATH) NEXT 5:FOR EACH edge l=0,1,...,#(EDGES) DO",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Algorithm 1 Conversion Algorithm from SWL to TWG</td></tr><tr><td colspan=\"2\">1: Initialize graph buffer G[0]=0; t=0</td></tr><tr><td>2: DO</td><td/></tr><tr><td>3:</td><td/></tr><tr><td>6:</td><td>IF (n cover l) NEXT</td></tr><tr><td>7:</td><td>IF (n not cover ANY SWL PATH) NEXT</td></tr><tr><td>8:</td><td>generate new node and push to G[t+1]</td></tr><tr><td>9:</td><td>merge and prune nodes in G[t+1]</td></tr><tr><td>10:</td><td>t= t+1</td></tr><tr><td colspan=\"2\">11:WHILE (G[t] is empty)</td></tr></table>"
},
"TABREF1": {
"text": "",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td/><td colspan=\"4\">Properties of experimental data and models</td></tr><tr><td>language pair</td><td colspan=\"2\">data track</td><td colspan=\"4\">data size t-table Ngram</td><td>perplexity</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>testset(source language) dev.data(target language)</td></tr><tr><td>C/E</td><td colspan=\"2\">supplied+tools</td><td colspan=\"2\">20,000</td><td>1.8M</td><td>97K</td><td>65.4</td><td>53.8</td></tr><tr><td/><td colspan=\"2\">C-star</td><td colspan=\"2\">172,170</td><td>5.0M</td><td>961K</td><td>69.3</td><td>52.2</td></tr><tr><td>J/E</td><td colspan=\"2\">supplied+tools</td><td colspan=\"2\">20,000</td><td>64K</td><td>55K</td><td>54.9</td><td>53.7</td></tr><tr><td/><td colspan=\"2\">C-star</td><td colspan=\"3\">463,365 506K</td><td>354K</td><td>22.5</td><td>31.6</td></tr><tr><td/><td/><td colspan=\"6\">Table 2: Translation results for development set2 (IWSLT2004)</td></tr><tr><td colspan=\"2\">translation pair</td><td colspan=\"2\">data track</td><td/><td colspan=\"2\">translation type</td><td>BLEU NIST WER PER METEOR</td></tr><tr><td>C/E</td><td/><td colspan=\"6\">supplied+tools manual transcription 0.409</td><td>8.37 0.537 0.433</td><td>0.634</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"2\">direct N-best</td><td>0.374</td><td>7.29 0.563 0.473</td><td>0.576</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"2\">single-best</td><td>0.370</td><td>7.47 0.579 0.481</td><td>0.578</td></tr><tr><td/><td/><td colspan=\"2\">C-star</td><td colspan=\"4\">manual transcription 0.548</td><td>9.34 0.428 0.350</td><td>0.70</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"2\">direct N-best</td><td>0.508</td><td>7.71 0.463 0.408</td><td>0.637</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"2\">single-best</td><td>0.474</td><td>7.88 0.502 0.428</td><td>0.625</td></tr><tr><td>J/E</td><td/><td colspan=\"6\">supplied+tools manual transcription 0.433</td><td>5.06 0.509 0.470</td><td>0.564</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"2\">pseudo-lattice</td><td>0.430</td><td>4.70 0.514 0.476</td><td>0.557</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"2\">single-best</td><td>0.428</td><td>4.85 0.517 0.477</td><td>0.556</td></tr><tr><td/><td/><td colspan=\"2\">C-star</td><td colspan=\"4\">manual transcription 0.623</td><td>9.16 0.351 0.306</td><td>0.737</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"2\">pseudo-lattice</td><td>0.607</td><td>9.06 0.372 0.321</td><td>0.719</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"2\">single-best</td><td>0.596</td><td>9.02 0.377 0.328</td><td>0.716</td></tr></table>"
},
"TABREF2": {
"text": "",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td colspan=\"3\">Translation results for test data (IWSLT2005)</td></tr><tr><td>translation pair</td><td>data track</td><td>translation type</td><td colspan=\"3\">BLEU NIST WER PER METEOR GTM</td></tr><tr><td>C/E</td><td colspan=\"3\">supplied+tools manual transcription 0.305</td><td>7.20 0.518 0.422</td><td>0.573</td><td>0.471</td></tr><tr><td/><td/><td>direct N-best</td><td>0.267</td><td>6.19 0.645 0.546</td><td>0.506</td><td>0.421</td></tr><tr><td/><td/><td>single-best</td><td>0.251</td><td>5.93 0.683 0.581</td><td>0.479</td><td>0.395</td></tr><tr><td/><td>C-star</td><td colspan=\"2\">manual transcription 0.421</td><td>8.17 0.518 0.422</td><td>0.642</td><td>0.547</td></tr><tr><td/><td/><td>direct N-best</td><td>0.375</td><td>6.80 0.561 0.486</td><td>0.560</td><td>0.493</td></tr><tr><td/><td/><td>single-best</td><td>0.340</td><td>6.76 0.619 0.525</td><td>0.531</td><td>0.461</td></tr><tr><td>J/E</td><td colspan=\"3\">supplied+tools manual transcription 0.388</td><td>4.39 0.563 0.519</td><td>0.520</td><td>0.431</td></tr><tr><td/><td/><td>direct N-best</td><td>0.383</td><td>4.27 0.574 0.530</td><td>0.513</td><td>0.422</td></tr><tr><td/><td/><td>pseudo-lattice</td><td>0.378</td><td>4.18 0.578 0.534</td><td>0.511</td><td>0.420</td></tr><tr><td/><td/><td>single-best</td><td>0.366</td><td>4.50 0.576 0.527</td><td>0.508</td><td>0.412</td></tr><tr><td/><td>C-star</td><td colspan=\"3\">manual transcription 0.727 10.94 0.289 0.243</td><td>0.80</td><td>0.716</td></tr><tr><td/><td/><td>direct N-best</td><td colspan=\"2\">0.679 10.04 0.324 0.281</td><td>0.760</td><td>0.670</td></tr><tr><td/><td/><td>pseudo-lattice</td><td>0.670</td><td>9.86 0.329 0.289</td><td>0.763</td><td>0.665</td></tr><tr><td/><td/><td>single-best</td><td>0.646</td><td>9.68 0.352 0.304</td><td>0.741</td><td>0.645</td></tr></table>"
}
}
}
}