{ "paper_id": "O04-1028", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:00:19.919905Z" }, "title": "A New Two-Layer Approach for Spoken Language Translation", "authors": [ { "first": "Jhing-Fa", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Cheng Kung University", "location": {} }, "email": "wangjf@csie.ncku.edu.tw" }, { "first": "Shun-Chieh", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Cheng Kung University", "location": {} }, "email": "" }, { "first": "Hsueh-Wei", "middle": [], "last": "Yang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Cheng Kung University", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This study proposes a new two-layer approach for spoken language translation. First, we develop translated examples and transform them into speech signals. Second, to properly retrieve a translated example by analyzing speech signals, we expand the translated example into two layers: an intention layer and an object layer. The intention layer is used to examine intention similarity between the speech input and the translated example. The object layer is used to identify the objective components of the examined intention. Experiments were conducted with Chinese and English. The results revealed that our proposed approach achieves understandable translation rates of about 86% and 76% for the English-to-Chinese and the Chinese-to-English translations, respectively.", "pdf_parse": { "paper_id": "O04-1028", "_pdf_hash": "", "abstract": [ { "text": "This study proposes a new two-layer approach for spoken language translation. First, we develop translated examples and transform them into speech signals. 
Second, to properly retrieve a translated example by analyzing speech signals, we expand the translated example into two layers: an intention layer and an object layer. The intention layer is used to examine intention similarity between the speech input and the translated example. The object layer is used to identify the objective components of the examined intention. Experiments were conducted with Chinese and English. The results revealed that our proposed approach achieves understandable translation rates of about 86% and 76% for the English-to-Chinese and the Chinese-to-English translations, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "With the growth of globalization, people now often meet and do business with those who speak different languages, so on-demand spoken language translation (SLT) has become increasingly important (see JANUS III [6] , Verbmobil [9] , EUTRANS [3] and ATR-MATRIX [1]). Currently, there are two main architectures of SLT: the conventional sequential architecture and the fully integrated architecture [1] . In the sequential architecture, a spoken language translation system is composed of a speech recognition system followed by a linguistic (or non-linguistic) text-to-text translation system. In the integrated architecture, acoustic-phonetic models are integrated into translation models in a similar way as for speech recognition. Recently, an integrated architecture based on stochastic finite-state transducers (SFSTs) has been presented in [3, 4] . The SFST approach integrates three models in a single network where the search process takes place. The three models are Hidden Markov Models for the acoustic part, language models for the source language, and finite-state transducers for the transfer between the source and target languages. The output of this search process is the target word sequence associated with the optimal path. Fig. 
1 shows an example of the SFST approach.", "cite_spans": [ { "start": 208, "end": 211, "text": "[6]", "ref_id": "BIBREF4" }, { "start": 224, "end": 227, "text": "[9]", "ref_id": "BIBREF7" }, { "start": 238, "end": 241, "text": "[3]", "ref_id": "BIBREF1" }, { "start": 386, "end": 389, "text": "[1]", "ref_id": null }, { "start": 828, "end": 831, "text": "[3,", "ref_id": "BIBREF1" }, { "start": 832, "end": 834, "text": "4]", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 1222, "end": 1228, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u03bb denotes the empty string. The source sentence "una habitaci\u00f3n doble" can be translated to either "a double room" or "a room with two beds". The most probable translation is the first one, with a probability of 0.09.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, when the training data of the SFST is insufficient, the results obtained by the sequential architecture are better than those obtained by the integrated architecture [4] . In addition, word reordering is still a thorny problem in the SFST, which is based on statistical translation methods [5] . Therefore, we propose adopting example-based approaches for better integration. Such an approach does not require the database to be as large as in the SFST and can utilize the word mappings between the source and target languages of a chosen translated example for word reordering [2, 8] . In this paper, we further propose a new two-layer approach for example-based spoken language translation. First, we develop translated examples and transform them into speech signals. Second, to properly retrieve a translated example by analyzing speech signals, we expand the translated example into two layers: an intention layer and an object layer. 
The intention layer is used to examine intention similarity between the speech input and the translated example. The object layer is used to identify the objective components of the examined intention.", "cite_spans": [ { "start": 177, "end": 180, "text": "[4]", "ref_id": "BIBREF2" }, { "start": 302, "end": 305, "text": "[5]", "ref_id": "BIBREF3" }, { "start": 582, "end": 585, "text": "[2,", "ref_id": "BIBREF0" }, { "start": 586, "end": 588, "text": "8]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Fig. 1. Examples of the stochastic finite-state transducer", "sec_num": null }, { "text": "The rest of this paper is organized as follows. Section 2 discusses the proposed two-layer approach. Score normalization is presented in Section 3. The experimental results are given in Section 4. Concluding remarks are finally made in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fig. 1. Examples of the stochastic finite-state transducer", "sec_num": null }, { "text": "The Proposed Two-Layer Approach", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2", "sec_num": null }, { "text": "Referring to Fig. 2 , the first step of the proposed two-layer approach is to expand translated examples, which have intention components and object components. After expanding the translated examples, the second step is to adapt the two-layer search plan composed of an intention layer and an object layer. Finally, measurement modification is used to adjust the similarity measurement between the intention layer and the object layer. The following subsections discuss translated example expansion, two-layer search plan adaptation, and measurement modification. ", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 19, "text": "Fig. 
2", "ref_id": null } ], "eq_spans": [], "section": "2", "sec_num": null }, { "text": "The process of translated example expansion is to group similar translated examples and compare their differences for expanding objects. Table 1 shows an example of four pairs of grouped translated examples. For these grouped translated examples, the shared constituents "Is \u2026 still available for \u2026" \u2194 "\u2026 \u9084 \u6709 \u2026 \u55ce" are defined as an intention sequence translation, which conveys the meaning of a translation. The differences relative to the intention sequence are regarded as expanded objects. Here, ExTrans comprises an intention translation and six object translations. The six object translations are "room service\u2194\u5ba2\u623f \u670d\u52d9," "breakfast\u2194\u65e9\u9910," "laundry service\u2194\u6d17\u6ecc \u670d\u52d9," "a single room\u2194\u4e00\u9593 \u55ae\u4eba \u623f," "tomorrow\u2194\u660e\u5929," and "tonight\u2194\u4eca\u665a".", "cite_spans": [], "ref_spans": [ { "start": 137, "end": 144, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Translated Example Expansion", "sec_num": "2.1" }, { "text": "After expanding translated examples, each translated example has two parts: an intention part and an object part. 
While measuring the speech signals of the i-th translated example v_i, the speech signals of v_i need to be redefined into two layers, v_i = {v_i\u2032, v_i\u2032\u2032}, where v_i\u2032 is an intention layer component of v_i and v_i\u2032\u2032 is an object layer component of v_i. Each two-layer search plan is generated by the translated example and the speech input, and the object layer is used to identify the objective components of the examined intention. In terms of searching for an optimal path of states through the two-layer search plan, the issue now is to measure the pair (s, v_i\u2032) for a fixed number, say N_i, of v_i\u2032.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Two-Layer Search Plan Adaptation of Expanded Translated Examples", "sec_num": "2.2" }, { "text": "After adapting the two-layer search plan, another problem is how to measure the similarity of the pair (s, v_i\u2032) while adjudging the object frames of v_i\u2032\u2032 to identify the other object patterns. Referring to Fig. 4
, given two similarity measurement scores of pair (s, v_i) and pair (s, v_j), the scores used for comparing the two pairs are D_i\u2032* and D_j\u2032*, where D_i\u2032* is the similarity measurement of pair (v_i\u2032, s) and D_j\u2032* is the similarity measurement of pair (v_j\u2032, s).", "cite_spans": [], "ref_spans": [ { "start": 211, "end": 217, "text": "Fig 4.", "ref_id": null } ], "eq_spans": [], "section": "Measurement Modification", "sec_num": null }, { "text": "Fig. 4. Search results of various translated examples", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measurement Modification", "sec_num": null }, { "text": "For the modification of similarity measurement between the intention layer and the object layer, there are two additional types of search paths in this research: 1) paths between v_i\u2032 and v_i\u2032\u2032 and 2) paths within v_i\u2032 or v_i\u2032\u2032. For the paths between v_i\u2032 and v_i\u2032\u2032, a search block Z in the object layer, which will be referred to as a score skip level block, contains more than one path connected by ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ". Search results of various translated examples", "sec_num": null }, { "text": "The intention sequence in the translated example is an important identification part, since the intention sequence conveys the meaning of a translation. Therefore, the dissimilarity measurement of the intention sequence part is used to rank all the translated examples. However, the cumulative measured dissimilarity score grows with the length of the intention sequence. In this study, a length-conditioned weight concept is adopted to compensate for this defect. 
The normalized measured dissimilarity \u2206(s, v_i\u2032) is determined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Score Normalization", "sec_num": "3" }, { "text": "\u2206(s, v_i\u2032) = \u2202 w_{v_i\u2032, s} (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Score Normalization", "sec_num": "3" }, { "text": "where \u2202 is a weight factor, w_{v_i\u2032, s} = (v_i\u2032 \u2212 s) \u22c5 s^{\u22121}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Score Normalization", "sec_num": "3" }, { "text": "The weight \u2202 is chosen from the interval [1.0, 2.0]. Fig. 6 indicates that the interval of \u2202 which yields the most accurate retrieval", "cite_spans": [], "ref_spans": [ { "start": 51, "end": 57, "text": "Fig. 6", "ref_id": null } ], "eq_spans": [], "section": "Score Normalization", "sec_num": "3" }, { "text": "results is [1.3 \u2212 \u03b4, 1.3 + \u03b4]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental analysis shown in", "sec_num": null }, { "text": ". Therefore, \u2202 is set to 1.3 in this study. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental analysis shown in", "sec_num": null }, { "text": "This study built a collection of English sentences and their Chinese translations that frequently appear in phrasebooks for foreign tourists. Because the translations were made on a sentence-by-sentence basis, the corpus was sentence-aligned after being collated. Table 3 lists a summary of the corpus used in the experiments. 
The corpus comprises two parts: a training set of 11,885 translated examples for the training phase, and a test set of 105 translated examples for the translation phase (the test set differs from the training set). In order to evaluate the system performance, a collection of 1,050 utterances from the 11,885 examples was used for speaker-dependent training, and 105 additional utterances of each language were collected by using one male speaker (Sp1) for inside testing and by using two bilingual male speakers (Sp2 and Sp3) for outside testing. All the utterances were sampled at an 8 kHz sampling rate with 16-bit precision on a Pentium \u00ae IV 1.8GHz, 1GB RAM, Windows \u00ae XP PC.", "cite_spans": [], "ref_spans": [ { "start": 264, "end": 271, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "The Task and the Corpus", "sec_num": "4.1" }, { "text": "For the spoken language translation system, we found that the recognition performance of 39-dimension MFCCs and 10-dimension LPCCs was close. Therefore, we adopted 10-dimension LPCCs because of their faster computation. Speech feature analysis for recognition was performed using 10 linear prediction coefficient cepstrums (LPCCs) on 32 ms frames overlapped every 8 ms. When input speech is being translated, a major sub-problem in speech processing is determining the presence or absence of a voice component in a given signal, especially the beginnings and endings of voice segments. Therefore, an energy-based approach, which is classic and works well under high SNR conditions, was applied to eliminate unvoiced components in this research. The measurement results were divided into four parts: the linear prediction coefficient cepstrum (LPCC)-based dissimilarity measurement (baseline), the baseline with unvoiced elimination (+unVE), the baseline with score normalization (+ScN), and the combination of the unVE and ScN considerations with the baseline (All). 
A given translated example is called a match when it contains the same intention as the speech input. The reason for adopting this strategy was that objects could be confirmed again while a dialogue was being processed, whereas wrong intentions could cause endless iterations of dialogue. The experimental results for proper translated example retrieval are shown in Table 4 and Table 5 . Based on the developed translated examples, when the example or vocabulary size increases, the additional examples possibly lead to more feature models and more similarities in speech recognition, thus causing false recognition results and lower retrieval accuracy. Additionally, multiple speaker-dependent results were obtained using three speakers. The first speaker's feature models were used to perform tests on the other two speakers, and the results are shown in Table 6 . The experimental results show that although the feature models were trained by Sp1, the retrieval accuracy of Sp2 and Sp3 was only reduced by 10 to 15 percent. A bilingual evaluator was used to classify the target generation results into three categories [10] : Good, Understandable, and Bad. A Good generation needed to have no syntactic errors, and its meaning had to be correctly understood. Understandable generations could have some syntactic errors and variable translation errors, but the source speech had to be conveyed without misunderstanding. Otherwise, the target generations were classified as Bad. With this subjective measure, the percentage of Good or Understandable generations for the Top 5 was 86% for English-to-Chinese (E2C) translation and 76% for Chinese-to-English (C2E) translation. The percentage of Good generations for the Top 1 was 60% for E2C translation, compared to 56% for C2E translation. We examined the translated examples in a specific domain and found that 100% translation accuracy could be achieved. 
In other words, translation errors occurred only as a result of speech recognition errors, such as word recognition errors and segmentation errors. In addition, these results indicate that C2E performed worse than E2C. This difference may occur because Chinese is tonal, whereas English is not; thus, it is harder for C2E translation to obtain an appropriate translated example.", "cite_spans": [ { "start": 2206, "end": 2210, "text": "[10]", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 1454, "end": 1461, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 1466, "end": 1473, "text": "Table 5", "ref_id": null }, { "start": 1941, "end": 1948, "text": "Table 6", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Translation Evaluations", "sec_num": "4.2" } ], "back_matter": [ { "text": "In this work, we have proposed a new two-layer approach for example-based spoken language translation. According to the proposed approach, the translated example can be properly retrieved by measuring the speech signals on the intention layer and the object layer. Experiments using Chinese and English were performed on Pentium\u00ae PCs. The experimental results reveal that our system can achieve an average understandable translation rate of about 81%. By collecting more speech databases, the system can also apply speaker-dependent or speaker-independent HMMs to the proposed two-layer approach for more robust speech translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Inducing Translation pattern for Example-Based Machine Translation", "authors": [ { "first": "M", "middle": [], "last": "Carl", "suffix": "" } ], "year": 1999, "venue": "Proc. of the 7th Machine Translation Summit", "volume": "", "issue": "", "pages": "617--624", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Carl. Inducing Translation pattern for Example-Based Machine Translation. In Proc. 
of the 7 th Machine Translation Summit, pp.617-624, 1999.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Speech-to-Speech Translation Based on Finite-State Transducers", "authors": [ { "first": "F", "middle": [], "last": "Casacuberta", "suffix": "" }, { "first": "D", "middle": [], "last": "Llorens", "suffix": "" }, { "first": "C", "middle": [], "last": "Martinez", "suffix": "" }, { "first": "S", "middle": [], "last": "Molau", "suffix": "" }, { "first": "F", "middle": [], "last": "Nevado", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" }, { "first": "M", "middle": [], "last": "Pastor", "suffix": "" }, { "first": "D", "middle": [], "last": "Pico", "suffix": "" }, { "first": "A", "middle": [], "last": "Sanchis", "suffix": "" }, { "first": "E", "middle": [], "last": "Vidal", "suffix": "" }, { "first": "J", "middle": [ "M" ], "last": "Vilar", "suffix": "" } ], "year": 2001, "venue": "Proc. of 26 th IEEE International Conference on Acoustics, Speech, and Signal Processing", "volume": "", "issue": "", "pages": "613--616", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Casacuberta, D. Llorens, C. Martinez, S. Molau, F. Nevado, H. Ney, M. Pastor, D. Pico, A. Sanchis, E. Vidal, J. M. Vilar. Speech-to-Speech Translation Based on Finite-State Transducers. In Proc. of 26 th IEEE International Conference on Acoustics, Speech, and Signal Processing, pp.613 -616, 2001.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Finite-State Speech-to-Speech Translation", "authors": [ { "first": "E", "middle": [], "last": "Vidal", "suffix": "" } ], "year": 1997, "venue": "Proc. of 22 nd IEEE International Conference on Acoustics, Speech, and Signal Processing", "volume": "", "issue": "", "pages": "111--114", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Vidal. Finite-State Speech-to-Speech Translation. In Proc. 
of 22 nd IEEE International Conference on Acoustics, Speech, and Signal Processing, pp.111-114, 1997.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Algorithms for Statistical Translation of Spoken Language", "authors": [ { "first": "H", "middle": [], "last": "Ney", "suffix": "" }, { "first": "S", "middle": [], "last": "Nie\u00dfen", "suffix": "" }, { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Sawaf", "suffix": "" }, { "first": "C", "middle": [], "last": "Tillmann", "suffix": "" }, { "first": "S", "middle": [], "last": "Vogel", "suffix": "" } ], "year": 2000, "venue": "IEEE Transaction on Speech and Audio Processing", "volume": "", "issue": "8", "pages": "24--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Ney, S. Nie\u00dfen, F. J. Och, H. Sawaf, C. Tillmann, and S. Vogel. Algorithms for Statistical Translation of Spoken Language. IEEE Transaction on Speech and Audio Processing(8), pp.24-36, 2000.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "JANUS III: Speech-to-Speech Translation in Multiple Languages", "authors": [ { "first": "A", "middle": [], "last": "Lavie", "suffix": "" }, { "first": "A", "middle": [], "last": "Waibel", "suffix": "" }, { "first": "L", "middle": [], "last": "Levin", "suffix": "" }, { "first": "M", "middle": [], "last": "Finke", "suffix": "" }, { "first": "D", "middle": [], "last": "Gates", "suffix": "" }, { "first": "M", "middle": [], "last": "Gavalda", "suffix": "" }, { "first": "T", "middle": [], "last": "Zeppenfeld", "suffix": "" }, { "first": "P", "middle": [], "last": "Zahn", "suffix": "" } ], "year": 1997, "venue": "Proc. of 22 nd IEEE International Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "99--102", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Lavie, A. Waibel, L. Levin, M. Finke, D. Gates, M. Gavalda, T. Zeppenfeld and P. Zahn. 
JANUS III: Speech-to-Speech Translation in Multiple Languages. In Proc. of 22 nd IEEE International Conference on Acoustics, Speech and Signal Processing, pp.99-102, 1997.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Fundamentals of Speech Recognition", "authors": [ { "first": "L", "middle": [], "last": "Rabiner", "suffix": "" }, { "first": "B", "middle": [ "H" ], "last": "Juang", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rabiner, L. and B. H. Juang. Fundamentals of Speech Recognition. Prentice-Hall, Inc., 1993.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A hybrid model for Chinese-English machine translation", "authors": [ { "first": "J", "middle": [], "last": "Liu", "suffix": "" }, { "first": "L", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 1998, "venue": "Proc. of IEEE International Conference on Systems, Man, and Cybernetics", "volume": "", "issue": "", "pages": "1201--1206", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Liu and L. Zhou. A hybrid model for Chinese-English machine translation. In Proc. of IEEE International Conference on Systems, Man, and Cybernetics, pp.1201-1206, 1998.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Verbmobil: Foundations of Speech-to-Speech Translation", "authors": [ { "first": "W", "middle": [], "last": "Wahlster", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wahlster, W. Verbmobil: Foundations of Speech-to-Speech Translation. 
New York: Springer-Verlag Press, 2000.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A Speech Translation System with Mobile Wireless Clients", "authors": [ { "first": "K", "middle": [], "last": "Yamabana", "suffix": "" }, { "first": "K", "middle": [], "last": "Hanazawa", "suffix": "" }, { "first": "R", "middle": [], "last": "Isotani", "suffix": "" }, { "first": "S", "middle": [], "last": "Osada", "suffix": "" }, { "first": "A", "middle": [], "last": "Okumura", "suffix": "" }, { "first": "T", "middle": [], "last": "Watanabe", "suffix": "" } ], "year": 2003, "venue": "Proc. of the Student Research Workshop at the 41st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "119--122", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Yamabana, K. Hanazawa, R. Isotani, S. Osada, A. Okumura and T. Watanabe. A Speech Translation System with Mobile Wireless Clients. In Proc. of the Student Research Workshop at the 41st Annual Meeting of the Association for Computational Linguistics, pp.119-122, 2003.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "The proposed two-layer search plan 2.3", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "Additional types of two-layer search paths", "num": null, "uris": null, "type_str": "figure" }, "TABREF1": { "text": "Fours pairs of grouped translated examples For example, a new expanded translated example, denoted by ExTrans, derived from the translated examples in Table 2 is shown below. An example of expanded translated example Expanded translated example: Is \u2329V 1 \u232a still available for \u2329V 2 \u232a ? \u2194 \u2329V 3 \u232a \u9084 \u6709 \u2329V 4 \u232a \u55ce? 
where \u2329V 1 \u232a=\u2329room service, breakfast, laundry service, a single room\u232a, \u2329V 2 \u232a=\u2329tomorrow, tonight\u232a, \u2329V 3 \u232a=\u2329\u5ba2\u623f \u670d\u52d9, \u65e9\u9910, \u6d17\u6ecc \u670d\u52d9, \u4e00\u9593 \u55ae\u4eba \u623f\u232a, \u2329V 4 \u232a=\u2329\u660e\u5929, \u4eca\u665a\u232a", "num": null, "type_str": "table", "html": null, "content": "
Translated examples | Word mappings
1 | Is room service still available? \u2194 \u9084 \u6709 \u5ba2\u623f \u670d\u52d9 \u55ce? | \u2329Is\u2194\u55ce, room\u2194\u5ba2\u623f, service\u2194\u670d\u52d9, still\u2194\u9084, available\u2194\u6709\u232a
2 | Is breakfast available for tomorrow? \u2194 \u660e\u5929 \u6709 \u65e9\u9910 \u55ce? | \u2329Is\u2194\u55ce, breakfast\u2194\u65e9\u9910, available for\u2194\u6709, tomorrow\u2194\u660e\u5929\u232a
3 | Is laundry service still available? \u2194 \u9084 \u6709 \u6d17\u6ecc \u670d\u52d9 \u55ce? | \u2329Is\u2194\u55ce, laundry\u2194\u6d17\u6ecc, service\u2194\u670d\u52d9, still\u2194\u9084, available\u2194\u6709\u232a
4 | Is a single room available for tonight? \u2194 \u4eca\u665a \u6709 \u4e00\u9593 \u55ae\u4eba\u623f \u55ce? | \u2329Is\u2194\u55ce, a\u2194\u4e00\u9593, single\u2194\u55ae\u4eba, room\u2194\u623f, available for\u2194\u6709, tonight\u2194\u4eca\u665a\u232a
" }, "TABREF2": { "text": "v_i) and pair (s, v_j), the scores used for comparing the two pairs are", "num": null, "type_str": "table", "html": null, "content": "
D_i\u2032* and D_j\u2032*, where D_i\u2032* is the similarity measurement of pair (v_i\u2032, s) and D_j\u2032* is the similarity measurement of pair (v_j\u2032, s).
" }, "TABREF4": { "text": "Basic characteristics of the collected translated examples", "num": null, "type_str": "table", "html": null, "content": "
English | Chinese
" }, "TABREF5": { "text": "Average retrieval accuracy of baseline and the improvement in English-to-Chinese (E2C) Translation", "num": null, "type_str": "table", "html": null, "content": "
 | 1 | 2 | 3 | 4
 | Baseline | +unVE | +ScN | All
Example Size | Top 1 | Top 5 | Top 1 | Top 5 | Top 1 | Top 5 | Top 1 | Top 5
150 | 0.53 | 0.66 | 0.63 | 0.86 | 0.66 | 0.86 | 0.8 | 1
250 | 0.53 | 0.66 | 0.63 | 0.86 | 0.66 | 0.86 | 0.8 | 1
350 | 0.53 | 0.63 | 0.6 | 0.83 | 0.66 | 0.86 | 0.76 | 0.96
450 | 0.53 | 0.63 | 0.6 | 0.83 | 0.63 | 0.83 | 0.76 | 0.93
550 | 0.5 | 0.6 | 0.6 | 0.8 | 0.6 | 0.8 | 0.76 | 0.93
650 | 0.5 | 0.56 | 0.6 | 0.76 | 0.6 | 0.8 | 0.76 | 0.9
750 | 0.46 | 0.5 | 0.56 | 0.73 | 0.56 | 0.76 | 0.73 | 0.86
850 | 0.43 | 0.5 | 0.53 | 0.7 | 0.53 | 0.73 | 0.73 | 0.83
950 | 0.43 | 0.46 | 0.53 | 0.7 | 0.5 | 0.66 | 0.7 | 0.83
1050 | 0.4 | 0.43 | 0.46 | 0.66 | 0.46 | 0.66 | 0.66 | 0.8
Table 5. Average retrieval accuracy of baseline and the improvement in Chinese-to-English (C2E) Translation
 | 1 | 2 | 3 | 4
 | Baseline | +unVE | +ScN | All
Example Size | Top 1 | Top 5 | Top 1 | Top 5 | Top 1 | Top 5 | Top 1 | Top 5
150 | 0.46 | 0.6 | 0.63 | 0.8 | 0.6 | 0.76 | 0.76 | 1
250 | 0.46 | 0.6 | 0.6 | 0.76 | 0.6 | 0.73 | 0.76 | 0.96
350 | 0.46 | 0.56 | 0.6 | 0.76 | 0.56 | 0.7 | 0.73 | 0.93
450 | 0.43 | 0.56 | 0.56 | 0.73 | 0.53 | 0.66 | 0.7 | 0.9
550 | 0.43 | 0.53 | 0.56 | 0.7 | 0.53 | 0.63 | 0.7 | 0.86
650 | 0.43 | 0.53 | 0.53 | 0.7 | 0.5 | 0.6 | 0.66 | 0.83
750 | 0.4 | 0.5 | 0.53 | 0.66 | 0.5 | 0.6 | 0.63 | 0.8
850 | 0.4 | 0.5 | 0.5 | 0.66 | 0.46 | 0.56 | 0.63 | 0.8
950 | 0.4 | 0.46 | 0.46 | 0.63 | 0.46 | 0.56 | 0.6 | 0.76
1050 | 0.36 | 0.43 | 0.46 | 0.6 | 0.43 | 0.56 | 0.6 | 0.7
" }, "TABREF6": { "text": "Average retrieval accuracy in multiple speaker testing", "num": null, "type_str": "table", "html": null, "content": "
Example Size (Speech features of Sp1)
(Top 5) | 150 | 250 | 350 | 450 | 550 | 650 | 750 | 850 | 950 | 1050
Sp1 | E2C | 1 | 1 | 0.96 | 0.93 | 0.93 | 0.9 | 0.86 | 0.83 | 0.83 | 0.8
Sp1 | C2E | 1 | 0.96 | 0.93 | 0.9 | 0.86 | 0.83 | 0.8 | 0.8 | 0.76 | 0.7
All | Sp2 | E2C | 0.9 | 0.83 | 0.83 | 0.86 | 0.83 | 0.8 | 0.8 | 0.76 | 0.73 | 0.73
All | Sp2 | C2E | 0.76 | 0.73 | 0.73 | 0.7 | 0.7 | 0.66 | 0.63 | 0.66 | 0.66 | 0.63
All | Sp3 | E2C | 0.83 | 0.76 | 0.76 | 0.73 | 0.73 | 0.8 | 0.76 | 0.76 | 0.73 | 0.7
All | Sp3 | C2E | 0.7 | 0.66 | 0.66 | 0.63 | 0.7 | 0.66 | 0.66 | 0.6 | 0.63 | 0.6
" } } } }