{ "paper_id": "O04-3002", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:00:15.325358Z" }, "title": "Multiple-Translation Spotting for Mandarin-Taiwanese Speech-to-Speech Translation", "authors": [ { "first": "Jhing-Fa", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Cheng Kung University", "location": { "addrLine": "No.1", "postCode": "70101", "settlement": "East District, Tainan City", "country": "Taiwan, R.O.C" } }, "email": "wangjf@csie.ncku.edu.tw" }, { "first": "Shun-Chieh", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Cheng Kung University", "location": { "addrLine": "No.1", "postCode": "70101", "settlement": "East District, Tainan City", "country": "Taiwan, R.O.C" } }, "email": "" }, { "first": "Hsueh-Wei", "middle": [], "last": "Yang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Cheng Kung University", "location": { "addrLine": "No.1", "postCode": "70101", "settlement": "East District, Tainan City", "country": "Taiwan, R.O.C" } }, "email": "" }, { "first": "Fan-Min", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Cheng Kung University", "location": { "addrLine": "No.1", "postCode": "70101", "settlement": "East District, Tainan City", "country": "Taiwan, R.O.C" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The critical issues involved in speech-to-speech translation are obtaining proper source segments and synthesizing accurate target speech. Therefore, this article develops a novel multiple-translation spotting method to deal with these issues efficiently. Term multiple-translation spotting refers to the task of extracting target-language synthesis patterns that correspond to a given set of source-language spotted patterns in conditional multiple pairs of speech patterns known to be translation patterns. According to the extracted synthesis patterns, the target speech can be properly synthesized by using a waveform segment concatenation-based synthesis method. Experiments were conducted with the languages of Mandarin and Taiwanese. The results reveal that the proposed approach can achieve translation understanding rates of 80% and 76% on average for Mandarin/Taiwanese translation and Taiwanese/Mandarin translation, respectively.", "pdf_parse": { "paper_id": "O04-3002", "_pdf_hash": "", "abstract": [ { "text": "The critical issues involved in speech-to-speech translation are obtaining proper source segments and synthesizing accurate target speech. Therefore, this article develops a novel multiple-translation spotting method to deal with these issues efficiently. Term multiple-translation spotting refers to the task of extracting target-language synthesis patterns that correspond to a given set of source-language spotted patterns in conditional multiple pairs of speech patterns known to be translation patterns. According to the extracted synthesis patterns, the target speech can be properly synthesized by using a waveform segment concatenation-based synthesis method. Experiments were conducted with the languages of Mandarin and Taiwanese. 
The results reveal that the proposed approach can achieve translation understanding rates of 80% and 76% on average for Mandarin/Taiwanese translation and Taiwanese/Mandarin translation, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Automatic speech-to-speech translation is a prospective application of speech and language technology [see JANUS III [Lavie et al. 1997], Verbmobil [Wahlster 2000], EUTRANS [Casacuberta et al. 2001], and ATR-MATRIX [Sugaya et al. 1999]]. However, two problems remain unsolved in speech-to-speech translation: how to obtain proper source segments and how to generate accurate target sequences when system performance is degraded by noisy speech input.", "cite_spans": [ { "start": 117, "end": 136, "text": "[Lavie et al. 1997]", "ref_id": "BIBREF0" }, { "start": 153, "end": 167, "text": "Wahlster 2000]", "ref_id": "BIBREF1" }, { "start": 178, "end": 203, "text": "[Casacuberta et al. 2001]", "ref_id": null }, { "start": 219, "end": 239, "text": "[Sugaya et al. 1999]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "With the rising importance of parallel texts (bitexts) in language translation, an approach called translation spotting has been applied to propose appropriate translations; see the TransSearch system [Macklovitch et al., 2000] and sub-sentential translation memory systems [Simard, 2003]. Previous works in this area have suggested that manual review or crafting is required to obtain example bases of sufficient coverage and accuracy to be truly useful.", "cite_spans": [ { "start": 213, "end": 239, "text": "[Macklovitch et al., 2000]", "ref_id": "BIBREF4" }, { "start": 286, "end": 303, "text": "[M. Simard, 2003]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Translation spotting (TS) is a term coined by V\u00e9ronis and Langlais [2000]; it refers to the task of identifying the word tokens in a target-language (TL) translation that correspond to given word patterns in a source-language (SL) text. This process takes as input a couple, i.e., a pair of SL and TL text segments known to be translation patterns, and an SL query, i.e., a subset of the patterns of the SL segment, on which the TS will focus its attention. In more formal terms:", "cite_spans": [ { "start": 46, "end": 73, "text": "V\u00e9ronis and Langlais [2000]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The input to the TS process is a pair of SL and TL text segments (S, T), and a contiguous, non-empty input sequence of word-tokens in SL, $q = s_1 \cdots s_n$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The output is a pair of sets of translation patterns $(r_S(q), r_T(q))$: the SL answer and the TL answer, respectively. Table 1 shows some examples of TS, where the words in italics represent the SL input, and the words in bold are the SL and TL answers.", "cite_spans": [], "ref_spans": [ { "start": 120, "end": 127, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }
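, { "text": "To make the couple/query/answer terminology concrete, the following is a minimal text-level sketch of translation spotting over one couple. The token lists, the word alignment, and the helper name spot_translation are illustrative assumptions of ours, not part of the systems cited above.

```python
# Hypothetical illustration of translation spotting (TS) on one couple.
# Assumes a word-level alignment between SL and TL tokens is given.

def spot_translation(sl_tokens, tl_tokens, alignment, query_span):
    \"\"\"Return the SL answer r_S(q) and TL answer r_T(q) for a query.

    alignment  : set of (sl_index, tl_index) links
    query_span : (start, end) token indices of the SL query q (end exclusive)
    \"\"\"
    start, end = query_span
    sl_answer = [sl_tokens[i] for i in range(start, end)]
    tl_indices = sorted({j for (i, j) in alignment if start <= i < end})
    tl_answer = [tl_tokens[j] for j in tl_indices]
    return sl_answer, tl_answer  # tl_answer may be empty (cf. example 4)

# Couple: \"我 要 訂 兩 間 單人房\" <-> \"goar bueq dexng lerng kefng danjiinpaang\"
sl = [\"我\", \"要\", \"訂\", \"兩\", \"間\", \"單人房\"]
tl = [\"goar\", \"bueq\", \"dexng\", \"lerng\", \"kefng\", \"danjiinpaang\"]
links = {(0, 0), (1, 1), (2, 2), (3, 3), (4, 4), (5, 5)}

print(spot_translation(sl, tl, links, (0, 6)))
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }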
, { "text": "As can be seen in these examples, the patterns in the input q and the answers $r_S(q)$ and $r_T(q)$ may not be contiguous (examples 2 and 3), and the TL answer may be empty (example 4) when there is no satisfactory way of linking TL patterns to the input. By varying the identification criteria, the translation spotting method can help evaluate units over various dimensions, such as frequency ranges, parts of speech, and even speech features of spoken language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "However, translation spotting can only draw out the TL answer from the best translation; it cannot handle an SL query whose word-tokens are distributed over different translations. Consequently, we propose conducting multiple-translation spotting of a speech input using multiple pairs of translation patterns. Figure 1 shows an example of multiple-translation spotting of a speech input. When a speaker inputs the SL speech query \"\u4eca\u665a\u6703\u6709\u4e09\u9593\u55ae\u4eba\u623f\u55ce\", the proposed system can obtain a TL speech pattern set that includes five elements, \"ehngf\", \"kvaru\", \"svaf\", \"kefng\", and \"danjiinpaang\", according to the spotted SL speech patterns \"\u4eca\u665a\", \"\u6703\u6709\", \"\u9593\", \"\u55ce\", \"\u4e09\", and \"\u55ae\u4eba\u623f\". The rest of this article is organized as follows. Section 2 presents the framework of the proposed system. Section 3 presents system data training for Mandarin and Taiwanese. Section 4 describes the proposed translation method for speech-to-speech translation. Section 5 presents experimental results. Section 6 draws conclusions.", "cite_spans": [], "ref_spans": [ { "start": 310, "end": 318, "text": "Figure 1", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The proposed speech-to-speech translation system is divided into two phases: a training phase and a translation phase. In the training phase, the developed translation examples are imported to derive multiple-translation templates and develop speech data. In the following step, the developed speech data are applied to construct multiple-translation spotting models and synthesis templates. Figure 2(a) shows a block diagram of the training phase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Framework of the Proposed System", "sec_num": "2." }, { "text": "Figure 2(b) shows a block diagram of the translation phase. A one-stage-based spotting method is adopted to identify input spoken phrases for each spotting template, and the template candidates are assigned in the following score normalization and ranking process. However, the hypothesized word sequence generally includes noise-like segments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Framework of the Proposed System", "sec_num": "2." }
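, { "text": "The smoothing step that deals with these noise-like segments (described next and detailed in Section 4.3) can be sketched as a small duration filter. The token/duration representation and the function name below are our own assumptions; the 0.3 s syllable threshold follows the text.

```python
def smooth_hypothesis(tokens, min_duration=0.3):
    \"\"\"Drop noise-like segments: merge duplicated neighbors, then
    discard any hypothesized token whose total duration is too short.

    tokens: list of (label, duration_in_seconds) in path order.
    \"\"\"
    merged = []
    for label, dur in tokens:
        if merged and merged[-1][0] == label:        # duplication
            merged[-1] = (label, merged[-1][1] + dur)
        else:
            merged.append((label, dur))
    return [(lab, d) for lab, d in merged if d >= min_duration]

hyp = [(\"ehngf\", 0.35), (\"kvaru\", 0.2), (\"kvaru\", 0.25), (\"svaf\", 0.1)]
print(smooth_hypothesis(hyp))  # [('ehngf', 0.35), ('kvaru', 0.45)]
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Framework of the Proposed System", "sec_num": "2." }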
}, { "text": "Accordingly, the segments are adjusted by smoothing the hypothesized word sequences. After the hypothesized word sequences of all template candidates have been smoothed, the hypothesized target sequences are generated using the translation template with the maximum number of spotting tokens of speech input. The obtained target speech segments are used to produce target speech by means of the corresponding synthesis template in the final target generation process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Framework of the Proposed System", "sec_num": "2." }, { "text": "As for the task of translating Mandarin and Taiwanese language pairs, although these languages both belong to the family of Chinese languages, their language usages still have various development by language families and their origins, Mandarin belongs to Altaic language family, and Taiwanese belongs to Sinitic language family [Sher et al., 1999] .", "cite_spans": [ { "start": 329, "end": 348, "text": "[Sher et al., 1999]", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Data Training Phase", "sec_num": "3." }, { "text": "Therefore, in the following section, we will consider their language usages for three template construction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Training Phase", "sec_num": "3." }, { "text": "While translation templates can be fully constructed, one major issue in translation pattern exploitation, called \"divergence,\" makes straightforward transfer mapping extraction impractical. Dorr (1993) describes divergence in the following way: \"translation divergence arises when the natural translation of one language into another result in a very different form than that of the original.\" Therefore, we choose translations with no divergence to practice constructing templates. An example of a simple translation template derived from a practicable translated example is shown below.", "cite_spans": [ { "start": 191, "end": 202, "text": "Dorr (1993)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Multiple Translation Template Construction", "sec_num": "3.1" }, { "text": "Translated Example: SL: \"\u6211 \u670b\u53cb \u8981 \u8a02 \u623f\u9593\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multiple Translation Template Construction", "sec_num": "3.1" }, { "text": "\u2194 TL: \"goarn pengiuo bueq dexng pangkefng\" Intention Translation: M p1 \u8981 \u8a02 M p2 \u2194 T p1 bueq dexng T p2 Variable Translation: If M p1 \u2194 T p1 , \u6211 \u670b\u53cb \u2194 goarn pengiuo If M p2 \u2194 T p2 , \u623f\u9593 \u2194 pangkefng", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multiple Translation Template Construction", "sec_num": "3.1" }, { "text": "The translation template is composed of a translated example, an intention translation, and two variable translations. The example shows how a sentence in Mandarin (SL) that contains an intention \"\u8981 \u8a02\" with two variables, M p1 (\u6211 \u670b\u53cb) and M p2 (\u623f\u9593), can be translated into a sentence in Taiwanese (TL) with an intention \"bueq dexng\" and two variables, T p1 (goarn pengiuo) and T p2 (pangkefng). According to the template, the number of variable translations should be expanded to improve the capability for spotting the speech input. 
, { "text": "Starting from the preceding example, variable translation expansion can be illustrated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multiple Translation Template Construction", "sec_num": "3.1" }, { "text": "Variable Translation Expansion: if M_p1 \u2194 T_p1, \u6211 \u2194 goar and \u6211 \u670b\u53cb \u2194 goarn pengiuo; if M_p2 \u2194 T_p2, \u623f\u9593 \u2194 pangkefng and \u7968 \u2194 phiaux.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multiple Translation Template Construction", "sec_num": "3.1" }, { "text": "Therefore, we can obtain corpus-specific multiple translations in a template constructed from three translation patterns: \"\u6211 \u670b\u53cb \u8981 \u8a02 \u623f\u9593 \u2194 goarn pengiuo bueq dexng pangkefng\", \"\u6211 \u2194 goar\", and \"\u7968 \u2194 phiaux\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multiple Translation Template Construction", "sec_num": "3.1" }, { "text": "Taiwanese is primarily an oral language and still has no uniform writing system. In the literature, there are two ways to represent Taiwanese words: Chinese characters and alphabetic writing [Sher et al., 1999]. Chinese characters form a huge ideographic character set; therefore, it is difficult to systematize developed examples. Although alphabetic writing would be an appropriate representation form, a universal phonemic transcription system is still not available. Such spotting reference models are embedded with latent grammars from the constructed templates. When dealing with Mandarin-Taiwanese speech feature models, we build the database by extracting LPCC features from recorded template speeches. Hence, when speech recognition is performed, the LPCC features of the speech input are compared with those extracted from the recorded template speeches to compute the degree of dissimilarity. After language pairs of both Taiwanese and Mandarin speech data are developed, the transfer mapping information for a pair of Taiwanese and Mandarin speech segments known to be similar in terms of text-form word alignment is constructed.", "cite_spans": [ { "start": 192, "end": 211, "text": "[Sher et al., 1999]", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Spotting Model Construction", "sec_num": "3.2" }, { "text": "Both Mandarin and Taiwanese are tonal languages, and it is difficult to determine whether a morpheme will take its inherent tone or a derived tone when every word in a sentence is synthesized [Wang et al., 1999; Sher et al., 1999]. Therefore, we utilize the obtained intention speech and variable speech as synthesis templates that include intention synthesis units and variable synthesis units. These synthesis units can be used to generate a speech output processed by a waveform segment concatenation-based synthesis method [Wang et al., 1999].", "cite_spans": [ { "start": 195, "end": 214, "text": "[Wang et al., 1999;", "ref_id": "BIBREF8" }, { "start": 215, "end": 233, "text": "Sher et al., 1999]", "ref_id": "BIBREF9" }, { "start": 540, "end": 559, "text": "[Wang et al., 1999]", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Synthesis Template Construction", "sec_num": "3.3" }
, { "text": "For each synthesis unit in the obtained speech data, the following features are stored:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synthesis Template Construction", "sec_num": "3.3" }, { "text": "\u2022 the waveform and its length,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synthesis Template Construction", "sec_num": "3.3" }, { "text": "\u2022 the code of the synthesis unit.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synthesis Template Construction", "sec_num": "3.3" }
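, { "text": "A minimal sketch of such a synthesis-unit record is given below; the class name and the WAV-file handling are illustrative assumptions, not the paper's storage format.

```python
from dataclasses import dataclass
import wave

@dataclass
class SynthesisUnit:
    \"\"\"One stored intention or variable synthesis unit.\"\"\"
    code: str          # identifier of the synthesis unit
    waveform: bytes    # raw PCM samples
    length: int        # number of samples

def load_unit(code, path):
    # Read a recorded unit and store the features listed above.
    with wave.open(path, \"rb\") as wav:
        frames = wav.readframes(wav.getnframes())
        return SynthesisUnit(code=code, waveform=frames, length=wav.getnframes())
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synthesis Template Construction", "sec_num": "3.3" }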
, { "text": "To deal with the problem of spotting between a speech input $X_1^L$ and a translation pattern set $\{(s_j^{(v)}, t_j^{(v)})\}_{j=1}^{J}$ in the v-th translation template $r_v$, we use the standard notation $l$ to represent the frame index of $X_1^L$, $1 \le l \le L$; $j$ to represent the index of the spotting pair $(s_j^{(v)}, t_j^{(v)})$ of $r_v$, $1 \le j \le J$; and $k$ to represent the frame index of the j-th spotting pattern $s_j^{(v)}$, $1 \le k \le K_j$. Then for each input frame, the accumulated distance $d_A(l,k,j)$ is computed by $d_A(l,k,j) = d(l,k,j) + \min_{k-2 \le m \le k} d_A(l-1,m,j)$, (1) for $2 \le k \le K_j$, $1 \le j \le J$, where $d(l,k,j)$ is the local distance between the l-th frame of $X_1^L$ and the k-th frame of the source pattern $s_j^{(v)}$. The recursion of (1) is carried out for all internal frames (i.e., $k \ge 2$) of each source pattern.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multiple-Translation Spotting Method", "sec_num": "4.1" }, { "text": "At the speech pattern boundary, i.e., $k = 1$, the recursion can be calculated as follows: $d_A(l,1,j) = d(l,1,j) + \min[\, \min_{1 \le m \le J} d_A(l-1, K_m, m),\; d_A(l-1, 1, j) \,]$. (2) The final solution for the best path is $d_G^{(v)} = \min_{1 \le j \le J} d_A(L, K_j, j)$. (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multiple-Translation Spotting Method", "sec_num": "4.1" }, { "text": "The details of the multiple-translation spotting algorithm are given below. Parameters: $\{\tau_j^{(v)}\}_{j=1}^{J}$ are the spotting results of $\{s_j^{(v)}\}_{j=1}^{J}$, where $\tau_j^{(v)} = 1$ if the SL speech pattern $s_j^{(v)}$ is spotted by $X_1^L$ and $\tau_j^{(v)} = 0$ otherwise; $w_v = \{\, t_j^{(v)} \mid \tau_j^{(v)} = 1,\; 1 \le j \le J \,\}$ collects the hypothesized TL synthesis patterns. Initialization: $l \leftarrow k \leftarrow j \leftarrow 1$; $\tau_j^{(v)} \leftarrow 0$ for $1 \le j \le J$; $w_v \leftarrow \emptyset$. For the v-th template, while $l \le L$: for each spotting pattern $s_j^{(v)}$ and each frame index $k \le K_j$, if $k = 1$, update $d_A(l,k,j)$ by recursion (2) and record the back-pointer $p(l,k,j)$ as the minimizing predecessor; if $k > 1$, update $d_A(l,k,j)$ by recursion (1) and record $p(l,k,j) = \arg\min_{k-2 \le m \le k} d_A(l-1,m,j)$; when $k = K_j$, the path may restart at the head ($k = 1$) of a pattern. After the last frame $L$ has been processed, the best-path ending pair is found by (3), the spotting results $\{\hat{\tau}_j^{(v)}\}_{j=1}^{J}$ are obtained by back-tracing from $(L, K_j, j)$, with each $\tau_j^{(v)}$ assigned 1 or 0, and for each $j$ with $\tau_j^{(v)} = 1$ the TL pattern is collected: $w_v \leftarrow w_v \cup \{t_j^{(v)}\}$. The algorithm returns $w_v$ and $\{\tau_j^{(v)}\}_{j=1}^{J}$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multiple-Translation Spotting Method", "sec_num": "4.1" }
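, { "text": "Recursions (1)-(3) can be prototyped directly. The following NumPy sketch is a simplified rendering under our own naming (a Euclidean local distance, no slope weighting, and no back-pointer bookkeeping), not the system's actual implementation.

```python
import numpy as np

def one_stage_spotting(x, patterns):
    \"\"\"Spot template patterns in input x (L x D frames) via eqs. (1)-(3).

    patterns: list of (K_j x D) arrays, the SL spotting patterns s_j.
    Returns the global distance d_G of the best path.
    \"\"\"
    L = len(x)
    INF = np.inf
    # dA[j] is a length-K_j array holding d_A(l, ., j) for the current l.
    dA = [np.full(len(p), INF) for p in patterns]
    for l in range(L):
        prev = [col.copy() for col in dA]
        # Best pattern-final score at frame l-1, for boundary recursion (2).
        best_end = min(col[-1] for col in prev) if l > 0 else INF
        for j, p in enumerate(patterns):
            local = np.linalg.norm(x[l] - p, axis=1)  # d(l, k, j) for all k
            if l == 0:
                dA[j][0] = local[0]   # paths may start at any pattern head
                continue
            for k in range(len(p)):
                if k == 0:
                    # Eq. (2): continue inside, or jump from any pattern's end.
                    dA[j][k] = local[k] + min(prev[j][0], best_end)
                else:
                    # Eq. (1): Itakura-style predecessors k-2..k.
                    dA[j][k] = local[k] + prev[j][max(0, k - 2):k + 1].min()
    return min(col[-1] for col in dA)  # Eq. (3)

# Toy usage with random features standing in for LPCC frames.
rng = np.random.default_rng(0)
x = rng.normal(size=(20, 10))
patterns = [rng.normal(size=(5, 10)), rng.normal(size=(7, 10))]
print(one_stage_spotting(x, patterns))
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multiple-Translation Spotting Method", "sec_num": "4.1" }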
, { "text": "The length of the matching sequence can severely impact the cumulative dissimilarity measurement, so a length-conditioning weight is applied to overcome this defect. Scoring methods that involve a length measurement $\Delta(X_1^L, s^{(v)})$, where $s^{(v)} = \bigcup_{j=1}^{J} s_j^{(v)}$ [Liu and Zhou, 1998], can be defined in a number of similar ways: $\Delta(X_1^L, s^{(v)}) = \max(|X_1^L|, |s^{(v)}|)$ or $\min(|X_1^L|, |s^{(v)}|)$, (4) $\Delta(X_1^L, s^{(v)}) = |X_1^L| \cdot |s^{(v)}|$, (5) $\Delta(X_1^L, s^{(v)}) = [\, N(L, \sum_{j=1}^{J} K_j) + F(L, \sum_{j=1}^{J} K_j) \,] / 3$, (6) where $|X_1^L|$ is the number of frames in the speech input; $|s^{(v)}|$ is the total number of search frames in $\{s_j^{(v)}\}_{j=1}^{J}$; $N(L, \sum_j K_j)$ is the number of frames compared; and $F(L, \sum_j K_j)$ is the number of frames that fail to be matched.", "cite_spans": [ { "start": 222, "end": 232, "text": "Zhou, 1998", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Normalizing the Score and Ranking", "sec_num": "4.2" }, { "text": "To improve the flexibility and reliability of the dissimilarity measurement, an exponential length measurement $\Delta(X_1^L, s^{(v)})$ is defined as follows: $\Delta(X_1^L, s^{(v)}) = \partial^{\,w_{X,s^{(v)}}}$, (7) where $w_{X,s^{(v)}}$ is a weighting factor, $w_{X,s^{(v)}} = (\, |X_1^L| - |s^{(v)}| \,) \cdot |s^{(v)}|^{-1}$. The weighting factor of Eq. (7) has two features: one is length correlation normalization, and the other is exponential score normalization. With length correlation normalization, the tendency to choose a template $s^{(v)}$ with the same length difference from $X_1^L$ but a smaller length product is eliminated. With exponential score normalization, when the difference between the speech input and a template is larger, a higher dissimilarity score is obtained and spotting discrimination improves. Finally, the normalized dissimilarity is determined as follows: $\tilde{d}_G^{(v)} = d_G^{(v)} \cdot \partial^{\,w_{X,s^{(v)}}}$. (8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalizing the Score and Ranking", "sec_num": "4.2" }, { "text": "The experimental analysis shown in Fig. 3 indicates that the interval of $\partial$ that yields the most accurate dissimilarity measurement is $[\,1.2 - \delta,\; 1.2 + \delta\,]$.", "cite_spans": [], "ref_spans": [ { "start": 35, "end": 41, "text": "Fig. 3", "ref_id": "FIGREF8" } ], "eq_spans": [], "section": "Normalizing the Score and Ranking", "sec_num": "4.2" }
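, { "text": "In code, the normalization of Eqs. (7)-(8) is a one-liner. This sketch uses our own argument names and the reported value $\partial = 1.2$; it follows the reconstruction of the weighting factor given above.

```python
def normalize_score(d_global, n_input_frames, n_template_frames, base=1.2):
    \"\"\"Eqs. (7)-(8): length-conditioned exponential score normalization.\"\"\"
    w = (n_input_frames - n_template_frames) / n_template_frames
    return d_global * base ** w

# A longer mismatch between input and template raises the score (worse rank).
print(normalize_score(10.0, 200, 150))   # input longer than template
print(normalize_score(10.0, 150, 150))   # equal lengths: weight 0, unchanged
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalizing the Score and Ranking", "sec_num": "4.2" }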
, { "text": "Therefore, the value of $\partial$ chosen here is 1.2. The weighting factor is determined using the feature models of the first speaker for inside training. The feature models are different from the test data; thus, $\partial$ is a test-independent weighting factor. After all the templates have been ranked, the retrieval accuracy is estimated using the criterion that the intention of the source speech is located in the set of the best N retrieved translation templates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalizing the Score and Ranking", "sec_num": "4.2" }, { "text": "The main weakness of the one-stage algorithm for multiple-translation spotting is that it provides no mechanism for controlling the resulting sequence length, that is, for determining the optimal token sequence of arbitrary length. The algorithm finds a single best path whose sequence length is arbitrary. Therefore, the hypothesized token sequence generally includes noise-like components. These components tend to take the form of duplications, and their durations tend to fall below a threshold. Based on this assumption, hypothesized token outputs with segment durations below the threshold are considered for further smoothing. In Mandarin and Taiwanese, the duration of a syllable is 0.3 sec on average [Sher et al., 1999], and this value is set as the threshold to sift out noise-like components whose durations are less than 0.3 sec. These are the preliminary speaker-dependent results of our experiments. The system is able to adjust the threshold when a speaker speaks at a different rate. Additionally, the system is corpus-specific, and out-of-vocabulary (OOV) words are rejected based on their high dissimilarity scores. After the token sequences of all the Top-N templates have been smoothed, the hypothesized target sequences are generated using the translation template with the maximum number of spotted tokens of the speech input.", "cite_spans": [ { "start": 711, "end": 730, "text": "[Sher et al., 1999]", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Smoothing the Hypothesized Template", "sec_num": "4.3" }, { "text": "Once the hypothesized target sequences have been determined, the target speech generation process is straightforward and similar to the waveform segment concatenation-based synthesis method. In this method, waveform segments are extracted beforehand from the recorded intention synthesis units and variable synthesis units of the synthesis template, and they are rearranged with adequate overlapping portions to generate speech with the desired energy and duration. The merits of the method are the small computational cost of the synthesis process and the high intelligibility of the synthesized speech. The generation process includes complete matching, waveform replacement, and waveform deletion; thus, it is similar to the example-based translation method [Liu and Zhou, 1998].", "cite_spans": [ { "start": 770, "end": 791, "text": "[Liu and Zhou, 1998]", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Target Speech Generation", "sec_num": "4.4" }
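, { "text": "A minimal sketch of waveform segment concatenation with overlapping portions is given below, using a simple linear cross-fade at each joint; the array names and the overlap length are illustrative assumptions, not the method's actual parameters.

```python
import numpy as np

def concatenate_units(segments, overlap=64):
    \"\"\"Concatenate waveform segments, cross-fading each joint.\"\"\"
    out = segments[0].astype(float)
    fade = np.linspace(0.0, 1.0, overlap)
    for seg in segments[1:]:
        seg = seg.astype(float)
        joint = out[-overlap:] * (1 - fade) + seg[:overlap] * fade
        out = np.concatenate([out[:-overlap], joint, seg[overlap:]])
    return out

# Toy usage: three sine bursts standing in for synthesis units.
t = np.arange(800) / 8000.0
units = [np.sin(2 * np.pi * f * t) for f in (220.0, 330.0, 440.0)]
speech = concatenate_units(units)
print(speech.shape)  # (800*3 - 64*2,) samples after two cross-faded joints
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Speech Generation", "sec_num": "4.4" }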
, { "text": "We built a collection of Mandarin sentences and their Taiwanese translations of the kind that usually appears in phrasebooks for foreign tourists. Because the translations were made sentence by sentence, the corpus was sentence-aligned from the start. Table 2 shows the basic characteristics of the collected corpus.", "cite_spans": [], "ref_spans": [ { "start": 233, "end": 240, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "The Task and the Corpus", "sec_num": "5.1" }, { "text": "In this work, the highly divergent example sentence pairs needed to be collated or sieved out to improve the accuracy and effectiveness of alignment exploration between word sequences and the derivation of multiple translation templates. Table 3 shows the basic characteristics of the derived multiple translation templates. The derived templates were used to develop the speech corpus, which in turn was used to construct spotting models and synthesis templates. In order to evaluate system performance, a collection of 1,050 utterances was used for speaker-dependent training, and 30 additional utterances of each language were collected from one male speaker (Sp1) for inside testing and from two bilingual male speakers (Sp2 and Sp3) for outside testing. All the utterances were sampled at an 8 kHz sampling rate with 16-bit precision on a Pentium\u00ae IV 1.8 GHz, 1 GB RAM, Windows\u00ae XP PC.", "cite_spans": [], "ref_spans": [ { "start": 251, "end": 258, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Table 2. Basic characteristics of the collected translated examples.", "sec_num": null }, { "text": "For the speech translation system, we found that the recognition performance of 39-dimension MFCCs and that of 10-dimension LPCCs were close. Therefore, we adopted 10-dimension LPCCs because of their faster computation and simpler hardware design. Speech feature analysis for recognition was performed using 10 linear prediction cepstral coefficients (LPCCs) on 32 ms frames overlapped every 8 ms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Evaluations", "sec_num": "5.2" }, { "text": "To estimate the computational load of the proposed MTS algorithm, a complexity analysis is shown in Table 4. The local distance computations depend on the feature dimension, so we use O(LPCC_add) and O(LPCC_mul) to represent the complexity of the additions and multiplications, respectively. We applied Itakura-type path constraints: each internal dynamic-programming path selection employed 3 additions to decide the last node, 1 addition to accumulate the node distance, and 3 multiplications for slope weighting. In Table 4, the second row, Distance computation, presents the computational complexity of computing the local distance, and the third row, Path selection, presents the computational complexity of selecting the best path, that is, the computational overhead of MTS for each template.", "cite_spans": [], "ref_spans": [ { "start": 466, "end": 474, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Translation Evaluations", "sec_num": "5.2" }
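, { "text": "For reference, LPCC analysis of the kind described above can be sketched as follows: the autocorrelation method with the Levinson-Durbin recursion yields LPC coefficients, which the standard recursion converts to cepstra. The frame length, overlap, and order follow the text, while the function names and windowing choice are our own assumptions.

```python
import numpy as np

def lpcc(frame, order=10):
    \"\"\"10-dimension LPCC of one windowed frame (32 ms at 8 kHz = 256 samples).\"\"\"
    # Autocorrelation method + Levinson-Durbin recursion for LPC.
    r = np.correlate(frame, frame, mode=\"full\")[len(frame) - 1:]
    a = np.zeros(order + 1); a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a[1:i + 1] += k * a[i - 1::-1][:i]   # a_j += k * a_{i-j}
        err *= (1 - k * k)
    # Standard LPC-to-cepstrum recursion.
    c = np.zeros(order)
    for n in range(1, order + 1):
        c[n - 1] = -a[n] - sum((m / n) * c[m - 1] * a[n - m] for m in range(1, n))
    return c

frames = np.random.default_rng(1).normal(size=(5, 256))  # stand-in speech frames
feats = np.array([lpcc(f * np.hamming(256)) for f in frames])
print(feats.shape)  # (5, 10): one 10-dimension LPCC vector per frame
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Evaluations", "sec_num": "5.2" }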
", "cite_spans": [], "ref_spans": [ { "start": 466, "end": 474, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Translation Evaluations", "sec_num": "5.2" }, { "text": "addLPCCOKL J j v j _ 1 )( \u22c5\u22c5 \u2211 = ( ) mulLPCCOKL J j v j _ 1 )( \u22c5\u22c5 \u2211 = Path selection \u2211 = \u22c5\u22c5 J j v j KL 1 )( 5 \u2211 = \u22c5\u22c5 J j v j KL 1 )( 3 Total for each template ( ) ( ) addLPCCOKL J j v j _5 1 )( +\u22c5\u22c5 \u2211 = ( ) ( ) mulLPCCOKL J j v j _3 1 )( +\u22c5\u22c5 \u2211 = Total for all templates ( ) ( ) \u2211\u2211 \uf8f7 \uf8f7 \uf8f8 \uf8f6 \uf8ec \uf8ec \uf8ed \uf8eb +\u22c5\u22c5 = v J j v j addLPCCOKL _5 1 )( ( ) ( ) \u2211\u2211 \uf8f7 \uf8f7 \uf8f8 \uf8f6 \uf8ec \uf8ec \uf8ed \uf8eb +\u22c5\u22c5 = v J j v j mulLPCCOKL _3 1 )(", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Evaluations", "sec_num": "5.2" }, { "text": "When input speech is being spotted, a major sub-problem in speech processing is determining the presence or absence of a voice component in a given signal, especially the beginnings and endings of voice segments. Therefore, the energy-based approach, which is a classic one and works well under high SNR conditions, was applied to eliminate unvoiced components in this research. The measurement results were divided into four parts: the dissimilarity measurement of linear prediction coefficient cepstrum (LPCC)-based (baseline), the baseline with unvoiced elimination (unVE), the baseline with the time-conditioned weight (TcW), and the combination of unVE and TcW considerations with the baseline. A given translation template is called a match when it contained the same intention as the speech input. The reason for adopting this strategy was that variables could be confirmed again while a dialogue was being processed, while wrong intentions could cause endless iterations of dialogue. The experimental results for proper template spotting are shown in Table 5 and Table 6 .", "cite_spans": [], "ref_spans": [ { "start": 1059, "end": 1080, "text": "Table 5 and Table 6", "ref_id": null } ], "eq_spans": [], "section": "Translation Evaluations", "sec_num": "5.2" }, { "text": "Based on the constructed translation templates, when the template or vocabulary size increases, more templates would possibly lead to more feature models and more similarities in speech recognition, thus causing false recognition results and lower spotting accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Evaluations", "sec_num": "5.2" }, { "text": "Additionally, multiple speaker dependent results were obtained using three speakers. The first speaker's feature models (spotting models) were used to perform tests on the other two speakers, and the results are shown in Table 7 . The experimental results show that although the feature models were trained by Sp1, the spotting accuracy of Sp2 and Sp3 was only reduced by 10 to 15 percent.", "cite_spans": [], "ref_spans": [ { "start": 221, "end": 228, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Translation Evaluations", "sec_num": "5.2" }, { "text": "A bilingual evaluator was used to classify the target generation results into three categories [Yamabana et al., 2003] was 63% for M/T translation, and it was 60% for T/M translation. We examined the translation templates in a specific domain and found that 100% translation accuracy could be achieved. 
, { "text": "In other words, translation errors occurred only as a result of speech recognition errors, such as word recognition errors and segmentation errors. The results also show that T/M translation performed worse than M/T translation. This is perhaps because spoken Taiwanese has more tones than Mandarin; thus, it is harder for T/M translation spotting to find an appropriate translation template.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Evaluations", "sec_num": "5.2" }, { "text": "Table 5. Average accuracy of baseline spotting and the improvement in Mandarin-to-Taiwanese translation, per template size.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Evaluations", "sec_num": "5.2" }, { "text": "Table 6. Average accuracy of baseline spotting and the improvement in Taiwanese-to-Mandarin translation. Columns give Top 1 / Top 5 accuracy for (1) Baseline, (2) Baseline + unVE, (3) Baseline + TcW, and (4) Baseline + unVE + TcW. Template size 150: 0.46/0.6, 0.6/0.83, 0.6/0.76, 0.76/1; 250: 0.46/0.6, 0.6/0.83, 0.6/0.7, 0.73/0.96; 350: 0.46/0.56, 0.56/0.8, 0.56/0.7, 0.7/0.96; 450: 0.43/0.56, 0.56/0.76, 0.56/0.66, 0.7/0.93; 550: 0.43/0.53, 0.53/0.76, 0.56/0.66, 0.66/0.86; 650: 0.43/0.53, 0.53/0.73, 0.53/0.6, 0.66/0.86; 750: 0.4/0.5, 0.5/0.7, 0.5/0.6, 0.63/0.83; 850: 0.4/0.5, 0.5/0.7, 0.5/0.56, 0.6/0.8; 950: 0.4/0.46, 0.46/0.66, 0.46/0.56, 0.6/0.76; 1050: 0.36/0.43, 0.43/0.66, 0.46/0.56, 0.6/0.76.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Evaluations", "sec_num": "5.2" }, { "text": "Table 7. Average accuracy of spotting in multiple speaker testing, per template size (Sp1 model).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Evaluations", "sec_num": "5.2" }, { "text": "In this work, we have proposed an approach that retrieves identified target speech segments by carrying out multiple-translation spotting on a source input. According to the retrieved speech segments, the target speech can then be generated by using the waveform segment concatenation-based synthesis method. Experiments on Mandarin and Taiwanese were performed on Pentium\u00ae PCs. The experimental results reveal that our system can achieve an average translation understanding rate of about 78%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6."
} ], "back_matter": [ { "text": "This article is a partial result of Project NSC 90-2215-E-006-009, sponsored by the National Science Council, Taiwan, R.O.C.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "JANUS III: Speech-to-Speech Translation in Multiple Languages", "authors": [ { "first": "A", "middle": [], "last": "Lavie", "suffix": "" }, { "first": "A", "middle": [], "last": "Waibel", "suffix": "" }, { "first": "L", "middle": [], "last": "Levin", "suffix": "" }, { "first": "M", "middle": [], "last": "Finke", "suffix": "" }, { "first": "D", "middle": [], "last": "Gates", "suffix": "" }, { "first": "M", "middle": [], "last": "Gavalda", "suffix": "" }, { "first": "T", "middle": [], "last": "Zeppenfeld", "suffix": "" }, { "first": "P", "middle": [], "last": "Zahn", "suffix": "" } ], "year": 1997, "venue": "Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing", "volume": "22", "issue": "", "pages": "99--102", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lavie, A., A. Waibel, L. Levin, M. Finke, D. Gates, M. Gavalda, T. Zeppenfeld and P. Zahn, \"JANUS III: Speech-to-Speech Translation in Multiple Languages,\" Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, 22(I) 1997, pp. 99-102.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Verbmobil: Foundations of Speech-to-Speech Translation", "authors": [ { "first": "W", "middle": [], "last": "Wahlster", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wahlster, W., \"Verbmobil: Foundations of Speech-to-Speech Translation,\" New York: Springer-Verlag Press, 2000.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Speech-to-Speech Translation Based on Finite-State Transducers", "authors": [ { "first": "F", "middle": [], "last": "Casacuberta", "suffix": "" }, { "first": "D", "middle": [], "last": "Llorens", "suffix": "" }, { "first": "C", "middle": [], "last": "Martinez", "suffix": "" }, { "first": "S", "middle": [], "last": "Molau", "suffix": "" }, { "first": "F", "middle": [], "last": "Nevado", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" }, { "first": "M", "middle": [], "last": "Pastor", "suffix": "" }, { "first": "D", "middle": [], "last": "Pico", "suffix": "" }, { "first": "A", "middle": [], "last": "Sanchis", "suffix": "" }, { "first": "E", "middle": [], "last": "Vidal", "suffix": "" }, { "first": "J", "middle": [ "M" ], "last": "Vilar", "suffix": "" } ], "year": null, "venue": "Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing", "volume": "26", "issue": "", "pages": "613--616", "other_ids": {}, "num": null, "urls": [], "raw_text": "Casacuberta, F., D. Llorens, C. Martinez, S. Molau, F. Nevado, H. Ney, M. Pastor, D. Pico, A. Sanchis, E. Vidal and J. M. Vilar, \"Speech-to-Speech Translation Based on Finite-State Transducers,\" Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, 26(I) 2001, pp. 
613-616.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "End-to-End Evaluation in ATR-MATRIX: Speech Translation System between English and Japanese", "authors": [ { "first": "F", "middle": [], "last": "Sugaya", "suffix": "" }, { "first": "T", "middle": [], "last": "Takezawa", "suffix": "" }, { "first": "A", "middle": [], "last": "Yokoo", "suffix": "" }, { "first": "S", "middle": [], "last": "Yamamoto", "suffix": "" } ], "year": null, "venue": "Proceedings of European Conference on Speech Communication and Technology", "volume": "6", "issue": "", "pages": "2431--2434", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sugaya, F., T. Takezawa, A. Yokoo and S. Yamamoto, \"End-to-End Evaluation in ATR-MATRIX: Speech Translation System between English and Japanese,\" Proceedings of European Conference on Speech Communication and Technology, 6(I) 1999, pp. 2431-2434.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "TransSearch: A Free Translation Memory on the World Wide Web", "authors": [ { "first": "E", "middle": [], "last": "Macklovitch", "suffix": "" }, { "first": "M", "middle": [], "last": "Simard", "suffix": "" }, { "first": "P", "middle": [], "last": "Langlais", "suffix": "" } ], "year": 2000, "venue": "Proceedings of International Conference on Language Resources & Evaluation, 3(I)", "volume": "", "issue": "", "pages": "1201--1208", "other_ids": {}, "num": null, "urls": [], "raw_text": "Macklovitch, E., M. Simard and P. Langlais, \"TransSearch: A Free Translation Memory on the World Wide Web,\" Proceedings of International Conference on Language Resources & Evaluation, 3(I) 2000, pp. 1201-1208.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Translation Spotting for Translation Memories", "authors": [ { "first": "S", "middle": [], "last": "Michel", "suffix": "" } ], "year": 2003, "venue": "Proceedings of HLT-NAACL Workshop on Building and Using Parallel Texts: Data Driven Machine Translation and Beyond", "volume": "", "issue": "", "pages": "65--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michel, S., \"Translation Spotting for Translation Memories,\" Proceedings of HLT-NAACL Workshop on Building and Using Parallel Texts: Data Driven Machine Translation and Beyond, 2003, pp. 65-72.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Evaluation of Parallel Text Alignment Systems -The ARCADE Project", "authors": [ { "first": "J", "middle": [], "last": "V\u00e9ronis", "suffix": "" }, { "first": "P", "middle": [], "last": "Langlais", "suffix": "" } ], "year": 2000, "venue": "Parallel Text Processing", "volume": "", "issue": "", "pages": "369--388", "other_ids": {}, "num": null, "urls": [], "raw_text": "V\u00e9ronis, J. and P. Langlais, \"Evaluation of Parallel Text Alignment Systems -The ARCADE Project,\" in J. V\u00e9ronis (ed.): Parallel Text Processing. Dordrecht: Kluwer Academic, 2000, pp. 369-388.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Machine Translation: A View from the Lexicon", "authors": [ { "first": "B", "middle": [ "J" ], "last": "Dorr", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dorr, B. 
J., \"Machine Translation: A View from the Lexicon,\" The MIT press, 1993.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A Study for Mandarin Text to Taiwanese Speech System", "authors": [ { "first": "J", "middle": [ "F" ], "last": "Wang", "suffix": "" }, { "first": "B", "middle": [ "Z" ], "last": "Houg", "suffix": "" }, { "first": "S", "middle": [ "C" ], "last": "Lin", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 12th Research on Computational Linguistics Conference", "volume": "", "issue": "", "pages": "37--53", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang, J. F., B. Z. Houg and S. C. Lin, \"A Study for Mandarin Text to Taiwanese Speech System,\" Proceedings of the 12th Research on Computational Linguistics Conference , 1999, pp. 37-53.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Establish Taiwanese 7-Tones Syllable-based Synthesis Units Database for the Prototype Development of Text-to-Speech System", "authors": [ { "first": "Y", "middle": [ "J" ], "last": "Sher", "suffix": "" }, { "first": "K", "middle": [ "C" ], "last": "Chung", "suffix": "" }, { "first": "C", "middle": [ "H" ], "last": "Wu", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 12th Research on Computational Linguistics Conference", "volume": "", "issue": "", "pages": "15--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sher, Y. J., K. C. Chung and C. H. Wu, \"Establish Taiwanese 7-Tones Syllable-based Synthesis Units Database for the Prototype Development of Text-to-Speech System, \" Proceedings of the 12th Research on Computational Linguistics Conference , 1999, pp. 15-35.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A Hybrid Model for Chinese-English Machine Translation", "authors": [ { "first": "J", "middle": [], "last": "Liu", "suffix": "" }, { "first": "L", "middle": [], "last": "Zhou", "suffix": "" } ], "year": null, "venue": "Proceedings of IEEE International Conference on Systems, Man, and Cybernetics", "volume": "2", "issue": "", "pages": "1201--1206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu, J. and L. Zhou, \"A Hybrid Model for Chinese-English Machine Translation,\" Proceedings of IEEE International Conference on Systems, Man, and Cybernetics , 2(I) 1998, pp.1201-1206.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "not be contiguous (examples 2 and 3), and the TL answer may possibly be empty (example 4) when there is no satisfactory way of linking TL patterns to the input. By varying the identification criteria, the translation spotting method can help evaluate units over various dimensions, such as frequency ranges, parts of speech and even speech features of spoken language." }, "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "Section 6 draws conclusions." }, "FIGREF2": { "num": null, "type_str": "figure", "uris": null, "text": "An example of multiple-translation spotting." }, "FIGREF3": { "num": null, "type_str": "figure", "uris": null, "text": "Figure 2(a) shows a block diagram of the training phase. Framework of the proposed system: (a) a training phase; (b) a translation phase." }, "FIGREF4": { "num": null, "type_str": "figure", "uris": null, "text": "(b) shows a block diagram of the translation phase. 
"FIGREF8": { "num": null, "type_str": "figure", "uris": null, "text": "Time-conditioned weight convergence for the dissimilarity measurement." }, "TABREF0": { "type_str": "table", "text": "Examples of translation spotting (TS).", "num": null, "content": "
Query | Sentence pair: SL (Mandarin) / TL (Taiwanese) | Answers r_S(q), r_T(q)
1. q: \u5f85 \u5e7e \u5929 | SL: \u4f60 \u9810\u8a08 \u8981 \u5f85 \u5e7e \u5929 / TL: lie phahsngx bueq doax kuie jit | r_S(q) = {\u5f85, \u5e7e, \u5929}; r_T(q) = {doax, kuie, jit}
2. q: \u6211 \u8981 \u8a02 \u5169 \u9593 \u55ae\u4eba\u623f | SL: \u6211 \u660e\u5929 \u8981 \u8a02 \u5169 \u9593 \u6709 \u6dcb\u6d74 \u8a2d\u5099 \u7684 \u55ae\u4eba\u623f / TL: minafzaix goar bueq dexng lerng kefng u sea sengqw e danjiin paang | r_S(q) = {\u6211, \u8981, \u8a02, \u5169, \u9593, \u55ae\u4eba\u623f}; r_T(q) = {goar, bueq, dexng, lerng, kefng, danjiin paang}
3. q: \u4eca\u665a \u6709 [\u2026] \u96d9\u4eba\u623f \u55ce | SL: \u8acb \u554f \u4f60\u5011 \u4eca\u665a \u6709 \u4e00 \u9593 \u96d9\u4eba\u623f \u55ce / TL: chviar bun lirn ehngf u cit kefng sianglaang paang but | r_S(q) = {\u4eca\u665a, \u6709, \u96d9\u4eba\u623f, \u55ce}; r_T(q) = {ehngf, u, sianglaang, paang, but}
4. q: \u5305\u62ec \u2026 \u5728 \u5167 | SL: \u6709 \u5305\u62ec \u65e9\u9910 \u5728 \u5167? / TL: u zafdngx but | r_S(q) = {\u5305\u62ec, \u5728, \u5167}; r_T(q) = {\u03c6}
", "html": null }, "TABREF3": { "type_str": "table", "text": "Basic characteristics of the derived multiple translation templates.", "num": null, "content": "
Number of templates: 1,050
Number of intentions: 1,050
Total number of translation patterns: 5,542
Number of translation entries: 1,260
Average number of translations per template: 5.28
", "html": null }, "TABREF4": { "type_str": "table", "text": "Parts of the overall computation of the local frame distance", "num": null, "content": "
Mandarin Taiwanese
", "html": null }, "TABREF5": { "type_str": "table", "text": "", "num": null, "content": "
Computational Load | Addition | Multiplication
Distance computation | L \u00b7 \u03a3_{j=1}^{J} K_j^{(v)} \u00b7 O(LPCC_add) | L \u00b7 \u03a3_{j=1}^{J} K_j^{(v)} \u00b7 O(LPCC_mul)
Path selection | L \u00b7 \u03a3_{j=1}^{J} K_j^{(v)} \u00b7 5 | L \u00b7 \u03a3_{j=1}^{J} K_j^{(v)} \u00b7 3
Total for each template | L \u00b7 \u03a3_{j=1}^{J} K_j^{(v)} \u00b7 (5 + O(LPCC_add)) | L \u00b7 \u03a3_{j=1}^{J} K_j^{(v)} \u00b7 (3 + O(LPCC_mul))
Total for all templates | \u03a3_v [ L \u00b7 \u03a3_{j=1}^{J} K_j^{(v)} \u00b7 (5 + O(LPCC_add)) ] | \u03a3_v [ L \u00b7 \u03a3_{j=1}^{J} K_j^{(v)} \u00b7 (3 + O(LPCC_mul)) ]
", "html": null } } } }