{ "paper_id": "O12-2003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:03:09.807413Z" }, "title": "The Polysemy Problem, an Important Issue in a Chinese to Taiwanese TTS System", "authors": [ { "first": "Ming-Shing", "middle": [], "last": "Yu", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Chung-Hsing University", "location": { "postCode": "40227", "settlement": "Taichung", "country": "Taiwan" } }, "email": "" }, { "first": "Yih-Jeng", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "Chien-Kuo Technology University", "location": { "addrLine": "Chang-hua 500", "country": "Taiwan" } }, "email": "yclin@ctu.edu.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper brings up an important issue, polysemy problems, in a Chinese to Taiwanese TTS (text-to-speech) system. Polysemy means there are words with more than one meaning or pronunciation, such as \"\u6211\u5011\" (we), \"\uf967\" (no), \"\u4f60\" (you), \"\u6211\" (I), and \"\u8981\" (want). We first will show the importance of the polysemy problem in a Chinese to Taiwanese (C2T) TTS system. Then, we will propose some approaches to a difficult case of such problems by determining the pronunciation of \"\u6211\u5011\" (we) in a C2T TTS system. There are two pronunciations of the word \"\u6211\u5011\" (we) in Taiwanese, /ghun/ and /lan/. The corresponding Chinese words are \"\uf9c6\" (we 1) and \"\u54b1\" (we 2). We propose two approaches and a combination of the two to solve the problem. The results show that we have a 93.1% precision in finding the correct pronunciation of the word \"\u6211\u5011\" (we). Compared to the results of the layered approach, which has been shown to work well in solving other polysemy problems, the results of the combined approach are an improvement.", "pdf_parse": { "paper_id": "O12-2003", "_pdf_hash": "", "abstract": [ { "text": "This paper brings up an important issue, polysemy problems, in a Chinese to Taiwanese TTS (text-to-speech) system. Polysemy means there are words with more than one meaning or pronunciation, such as \"\u6211\u5011\" (we), \"\uf967\" (no), \"\u4f60\" (you), \"\u6211\" (I), and \"\u8981\" (want). We first will show the importance of the polysemy problem in a Chinese to Taiwanese (C2T) TTS system. Then, we will propose some approaches to a difficult case of such problems by determining the pronunciation of \"\u6211\u5011\" (we) in a C2T TTS system. There are two pronunciations of the word \"\u6211\u5011\" (we) in Taiwanese, /ghun/ and /lan/. The corresponding Chinese words are \"\uf9c6\" (we 1) and \"\u54b1\" (we 2). We propose two approaches and a combination of the two to solve the problem. The results show that we have a 93.1% precision in finding the correct pronunciation of the word \"\u6211\u5011\" (we). Compared to the results of the layered approach, which has been shown to work well in solving other polysemy problems, the results of the combined approach are an improvement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Besides Mandarin, Taiwanese is the most widely spoken dialect in Taiwan. According to Liang et al. (2004) , about 75% of the population in Taiwan speaks Taiwanese. 
Currently, it is government policy to encourage people to learn one's mother tongue in schools because local languages are a part of local culture.", "cite_spans": [ { "start": 86, "end": 105, "text": "Liang et al. (2004)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Researchers (Bao et al., 2002; Chen et al., 1996; Lin et al., 1998; Lu, 2002; Shih et al., 1996; Wu et al., 2007; Yu et al., 2005) have had outstanding results in developing Mandarin Figure 1 shows a common structure of a C2T TTS system. In general, a C2T TTS system should contain four basic modules. They are (1) a text analysis module, (2) a tone sandhi module, (3) a prosody generation module, and (4) a speech synthesis module. A C2T TTS system also needs a text analysis module like that of a Mandarin TTS system. This module requires a well-defined bilingual lexicon. We also find that text analysis in a C2T TTS system should have functions not found in a Mandarin TTS system, such as phonetic transcription, digit sequence processing (Liang et al., 2004) , and a method for solving the polysemy problem. Solving the polysemy problem is the most complex and difficult of these. There has been little research on solving the polysemy problem. Polysemy means that a word has two or more meanings, which may lead to different pronunciations. For example, the word \"\u4ed6\" (he) has two pronunciations in Taiwanese, /yi/ and /yin/. The first pronunciation /yi/ of \"\u4ed6\" (he) means \"he,\" while the second pronunciation /yin/ of \"\u4ed6\" (he) means \"second-person possessive\". The correct pronunciation of a word affects the comprehensibility and fluency of Taiwanese speech.", "cite_spans": [ { "start": 12, "end": 30, "text": "(Bao et al., 2002;", "ref_id": "BIBREF0" }, { "start": 31, "end": 49, "text": "Chen et al., 1996;", "ref_id": "BIBREF1" }, { "start": 50, "end": 67, "text": "Lin et al., 1998;", "ref_id": "BIBREF10" }, { "start": 68, "end": 77, "text": "Lu, 2002;", "ref_id": "BIBREF12" }, { "start": 78, "end": 96, "text": "Shih et al., 1996;", "ref_id": "BIBREF15" }, { "start": 97, "end": 113, "text": "Wu et al., 2007;", "ref_id": "BIBREF16" }, { "start": 114, "end": 130, "text": "Yu et al., 2005)", "ref_id": "BIBREF19" }, { "start": 743, "end": 763, "text": "(Liang et al., 2004)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 183, "end": 191, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Many researchers have studied C2T TTS systems (Ho, 2000; Huang, 2001; Hwang, 1996; Lin et al., 1999; Pan, Yu, & Tsai, 2008; Yang, 1999; Zhong, 1999) . Nevertheless, none of the researchers considered the polysemy problem in a C2T TTS system. We think that solving the polysemy problem in a C2T TTS system is a fundamental task. The correct meaning of the synthesized words cannot be determined if this problem is not solved properly. The remainder of this paper is organized as follows. In Section 2, we will describe the polysemy problem in Taiwanese. We will give examples to show the importance of solving the polysemy problem in a C2T TTS system. Determining the correct pronunciation of the word \"\u6211\u5011\" (we) is the focus of the challenge in these cases. Section 3 is the description of the layered approach, which has been shown to work well in solving the polysemy problem (Lin et al., 2008) . 
Lin (2006) has also shown that the layered approach works very well in solving the polyphone problem in Chinese. We will apply the layered approach in determining the pronunciation of \"\u6211\u5011\" (we) in this section. In Section 4 and Section 5, we use two models to determine the pronunciation of the word \"\u6211\u5011\" (we) in sentences. The first approach in Section 4 is called the word-based unigram model (WU). The second approach, which will be applied in Section 5, is the word-based long-distance bigram model (WLDB). We also make some new inferences in these two sections. Section 6 shows a combination of the two models discussed in Section 4 and Second 5 for a third approach to solving the polysemy problem. Finally, in Section 7, we summarize our major findings and outline some future works.", "cite_spans": [ { "start": 46, "end": 56, "text": "(Ho, 2000;", "ref_id": "BIBREF2" }, { "start": 57, "end": 69, "text": "Huang, 2001;", "ref_id": null }, { "start": 70, "end": 82, "text": "Hwang, 1996;", "ref_id": "BIBREF5" }, { "start": 83, "end": 100, "text": "Lin et al., 1999;", "ref_id": "BIBREF8" }, { "start": 101, "end": 123, "text": "Pan, Yu, & Tsai, 2008;", "ref_id": "BIBREF14" }, { "start": 124, "end": 135, "text": "Yang, 1999;", "ref_id": "BIBREF17" }, { "start": 136, "end": 148, "text": "Zhong, 1999)", "ref_id": "BIBREF21" }, { "start": 877, "end": 895, "text": "(Lin et al., 2008)", "ref_id": "BIBREF11" }, { "start": 898, "end": 908, "text": "Lin (2006)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Unlike in Chinese, the polysemy problem in Taiwanese appears frequently and is complex. We will give some examples to show the importance of solving the polysemy problem in a C2T TTS system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Polysemy Problems in Taiwanese", "sec_num": "2." }, { "text": "The first examples feature the pronouns \"\u4f60\" (you), \"\u6211\" (I), and \"\u4ed6\" (he) in Taiwanese. These three pronouns have two pronunciations, each of which corresponds to a different meaning. Example 2.1 shows the pronunciations of the word \"\u6211\" (I) and \"\u4f60\" (you) in Taiwanese. The two pronunciations of \"\u6211\" (I) are /ghua/ with the meaning of \"I\" or \"me\" and /ghun/ with the meaning of \"my\". The two pronunciations of \"\u4f60\" (you) are /li/ with the meaning of \"you\" and /lin/ with the meaning of \"your\". If one chooses the wrong pronunciation, the utterance will carry the wrong meaning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Polysemy Problems in Taiwanese", "sec_num": "2." }, { "text": "\u6211/ghua/\u904e\u4e00\u6703\u5152\u6703\u62ff\u5e7e\u672c\u6709\u95dc\u53f0\u8a9e\u6587\u5316\u7684\u66f8\u5230\u4f60/lin/\u5bb6\u7d66\u4f60/li/\uff0c\u4f60/li/\u53ef\u4ee5 \uf967\u5fc5\u5230\u6211/ghun/\u5bb6\uf92d\u627e\u6211/ghua/\u62ff\u3002 (I will bring some books about Taiwanese culture to your house for you later; you need not come to my home to get them from me.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2.1", "sec_num": null }, { "text": "Example 2.2 shows the two different pronunciations of \"\u4ed6\" (he). 
They are /yi/, with the meaning of \"he\" or \"him,\" and /yin/, with the meaning of \"his\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2.1", "sec_num": null }, { "text": "\u6211\u770b\u5230\u4ed6/yi/\u62ff\u4e00\u76c6\uf91f\u82b1\u56de\u4ed6/yin/\u5bb6\u7d66\u4ed6/yin/\u7238\u7238\u3002 (I saw him bring an orchid back to his home for his father.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2.2", "sec_num": null }, { "text": "The following examples focus on \"\uf967\" (no), which has six different pronunciations. They are /bho/, /m/, /bhei/, /bhuaih/, /mai/, and /but/. Examples 2.3 through 2.6 show four of the six pronunciations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2.2", "sec_num": null }, { "text": "\u4e00\u822c\u4eba\u4e26\uf967/bho/\u5bb9\uf9e0\u770b\u51fa\u5b83\u7684\u91cd\u8981\u6027\u3002 (It is not easy for a person to see its importance.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2.3", "sec_num": null }, { "text": "Example 2.4 \uf967/m/\u77e5\uf92a\u8cbb\uf9ba\u591a\u5c11\u570b\u5bb6\u8cc7\u6e90\u3002 (We do not know how many national resources were wasted.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2.3", "sec_num": null }, { "text": "Example 2.5 \u8b93\u4eba\uf997\u60f3\uf967/bhei/\u5230\u4ed6\u8207\u6a5f\u68b0\u7684\u95dc\u4fc2\u3002 (One would not come to the proper conclusion regarding the relationship between that person and machines.) Example 2.6 \u83ef\u822a\u4f7f\u7528\u4e4b\u822a\u7a7a\u7ad9\u4ea4\u901a\u5df2\uf967/but/\u5982\u5f9e\u524d\u65b9\uf965\u3002 (The traffic at the airport is not as convenient as it was in the past for China Airlines.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2.3", "sec_num": null }, { "text": "Examples 2.7 through 2.9 are examples of pronunciations of the word \"\u4e0a\" (up). The word \"\u4e0a\" (up) has three pronunciations. They are /ding/, /siong/, and /jiunn/. The meaning of the word \"\u4e0a\" (up) in Example 2.7 has the sense of \"previous\". Example 2.8 shows a case where \"\u4e0a\" (up) means \"on\". Example 2.9 is an example of the use of \"\u4e0a\" (up) to mean, \"get on\". Another word we want to discuss is \"\u4e0b\" (down). The word \"\u4e0b\" (down) has four pronunciations. They are /ha/, /ao/, /loh/, and /ei/. Examples 2.10-2.13 are some examples of pronunciations of the word \"\u4e0b\" (down). The meaning of \"\u4e0b\" (down) in Example 2.10 is \"close\" or \"end\". Example 2.11 shows how the same word can mean \"next\". Example 2.12 illustrates the meaning \"falling\". Example 2.13 shows another example of it used to mean \"next\". We have proposed a layered approach in predicting the pronunciations \"\u4e0a\" (up), \"\u4e0b\" (down), and \"\uf967\" (no) (Lin et al., 2008) . The layered approach works very well in solving the polysemy problems in a C2T TTS system. A more difficult case of the polysemy problem will be encountered in this paper.", "cite_spans": [ { "start": 898, "end": 916, "text": "(Lin et al., 2008)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Example 2.3", "sec_num": null }, { "text": "In addition to the above words, another difficult case is \"\u6211\u5011\" (we). Taiwanese speakers arrive at the correct pronunciation of the word \"\u6211\u5011\" (we) by deciding whether to include the listener in the pronoun. 
Unlike Chinese, \"\u6211\u5011\" (we) has two pronunciations with different meanings when used in Taiwanese. This word can include (1) both the speaker and listener(s) or (2) just the speaker. These variations lead to two different pronunciations in Taiwanese, /lan/ and /ghun/. The Chinese characters for /lan/ and /ghun/ are \"\u54b1\" (we) and \"\uf9c6\" (we), respectively. The following example helps to illustrate the different meanings. More examples to illustrate these differences will be used later in this section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2.3", "sec_num": null }, { "text": "Assume first that Jeffrey and his younger brother, Jimmy, ask their father to take them to see a movie then go shopping. Jeffrey can say the following to his father: Example 2.14 \u7238\u7238\u4f60\u8981\u8a18\u5f97\u5e36\u6211\u5011\u4e00\u8d77\u53bb\u770b\u96fb\u5f71, \u6211\u5011\u770b\u5b8c\u96fb\u5f71\u5f8c, \u518d\u4e00\u8d77\u53bb\u901b\u8857\u3002 (Daddy, remember to take us to see a movie and go shopping with us after we see the movie.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2.3", "sec_num": null }, { "text": "The pronunciation of the first word \"\u6211\u5011\" (we) in Example 2.14 is /ghun/ in Taiwanese since the word \"\u6211\u5011\" (we) does not include the listener, Jeffrey's father. The second instance of \"\u6211\u5011\" (we), however, is pronounced /lan/ since this instance includes both the speaker and the listener.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2.3", "sec_num": null }, { "text": "The pronunciation of \"\u6211\u5011\" (we) in Example 2.15 is /ghun/ in Taiwanese since the word \"\u6211\u5011\" (we) includes Jeffrey and Jimmy but does not include the listener, Jeffrey's father.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2.3", "sec_num": null }, { "text": "will go to see a movie with my younger brother, and the two of us will go shopping after seeing the movie.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2.15 \u7238\u7238, \u6211\u8981\u548c\u5f1f\u5f1f\u53bb\u770b\u96fb\u5f71, \u6211\u5011\u770b\u5b8c\u96fb\u5f71\u5f8c, \u6703\u4e00\u8d77\u53bb\u901b\u8857\u3002 (Daddy, I", "sec_num": null }, { "text": "If a C2T TTS system cannot identify the correct pronunciation of the word \"\u6211\u5011\" (we), we cannot understand what the synthesized Taiwanese speech means. In a C2T TTS system, it is necessary to decide the correct pronunciation of the Chinese word \"\u6211\u5011\" (we) in order to have a clear understanding of synthesized Taiwanese speech.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2.15 \u7238\u7238, \u6211\u8981\u548c\u5f1f\u5f1f\u53bb\u770b\u96fb\u5f71, \u6211\u5011\u770b\u5b8c\u96fb\u5f71\u5f8c, \u6703\u4e00\u8d77\u53bb\u901b\u8857\u3002 (Daddy, I", "sec_num": null }, { "text": "Distinguishing different kinds of meanings of \"\u6211\u5011\" (we) is a semantic problem. It is a difficult but important issue to be overcome in the text analysis module of a C2T TTS system. 
As there is only one pronunciation of \"\u6211\u5011\" (we) in Mandarin, a Mandarin TTS system does not need to identify the meaning of the word \"\u6211\u5011\" (we).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2.15 \u7238\u7238, \u6211\u8981\u548c\u5f1f\u5f1f\u53bb\u770b\u96fb\u5f71, \u6211\u5011\u770b\u5b8c\u96fb\u5f71\u5f8c, \u6703\u4e00\u8d77\u53bb\u901b\u8857\u3002 (Daddy, I", "sec_num": null }, { "text": "To compare this work with the research in Hwang et al. (2000) and Yu et al. (2003) , determining the meaning of the word \"\u6211\u5011\" (we) may be more difficult than solving the non-text symbol problem. A person can determine the relationship between the listeners and the speaker then determine the meaning of the word \"\u6211\u5011\" (we). It is more difficult, however, for a computer to recognize the relationship between the listeners and speakers in a sentence.", "cite_spans": [ { "start": 42, "end": 61, "text": "Hwang et al. (2000)", "ref_id": "BIBREF6" }, { "start": 66, "end": 82, "text": "Yu et al. (2003)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Example 2.15 \u7238\u7238, \u6211\u8981\u548c\u5f1f\u5f1f\u53bb\u770b\u96fb\u5f71, \u6211\u5011\u770b\u5b8c\u96fb\u5f71\u5f8c, \u6703\u4e00\u8d77\u53bb\u901b\u8857\u3002 (Daddy, I", "sec_num": null }, { "text": "Since determining whether listeners are included is a context-sensitive problem, we need to look at the surrounding words, sentences, or paragraphs to find the answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2.15 \u7238\u7238, \u6211\u8981\u548c\u5f1f\u5f1f\u53bb\u770b\u96fb\u5f71, \u6211\u5011\u770b\u5b8c\u96fb\u5f71\u5f8c, \u6703\u4e00\u8d77\u53bb\u901b\u8857\u3002 (Daddy, I", "sec_num": null }, { "text": "Let us examine the following Chinese sentence (Example 2.16) to help clarify the problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2.15 \u7238\u7238, \u6211\u8981\u548c\u5f1f\u5f1f\u53bb\u770b\u96fb\u5f71, \u6211\u5011\u770b\u5b8c\u96fb\u5f71\u5f8c, \u6703\u4e00\u8d77\u53bb\u901b\u8857\u3002 (Daddy, I", "sec_num": null }, { "text": "Example 2.16 \u6211\u5011\u5fc5\u9808\u52a0\u7dca\u8173\u6b65\u6539\u5584\u53f0\uf963\u5e02\u7684\u4ea4\u901a\uf9fa\u6cc1\u3002 (We should press forward to improve the traffic of Taipei City.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2.15 \u7238\u7238, \u6211\u8981\u548c\u5f1f\u5f1f\u53bb\u770b\u96fb\u5f71, \u6211\u5011\u770b\u5b8c\u96fb\u5f71\u5f8c, \u6703\u4e00\u8d77\u53bb\u901b\u8857\u3002 (Daddy, I", "sec_num": null }, { "text": "It is difficult to determine the Taiwanese pronunciation of the word \"\u6211\u5011\" (we) in Example 2.16 from the information in this sentence. To get the correct pronunciation of the word \"\u6211\u5011\" (we), we need to expand the sentence by adding words to the subject, i.e., look forward, and predicate, i.e., look backward. Assume that, when we add words to the subject and the predicate, we have a sentence that looks like Example 2.17: As the reporters from the USA have no obligation to improve the traffic of Taipei, we can conclude that \"\u6211\u5011\" (we) does not include them. 
Therefore, it is safe to say that the correct pronunciation of the word \"\u6211\u5011\" (we) in Example 2.17 should be /ghun/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2.15 \u7238\u7238, \u6211\u8981\u548c\u5f1f\u5f1f\u53bb\u770b\u96fb\u5f71, \u6211\u5011\u770b\u5b8c\u96fb\u5f71\u5f8c, \u6703\u4e00\u8d77\u53bb\u901b\u8857\u3002 (Daddy, I", "sec_num": null }, { "text": "Example", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2.15 \u7238\u7238, \u6211\u8981\u548c\u5f1f\u5f1f\u53bb\u770b\u96fb\u5f71, \u6211\u5011\u770b\u5b8c\u96fb\u5f71\u5f8c, \u6703\u4e00\u8d77\u53bb\u901b\u8857\u3002 (Daddy, I", "sec_num": null }, { "text": "On the other hand, if the sentence reads as in Example 2.18 and context is included, the pronunciation of the word \"\u6211\u5011\" (we) should be /lan/. We can find some important keywords such as \"\u53f0\uf963\u5e02\u9577\" (the Taipei city mayor) and \"\u5e02\u5e9c\u6703\u8b70\" (a meeting of the city government).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2.15 \u7238\u7238, \u6211\u8981\u548c\u5f1f\u5f1f\u53bb\u770b\u96fb\u5f71, \u6211\u5011\u770b\u5b8c\u96fb\u5f71\u5f8c, \u6703\u4e00\u8d77\u53bb\u901b\u8857\u3002 (Daddy, I", "sec_num": null }, { "text": "\uf9fa\u6cc1\u3002\u300d (In a meeting of the city government, the Taipei city mayor, Ma", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2.18 \u53f0\uf963\u5e02\u9577\u99ac\u82f1\u4e5d\u5728\u5e02\u5e9c\u6703\u8b70\u4e2d\u6307\u51fa: \u300c\u6211\u5011\u5fc5\u9808\u52a0\u7dca\u8173\u6b65\u6539\u5584\u53f0\uf963\u5e02\u7684\u4ea4\u901a", "sec_num": null }, { "text": "Ying-Jeou, said that we should press forward to improve the traffic of Taipei City.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2.18 \u53f0\uf963\u5e02\u9577\u99ac\u82f1\u4e5d\u5728\u5e02\u5e9c\u6703\u8b70\u4e2d\u6307\u51fa: \u300c\u6211\u5011\u5fc5\u9808\u52a0\u7dca\u8173\u6b65\u6539\u5584\u53f0\uf963\u5e02\u7684\u4ea4\u901a", "sec_num": null }, { "text": "When disambiguating the meaning of some non-text symbols, such as \"/\", \":\", and \"-\" the keywords to decide the pronunciation of the special symbols may be within a fixed distance from the given symbol. Nevertheless, the keywords can be at any distance from the word \"\u6211\u5011\" (we), as per Example 2.19. Some words that could be used to determine the pronunciation of \"\u6211\u5011\" (we), such as \"\u5e02\u5e9c\u6703\u8b70\" (a meeting of the city government), \"\u53f0\uf963 \u5e02\u9577\" (the Taipei city mayor), and \"\u99ac\u82f1\u4e5d\" (Ma Ying-Jeou), are at various distances from \"\u6211\u5011\" (we).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2.18 \u53f0\uf963\u5e02\u9577\u99ac\u82f1\u4e5d\u5728\u5e02\u5e9c\u6703\u8b70\u4e2d\u6307\u51fa: \u300c\u6211\u5011\u5fc5\u9808\u52a0\u7dca\u8173\u6b65\u6539\u5584\u53f0\uf963\u5e02\u7684\u4ea4\u901a", "sec_num": null }, { "text": "\u5e02\u9577\uf96f: \u300c\u6211\u5011\u5fc5\u9808\u52a0\u7dca\u8173\u6b65\u6539\u5584\u53f0\uf963\u5e02\u7684\u4ea4\u901a\uf9fa\u6cc1\u3002\u300d (In a meeting of the city government, the Taipei city mayor, Ma Ying-Jeou, talked about the problem of the traffic in Taipei city. 
Mayor Ma said that we should press forward to improve the traffic of Taipei city.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2.19 \u5728\u4eca\u5929\u7684\u5e02\u5e9c\u6703\u8b70\u4e2d\uff0c\u53f0\uf963\u5e02\u9577\u99ac\u82f1\u4e5d\u63d0\u5230\u95dc\u65bc\u53f0\uf963\u5e02\u7684\u4ea4\u901a\u554f\u984c\u6642\uff0c\u99ac", "sec_num": null }, { "text": "These examples illustrate the importance of determining the proper pronunciation for each word in a C2T TTS system. Compared to other cases of polysemy, determining the proper pronunciation of the word \"\u6211\u5011\" (we) in Taiwanese is a difficult task. We will focus on solving the polysemy problem of the word \"\u6211\u5011\" (we) in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2.19 \u5728\u4eca\u5929\u7684\u5e02\u5e9c\u6703\u8b70\u4e2d\uff0c\u53f0\uf963\u5e02\u9577\u99ac\u82f1\u4e5d\u63d0\u5230\u95dc\u65bc\u53f0\uf963\u5e02\u7684\u4ea4\u901a\u554f\u984c\u6642\uff0c\u99ac", "sec_num": null }, { "text": "(we) Lin (2006) showed that the layered approach worked very well in solving the polyphone problem in Chinese. Lin (2006) also showed that using the layered approach to solve the polyphone problem is more accurate than using the CART decision tree. We also show that using the layered approach in solving the polysemy problems of other words has worked well in our research (Lin et al., 2008) . We will apply the layered approach in solving the polysemy problem of \"\u6211\u5011\" (we) in Taiwanese.", "cite_spans": [ { "start": 5, "end": 15, "text": "Lin (2006)", "ref_id": "BIBREF9" }, { "start": 111, "end": 121, "text": "Lin (2006)", "ref_id": "BIBREF9" }, { "start": 374, "end": 392, "text": "(Lin et al., 2008)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Using the Layered Approach to Determine the Pronunciation of \"\u6211\u5011\"", "sec_num": "3." }, { "text": "First, we will describe the experimental data used in this paper. The experimental data is comprised of over forty thousand news items from eight news categories, in which 1,546 articles contain the word \"\u6211\u5011\" (we). The data was downloaded from the Internet from August 23, 2003 to October 21, 2004. The distribution of these articles is shown in Table 1 . We determined the pronunciation of each \"\u6211\u5011\" (we) manually. As shown in Table 2 , in the 1,546 news articles, \"\u6211\u5011\" occurred 3,195 times. In our experiment, 2,556 samples were randomly chosen for the training data while the other 639 samples were added to the test data. In the training data, there were 1,916 instances with the pronunciation of /ghun/ for the Chinese character \" \uf9c6 \" and 640 instances with the pronunciation of /lan/ for the Chinese character \"\u54b1\". Figure 2 shows the layered approach to the polysemy problem with an input test sentence. 
We use Example 3.1 to illustrate how the layered approach works.", "cite_spans": [], "ref_spans": [ { "start": 346, "end": 353, "text": "Table 1", "ref_id": "TABREF4" }, { "start": 428, "end": 435, "text": "Table 2", "ref_id": "TABREF5" }, { "start": 821, "end": 829, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Description of Experimental Data", "sec_num": "3.1" }, { "text": "Example 3.1 \u7238\u7238 \u544a\u8a34 \u6211\u5011 \u904e \u99ac\uf937 \u8981 \u5c0f\u5fc3\u3002 (Dad told us to be careful when crossing the street.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Description of Layered Approach", "sec_num": "3.2" }, { "text": "Example 3.1 is an utterance in Chinese with segmentation information. Spaces were used to separate the words in Example 3.1. We want to predict the correct pronunciation for the word \"\u6211\u5011\" (we) in Example 3.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Description of Layered Approach", "sec_num": "3.2" }, { "text": "As depicted in Figure 2 , there are four layers in our approach. We set ( 2 1 0 1 2 , , , , w w w w w \u2212 \u2212 + + ) as (\u7238\u7238,\u544a\u8a34,\u6211\u5011,\u904e,\u99ac\uf937). This pattern (\u7238\u7238,\u544a\u8a34,\u6211\u5011,\u904e,\u99ac\uf937) will be the input for Layer 4. Nevertheless, as this pattern is not found in the training data, we cannot decide the pronunciation of \"\u6211\u5011\" (we) with this pattern. We then use two patterns", "cite_spans": [], "ref_spans": [ { "start": 15, "end": 23, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 72, "end": 81, "text": "( 2 1 0", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Description of Layered Approach", "sec_num": "3.2" }, { "text": "( 2 1 0 1 , , , w w w w \u2212 \u2212 + ) and ( 1 0 1 2 , , , w w w w \u2212 + + )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Description of Layered Approach", "sec_num": "3.2" }, { "text": "to derive (\u7238\u7238,\u544a\u8a34,\u6211\u5011,\u904e) and (\u544a\u8a34,\u6211\u5011,\u904e, \u99ac\uf937), respectively, as the inputs for Layer 3. Since we cannot find any patterns in the training data that match either of these patterns, the pronunciation cannot be decided in this layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Description of Layered Approach", "sec_num": "3.2" }, { "text": "Three patterns are used in Layer 2. They are (\u7238\u7238,\u544a\u8a34,\u6211\u5011), (\u544a\u8a34,\u6211\u5011,\u904e), and (\u6211 \u5011,\u904e,\u99ac\uf937). We find that the pattern (\u7238\u7238,\u544a\u8a34,\u6211\u5011) has appeared in training data. The frequencies are 2 for pronunciation /ghun/ and 1 for /lan/. Thus, the probabilities for the possible pronunciations of \"\u6211\u5011\" (we) in Example 3.1 are 2/3 for /ghun/ and 1/3 for /lan/. We can conclude that the predicted pronunciation is /ghun/. The layered approach terminates in Layer 2 in this example. If the process did not terminate prematurely, as in this example, it would have terminated in Layer 1, as shown by the dashed lines in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 592, "end": 600, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Description of Layered Approach", "sec_num": "3.2" }, { "text": "We used the experimental data mentioned in 3.1. There are 3,159 samples in the corpus. We used 2,556 samples to train the four layers. The other 639 samples form the test data. 
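Before looking at the accuracy figures, the layered lookup illustrated with Example 3.1 can be summarized in a short sketch. It is not the authors' implementation: the pattern_counts table (word patterns around 我們 mapped to their /ghun/ and /lan/ frequencies in the training data), the exact layer layout, and the /ghun/ default for unmatched sentences are assumptions based on the description and Figure 2.

# Minimal sketch of the layered back-off prediction for "我們" (we).
# pattern_counts is assumed, e.g. {("爸爸", "告訴", "我們"): {"ghun": 2, "lan": 1}, ...}

LAYERS = [
    [(-2, 2)],                            # Layer 4: w-2 w-1 我們 w+1 w+2
    [(-2, 1), (-1, 2)],                   # Layer 3: two 4-word patterns
    [(-2, 0), (-1, 1), (0, 2)],           # Layer 2: three 3-word patterns
    [(-2, -1), (-1, 0), (0, 1), (1, 2)],  # Layer 1: four 2-word patterns
]

def predict_pronunciation(words, idx, pattern_counts, default="/ghun/"):
    """words: the segmented sentence; idx: position of 我們 in words."""
    for layer in LAYERS:
        ghun = lan = 0
        for left, right in layer:
            lo, hi = idx + left, idx + right
            if lo < 0 or hi >= len(words):
                continue                               # pattern falls outside the sentence
            counts = pattern_counts.get(tuple(words[lo:hi + 1]), {})
            ghun += counts.get("ghun", 0)
            lan += counts.get("lan", 0)
        if ghun or lan:                                # some pattern matched in this layer
            return "/ghun/" if ghun >= lan else "/lan/"
    return default                                     # no pattern matched in any layer

# Example 3.1: the Layer-2 pattern (爸爸, 告訴, 我們) matches with counts 2 vs. 1,
# so the predicted pronunciation is /ghun/ and the search stops in Layer 2.
words = ["爸爸", "告訴", "我們", "過", "馬路", "要", "小心"]
pattern_counts = {("爸爸", "告訴", "我們"): {"ghun": 2, "lan": 1}}
print(predict_pronunciation(words, 2, pattern_counts))   # /ghun/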
Table 3 shows the accuracy of using the layered approach based on word patterns. Thus, the features in the layered approach are words. The results show that the layered approach does not work well. The overall accuracy is 77.00%. No pattern found, go to the next layer. ", "cite_spans": [], "ref_spans": [ { "start": 177, "end": 184, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Results of Using the Layered Approach", "sec_num": "3.3" }, { "text": "/ghun/=0 /lan/=0 (\u7238\u7238,\u544a\u8a34) (\u544a\u8a34,\u6211\u5011) (\u6211\u5011,\u904e) (\u904e,\u99ac\uf937) \uff0b \uff0b \uff0b /ghun/=0 /lan/=0 /ghun/=0 /lan/=0 /ghun/=2 /lan/=1 /ghun/=0 /lan/=0 /ghun/=0 /lan/=0 /ghun/=0 /lan/=0 /ghun/=0 /lan/=0 /ghun/=0 /lan/=0 /ghun/=0 /lan/=0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results of Using the Layered Approach", "sec_num": "3.3" }, { "text": "In this section, we propose a word-based unigram language model (WU). Two statistical results are needed in this model. Statistical results were compiled for (1) the frequency of appearance for words that appear to the left of \"\u6211\u5011\" (we) in the training data and (2) the frequencies for words that appear to the right. Each punctuation mark was treated as a word. Each testing sample looks like the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-based Unigram Language Model", "sec_num": "4." }, { "text": "w -M w -(M-1) \u2026 w -2 w -1 \u6211\u5011 w +1 w +2 \u2026 w +(N-1) w +N", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-based Unigram Language Model", "sec_num": "4." }, { "text": "where w -i is the i th word to the left of \"\u6211\u5011\" (we) and w i is the i th word to the right. The following formulae were used to find four different scores for each testing sample:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-based Unigram Language Model", "sec_num": "4." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "S uL (/lan/), S uR (/lan/), S uL (/ghun/), and S uR (/ghun/). 1 (/ / & ) (/ /) / / (/ / & ) (/ / & ) (/ /) (/ /) j M uL uL j j j uL uL C lan w T lan S ( lan ) C lan w C ghun w T lan T ghun \u2212 \u2212 \u2212 = = + \u2211 (1) 1 (/ / & ) (/ /) (/ /) (/ / & ) (/ / & ) (/ /) (/ /) j N uR uR j j j uR uR C lan w T lan S lan C lan w C ghun w T lan T ghun + + + = = + \u2211 (2) 1 (/ / & ) (/ /) (/ /) (/ / & ) (/ / & ) (/ /) (/ /) j M uL uL j j j uL uL C ghun w T ghun S ghun C lan w C ghun w T lan T ghun \u2212 \u2212 \u2212 = = + \u2211 (3) 1 (/ / & ) (/ /) / / (/ / & ) (/ / & ) (/ /) (/ /) j N uR uR j j j uR uR C ghun w T ghun S ( ghun ) C lan w C ghun w T lan T ghun + + + = = + \u2211 (4) where 1 (/ /) (/ / & ) uL uL l l T lan C lan w \u2212 = = \u2211 (5) 1 (/ /) (/ / & ) uL uL p p T ghun C ghun w \u2212 = = \u2211 (6) 1 (/ /) (/ / & ) uR uR l l T lan C lan w + = = \u2211 (7) 1 (/ /) (/ / & ) uR uR p p T ghun C ghun w + = = \u2211", "eq_num": "(8)" } ], "section": "Word-based Unigram Language Model", "sec_num": "4." }, { "text": "uL different kinds of words appear on the left side of \"\u6211\u5011\" (we) in the training corpus. T uL (/lan/) is the total frequency of these uL words in the training data where the pronunciation of \"\u6211\u5011\" (we) is /lan/. Similarly, T uL (/ghun/) represents the total frequency of uL words where \"\u6211\u5011\" (we) is pronounced /ghun/. 
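With these quantities in place, one plausible reading of Formula (1), the left-context score for the pronunciation /lan/, can be written out explicitly; the reconstruction below is our interpretation of the notation, with Formulae (2) through (4) obtained by swapping the context side and the pronunciation.

$$
S_{uL}(\mathrm{/lan/}) \;=\; \sum_{j=1}^{M}
\frac{C(\mathrm{/lan/}\;\&\;w_{-j}) \,/\, T_{uL}(\mathrm{/lan/})}
     {C(\mathrm{/lan/}\;\&\;w_{-j})/T_{uL}(\mathrm{/lan/}) \;+\; C(\mathrm{/ghun/}\;\&\;w_{-j})/T_{uL}(\mathrm{/ghun/})}
$$

Each summand is the relative weight of the "significance" of /lan/ for the word w-j against the combined significance of both pronunciations, so every context word contributes a value between 0 and 1.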
uR is the number of different words that appear to the right side of \"\u6211\u5011\" (we) in the training corpus. T uR (/lan/) and T uR (/ghun/) are the total frequencies of these uR words in the training data where pronunciation of \"\u6211\u5011\" (we) is /lan/ and /ghun/, respectively. C(/ghun/&w p ) is the frequency that the word w p appears in the training corpus where the pronunciation of \"\u6211\u5011\" (we) is /ghun /.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-based Unigram Language Model", "sec_num": "4." }, { "text": "(/ / & ) (/ /) j uL C lan w T lan \u2212", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-based Unigram Language Model", "sec_num": "4." }, { "text": "in (1) means the significance of pronunciation /lan/ of word w -j in training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-based Unigram Language Model", "sec_num": "4." }, { "text": "Formulae (1) through (4) were applied to each test sample to produce four scores. The scores were S uL (/lan/) for the words to the left of \"\u6211\u5011\" (we) when the pronunciation was /lan/, S uR (/lan/) for the words to the right when the pronunciation was /lan/, S uL (/ghun/) for the words to the left of \"\u6211\u5011\" (we) when the pronunciation was /ghun/, and S uR (/ghun/) for the words to the right when the pronunciation was /ghun/. The pronunciation of \"\u6211\u5011\" (we) is /lan/ if S uL (/lan/)+ S uR (/lan/) > S uL (/ghun/) + S uR (/ghun/). The result is /ghun/ otherwise. The experiments were inside and outside tests. First, we applied WU with the training data mentioned in Section 3.1 to find the best ranges in determining the pronunciation of \"\u6211 \u5011\" (we). We defined a window as (M, N), where M was number of words to the left of \"\u6211\u5011\" (we) and N was the number of words to the right. Three hundred and ninety nine (20*20-1=399) different windows were applied when using the WU model. As shown in Table 4 , the best result from an inside test was 87.00%, with a window of (17, 10).", "cite_spans": [], "ref_spans": [ { "start": 989, "end": 997, "text": "Table 4", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Word-based Unigram Language Model", "sec_num": "4." }, { "text": "The best result when the correct pronunciation of \"\u6211\u5011\" (we) was /ghun/ was 94.01%, achieved when the window was (12, 6). Nevertheless, the results when the pronunciation was /lan/ and the window was the same were not good. The highest accuracy achieved was 45.48%. Also, as shown in 4 th row of Table 4 , the best result when applying WU when the pronunciation was /lan/ was just 77.88%, when the window was (19, 14) . This shows that WU did not work well when the pronunciation of \"\u6211\u5011\" (we) was /lan/. We applied WU with a window of (17, 10) for testing data. The overall accuracy of the outside tests was 75.59%. The accuracies were 90.40% and 31.25% when the pronunciations were /ghun/ and /lan/, respectively. ", "cite_spans": [ { "start": 408, "end": 412, "text": "(19,", "ref_id": null }, { "start": 413, "end": 416, "text": "14)", "ref_id": null } ], "ref_spans": [ { "start": 295, "end": 302, "text": "Table 4", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Word-based Unigram Language Model", "sec_num": "4." }, { "text": "We will bring up the word-based long-distance bigram language model (WLDB) in this section. According to Section 2 of this paper, there are two different meanings for \"\u6211\u5011\" (we). 
The two meanings are different in that one includes the listener(s) and the other does not. We propose a modification of the WU model by having two words appear together in the text to clarify the relationship between the speaker and listener(s). Examples of this modification are \"\u53f0\uf963\u5e02\u9577\" (the Taipei city mayor) and \"\u7f8e\u570b\u8a18\u8005\" (the reporter(s) from the USA) in Example 2.17 and \"\u53f0\uf963\u5e02\u9577\" and \"\u5e02\u5e9c\u6703\u8b70\" (a city government meeting) in Examples 2.18 and 2.19.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-based Long Distance Bigram Language Model", "sec_num": "5." }, { "text": "The following formulae were used to find four scores for each testing sample, S bL (/lan/), S bR (/lan/), S bL (/ghun/), and S bR (/ghun/). We assume that bL different words appear to the left of \"\u6211\u5011\" (we) in the training corpus and bR different words appear to the right. Formulae 9, 10, 11, and 12 were applied to each test sample, and they produced four scores. C(/lan/&w i &w j ) in (9) is the frequency at which words w i and w j appear in the training corpus when the pronunciation of \"\u6211\u5011\" (we) is /lan/. S bL (/lan/) is the score for the words to the left of \"\u6211\u5011\" (we) when the pronunciation is /lan/, and S bR (/lan/) is the score for the words to the right. Similarly, S bL (/ghun/) and S bR (/ghun/) represent the scores for the words to the left and right, respectively, when \"\u6211\u5011\" (we) is pronounced /ghun/. In summary, the pronunciation of the word \"\u6211\u5011\" (we) is /lan/ if S bL (/lan/) + S bR (/lan/) > S bL (/ghun/) + S bR (/ghun/). The pronunciation is /ghun/ otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For each testing sample, w -M w -(M-1) \u2026 w -2 w -1 \u6211\u5011 w +1 w +2 \u2026 w +(N-1) w +N .", "sec_num": null }, { "text": "1 (/ / & & ) (/ /) (/ /) (/ / & & ) (/ / & & ) (/ /) (/ /) i j M M bL bL i j i j i j i bL bL C lan w w T lan S lan C lan w w C ghun w w T C lan T C ghun \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 = = = + \u2211 \u2211 (9) 1 (/ / & & ) (/ /) (/ /) (/ / & & ) (/ / & & ) (/ /) (/ /) i j N N bR bR i j i j i j i bR bR C lan w w T lan S lan C lan w w C ghun w w T lan T ghun + + + + = = = + \u2211 \u2211 (10) 1 (/ / & & ) (/ /) (/ /) (/ /& & ) (/ /& & ) (/ /) (/ /) i j M M bL bL i j i j i j i bL bL C ghun w w T ghun S ghun C ghun w w C lan w w T ghun T lan \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 = = = + \u2211 \u2211 (11) 1 (/ / & & ) (/ /) (/ /) (/ / & & ) (/ / & & ) (/ /) (/ /) i j N N bR bR i j i j i j i bR bR C ghun w w T ghun S ghun C ghun w w C lan", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "For each testing sample, w -M w -(M-1) \u2026 w -2 w -1 \u6211\u5011 w +1 w +2 \u2026 w +(N-1) w +N .", "sec_num": null }, { "text": "We applied WLDB with the training data mentioned in Section 3.1 to find the best ranges in determining the pronunciation of \"\u6211\u5011\" (we). We defined a window of (M, N), where M was the number of words to the left and N was number of words to the right. Three hundred and sixty (19*19-1=360) different windows were applied in the analysis of using the WLDB model. 
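Before turning to those results, the scoring shared by the WU model of Section 4 and the WLDB model above can be condensed into one sketch. The count tables (keyed by single words for WU and by word pairs for WLDB), the reading of Formulae (1) through (12), and the use of all unordered word pairs inside the window as long-distance bigrams are assumptions on our part, not the authors' code.

from itertools import combinations

def side_score(features, counts, target):
    """Sum of normalized relative-frequency ratios for one pronunciation.
    counts[feature] = {"lan": freq, "ghun": freq}; the totals play the role of T(.)."""
    t_lan = sum(c.get("lan", 0) for c in counts.values())
    t_ghun = sum(c.get("ghun", 0) for c in counts.values())
    score = 0.0
    for f in features:
        c = counts.get(f, {})
        p_lan = c.get("lan", 0) / t_lan if t_lan else 0.0
        p_ghun = c.get("ghun", 0) / t_ghun if t_ghun else 0.0
        if p_lan + p_ghun > 0:
            score += (p_lan if target == "lan" else p_ghun) / (p_lan + p_ghun)
    return score

def predict(left, right, left_counts, right_counts, model="WU"):
    """left/right: the M words before and N words after 我們 in the test sentence.
    For WU the features are single words; for WLDB they are pairs of words
    co-occurring inside the same window (our reading of 'long-distance bigram')."""
    if model == "WU":
        feats_left, feats_right = left, right
    else:
        feats_left = list(combinations(left, 2))
        feats_right = list(combinations(right, 2))
    s_lan = (side_score(feats_left, left_counts, "lan")
             + side_score(feats_right, right_counts, "lan"))
    s_ghun = (side_score(feats_left, left_counts, "ghun")
              + side_score(feats_right, right_counts, "ghun"))
    # Decision rule stated for both models: /lan/ only if it strictly outscores /ghun/.
    return ("/lan/" if s_lan > s_ghun else "/ghun/"), s_lan, s_ghun

Returning the two raw scores alongside the decision is convenient for the combined approach of Section 6, which needs their normalized difference.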
As shown in the 2 nd row of Table 5 , the best result of the inside test was 94.25% with the best range being 11 words to the left of \"\u6211\u5011\" (we) and 7 words to the right.", "cite_spans": [], "ref_spans": [ { "start": 388, "end": 395, "text": "Table 5", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "For each testing sample, w -M w -(M-1) \u2026 w -2 w -1 \u6211\u5011 w +1 w +2 \u2026 w +(N-1) w +N .", "sec_num": null }, { "text": "The best result when the correct pronunciation of \"\u6211\u5011\" (we) was /lan/ was 99.87%, when the window was (11, 5). Nevertheless, the result for /ghun/ with the same window was not good. The highest accuracy achieved was 89.69%. As shown in the 3 rd row of Table 5 , the best result when applying WLDB when the pronunciation was /ghun/ was 93.48%, when the window was (4, 13). This shows that WLDB does not work well when the pronunciation of \"\u6211\u5011\" (we) is /ghun/. We applied the WLDB model to the test data using a window of (11, 7). The overall accuracy of outside tests was 85.72%. The accuracies were 83.26% and 93.10% when the pronunciations were /ghun/ and /lan/, respectively.", "cite_spans": [], "ref_spans": [ { "start": 252, "end": 259, "text": "Table 5", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "For each testing sample, w -M w -(M-1) \u2026 w -2 w -1 \u6211\u5011 w +1 w +2 \u2026 w +(N-1) w +N .", "sec_num": null }, { "text": "Based on the results from the two models, WU and WLDB, we can draw the following The Polysemy Problem, an Important Issue in a 57 Chinese to Taiwanese TTS System conclusions: the word-based long distance bigram language model is good when the pronunciation is /lan/, while the word-based unigram language model works well when the pronunciation is /ghun/. In this section, we propose combining the models to achieve better results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The combined Approach", "sec_num": "6." }, { "text": "According to the inside experimental results shown in Table 4 and Table 5 , we will combine the WU model with a window of (12, 6) and the WLDB model with a window of (11, 5) as our combined approach. This combination of WU and WLDB is similar to the approach used by Yu and Huang. We will try to find the possibility of making a correct choice when using WU or WLDB, which will be termed \"confidence\". We will adopt the output of the method with higher confidence.", "cite_spans": [], "ref_spans": [ { "start": 54, "end": 73, "text": "Table 4 and Table 5", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "The combined Approach", "sec_num": "6." }, { "text": "The first step in this process is to find a confidence curve for each model. The goal is to estimate the confidence for each approach and assess the difference. The higher score is more likely to be the correct answer. To do so, we measure the accuracy of each division and use a regression to estimate the confidence measure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confidence Measure", "sec_num": "6.1" }, { "text": "Algorithm 1, below, will be used to find the confidence curve for the word-based unigram language model. As the total number of words in each input sample is not constant, we must first normalize the scores Su i (/lan/) and Su i (/ghun/). We will find the precision rates (PR k ) in the interval [0, 1] for |NSu i (/ghun/)-NSu i (/lan/)| in Step 2 of Algorithm 1 for each i. We then find a regression curve for the PR k . 
The regression curve is used to estimate the probability of making a correct decision when using WU. Therefore, it follows that, the higher the probability is, the greater the confidence we can have in the results from WU.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confidence Measure", "sec_num": "6.1" }, { "text": "Input: The score for each training sample, Su i (/lan/) and Su i (/ghun/), where i=1,2,3, \u2026, n and n is the number of training samples. Output: A function for the confidence curve for the given Su i (/lan/) and Su i (/ghun/), i=1,2,3, \u2026, n. Algorithm:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1: Finding the confidence curve of WU.", "sec_num": null }, { "text": "Step 1: Normalize Su i (/lan/) and Su i (/ghun/) for each training sample i using the following formula: NSu i (/lan/)=Su i (/lan/)/(Total number of words in training sample i) NSu i (/ghun/)=Su i (/ghun/)/(Total number of words in training sample i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1: Finding the confidence curve of WU.", "sec_num": null }, { "text": "Step 2: Let d i =| NSu i (/ghun/)-NSu i (/lan/)| and let D={d 1 , d 2 ,\u2026,d n }. Find the accuracy rate for each interval using the following formula:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1: Finding the confidence curve of WU.", "sec_num": null }, { "text": "PR k = C k /N k , k=1, 2, \u2026, 18", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1: Finding the confidence curve of WU.", "sec_num": null }, { "text": "Here, C k is the number of correct conjectures of training sample i with (k-1)/18 d i < (k+1)/18, and N k is the number of training sample i with (k-1)/18", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1: Finding the confidence curve of WU.", "sec_num": null }, { "text": "d i < (k+1)/18.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1: Finding the confidence curve of WU.", "sec_num": null }, { "text": "Step 3: Find a regression curve for PR 1 , PR 2 , \u2026, PR 18 . Output the function of the regression curve. The confidence curve for WU is the black line in Figure 3 . The function derived was f(x)=0.1711*ln(x)+1.0357, where x is the absolute value of the difference between the normalized Su i (/lan/) and Su i (/ghun/).", "cite_spans": [], "ref_spans": [ { "start": 155, "end": 163, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Algorithm 1: Finding the confidence curve of WU.", "sec_num": null }, { "text": "Algorithm 2 is used to find the confidence curve for the word-based long-distance bigram language model (WLDB). We began by normalizing the scores of pronunciation Sb i (/lan/) and Sb i (/ghun/). In Step 2, we find the precision rates (PR k ) in the interval [0, 1] then calculate a regression curve for the PR k . The regression curve will be used to estimate the probability of making a correct decision. 
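Algorithms 1 and 2 share the same shape: normalize the two scores of each training sample, take the absolute difference, measure the precision inside overlapping bins of that difference, and fit a logarithmic regression through the per-bin precisions. A minimal sketch of that shared procedure follows; the representative point chosen for each bin, the use of numpy.polyfit, and the clamping at zero are our own choices rather than details given in the paper.

import math
import numpy as np

def fit_confidence_curve(norm_score_pairs, predictions, truths, n_bins=18):
    """Fit a confidence curve f(d) = a*ln(d) + b from training-sample scores.
    norm_score_pairs: (normalized /ghun/ score, normalized /lan/ score) per sample,
    i.e. the NSu_i values of Algorithm 1 or the NSb_i values of Algorithm 2."""
    d = np.array([abs(g - l) for g, l in norm_score_pairs])
    correct = np.array([p == t for p, t in zip(predictions, truths)], dtype=float)

    xs, ys = [], []
    for k in range(1, n_bins + 1):
        lo, hi = (k - 1) / n_bins, (k + 1) / n_bins   # overlapping bins, as in Step 2
        mask = (d >= lo) & (d < hi)
        if mask.any():
            xs.append(k / n_bins)                     # representative point of bin k
            ys.append(correct[mask].mean())           # precision rate PR_k

    a, b = np.polyfit(np.log(xs), ys, 1)              # least-squares fit of a*ln(x) + b
    return lambda diff: (a * math.log(diff) + b) if diff > 0 else 0.0

With 18 bins and the length-normalized WU scores this plays the role of Algorithm 1; with 13 bins and scores normalized by the squared sample length it corresponds to Algorithm 2.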
Again, it follows that, the higher the probability, the more confidence in the results from using WLDB.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1: Finding the confidence curve of WU.", "sec_num": null }, { "text": "The confidence curve of WLDB is the black line in Figure 4 , in which the function is f(x) = 0.2346*ln(x) + 1.0523, where x is the difference between the normalized Sp i (/lan/) and Sp i (/ghun/).", "cite_spans": [], "ref_spans": [ { "start": 50, "end": 58, "text": "Figure 4", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Algorithm 1: Finding the confidence curve of WU.", "sec_num": null }, { "text": "Input: The score of each training sample, named Sb i (/lan/) and Sb i (/ghun/), where i=1, 2, 3, \u2026, n, and n is the number of training samples. Output: A function for the confidence curve for the given Sb i (/lan/) and Sb i (/ghun/), i=1, 2, 3, \u2026, n. Algorithm:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 2: Find the confidence curve of WLDB", "sec_num": null }, { "text": "Step 1: Normalize Sb i (/lan/) and Sb i (/ghun/) for each training sample i using the following formula: NSb i (/lan/)=Sb i (/lan/)/(Total number of words in training sample i) 2 NSb i (/ghun/)=Sb i (/ghun/)/(Total number of words in training sample i) 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 2: Find the confidence curve of WLDB", "sec_num": null }, { "text": "Step 2: Let d i =| NSb i (/ghun/)-NSb i (/lan/)| and let D={d 1 , d 2 ,\u2026,d n }. Find the accuracy rate for each interval using the following formula: PR k = C k /N k , k=1, 2, \u2026, 13 where C k is the number of correct conjectures of training samples i with (k-1)/13 d i <(k+1)/13 and N k is the number of training samples i with (k-1)/13 d i <(k+1)/13. Step 3: Find a regression curve for PR 1 , PR 2 , \u2026, PR 13 . Output the function of the regression curve. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 2: Find the confidence curve of WLDB", "sec_num": null }, { "text": "After the functions for the confidence curves for the two models have been derived, the combined approach can be applied. The two models are used to determine the pronunciation of \"\u6211\u5011\" (we) for a given input text. The two functions for the confidence curves, derived in Section 6.1, are applied to evaluate the degree of confidence in the two models. Let the confidence curves of the two models be C WU for WU and C WLDB for WLDB. We will use the results obtained using WU under the condition C WU > C WLDB . Otherwise, we will use the results obtained from using the WLDB model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Determining the Pronunciation for \"\u6211\u5011\" (we)", "sec_num": "6.2" }, { "text": "Consider Figure 4 , which is derived from the training data. The x-axis is the normalized difference between the two scores. The y-axis is the percentage of correct decisions. Take the example sentence \"\u5982\u679c\u82b1\u65d7\u5e0c\u671b\u7e7c\u7e8c\u505a\u6211\u5011\u7684\u5927\u80a1\u6771\uff0c\u6211\u5011\u9084\u662f\u5f88\u6b61\u8fce\". We want to predict the pronunciation of the first \"\u6211\u5011\" (we) in the above sentence. Its confidences were 0.875 for the WU model (choosing /ghun/) and 0.761 for the WLDB model (choosing /lan/). 
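In code, the selection step just described might look like the sketch below; the two logarithmic curves are the ones reported for Figures 3 and 4, while the function names and the zero-difference fallback are ours.

import math

def conf_wu(d):
    """Confidence curve fitted for WU: f(x) = 0.1711 * ln(x) + 1.0357 (Figure 3)."""
    return (0.1711 * math.log(d) + 1.0357) if d > 0 else 0.0

def conf_wldb(d):
    """Confidence curve fitted for WLDB: f(x) = 0.2346 * ln(x) + 1.0523 (Figure 4)."""
    return (0.2346 * math.log(d) + 1.0523) if d > 0 else 0.0

def combined_choice(wu_answer, wu_diff, wldb_answer, wldb_diff):
    """wu_diff / wldb_diff are the normalized score differences of Section 6.1.
    Keep the WU answer only when its confidence is strictly higher (C_WU > C_WLDB)."""
    return wu_answer if conf_wu(wu_diff) > conf_wldb(wldb_diff) else wldb_answer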
Since the confidence of the WU model was higher than that of the WLDB model, we adopted /ghun/ as the pronunciation.", "cite_spans": [], "ref_spans": [ { "start": 9, "end": 17, "text": "Figure 4", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Determining the Pronunciation for \"\u6211\u5011\" (we)", "sec_num": "6.2" }, { "text": "We used the 639 testing samples described in Section 3.1. Among the 639 testing samples, there were 479 samples with the pronunciation /ghun/ and 160 samples with the pronunciation /lan/. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results Using Combined Models", "sec_num": "6.3" }, { "text": "We used the test data mentioned in 3.1 as the experimental data. The overall accuracy rate from applying the combined approach was 93.6%. The accuracy rate was 95.00% when the answer was /lan/, and the accuracy rate was 93.1% when the answer was /ghun/. Based on these results, it can be concluded that the combination of the two models works very well in determining the pronunciation of the word \"\u6211\u5011\" (we) for a given Chinese text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Precision", "sec_num": null }, { "text": "The three approaches, WU, WLDB, and combined, are compared in Table 6 . As shown in Table 6 , the word-based long-distance bigram language model (WLDB) worked well in the case of /lan/ and achieved an accuracy rate of 93.10%. The word-based unigram language (WU) model worked well in the case of /ghun/ and achieved an accuracy rate of 90.40%. The combined approach, however, achieved higher accuracy rates in both cases, achieving accuracy as high as 93.6%. There is an important issue in the combined approach. When we use a language model like WLDB, we may encounter the problem of data scarcity. If data is scarce, the combined approach will use the result of the word-based unigram language model. Table 7 compares the accuracy of the approaches used in this paper. The findings show that the combined approach (CP) performed the best. We can conclude that layered approach does not work well in determining the pronunciation of \"\u6211\u5011\" (we) in Taiwanese. It also shows that the polysemy problem caused by \"\u6211\u5011\" (we) is more difficult and quite different from that caused by the words \"\u4e0a\" (up), \"\u4e0b\" (down), and \"\uf967\" (no). This also shows that the viewpoints we gave in Section 2 are reasonable. For our approaches, we might encounter the problem of data sparseness, especially with WLDB. It seems that this cannot be avoided in processing languages like Taiwanese, for which corpora are rare. We have tried to use part-of-speech information as the features in our approaches. The experimental results are not good. We also find that most cases can be solved by using WU or WLDB, and only about 5% are solved by using default values. This shows that our approach is suitable for the current data size. We have shown that our combined approach is promising.", "cite_spans": [], "ref_spans": [ { "start": 62, "end": 69, "text": "Table 6", "ref_id": "TABREF11" }, { "start": 84, "end": 91, "text": "Table 6", "ref_id": "TABREF11" }, { "start": 703, "end": 710, "text": "Table 7", "ref_id": "TABREF12" } ], "eq_spans": [], "section": "Precision", "sec_num": null }, { "text": "This paper proposes an elegant approach to determine the pronunciation of \"\u6211\u5011\" (we) in a C2T TTS system. 
Our methods work very well in determining the pronunciations of the Chinese word \"\u6211\u5011\" (we) in a C2T TTS system. Experimental results also show that the model used is better than the layered approach, the WU model, and the WLDB model. Polysemy problems in translating C2T are very common and it is imperative that they are solved in a C2T TTS system. We will continue to focus on other important polysemy problems in a C2T TTS system in the future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Works", "sec_num": "7." }, { "text": "The polysemy problem of \"\u6211\u5011\" (we) is more difficult than that of other words in Taiwanese. We have proposed a combined approach for this problem. If more training data can be prepared, the proposed approach can be expected to achieve better results. Nevertheless, as the training data needs to be processed manually, we will attempt to propose unsupervised approaches in the future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Works", "sec_num": "7." }, { "text": "To build a quality C2T TTS system is a long-term project because of the many issues in the text analysis phase. In contrast to a Mandarin TTS system, a C2T TTS system needs more textual analysis functions. In addition, two imperative tasks are the development of solutions for the polysemy problem and the tone sandhi problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Works", "sec_num": "7." }, { "text": "Ming-Shing Yu and Yih-Jeng Lin", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A Study of Evaluation Method for Synthetic Mandarin Speech", "authors": [ { "first": "H", "middle": [], "last": "Bao", "suffix": "" }, { "first": "A", "middle": [], "last": "Wang", "suffix": "" }, { "first": "S", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2002, "venue": "The Third International Symposium on Chinese Spoken Language Processing", "volume": "", "issue": "", "pages": "383--386", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bao, H., Wang, A., & Lu, S. (2002). A Study of Evaluation Method for Synthetic Mandarin Speech, in Proceedings of ISCSLP 2002, The Third International Symposium on Chinese Spoken Language Processing, 383-386.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A Mandarin Text-to-Speech System", "authors": [ { "first": "S", "middle": [ "H" ], "last": "Chen", "suffix": "" }, { "first": "S", "middle": [ "H" ], "last": "Hwang", "suffix": "" }, { "first": "Y", "middle": [ "R" ], "last": "Wang", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics and Chinese Language Processing", "volume": "1", "issue": "1", "pages": "87--100", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, S. H., Hwang, S. H., & Wang, Y. R. (1996). A Mandarin Text-to-Speech System, Computational Linguistics and Chinese Language Processing, 1(1), 87-100.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A Hybrid Statistical/RNN Approach to Prosody Synthesis for Taiwanese TTS", "authors": [ { "first": "C", "middle": [ "C" ], "last": "Ho", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ho, C. C. (2000). 
A Hybrid Statistical/RNN Approach to Prosody Synthesis for Taiwanese TTS, Master thesis, Department of Communication Engineering, National Chiao Tung University.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Implementation of Tone Sandhi Rules and Tagger for Taiwanese TTS", "authors": [ { "first": "J", "middle": [ "Y" ], "last": "Hunag", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hunag, J. Y. (2001). Implementation of Tone Sandhi Rules and Tagger for Taiwanese TTS, Master thesis, Department of Communication Engineering, National Chiao Tung University.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Text to Pronunciation Conversion in Taiwanese", "authors": [ { "first": "C", "middle": [ "H" ], "last": "Hwang", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hwang, C. H. (1996). Text to Pronunciation Conversion in Taiwanese, Master thesis, Institute of Statistics, National Tsing Hua University.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The Improving Techniques for Disambiguating Non-Alphabet Sense Categories", "authors": [ { "first": "F", "middle": [ "L" ], "last": "Hwang", "suffix": "" }, { "first": "M", "middle": [ "S" ], "last": "Yu", "suffix": "" }, { "first": "M", "middle": [ "J" ], "last": "Wu", "suffix": "" } ], "year": 2000, "venue": "Proceedings of ROCLING XIII", "volume": "", "issue": "", "pages": "67--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hwang, F. L., Yu, M. S., & Wu, M. J. (2000). The Improving Techniques for Disambiguating Non-Alphabet Sense Categories, in Proceedings of ROCLING XIII, 67-86.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A Taiwanese Text-to-Speech System with Application to Language Learning", "authors": [ { "first": "M", "middle": [ "S" ], "last": "Liang", "suffix": "" }, { "first": "R", "middle": [ "C" ], "last": "Yang", "suffix": "" }, { "first": "Y", "middle": [ "C" ], "last": "Chiang", "suffix": "" }, { "first": "D", "middle": [ "C" ], "last": "Lyu", "suffix": "" }, { "first": "R", "middle": [ "Y" ], "last": "Lyu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the IEEE International Conference on Advanced Learning Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang, M. S., Yang, R. C., Chiang, Y. C., Lyu, D. C., & Lyu, R. Y. (2004). A Taiwanese Text-to-Speech System with Application to Language Learning, in Proceedings of the IEEE International Conference on Advanced Learning Technologies, 2004.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A Mandarin to Taiwanese Min Nan Machine Translation System with Speech Synthesis of Taiwanese Min Nan", "authors": [ { "first": "C", "middle": [ "J" ], "last": "Lin", "suffix": "" }, { "first": "H", "middle": [ "H" ], "last": "Chen", "suffix": "" } ], "year": 1999, "venue": "International Journal of Computational Linguistics and Chinese Language Processing", "volume": "14", "issue": "1", "pages": "59--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, C. J. & Chen, H. H. (1999). 
A Mandarin to Taiwanese Min Nan Machine Translation System with Speech Synthesis of Taiwanese Min Nan, International Journal of Computational Linguistics and Chinese Language Processing, 14(1), 59-84.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The Prediction of Pronunciation of Polyphonic Characters in a Mandarin Text-to-Speech System", "authors": [ { "first": "Y", "middle": [ "C" ], "last": "Lin", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, Y. C. (2006). The Prediction of Pronunciation of Polyphonic Characters in a Mandarin Text-to-Speech System, Master thesis, Department of Computer Science and Engineering, National Chung Hsing University.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "An Efficient Mandarin Text-to-Speech System on Time Domain", "authors": [ { "first": "Y", "middle": [ "J" ], "last": "Lin", "suffix": "" }, { "first": "M", "middle": [ "S" ], "last": "Yu", "suffix": "" } ], "year": 1998, "venue": "IEICE Transactions on Information and Systems", "volume": "", "issue": "6", "pages": "545--555", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, Y. J. & Yu, M. S. (1998). An Efficient Mandarin Text-to-Speech System on Time Domain, IEICE Transactions on Information and Systems, E81-D(6), June 1998, 545-555.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A Multi-Layered Approach to the Polysemy Problems in a Chinese to Taiwanese TTS System", "authors": [ { "first": "Y", "middle": [ "J" ], "last": "Lin", "suffix": "" }, { "first": "M", "middle": [ "S" ], "last": "Yu", "suffix": "" }, { "first": "C", "middle": [ "Y" ], "last": "Lin", "suffix": "" }, { "first": "Y", "middle": [ "T" ], "last": "Lin", "suffix": "" } ], "year": 2008, "venue": "Proceeding of 2008 IEEE International Conference on Sensor Networks, Ubiquitous, and Trustworthy Computing", "volume": "", "issue": "", "pages": "428--435", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, Y. J., Yu, M. S., Lin, C. Y., & Lin, Y. T. (2008). A Multi-Layered Approach to the Polysemy Problems in a Chinese to Taiwanese TTS System, in Proceeding of 2008 IEEE International Conference on Sensor Networks, Ubiquitous, and Trustworthy Computing, June, 2008, 428-435.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "An Implementation and Analysis of Mandarin Speech Synthesis Technologies", "authors": [ { "first": "H", "middle": [ "M" ], "last": "Lu", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lu, H. M. (2002). An Implementation and Analysis of Mandarin Speech Synthesis Technologies, M. S. Thesis, Institute of Communication Engineering, National Chiao-Tung University, June 2002.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Improving Intonation Modules in Chinese TTS Systems", "authors": [ { "first": "N", "middle": [ "H" ], "last": "Pan", "suffix": "" }, { "first": "M", "middle": [ "S" ], "last": "Yu", "suffix": "" } ], "year": 2008, "venue": "The 13th Conference on Artificial Intelligence and Applications (TAAI 2008", "volume": "", "issue": "", "pages": "329--336", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pan, N. H. & Yu, M. S. (2008). Improving Intonation Modules in Chinese TTS Systems, in The 13th Conference on Artificial Intelligence and Applications (TAAI 2008), 329-336, Nov. 
21-22, 2008, Yilan, Taiwan.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A Mandarin Text to Taiwanese Speech System", "authors": [ { "first": "N", "middle": [ "H" ], "last": "Pan", "suffix": "" }, { "first": "M", "middle": [ "S" ], "last": "Yu", "suffix": "" }, { "first": "C", "middle": [ "M" ], "last": "Tsai", "suffix": "" } ], "year": 2008, "venue": "The 13th Conference on Artificial Intelligence and Applications (TAAI 2008", "volume": "", "issue": "", "pages": "1--5", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pan, N. H., Yu, M. S., & Tsai, C. M. (2008). A Mandarin Text to Taiwanese Speech System, in The 13th Conference on Artificial Intelligence and Applications (TAAI 2008), 1-5, Nov. 21-22, 2008, Yilan, Taiwan.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Issues in Text-to-Speech Conversion for Mandarin", "authors": [ { "first": "C", "middle": [], "last": "Shih", "suffix": "" }, { "first": "R", "middle": [], "last": "Sproat", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics and Chinese Language Processing", "volume": "1", "issue": "", "pages": "37--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shih, C. & Sproat, R. (1996). Issues in Text-to-Speech Conversion for Mandarin, Computational Linguistics and Chinese Language Processing, 1(1), 37-86.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Variable-Length Unit Selection in TTS Using Structural Syntactic Cost", "authors": [ { "first": "C", "middle": [ "H" ], "last": "Wu", "suffix": "" }, { "first": "C", "middle": [ "C" ], "last": "Hsia", "suffix": "" }, { "first": "J", "middle": [ "F" ], "last": "Chen", "suffix": "" }, { "first": "J", "middle": [ "F" ], "last": "Wang", "suffix": "" } ], "year": 2007, "venue": "IEEE Transactions on Audio, Speech, and Language Processing", "volume": "15", "issue": "4", "pages": "1227--1235", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu, C. H., Hsia, C. C., Chen, J. F., & Wang, J. F. (2007). Variable-Length Unit Selection in TTS Using Structural Syntactic Cost, IEEE Transactions on Audio, Speech, and Language Processing, 15(4), 1227-1235.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "An Implementation of Taiwanese Text-to-Speech System", "authors": [ { "first": "Y", "middle": [ "C" ], "last": "Yang", "suffix": "" } ], "year": 1999, "venue": "The Polysemy Problem", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang, Y. C. (1999). An Implementation of Taiwanese Text-to-Speech System, Master thesis, Department of Communication Engineering, National Chiao Tung University, 1999. The Polysemy Problem, an Important Issue in a 63", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Chinese to Taiwanese TTS System", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chinese to Taiwanese TTS System", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A Mandarin Text-to-Speech System Using Prosodic Hierarchy and a Large Number of Words", "authors": [ { "first": "M", "middle": [ "S" ], "last": "Yu", "suffix": "" }, { "first": "T", "middle": [ "Y" ], "last": "Chang", "suffix": "" }, { "first": "C", "middle": [ "H" ], "last": "Hsu", "suffix": "" }, { "first": "Y", "middle": [ "H" ], "last": "Tsai", "suffix": "" } ], "year": 2005, "venue": "Proc. 
17th Conference on Computational Linguistics and Speech Processing", "volume": "", "issue": "", "pages": "183--202", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu, M. S., Chang, T. Y., Hsu, C. H., & Tsai, Y. H. (2005). A Mandarin Text-to-Speech System Using Prosodic Hierarchy and a Large Number of Words, in Proc. 17th Conference on Computational Linguistics and Speech Processing, (ROCLING XVII), 183-202, Sep. 15-16, 2005, Tainan, Taiwan.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Disambiguating the Senses of Non-Text Symbols for Mandarin TTS Systems with a Three-Layer Classifier", "authors": [ { "first": "M", "middle": [ "S" ], "last": "Yu", "suffix": "" }, { "first": "F", "middle": [ "L" ], "last": "Huang", "suffix": "" } ], "year": 2003, "venue": "Speech Communication", "volume": "39", "issue": "3-4", "pages": "191--229", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu, M. S. & Huang, F. L. (2003). Disambiguating the Senses of Non-Text Symbols for Mandarin TTS Systems with a Three-Layer Classifier, Speech Communication, 39(3-4), 191-229.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "An Improvement on the Implementation of Taiwanese TTS System", "authors": [ { "first": "X", "middle": [ "R" ], "last": "Zhong", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhong, X. R. (1999). An Improvement on the Implementation of Taiwanese TTS System, Master thesis, Department of Communication Engineering, National Chiao Tung University.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "A Common module structure of a C2T TTS System.", "uris": null, "num": null }, "FIGREF1": { "type_str": "figure", "text": "An example applying the layered approach. is (2/3, 1/3). Output /ghun/.", "uris": null, "num": null }, "FIGREF3": { "type_str": "figure", "text": "Estimate the confidence curve using WU. The function we attained is f(x)=0.1711*ln(x)+1.0357.", "uris": null, "num": null }, "FIGREF5": { "type_str": "figure", "text": "Estimate the confidence curve of WLDB. The function we attained is f(x)=0.2346*ln(x)+1.0523.", "uris": null, "num": null }, "TABREF0": { "type_str": "table", "num": null, "text": "The Polysemy Problem, an Important Issue in a 45 Chinese to Taiwanese TTS System", "content": "
[Diagram] Input Chinese texts → Text Analysis (supported by a Bilingual Lexicon) → Tone Sandhi → Prosody Generation → Speech Synthesis (drawing on Synthesis units) → Synthesized Taiwanese Speech
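Read as a processing chain, the diagram above can be summarized in code. The following is a minimal, hypothetical sketch of how the four modules might be chained; every function body is a stub for illustration, and none of the names or signatures come from the paper.

```python
# Hypothetical skeleton of the C2T TTS pipeline in the diagram above.
# All module bodies are stubs for illustration only, not the authors' code.

def text_analysis(chinese_text):
    # Would perform word segmentation, bilingual-lexicon lookup, and
    # polysemy resolution (e.g., choosing between the /ghun/ and /lan/
    # readings of "we").
    return chinese_text.split()

def tone_sandhi(words):
    # Would apply Taiwanese tone sandhi rules to the word sequence.
    return words

def prosody_generation(words):
    # Would attach pitch/duration/energy targets to each unit.
    return [(w, None) for w in words]

def speech_synthesis(prosodic_units):
    # Would concatenate pre-recorded synthesis units into a waveform.
    return b""  # placeholder waveform bytes

def c2t_tts(chinese_text):
    return speech_synthesis(prosody_generation(tone_sandhi(text_analysis(chinese_text))))
```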
", "html": null }, "TABREF4": { "type_str": "table", "num": null, "text": "", "content": "
News Category | Number of News Items | Number of News Items Containing the word \"\u6211\u5011\" | Percentage
International News | 2242 | 326 | 14.5%
Travel News | 9273 | 181 | 1.9%
Local News | 6066 | 95 | 1.5%
Entertainment News | 3231 | 408 | 12.6%
Scientific News | 3520 | 100 | 2.8%
Social News | 4936 | 160 | 3.2%
Sports News | 2811 | 193 | 6.9%
Stock News | 8066 | 83 | 1.0%
Total Number of News Items | 40145 | 1546 | 3.9%
", "html": null }, "TABREF5": { "type_str": "table", "num": null, "text": "", "content": "
Frequency of \"\u6211\u5011\" | Pronunciation /lan/ | Pronunciation /ghun/ | Total Frequency
Training data | 640 | 1,916 | 2,556
Test data | 160 | 479 | 639
Token frequency of \"\u6211\u5011\" | 800 | 2,395 | 3,195
", "html": null }, "TABREF6": { "type_str": "table", "num": null, "text": "", "content": "
Pronunciation | Number of test samples | Number of correct samples | Accuracy rate
/ghun/ | 479 | 445 | 92.90%
/lan/ | 160 | 47 | 29.38%
Total | 639 | 492 | 77.00%
", "html": null }, "TABREF8": { "type_str": "table", "num": null, "text": "", "content": "
Window Size (M, N) | Accuracy when the pronunciation is /ghun/ | Accuracy when the pronunciation is /lan/ | Overall accuracy
(17, 10) | 91.04% | 74.92% | 87.00%
(12, 6) | 94.01% | 45.48% | 81.85%
(19, 14) | 88.75% | 77.88% | 86.03%
", "html": null }, "TABREF10": { "type_str": "table", "num": null, "text": "", "content": "
Window Size (k_L, k_R) | Accuracy when the pronunciation is /ghun/ | Accuracy when the pronunciation is /lan/ | Overall accuracy
(11, 7) | 93.33% | 97.04% | 94.25%
(4, 13) | 93.48% | 93.61% | 93.52%
(11, 5) | 89.69% | 99.87% | 92.15%
", "html": null }, "TABREF11": { "type_str": "table", "num": null, "text": "", "content": "
Pronunciation | Accuracy using WU | Accuracy using WLDB | Accuracy combining the two models
/ghun/ | 90.40% | 83.26% | 93.10%
/lan/ | 31.25% | 93.10% | 95.00%
Total | 75.59% | 85.72% | 93.60%
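The gain from combining the two models can be illustrated with a small sketch. The fitted confidence curves reported for WU and WLDB are f(x) = 0.1711*ln(x) + 1.0357 and f(x) = 0.2346*ln(x) + 1.0523, respectively; the sketch below assumes a simple arbitration rule (trust whichever model is more confident when they disagree). The meaning of the score x and the rule itself are assumptions for illustration, not the paper's exact combination algorithm.

```python
import math

# Fitted confidence curves reported for the two models:
#   WU:   f(x) = 0.1711 * ln(x) + 1.0357
#   WLDB: f(x) = 0.2346 * ln(x) + 1.0523
# The arbitration rule below (and what the score x stands for) is a
# hypothetical illustration, not the authors' exact combination method.

def wu_confidence(x):
    return 0.1711 * math.log(x) + 1.0357

def wldb_confidence(x):
    return 0.2346 * math.log(x) + 1.0523

def combine(wu_pred, wu_score, wldb_pred, wldb_score):
    """Return '/ghun/' or '/lan/' from the model judged more confident."""
    if wu_pred == wldb_pred:
        return wu_pred  # both models agree; nothing to arbitrate
    if wu_confidence(wu_score) >= wldb_confidence(wldb_score):
        return wu_pred
    return wldb_pred

# Example with made-up scores: WU says /ghun/, WLDB says /lan/.
print(combine("/ghun/", 5.0, "/lan/", 3.0))
```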
", "html": null }, "TABREF12": { "type_str": "table", "num": null, "text": "", "content": "
Pronunciation | WU | WLDB | LP | CP
/ghun/ | 90.40% | 83.26% | 92.90% | 93.10%
/lan/ | 31.25% | 93.10% | 29.38% | 95.00%
Total | 75.59% | 85.72% | 77.00% | 93.60%
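Note that each overall accuracy above is, up to rounding, the test-sample-weighted average of the two per-pronunciation accuracies (479 /ghun/ and 160 /lan/ test samples out of 639); for example, in the LP column, (479 × 92.90% + 160 × 29.38%) / 639 = 77.00%.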
", "html": null } } } }