|
{ |
|
"paper_id": "O08-4006", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T08:02:09.846449Z" |
|
}, |
|
"title": "Data Driven Approaches to Phonetic Transcription with Integration of Automatic Speech Recognition and Grapheme-to-Phoneme for Spoken Buddhist Sutra", |
|
"authors": [ |
|
{ |
|
"first": "Min-Siong", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Chang Gung University", |
|
"location": { |
|
"postCode": "259" |
|
} |
|
}, |
|
"email": "minsiong@gmail.com" |
|
}, |
|
{ |
|
"first": "Ren-Yuan", |
|
"middle": [], |
|
"last": "Lyu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Chang Gung University", |
|
"location": { |
|
"addrLine": "259 Wen-Hwa 1 st" |
|
} |
|
}, |
|
"email": "renyuan.lyu@gmail.com" |
|
}, |
|
{ |
|
"first": "Yuang-Chin", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "chiang@stat.nthu.edu.tw" |
|
}, |
|
{ |
|
"first": "Kwei-Shan", |
|
"middle": [], |
|
"last": "Tao-Yuan", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We propose a new approach for performing phonetic transcription of text that utilizes automatic speech recognition (ASR) to help traditional grapheme-to-phoneme (G2P) techniques. This approach was applied to transcribe Chinese text into Taiwanese phonetic symbols. By augmenting the text with speech and using automatic speech recognition with a sausage searching net constructed from multiple pronunciations of text, we are able to reduce the error rate of phonetic transcription. Using a pronunciation lexicon with multiple pronunciations for each item, a transcription error rate of 12.74% was achieved. Further improvement can be achieved by adapting the pronunciation lexicon with pronunciation variation (PV) rules derived manually from corrected transcription in a speech corpus. The PV rules can be categorized into two kinds: knowledge-based and data-driven rules. By incorporating the PV rules, an error rate of 10.56% could be achieved. Although this technique was developed for Taiwanese speech, it could easily be adapted to other Chinese spoken languages or dialects.", |
|
"pdf_parse": { |
|
"paper_id": "O08-4006", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We propose a new approach for performing phonetic transcription of text that utilizes automatic speech recognition (ASR) to help traditional grapheme-to-phoneme (G2P) techniques. This approach was applied to transcribe Chinese text into Taiwanese phonetic symbols. By augmenting the text with speech and using automatic speech recognition with a sausage searching net constructed from multiple pronunciations of text, we are able to reduce the error rate of phonetic transcription. Using a pronunciation lexicon with multiple pronunciations for each item, a transcription error rate of 12.74% was achieved. Further improvement can be achieved by adapting the pronunciation lexicon with pronunciation variation (PV) rules derived manually from corrected transcription in a speech corpus. The PV rules can be categorized into two kinds: knowledge-based and data-driven rules. By incorporating the PV rules, an error rate of 10.56% could be achieved. Although this technique was developed for Taiwanese speech, it could easily be adapted to other Chinese spoken languages or dialects.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Automatic phonetic transcription is gaining popularity in the speech processing field, especially in speech recognition, text-to-speech, and speech database construction [Haeb-Umbach et al. 1995; Wu et al. 1999; Lamel et al. 2002; Evermann et al. 2004; Nanjo et al. 2004; Nouza et al. 2004; Sarada et al. 2004; Siohan et al. 2004; Soltau et al. 2005; Kim et al. 2005] . It is traditionally performed using two different approaches: an acoustic feature input method and a text input method. The former is the speech recognition task, or more specifically, the phoneme recognition task. The latter is the grapheme-to-phoneme (G2P) task. Both tasks, including phoneme recognition and G2P remain unsolved technology problems. The state-of-the-art speaker-independent (SI) phone recognition accuracy in a large vocabulary task is currently less than 80%, far from human expectations. Although the accuracy of G2P tasks seems much better, it relies on a \"perfect\" pronunciation lexicon and cannot effectively deal with pronunciation variation issues.", |
|
"cite_spans": [ |
|
{ |
|
"start": 170, |
|
"end": 195, |
|
"text": "[Haeb-Umbach et al. 1995;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 196, |
|
"end": 211, |
|
"text": "Wu et al. 1999;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 212, |
|
"end": 230, |
|
"text": "Lamel et al. 2002;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 231, |
|
"end": 252, |
|
"text": "Evermann et al. 2004;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 253, |
|
"end": 271, |
|
"text": "Nanjo et al. 2004;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 272, |
|
"end": 290, |
|
"text": "Nouza et al. 2004;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 310, |
|
"text": "Sarada et al. 2004;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 311, |
|
"end": 330, |
|
"text": "Siohan et al. 2004;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 331, |
|
"end": 350, |
|
"text": "Soltau et al. 2005;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 351, |
|
"end": 367, |
|
"text": "Kim et al. 2005]", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "This problem becomes non-trivial when the target text is the Chinese text (\u6f22\u5b57). The Chinese writing system is widely used in China and in East/South Asian areas including Taiwan, Singapore, and Hong Kong. Although the same Chinese character is used in different areas, the pronunciation may be very different. Therefore, they are mutually unintelligible and considered different languages rather than dialects by most linguists.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "In this paper, we chose Buddhist Sutra (written collections of Buddhist teachings) as the target text processed in this research. Buddhism is a major religion in Taiwan (23% of the population) [IIP 2003 ]. The Buddhist Sutra, translated into Chinese text in a terse ancient style (\u53e4\u6587) , is commonly read in Taiwanese (Min-nan). Due to a lack of proper education, most people are not capable of correctly pronouncing all of the text. Besides, no qualified pronunciation lexicon exists and very few appropriately computational linguistic research projects have been conducted to support developing a G2P system.", |
|
"cite_spans": [ |
|
{ |
|
"start": 193, |
|
"end": 202, |
|
"text": "[IIP 2003", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Taiwanese uses Chinese characters as a part of the written form, with its own phonetic system differing greatly from Mandarin. This is in contrast to the case of Mandarin, where the problem of multiple pronunciations (MP) is less severe. A Chinese character in Taiwanese can commonly have a classic literate pronunciation (known as Wen-du-in, or \"\u6587\uf95a\u97f3\" in Chinese) and a colloquial pronunciation (known as Bai-du-in, or \"\u767d\uf95a\u97f3\" in Chinese) [Liang et al. 2004a] . In addition to MPs, Taiwanese also have pronunciation variation (PV) due to sub-dialectical accents, such as Tainan and Taipei accents. We use the term MPs to stress the fact that variation may cause more deterioration in phonetic transcription [Cremelie et al. 1999; Hain 2005; Raux 2004 ].", |
|
"cite_spans": [ |
|
{ |
|
"start": 437, |
|
"end": 457, |
|
"text": "[Liang et al. 2004a]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 705, |
|
"end": 727, |
|
"text": "[Cremelie et al. 1999;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 728, |
|
"end": 738, |
|
"text": "Hain 2005;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 739, |
|
"end": 748, |
|
"text": "Raux 2004", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The traditional approach to transcribing Chinese Buddhist Sutra text is human dictation. A master monk or nun reads the text aloud, sentence by sentence. Then, some phonetic experts Data Driven Approaches to Phonetic Transcription with Integration of 235 Automatic Speech Recognition and Grapheme-to-Phoneme for Spoken Buddhist Sutra transcribe the text manually. The manual transcription process is tedious and prone to errors. An example is given in Table 1 as follows [Chen 2006; Tripitaka et al. 2005] . Since more transcribed Sutras are planned, we are interested in how G2P and ASR technology can help in this situation. Owing to the fact that human experts capable of phonetically transcribing the Sutra in Taiwanese are difficult to find, the first phonetically transcribed Sutra in Taiwanese did not appear until 2004 [Sik 2004a [Sik , 2004b . As shown in Figure 1 , our task is to discover which of them is actually pronounced when the Sutra text is segmented into a series of sentences and recorded by a senior master nun. Then, the output of transcription is formed in ForPA or Tongyong Pinin . These two phonetic symbol systems are well-designed in ASCII code and suitable for any learners with common understanding of the English phonetic system. This architecture is much easier for a person to use to record his/her reading of the text than acquiring a transcribing expert. For marginalized languages with serious MPs and PV problems, this technique is very useful. Text, e.g.", |
|
"cite_spans": [ |
|
{ |
|
"start": 471, |
|
"end": 482, |
|
"text": "[Chen 2006;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 483, |
|
"end": 505, |
|
"text": "Tripitaka et al. 2005]", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 827, |
|
"end": 837, |
|
"text": "[Sik 2004a", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 838, |
|
"end": 850, |
|
"text": "[Sik , 2004b", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 452, |
|
"end": 459, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 865, |
|
"end": 873, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
|
{ |
|
"text": "In this paper, we report two experiments using speech and text data, called the Taiwanese Buddhist Sutra (TBS) corpus [Sik 2004b ]. The phonetic transcription framework is described in Section 2. Given a speech corpus with phonetic transcription for training, Section 3 reports the speech recognition results with and without the corresponding text for its phonetic transcription. Section 4 discusses the second experiment involving speech recognition with the corresponding text under various pronunciation variation conditions in the training corpus. Section 5 presents our conclusions. Figure 2 is the framework of phonetic transcription using the speech recognition technique. While the input is a speech waveform and Chinese Sutra text, the output is a phonetic transcription corresponding to the input Chinese text. The entire framework can be divided into two major parts, i.e. an acoustic part and a linguistic part.", |
|
"cite_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 128, |
|
"text": "[Sik 2004b", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 589, |
|
"end": 597, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "G2P ASR", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Based on flow chart in Figure 2 , we define the following notations: s is the syllable sequence, while c and o are the input character and augmented acoustic sequences. The phonetic transcription target is to find the most probable syllable sequence * s given o and c . The formula is:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 23, |
|
"end": 31, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Phonetic Transcription Augmented by Speech Recognition Technique", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "* arg max ( ) | , s S s s c P o \u2200 \u2208 = (1) Where 1 1 { | ... , } M M i c C c c c c c c C \u2208 = = = \u2208 , i", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Phonetic Transcription Augmented by Speech Recognition Technique", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "c is an arbitrary Chinese character, C is the set of all Chinese characters, and the number of elements in C is n(C)\u224813000.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Phonetic Transcription Augmented by Speech Recognition Technique", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "1 1 { | ... , } N N i s S s s s s s s S \u2208 = = = \u2208 , i", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Phonetic Transcription Augmented by Speech Recognition Technique", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "s is an arbitrary Taiwanese syllables, S is the set of all Taiwanese syllables, and the number of elements in S is n(S)\u22481000. Using the Bayes theorem:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Phonetic Transcription Augmented by Speech Recognition Technique", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "* ( | ) ( | , ) arg max ( | ) s S P s c P o s c s P o c \u2200 \u2208 =", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "The Phonetic Transcription Augmented by Speech Recognition Technique", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The acoustic sequence o is assumed dependent only on the syllable sequence s . Equation 2 could be simplified as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Phonetic Transcription Augmented by Speech Recognition Technique", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "* arg max ( | ) ( | ) s S s P s c P o s \u2200 \u2208 =", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "The Phonetic Transcription Augmented by Speech Recognition Technique", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The first term, ( | ) P s c , of Equation 3 is independent of o and plays the major role in the linguistic part of the recognition scheme. The second term, ( | ) P o s , is the probability of observation given the syllable sequence, which plays the major role in the acoustic part.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Phonetic Transcription Augmented by Speech Recognition Technique", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "For the acoustic part, which is the probability of observing an acoustic sequence o , given a phonetic syllable sequence s , it is well known that the Hidden Markov Model (HMM) can be used to model it. We can choose a speaker independent HMM model (SI-HMM) with The linguistic part, which is the probability of observing a syllable sequence s , given a character sequence c , could be modeled as a traditional grapheme-to-phoneme problem. In such a problem, a \"well-coverage\" phonetic lexicon, which covers as many as possible correct pronunciations for each phoneme, is quite useful. The problem of multiple pronunciations could be solved using a specially designed searching net, such as the sausage net, which was named for its shape being similar to a sausage. All the searching nets, including the sausage net, were constructed according to a multiple pronunciation lexicon and described in the next section. Even the best pronunciation lexicon would miss the true pronunciation for a certain Chinese character. This is severe, especially for a minority language without many linguistic resources, like Taiwanese. To address this issue, the pronunciation variation rules would be incorporated in a sausage net to improve the accuracy of transcription.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Phonetic Transcription Augmented by Speech Recognition Technique", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The first experiment is performed on the Sutra phonetic transcription using the sausage recognition network without considering the pronunciation variation problem. For a syllabic language such as Taiwanese or Mandarin, we can construct a concatenated net of all syllables. Based on Equation 3, we define:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Solutions to Multiple Pronunciation Problem", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "1 2 , ,..., N s s s s =", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Solutions to Multiple Pronunciation Problem", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "as the syllable sequence. As our goal is to find the real pronunciation, it would not be crucial to know the relationship between Chinese characters and syllables. Therefore, assume that the underlined character sequence c is known and independent of s , and all syllables are independent of each other. Following ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Solutions to Multiple Pronunciation Problem", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "= = = \u220f (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Solutions to Multiple Pronunciation Problem", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "To make the case simple and straightforward, we could assume that ( ) i P s is a uniform distribution, then Equation 4 can be simplified as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Solutions to Multiple Pronunciation Problem", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "* 1 2 1 2 1 arg max( ) ( | , ,..., ) arg max ( | , ,..., ) ( ) N N N s S s S s P o s s s P o s s s n S \u2200 \u2208 \u2200 \u2208 = = (5)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Solutions to Multiple Pronunciation Problem", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "where S is the set of all possible syllables and the n(S) is the number of elements in S.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Solutions to Multiple Pronunciation Problem", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Such a concatenated net is called a total-syllable net. It is a compact representation of the searching space S , which is a set of all possible syllable sequences. The transcription performance in this way is dependent only on the acoustic part. Therefore, the experimental results conducted using a total-syllable net is referred to as the performance of the acoustic part.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Solutions to Multiple Pronunciation Problem", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Second, it is also possible to perform the phonetic transcription using only text input without any speech/acoustic clues. This is the linguistic part in the recognition scheme shown in Fig. 2 . In this case, Equation 3 can be simplified as:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 186, |
|
"end": 192, |
|
"text": "Fig. 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Solutions to Multiple Pronunciation Problem", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "1 1 ... |C ...C * 1 1 | arg m P ( ... ... | c ...c .. ) . x | c ( ) a N N S S i S N S N C s i s s P s c s s \u2200 \u2208 = = (6)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Solutions to Multiple Pronunciation Problem", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "As only a small scale database is available, we assume that i s is dependent on i c and 1 i s \u2212 , or even only on i c . Equation 6 can then be simplified as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Solutions to Multiple Pronunciation Problem", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "* | 1 arg max ( | ) i i i i N S C i i i s S s P s c = \u2200 \u2208 = \u220f (7) and * | 1 1 arg max ( | , ) i i i i N S C i i i i s S s P s s c \u2212 = \u2200 \u2208 = \u220f (8)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Solutions to Multiple Pronunciation Problem", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "The results from the experiments conducted using Equations. 7 and 8 depend only on the textual input instead of the acoustic input, and are referred to as the language part performance. Therefore, discussion about Equations 5 and 7 require traditional automatic speech recognition and grapheme-to-phoneme approaches for dealing with the phonetic transcription tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Solutions to Multiple Pronunciation Problem", |
|
"sec_num": "3." |
|
}, |
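
{

"text": "To make Equation 7 concrete, the following Python sketch estimates P(s_i | c_i) by relative frequency and picks the argmax pronunciation for each character. The toy count table and syllable labels are illustrative stand-ins, not the paper's Formosa Lexicon data:\n\n# Unigram G2P baseline (Equation 7): per-character argmax of P(s | c).\n# 'counts' is a hypothetical count table; real counts would come from a\n# transcribed corpus or a pronunciation lexicon with frequencies.\ncounts = {\n    '\u70ba': {'ui5': 7, 'ui7': 3},\n    '\u6bcd': {'bo2': 5, 'bu2': 9},\n    '\u8aaa': {'suat4': 6, 'se4': 2},\n    '\u6cd5': {'huat4': 10},\n}\n\ndef g2p_unigram(chars, counts):\n    out = []\n    for c in chars:\n        pron = counts.get(c)\n        if pron is None:\n            out.append('<unk>')  # missing character; see Section 3.2\n        else:\n            out.append(max(pron, key=pron.get))  # argmax_s P(s | c)\n    return out\n\nprint(g2p_unigram('\u70ba\u6bcd\u8aaa\u6cd5', counts))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Solutions to Multiple Pronunciation Problem",

"sec_num": "3."

},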
|
{ |
|
"text": "What is proposed in this paper is an approach to integrate both. Given a Chinese character sequence, based on the multiple pronunciations of each Chinese character, a much smaller recognition net can be constructed. Thus, by integrating Equations 5 and 7, we have:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Solutions to Multiple Pronunciation Problem", |
|
"sec_num": "3." |
|
}, |
|
|
{ |
|
"text": "* 1 2 | 1 arg max ( | , ,..., ) ( ) ( | ) i i i i N N i SC i i i s S s P o s s s P s P s c = \u2200 \u2208 = \u220f (9)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Solutions to Multiple Pronunciation Problem", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Taking an example of a typical text sentence \"\u70ba\u6bcd\uf96f\u6cd5\", which is shown in Figure 3 , we will call such a net (with multiple pronunciations) a sausage net. Higher recognition accuracy can be expected due to the smaller perplexity in the recognition net. Our task is to construct \"good\" sausage nets to help the acoustic part do the job. In the following, we will discuss how to use the lexicons, the recognition networks to implement the proposed framework and show some experiment results. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 71, |
|
"end": 79, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Solutions to Multiple Pronunciation Problem", |
|
"sec_num": "3." |
|
}, |
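
{

"text": "A minimal Python sketch of the integrated search in Equation 9 over such a sausage net follows. The net, syllable labels, and all log-scores are illustrative stand-ins (not the paper's HMM likelihoods or lexicon probabilities), and a uniform per-slot prior plays the role of P(s_i) P(s_i | c_i):\n\nimport itertools\nimport math\n\n# Sausage net for '\u70ba\u6bcd\u8aaa\u6cd5': one slot per character, holding its\n# multiple pronunciations (toy syllables, illustrative only).\nnet = [['ui5', 'ui7'], ['bo2', 'bu2'], ['suat4', 'se4'], ['huat4']]\n\n# Toy per-syllable acoustic log-likelihoods standing in for log P(o | s).\nacoustic = {'ui5': -2.5, 'ui7': -1.0, 'bo2': -1.9, 'bu2': -0.8,\n            'suat4': -1.2, 'se4': -3.0, 'huat4': -0.5}\n\ndef score(path):\n    # Log of Equation 9 with a uniform lexicon prior per slot.\n    a = sum(acoustic[s] for s in path)\n    l = sum(-math.log(len(slot)) for slot in net)\n    return a + l\n\n# Exhaustive search is fine on a sausage net: the number of paths is the\n# product of slot sizes (2*2*2*1 = 8 here).\nbest = max(itertools.product(*net), key=score)\nprint(best)  # ('ui7', 'bu2', 'suat4', 'huat4')",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Solutions to Multiple Pronunciation Problem",

"sec_num": "3."

},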
|
{ |
|
"text": "In this paper, we use as the speech database, Formosa Speech Database (ForSDAT), which was collected over the past several years . The SI-HMM model can be trained from the ForSDAT-01, which contains 200 speakers and 23 hours of speech. All speech data were recorded in 16K, 16bit PCM format. The statistical information of ForsDAT-01 was summarized as in Table 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 355, |
|
"end": 362, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Speech Database", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In addition, the partial ForSDAT-02 speech corpus was used to derive the rule set of pronunciation variations, which contains 131 speakers and 7.2 hours of speech. The statistical information of partial ForsDAT-02 was summarized as in Table 2 and the detail is discussed in Section 4.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 235, |
|
"end": 242, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Speech Database", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The distribution of another speech database, TBS, is listed in Table 3 , where there are 1,619 utterances in this speech data set with a total length of about 230 minutes [Sik 2004b ]. 502 utterances, which include 5909 syllables, are randomly chosen and reserved for testing ", |
|
"cite_spans": [ |
|
{ |
|
"start": 171, |
|
"end": 181, |
|
"text": "[Sik 2004b", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 70, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Speech Database", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "There is one pronunciation lexicon available to us, Formosa Lexicon, which provides multiple pronunciations in Taiwanese for all Chinese characters. The lexicon contains about 123,000 words in Chinese/Taiwanese text with Mandarin/Taiwanese pronunciations. It is a combination of two lexica: Formosa Mandarin-Taiwanese Bi-lingual lexicon and Gang's Taiwanese lexicon [Liang et al. 2004a; Liang et al. 2004b; . The former is derived from a Mandarin lexicon; thus, many commonly used Taiwanese terms are missing due to the fundamental difference between these two languages. The latter contains more widely used Taiwanese expressions from samples of radio talk shows. Some examples of the lexicon are shown in Table 4 , containing 65,007 entries using the Wen-du-in pronunciation and 58,431 entries using the Bai-du-in pronunciation. There are a total of 123,438 pronunciation entries. For all 65,007 Wen-du-in pronunciation entries, there are 6,890 entries for one-syllable words, 39,840 entries for two-syllable words, and so on. The lexicon as described above is a general-purpose lexicon. It could be used for a wide range of applications and tends to have a higher number of multiple pronunciations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 366, |
|
"end": 386, |
|
"text": "[Liang et al. 2004a;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 387, |
|
"end": 406, |
|
"text": "Liang et al. 2004b;", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 707, |
|
"end": 714, |
|
"text": "Table 4", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Pronunciation Lexica", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We used two kinds of searching nets in these experiments, according to the lexicon. The first is the Total-syllable net. It is simply a concatenated net of all Taiwanese syllables existing in the Taiwanese Buddhist Sutra (TBS), where the total number of syllables is 467, denoted as the Total-Syl-Net. The other searching nets are the sausage nets generated from Data Driven Approaches to Phonetic Transcription with Integration of 241 Automatic Speech Recognition and Grapheme-to-Phoneme for Spoken Buddhist Sutra each of the pronunciation lexica. The nets were constructed by filling in each node of the net with the corresponding multiple pronunciations of each Chinese character from the pronunciation lexicon. One example is shown in Figure 3 . The nets are denoted the General-Sau-Net for the general-purpose Formosa Lexicon. However, a lexicon is inevitably incomplete, and we could be confronted with the missing character problem and the missing pronunciation problem. The missing character problem is when a character used in the Sutra does not appear in the lexicon. One reason is because many of the Chinese characters used in ancient times are no longer used in modern times. Thus, even the Unicode Standard, which contains more than thirty thousand Chinese characters, does not contain them. The Formosa Lexicon has much fewer distinct characters, and the missing character problem is inevitable. When a missing character is encountered, we use all possible syllables as its multiple pronunciations. One example is illustrated in Figure 4 , where the sausage searching net is constructed for the Chinese character string \"C 0 C 1 C 2 \". It is assumed that the character C 0 is a missing character. In such a case, all possible syllables, denoted as S 00 ,S 01 ,\u2026S 0N , are used as possible pronunciations of C 0 . ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 739, |
|
"end": 747, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 1544, |
|
"end": 1552, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Pronunciation Lexica", |
|
"sec_num": "3.2" |
|
}, |
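
{

"text": "The construction just described amounts to a small amount of code. The Python sketch below builds a sausage net and applies the all-syllable fallback for missing characters; the lexicon and syllable inventory are hypothetical stand-ins for the Formosa Lexicon and the 467-syllable TBS inventory:\n\n# Build a sausage net: one node per character, filled with its lexicon\n# pronunciations; a missing character falls back to the full syllable\n# inventory, as in Figure 4.\nall_syllables = ['ui5', 'ui7', 'bo2', 'bu2', 'suat4', 'huat4']  # 467 in the real task\n\nlexicon = {'\u6bcd': ['bo2', 'bu2'], '\u8aaa': ['suat4'], '\u6cd5': ['huat4']}\n\ndef build_sausage_net(chars, lexicon, all_syllables):\n    return [list(lexicon.get(c, all_syllables)) for c in chars]\n\n# '\u70ba' is deliberately absent from the toy lexicon, so its slot expands\n# to every syllable.\nprint(build_sausage_net('\u70ba\u6bcd\u8aaa\u6cd5', lexicon, all_syllables))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The Pronunciation Lexica",

"sec_num": "3.2"

},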
|
{ |
|
"text": "As insufficient coverage of pronunciation nodes in the searching net will severely degrade the recognition performance, some approaches to extend the pronunciation coverage will be considered to help the overall performance. Since global lexicon modification by experts would take considerable effort and not necessarily benefit, we adopted alternative rule-based methods. By rule-based pronunciation variations, we mean that phonetic units will be changed by speakers according to some underlying rules. Usually, a rule could be notated as the form \"B S\" for canonical pronunciation B (base-form) being substituted with the actual pronunciation S (surface-form) [Saraclar et al. 2004] . Briefly speaking, some rule-derived variant pronunciations are added directly into the searching net to enhance the poor pronunciation coverage of an imperfect pronunciation lexicon.", |
|
"cite_spans": [ |
|
{ |
|
"start": 663, |
|
"end": 685, |
|
"text": "[Saraclar et al. 2004]", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Incorporating Pronunciation Variation Rules", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "An example is shown in Figure 5 , where the number of pronunciations for the Chinese character \"\u6bcd\" was increased from 4 to 5 by incorporating some specific pronunciation rules as \"/\u0264/ \u2192 /o/\". It could be shown that, as long as the pronunciation rules could be well designed, the phonetic transcription performance would be effectively improved.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 23, |
|
"end": 31, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Incorporating Pronunciation Variation Rules", |
|
"sec_num": "4." |
|
}, |
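
{

"text": "A Python sketch of this net extension follows. For brevity the substitution is applied at the whole-syllable string level, with a made-up rule standing in for rules like /\u0264/ \u2192 /o/; the paper's actual rules operate on triphones (LBR \u2192 LSR):\n\n# Extend the pronunciations of one sausage-net node with base -> surface\n# substitution rules (Figure 5: 4 pronunciations grow to 5).\ndef extend_with_rules(prons, rules):\n    extended = list(prons)\n    for base, surface in rules:\n        for p in prons:\n            variant = p.replace(base, surface)\n            if variant != p and variant not in extended:\n                extended.append(variant)  # add the surface-form variant\n    return extended\n\nnode = ['bo2', 'bio2', 'bu2', 'm2']              # toy pronunciations of '\u6bcd'\nprint(extend_with_rules(node, [('bo', 'boo')]))  # 4 -> 5 entries",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Incorporating Pronunciation Variation Rules",

"sec_num": "4."

},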
|
{ |
|
"text": "Generally speaking, the pronunciation-variation (PV) rules can be categorized into two kinds: knowledge-based and data-driven rules. The knowledge-based rules were derived from the knowledge established by phoneticians. On the other hand, the data-driven PV rules rely on the availability of transcribed speech corpora. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Incorporating Pronunciation Variation Rules", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "Considering the trade-off between the number of elements and the degree of detail or perplexity, the triphone was used as the acoustic unit, thus, the transcription unit in this paper. The form LBR LSR represents the pronunciation variation rules, where B and S represent the base form and surface form of a central phone, and L, R are the left and right contexts respectively. The number of triphone units in Taiwanese is about 1200.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Knowledge-Based Variation Rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "As with the other members of the Chinese language family, there are about three types of pronunciation variations in Taiwanese. These could be summarized as follows and are shown in Table 5: 1.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 182, |
|
"end": 190, |
|
"text": "Table 5:", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Knowledge-Based Variation Rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Variation between Bai-du-in and Wen-du-in: The variations may vary due to using classic literate pronunciation (known as Wen-du-in) or a colloquial pronunciation (known as Bai-du-in). This point has been discussed previously in Section 1. For example, the Chinese character \"\u751f\" (to give birth) might be pronounced as /si\u014b/ in Wen-du-in and /s\u1ebd/ or / s\u0129 / in Bai-du-in. All of these are acceptable.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Knowledge-Based Variation Rules", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Variation between sub-dialectal regions: Some variations were referred to as dialectal differences; For instance, the initials /z/ is substituted with /l/ or /g/ depending on the sub-dialect of Taiwanese. Such a rule was denoted as \"z l/g\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Variation due to personal pronunciation errors: Some kinds of variations are considered personal pronunciation errors. Owing to the lack of some phonemes in the mainstream language (such as Mandarin in Taiwan), some pronunciation may disappear in younger generations. One of these phonemes is /g/, where the phenomenon is denoted as \"g {}\". Table 5 can be considered as a knowledge source to select the pronunciation variation rules. The knowledge-based PV rules, which were derived by more than one linguist, were sometimes contradictory with each other. This made them difficult to choose between at times in implementation. Such a difficulty leads to a need for another approach. One of them is a data-driven approach from large-scale real data. Since a little manually transcribed speech data was available, we could use statistical computational measures to extract the PV rules from real data. This issue will be discussed in the following section. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 341, |
|
"end": 348, |
|
"text": "Table 5", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "3.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The same simple way to adopt the methodology of pronunciation variation is to expand the pronunciation lexicon using variation rules of the form LBR LSR. Similar work for such an approach was shown in Mandarin [Tsai et al. 2002] . To derive such rules, a speech corpus with both canonical pronunciation and actual pronunciation is necessary. We choose a subset of ForSDAT, called ForSDAT-02, to derive PV rules, and the statistical information is summarized as in Table 2 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 210, |
|
"end": 228, |
|
"text": "[Tsai et al. 2002]", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 464, |
|
"end": 471, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Sata-Driven Variation Rules", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "ForSDAT-02 is a speech database with rich bi-phone coverage. This database was recorded by prompting speakers with a script. Although the script in Taiwanese text was shown with phonetic transcription, we did observe variations in the recorded speech. A small portion of the speech data was then manually checked, and the phonetic transcription of the script was corrected according to actual speech. Some examples of the original transcription (the base-form) and the manually corrected transcription (the surface-form) are shown in Table 6 , which is called the sentence-level confusion table.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 534, |
|
"end": 541, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Sata-Driven Variation Rules", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Original transcription (base-form) Manually corrected transcription (surface-form) \u00e8 b\u0264\u0300 k'\u00ee \u0103 \u00e8 b\u01d2 k'\u00ee \u0103 g\u00e1m k'\u00e0i b\u00e0n ts'\u00e9n g\u00e1n k'\u00e0i b\u00e0n ts'\u00e9n s\u012d\u014b \u00f9a h h\u00f9 h h\u016da h s \u012dn \u00f9a h h\u00f9 h h\u016da h", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Table 6. Sentence-level confusion table. The output is manually corrected transcription (the surface-form), and the input is the original transcription (the base-form).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "From the sentence-level confusion table, it is quite a straightforward process to construct other confusion tables in syllable level and triphone level. These two tables are shown in Table 7 and Table 8 as follows. The triphone-level confusion table is used as a direct knowledge source to derive the PV rules, where each cell in the table was looked upon as a rule. The number of rules shown in Table 8 is P 2 , where P is the number of triphones (about 1200 in the target language). The number of rule set selections is 2 2 P , which is an enormous number which is impossible to be processed in modern computers. To make the problem more solvable, some specially designed algorithms should be developed that are able to specifically find a useful route in the huge rule set selection space within a reasonable time. First of all, some criteria should be adopted to choose the most significant rule sets. Three kinds of statistical measures were used in this paper. They are (1) Joint probability [Raux 2004 ], (2) Conditional probability, and (3) Mutual information-like of the base form pronunciation and the surface form pronunciation. The mathematical definitions of the above three measures are as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 998, |
|
"end": 1008, |
|
"text": "[Raux 2004", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 202, |
|
"text": "Table 7 and Table 8", |
|
"ref_id": "TABREF7" |
|
}, |
|
{ |
|
"start": 396, |
|
"end": 403, |
|
"text": "Table 8", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "\u2026\u2026 \u2026\u2026", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1. Joint probability of the base form pronunciation i b , and the surface form pronunciation j s , In all of the above equations, n ij is the number of (base-form) triphone b i substitutions by the surface-form triphone s j that appear in a corpus, and Note that each pair (i,j),i \u2260 j, corresponds to a substitution rule, and we select those pairs (i,j) with higher scores of ( , ) and I ij to be the variation rules to extend the sausage net pronunciation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Table 7. Syllable-level confusion table, where z ij represents the number of variation from syllable x i (base-form) to triphone y j (surface-form), T is the number of surface-form and base-", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "ij i j N n = \u2211 \u2211 , i i j j N n = \u2211 ,", |
|
"eq_num": "( , )" |
|
} |
|
], |
|
"section": "Table 7. Syllable-level confusion table, where z ij represents the number of variation from syllable x i (base-form) to triphone y j (surface-form), T is the number of surface-form and base-", |
|
"sec_num": null |
|
}, |
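
{

"text": "A Python sketch of this rule-ranking step follows. The confusion counts are toy numbers, and the mutual-information-like score is one plausible reading of the measure described above (pointwise mutual information weighted by the joint probability); the paper does not spell out its exact formula:\n\nimport math\n\n# n[b][s]: how often base-form triphone b was realized as surface-form s\n# in the manually checked corpus (illustrative counts only).\nn = {'i-ng': {'i-ng': 90, 'i-n': 30}, 'a-m': {'a-m': 80, 'a-n': 12}}\n\nN = sum(sum(row.values()) for row in n.values())      # total tokens\nN_b = {b: sum(row.values()) for b, row in n.items()}  # per-base totals\nN_s = {}\nfor row in n.values():\n    for s, c in row.items():\n        N_s[s] = N_s.get(s, 0) + c                    # per-surface totals\n\ndef scores(b, s):\n    jp = n[b][s] / N       # joint probability P(b, s)\n    cp = n[b][s] / N_b[b]  # conditional probability P(s | b)\n    # MI-like score: pointwise MI weighted by the joint probability.\n    mi = jp * math.log(jp / ((N_b[b] / N) * (N_s[s] / N)))\n    return jp, cp, mi\n\n# Rank candidate substitution rules b -> s (b != s) under each measure.\nrules = [(b, s) for b in n for s in n[b] if s != b]\nfor k, name in enumerate(['JP', 'CP', 'MI']):\n    print(name, sorted(rules, key=lambda r: -scores(*r)[k]))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The Data-Driven Variation Rules",

"sec_num": "4.2"

},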
|
{ |
|
"text": "In Table 9 , the rules were sorted by rank based on joint probability, conditional-probability, and mutual-information. There are variants among the three lists. One rule which is much more important in some method may be trivial in the other method.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 9", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Table 7. Syllable-level confusion table, where z ij represents the number of variation from syllable x i (base-form) to triphone y j (surface-form), T is the number of surface-form and base-", |
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "Rank based on Conditional Probability Rank based on Mutual-Information i-\u014b \u2192 i-n x-\u00e3\u02b0 \u2192 x-a\u02b0 i-\u014b \u2192 i-n a-m \u2192 a-n n-\u0264 \u2192 n-\u00f5 b-\u0264 \u2192 b-o b-\u0264 \u2192 b-o \u014b-\u0129 \u2192 \u014b-\u0115 \u0129-\u00f5 \u2192 \u0129-\u0169 i-m \u2192 i-n l-o\u02b0 \u2192 l-o l-i k \u2192 l-i t \u0129-\u00f5 \u2192 \u0129-\u0169 k\u02bc-i\u02b0 \u2192 k\u02bc-i k k\u02bc-\u0264 \u2192 k\u02bc-o a-n \u2192 a-m ts-a k \u2192 ts-a t \u0129-\u014b \u2192 \u0129-n a-\u014b \u2192 a-m g-\u0264 \u2192 g-o p-\u0264 \u2192 p-o i-a-\u014b \u2192 i-o-\u014b p\u02bc-i k \u2192 p\u02bc-i t i-m \u2192 i-n i-m \u2192 i-\u014b \u0129-\u00f5 \u2192 \u0129-\u0169 t-\u0264 \u2192 t-o i-n \u2192 i-m g-i k \u2192 g-i b-i t \u2192 b-i\u02b0", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Table 9. Data-driven rules: The top 10 substitution errors were listed from the partially validated ForSDAT-02 corpus for Joint-Probability-Based, Conditional-Probability-Based and Mutual-information-Based method", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In training or estimating SI-HMM models for the acoustic part, we use continuous Gaussian-mixture HMM models with feature vectors of 12-dimensional MFCC with 1-dimensional energy, plus the first, second, and third derivatives computed using a 20-ms frame width and 10-ms frame shift. Context-dependent intra-syllabic tri-phone models were built using a decision-tree state tying procedure. As the testing data is speaker dependent, adaptation with some manually transcribed data must be useful in automatic phonetic Data Driven Approaches to Phonetic Transcription with Integration of 247 Automatic Speech Recognition and Grapheme-to-Phoneme for Spoken Buddhist Sutra transcription. Maximum Likelihood Linear Regression (MLLR) is then used to adapt speaker independent models using 31-utterance adaptation speech data. Most of the training and recognition are carried out by using the HTK tools [Young et al. 2008] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 895, |
|
"end": 914, |
|
"text": "[Young et al. 2008]", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Experiment Results and Discussion", |
|
"sec_num": "5." |
|
}, |
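
{

"text": "As a front-end illustration, the Python sketch below computes features of the shape just described (12 MFCCs plus an energy-like C0, with first, second, and third derivatives, 20-ms window, 10-ms shift at 16 kHz). librosa is used here as a stand-in; the paper's experiments used the HTK front end, and 'utt.wav' is a hypothetical file path:\n\nimport librosa\nimport numpy as np\n\ny, sr = librosa.load('utt.wav', sr=16000)  # 'utt.wav' is a placeholder path\nmfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,       # 12 cepstra + C0 (energy proxy)\n                            n_fft=int(0.020 * sr),       # 20-ms analysis window\n                            hop_length=int(0.010 * sr))  # 10-ms frame shift\nfeats = np.vstack([mfcc,\n                   librosa.feature.delta(mfcc, order=1),   # first derivatives\n                   librosa.feature.delta(mfcc, order=2),   # second derivatives\n                   librosa.feature.delta(mfcc, order=3)])  # third derivatives\nprint(feats.shape)  # (52, num_frames)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The Experiment Results and Discussion",

"sec_num": "5."

},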
|
{ |
|
"text": "With the two searching nets (Total-Syl-Net, General-Sau-Net) and acoustic models (SI with adaptation), the recognition results measured as the syllable error rate (SER) are shown in Figure 6 . In addition, we also show the result of only language, called grapheme-to-phoneme (G2P), with unigram. Through observation of the experimental results, we can see that neither G2P with unigram nor Total-Syl-Net with adaptation model can reach acceptable performance. Therefore, it is necessary to integrate the linguistic and acoustic parts. General-Sau-Net could surpass Total-Syl-Net. For example, under the same speaker adaptation models, the result was 12.74% better with General-Sau-Net than other results in Figure 6 . Thus, if the speaker independent model could be adapted using some phonetically transcribed speech data, the adapted speaker independent model under General-Sau-Net would be suitable for the phonetic annotation task. In multiple pronunciation problems, by our speech data observation, we could see that some errors result from pronunciation variations. Therefore, we hypothesized that the performance would get better by adaptation of the Formosa Lexicon Sausage net, i.e. adaptation of the Formosa lexicon, described in the Section 4.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 182, |
|
"end": 190, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF10" |
|
}, |
|
{ |
|
"start": 707, |
|
"end": 715, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF10" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Experiment Results and Discussion", |
|
"sec_num": "5." |
|
}, |
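
{

"text": "The syllable error rate used throughout is the standard edit-distance metric; a minimal Python implementation (a common definition, sketched here since the paper does not list its scoring code) is:\n\n# SER = Levenshtein distance between reference and hypothesis syllable\n# sequences, divided by the reference length.\ndef ser(ref, hyp):\n    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]\n    for i in range(len(ref) + 1):\n        d[i][0] = i\n    for j in range(len(hyp) + 1):\n        d[0][j] = j\n    for i in range(1, len(ref) + 1):\n        for j in range(1, len(hyp) + 1):\n            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])\n            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)\n    return d[-1][-1] / len(ref)\n\nprint(ser(['ui7', 'bo2', 'suat4', 'huat4'], ['ui7', 'bu2', 'suat4', 'huat4']))  # 0.25",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The Experiment Results and Discussion",

"sec_num": "5."

},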
|
{ |
|
"text": "We adapted the Formosa (general-purpose) pronunciation lexicon according to different pronunciation variation rule sets. The speech recognition task with a sausage searching net and speaker adapted acoustic models was then conducted, as described in Section 4, wherein, the SER achieved before the application of the pronunciation variation rules was 12.74%, as shown in Figure 6 . This would be looked upon as the performance of the baseline setup in this section.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 371, |
|
"end": 379, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF10" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Experiment Results and Discussion", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "In Figure 7 , the transcription performance was measured in terms of syllable error rate vs. the number of ranked PV rules sorted according to different measures, including mutual-information (MI), joint-probability (JP), and conditional-probability (CP) as well as the baseline setup. We could observe that it is truly helpful to decrease the SER by increasing the searching net coverage via the PV rules. The evidence is that the lowest error rate (10.56%) was achieved by utilizing the first 52 variation rules, which were selected by the Mutual-Information (MI) measure. Similar improvement would also be observed in the best SER (11.81% and 11%) achieved using the Joint-Probability (JP) and Conditional-Probability (CP) measures when the complexities of the JP and CP measure are 2.68 and 2.55, respectively. It is interesting to point out that, in Figure 7 , choosing different statistical measures was found to influence the achievable lowest SER and also the speed of decrease in SER. In these experiments, we found that MI was the best in terms of the rate of decrease in SER or the achievable lowest SER. Although the JP-based measure could make the error rate converge more quickly than the CP-based measure, the performance also degraded quickly. This is because the CP-based measure score was normalized by the base-form count in contrast to the Data Driven Approaches to Phonetic Transcription with Integration of 249 Automatic Speech Recognition and Grapheme-to-Phoneme for Spoken Buddhist Sutra JP-based measure. However, the insignificant and harmless PV-rules might get the higher conditional probability sometimes due to few base-form observations. The PV-rules for the CP-based measure might not increase the perplexity but still lead to the slowest convergence among the three measures. In the MI-based measure, the formula could avoid slow convergence using the Joint-Probability as a weight when the base-form had few variations. Observing the confusion table, the surface-form would have lower correlation with these base-forms if many base-forms would transform into the same surface-form. So, we proposed that the mutual information between the base and surface-forms should be used to calculate the base and surface-form correlation using the normalization of their count. Consequently, the error rate of the MI rank converges most quickly and the performance of the MI measure in error reduction was also better than JP and CP measures, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 11, |
|
"text": "Figure 7", |
|
"ref_id": "FIGREF12" |
|
}, |
|
{ |
|
"start": 855, |
|
"end": 863, |
|
"text": "Figure 7", |
|
"ref_id": "FIGREF12" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Experiment Results and Discussion", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "Another interesting point was that the SER will possibly increase if too many PV rules are applied. For example, the lowest SER is achieved by applying 52 rules when MI was adopted as the ranking measure. However, after applying more rules, the SER increased! It even became worse than that in the baseline experiments. This means that some \"bad\" pronunciation variation rules may lead to a performance reduction. Take the Joint-Probability (JP) measure for example. The optimal performance was achieved when 17 ranked rules were applied, but when the number of rules further increased, the performance degraded. It was similar when MI or CP was used. Therefore, it is important to determine \"good\" rules and choose them so that the optimal performance could be achieved as soon as possible.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Experiment Results and Discussion", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "Extending the searching net can enhance the SER performance, but the extension must be limited to a suitable range. This point can be observed from the perplexity of the searching net in Figure 8 . Regardless what measures we use, the differences in the perplexity values from the best results among the three measures were always slight. For example, in Figure 8 , the perplexity of the best JP measure result was 2.68 when the perplexity of best MI measure result was 2.62. That means too many rules may lead to more real pronunciation coverage, but the performance may improve slightly or even decrease progressively. The perplexity is a good measure to evaluate the searching net in obtaining the best results.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 187, |
|
"end": 195, |
|
"text": "Figure 8", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 355, |
|
"end": 363, |
|
"text": "Figure 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Experiment Results and Discussion", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "Finally, in Figure 9 , the error rate of General-Sau-Net is 12.74%. However, some errors resulted from pronunciation variations caused by a speaker's accent. Therefore, through incorporating variation rules into General-Sau-Net with different statistic measures, the best error rates can be reduced to 11.81%, 11%, and 10.56% with respect to JP-, CP-, and MI-based measures, respectively. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 20, |
|
"text": "Figure 9", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Experiment Results and Discussion", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "We have proposed a new approach to address the phonetic transcription of Chinese text into Taiwanese pronunciation. Considering the fact that there are very few linguistic resources for Taiwanese, we used speech recognition techniques to deal with multiple pronunciation variations, which is a very common phenomenon in Taiwanese but hard to deal with using traditional text-based approaches. A general-purpose lexicon (called the Formosa lexicon), and a speaker-adapted HMM model were used to achieve a syllable error rate, 12.74%. In order to enhance the performance, the trivial adaptation of a general-purpose sausage net with pronunciation variation rules was used instead of global pronunciation lexicon modification.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "In pronunciation variation rule (PV-rules) selection, the data-driven variation rules, which were derived using three statistical measures, were used to extend more possible pronunciations. Although the knowledge-based rules were also derived from a knowledge source, the rules were difficult to implement and dependent on the specific language. Thus, we selected data-driven rules with context-dependent triphones as the general solution to the PV problem. In the data-driven measure, the mutual-information-based (MI) rank outperformed the Joint-Probability rank and Conditional-Probability rank. Compared with baseline experiment result error rate of 12.74%, the lowest error rate of the MI-based measures had an error reduction rate of 17.11%, which was the best among the three statistical measures proposed in this paper. The error rate of the MI rank converged most quickly and the best performance of MI-based measure appeared in the first 52 ranks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "The experimental results from data-driven measures could possibly provide the evidence to help choose the corresponding knowledge-based PV rules. Of course, some of the pronunciation variation rules were certainly language-dependent (i.e. the phonological and phonetic processes differ between languages) [Kanokphara et al. 2003 ]. However, the major points to be emphasized are that the proposed technique to model pronunciation variation for transcription was rather language-independent.", |
|
"cite_spans": [ |
|
{ |
|
"start": 305, |
|
"end": 328, |
|
"text": "[Kanokphara et al. 2003", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "The recognition of tones was still an unsolved problem in this research. This is another issue for further research. In Taiwanese, there are 7 tone classes, which could be used to distinguish the meanings of words. In addition, the complex tone sandhi would also be accompanied with tone recognition. If more speech and text was gathered, the analysis and statistics of pronunciation information for pronunciation probability would be the next step. We will construct a human interaction system to help more Taiwanese publications be presented. This technology may also be used as a language-learning tool.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "Although the proposed technique was developed for Taiwanese speech, it could also be easily adapted for application in other similar \"minority\" Chinese spoken languages, such as Hakka, Wu, Yue, Xiang, Gan, and Min, or other non-Han family languages which also use", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6." |
|
}
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Chinese characters as the written language form.In summary, the proposed semi-automatic transcription of Chinese text into a Taiwanese pronunciation system reached a 12.74% error rate in the baseline experiment. Further improvement using pronunciation variation rules produced a 17.11% error rate reduction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Sutra on the original Vows of Bodhisattva Earth Treasure in English", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chen C. H., Sutra on the original Vows of Bodhisattva Earth Treasure in English, http://www.yogichen.org/efiles/b041a.html, 2006.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Elements of Information Theory", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Cover", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Thomas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cover, T. M. and J. A. Thomas, Elements of Information Theory, New York: Wiley, 1991.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Search of Better Pronunciation Models for Speech Recognition", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Cremelie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J.-P", |
|
"middle": [], |
|
"last": "Martens", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "29", |
|
"issue": "", |
|
"pages": "115--136", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cremelie, N., and J.-P. Martens, \"In Search of Better Pronunciation Models for Speech Recognition,\" Speech Communication, 29, 1999, pp. 115-136.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Development of the 2003 CU-HTK Conversational Telephone Speech Transcription System", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Evermann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Chan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"J F" |
|
], |
|
"last": "Gales", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Hain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Mrva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Woodland", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "249--252", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Evermann, G., H.Y. Chan, M.J.F. Gales, T. Hain, X. Liu, D. Mrva, L. Wang, and P.C. Woodland, \"Development of the 2003 CU-HTK Conversational Telephone Speech Transcription System,\" In: Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP), 2004, Montreal, Canada, pp. I-249-I-252.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Automatic Transcription of Unknown Words in a Speech Recognition system", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Haeb-Umbach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Beyerlein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Thelen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "840--843", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Haeb-Umbach, R., P. Beyerlein, and E. Thelen, \"Automatic Transcription of Unknown Words in a Speech Recognition system,\" In: Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP), 1995, pp. 840-843.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Implicit modeling of pronunciation variation in automatic speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Hain", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Speech Communication", |
|
"volume": "46", |
|
"issue": "", |
|
"pages": "171--188", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hain, T., \"Implicit modeling of pronunciation variation in automatic speech recognition,\" Speech Communication, 46, 2005, pp. 171-188.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Department of State's Bureau of International Information Programs", |
|
"authors": [ |
|
{ |
|
"first": "U", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "IIP report", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "U.S. Department of State's Bureau of International Information Programs, IIP report, http://usinfo.state.gov/, Dec, 2003.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Pronunciation Variation Speech Recognition without Dictionary Modification on Sparse Database", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Kanokphara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Tesprasit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Thongprasirt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "764--767", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kanokphara, S., V. Tesprasit, and R. Thongprasirt, \"Pronunciation Variation Speech Recognition without Dictionary Modification on Sparse Database,\" In: Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP), 2003, Hong Kong, pp. I-764-I-767.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Development of the CU-HTK 2004 Broadcast News Transcription Systems", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Chan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Evermann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"J F" |
|
], |
|
"last": "Gales", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Mrva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Sim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Woodland", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "861--864", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kim, D. Y., H.Y. Chan, G. Evermann, M.J.F. Gales, D. Mrva, K.C. Sim and P.C. Woodland, \"Development of the CU-HTK 2004 Broadcast News Transcription Systems,\" In: Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP), 2005, Philadelphia, USA, pp. 861-864.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Lightly Supervised and Unsupervised Acoustic Model Training", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Lamel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J.-L", |
|
"middle": [], |
|
"last": "Gauvain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Adda", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Computer Speech and Language", |
|
"volume": "16", |
|
"issue": "", |
|
"pages": "115--129", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lamel, L., J.-L. Gauvain, and G. Adda, \"Lightly Supervised and Unsupervised Acoustic Model Training,\" Computer Speech and Language, 16, 2002, pp. 115-129.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A Taiwanese Text-to-Speech System with Applications to Language Learning", |
|
"authors": [ |
|
{ |
|
"first": "M.-S", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R.-C", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y.-C", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D.-C", |
|
"middle": [], |
|
"last": "Lyu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R.-Y", |
|
"middle": [], |
|
"last": "Lyu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proc. Int. Conf. on Advanced Learning Technologies (ICALT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "91--95", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liang, M.-S., R.-C. Yang, Y.-C. Chiang, D.-C. Lyu and R.-Y. Lyu, \"A Taiwanese Text-to-Speech System with Applications to Language Learning,\" In: Proc. Int. Conf. on Advanced Learning Technologies (ICALT), 2004, Joensuu, Finland, pp. 91-95.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Construct a Multi-Lingual Speech Corpus in Taiwan with Extracting Phonetically Balanced Articles", |
|
"authors": [ |
|
{ |
|
"first": "M.-S", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D.-C", |
|
"middle": [], |
|
"last": "Lyu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y.-C", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R.-Y", |
|
"middle": [], |
|
"last": "Lyu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proc. Int. Conf. on Spoken Language Processing (ICSLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liang, M.-S., D.-C. Lyu, Y.-C. Chiang and R.-Y. Lyu, \"Construct a Multi-Lingual Speech Corpus in Taiwan with Extracting Phonetically Balanced Articles,\" In: Proc. Int. Conf. on Spoken Language Processing (ICSLP), 2004, Jeju Island, Korea. Data Driven Approaches to Phonetic Transcription with Integration of 253", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Toward Constructing A Multilingual Speech Corpus for Taiwanese (Minnan), Hakka, and Mandarin", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Automatic Speech Recognition and Grapheme-to-Phoneme for Spoken Buddhist Sutra Lyu", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "1--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Automatic Speech Recognition and Grapheme-to-Phoneme for Spoken Buddhist Sutra Lyu, R.Y., M.S. Liang, and Y.C. Chiang, \"Toward Constructing A Multilingual Speech Corpus for Taiwanese (Minnan), Hakka, and Mandarin,\" International Journal of Computational Linguistics & Chinese Language Processing (IJCLCLP), 9(2), August 2004, pp. 1-12.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Language Model and Speaking Rate Adaptation for Spontaneous Presentation Speech Recognition", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Nanjo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Kawahara", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "IEEE Transaction on Speech and Audio Processing", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "391--400", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nanjo, H., and T. Kawahara, \"Language Model and Speaking Rate Adaptation for Spontaneous Presentation Speech Recognition,\" IEEE Transaction on Speech and Audio Processing, vol. 12, Jul. 2004, pp. 391-400.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Very Large Vocabulary Speech Recognition System for Automatic Transcription of Czech Broadcast Programs", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Nouza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Nejedlova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Zdansky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Kolorenc", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proc. Int. Conf. on Spoken Language Processing (ICSLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nouza, J., D. Nejedlova, J. Zdansky and J. Kolorenc, \"Very Large Vocabulary Speech Recognition System for Automatic Transcription of Czech Broadcast Programs,\" In: Proc. Int. Conf. on Spoken Language Processing (ICSLP), 2004, Jeju, Korea.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Automated Lexical Adaptation and Speaker Clustering based on Pronunciation Habits for Non-Native Speech Recognition", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Raux", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proc. Int. Conf. on Spoken Language Processing (ICSLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Raux, A., \"Automated Lexical Adaptation and Speaker Clustering based on Pronunciation Habits for Non-Native Speech Recognition,\" In: Proc. Int. Conf. on Spoken Language Processing (ICSLP), 2004, Jeju Island, Korea.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Pronunciation change in conversation speech and its implications for automatic speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Saraclar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Khudanpur", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Computer Speech and Language", |
|
"volume": "18", |
|
"issue": "", |
|
"pages": "375--395", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saraclar, M., and S. Khudanpur, \"Pronunciation change in conversation speech and its implications for automatic speech recognition,\" Computer Speech and Language, 18, 2004, pp. 375-395.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Automatic Transcription of Continuous Speech using Unsupervised and Incremental Training", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Sarada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Hemalatha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Nagarajan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hema", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Murthy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proc. Int. Conf. on Spoken Language Processing (ICSLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sarada, G.L., and N. Hemalatha, T. Nagarajan and Hema A. Murthy, \"Automatic Transcription of Continuous Speech using Unsupervised and Incremental Training,\" In: Proc. Int. Conf. on Spoken Language Processing (ICSLP), 2004, Jeju, Korea.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "The Four Basic Sutra in Taiwanese", |
|
"authors": [ |
|
{ |
|
"first": "D.-G", |
|
"middle": [], |
|
"last": "Sik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sik, D.-G., The Four Basic Sutra in Taiwanese, DiGuan Temple, HsinChu, Taiwan, 2004.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Earth Treasure Sutra in Taiwanese", |
|
"authors": [ |
|
{ |
|
"first": "D.-G", |
|
"middle": [], |
|
"last": "Sik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sik, D.-G., Earth Treasure Sutra in Taiwanese, DiGuan Temple, HsinChu, Taiwan, 2004.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Speech Recognition Error Analysis on the English MALACH Corpus", |
|
"authors": [ |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Siohan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "B. Ramabhadran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zweig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proc. Int. Conf. on Spoken Language Processing (ICSLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Siohan, O., B. Ramabhadran, and G. Zweig, \"Speech Recognition Error Analysis on the English MALACH Corpus,\" In: Proc. Int. Conf. on Spoken Language Processing (ICSLP), 2004, Jeju Island, Korea.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "The IBM 2004 Conversational Telephony System for Rich Transcription", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Soltau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Kingsbury", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Mangu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Povey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Saon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Zweig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Soltau, H., B. Kingsbury, L. Mangu, D. Povey, G. Saon and G. Zweig, \"The IBM 2004 Conversational Telephony System for Rich Transcription,\" In: Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP), 2005, Philadelphia, USA, pp. I-205-I-208.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Sutra on the original Vows of Bodhisattva Earth Treasure in Chinese", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Tripitaka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tripitaka, S. S., Sutra on the original Vows of Bodhisattva Earth Treasure in Chinese. http://book.bfnn.org/article/0016.htm, 2005.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Improved pronunciation modeling by inverse word frequency and pronunciation entropy", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Tsai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Chou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proc. IEEE Int. Workshop on Automatic Speech Recognition and Understanding (ASRU)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "53--56", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tsai, M.Y., F.C. Chou, and L.S. Lee, \"Improved pronunciation modeling by inverse word frequency and pronunciation entropy,\" In: Proc. IEEE Int. Workshop on Automatic Speech Recognition and Understanding (ASRU), 2002, pp. 53-56.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Application of Simultaneous Decoding Algorithm to Automatic Transcription of Known and Unknown Words", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "589--592", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wu, J., and V. Gupta, \"Application of Simultaneous Decoding Algorithm to Automatic Transcription of Known and Unknown Words,\" In: Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP), 1999, Phoenix, USA, pp. 589-592.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "The HTK Book, 3", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Young", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Evermann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Gales", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Hain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Kershaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Moore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Odell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Ollason", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Povey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Valtchev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Woodland", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Young, S., G. Evermann, M. Gales, T. Hain, D. Kershaw, X. (Andrew) Liu, G. Moore, J. Odell, D. Ollason, D. Povey, V. Valtchev and P. Woodland, The HTK Book, 3.4 ed., 2008.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "The process of transcribing Chinese text into Taiwanese pronunciation using the ASR technique.e.g.\" zu si \u014bo bun ui bi\u0264 sua t hua t \"", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Driven Approaches to Phonetic Transcription with Integration of 237 Automatic Speech Recognition and Grapheme-to-Phoneme for Spoken Buddhist Sutra speaker adaptation techniques.", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "The flow chart of the phonetic transcription of Taiwanese Buddhist Sutra (TBS) incorporating pronunciation variation rules.", |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "The sausage searching net. The net is constructed from the multiple pronunciations of each Chinese character from the Formosa Lexicon. The corresponding Chinese characters with multiple pronunciations are also shown.", |
|
"uris": null |
|
}, |
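As a rough illustration of the sausage searching net described in Figures 3 and 5, the sketch below builds one slot of candidate pronunciations per Chinese character and falls back to the whole syllable inventory for a character missing from the lexicon. The lexicon entries, the syllable inventory, and all function names here are toy assumptions, not the authors' data.

```python
# Hypothetical sketch (toy data, invented names) of the sausage searching net
# of Figures 3 and 5: one slot of candidate syllables per Chinese character.
ALL_SYLLABLES = ["gip", "lip", "zip", "xe", "xue", "ka", "ke", "kue"]  # toy inventory

lexicon = {
    "日": ["gip", "lip", "zip"],  # sun: multiple pronunciations from the lexicon
    "火": ["xe", "xue"],          # fire
}

def build_sausage_net(chars, lexicon, all_syllables=ALL_SYLLABLES):
    """One slot per character: its lexicon pronunciations, or, for a
    character missing from the lexicon, every syllable in the inventory."""
    return [list(lexicon.get(c, all_syllables)) for c in chars]

net = build_sausage_net("日火山", lexicon)  # 山 is missing from the toy lexicon
print(net)  # [['gip', 'lip', 'zip'], ['xe', 'xue'], [<all toy syllables>]]
```

The decoder then picks one arc per slot, so the recognized path directly yields one pronunciation per character.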
|
"FIGREF4": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Liang et al. while as another 31 utterances are used for acoustic model development.", |
|
"uris": null |
|
}, |
|
"FIGREF5": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "The sausage searching net with missing character 0 C , where all syllables, 00 S , 01 S ,..., 0 N S , are used as its possible pronunciations.", |
|
"uris": null |
|
}, |
|
"FIGREF6": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "An example of the extended sausage searching net. The net is constructed from the multiple pronunciations in lexicon and expanded using pronunciation-variation rules for each Chinese character according to the rule \"/\u0264/ \u2192 /o/\". and Grapheme-to-Phoneme for Spoken Buddhist Sutra", |
|
"uris": null |
|
}, |
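The rule-based net expansion of Figure 6 can be sketched as a simple string substitution over each slot's candidates: every syllable containing the base-form unit also contributes a variant with the surface form substituted, shown here for the rule "ɤ → o". This is an illustrative simplification under assumed toy syllables, since the paper's actual rules operate on context-dependent triphones.

```python
# Hypothetical sketch: extend a sausage net with one PV rule (base -> surface).
def expand_with_rule(net, base, surface):
    expanded = []
    for slot in net:
        # Add a substituted variant for every candidate containing the base unit.
        variants = {syl.replace(base, surface) for syl in slot if base in syl}
        expanded.append(sorted(set(slot) | variants))
    return expanded

net = [["tsɤ", "tse"], ["bɤk"]]  # toy slots of candidate syllables
print(expand_with_rule(net, "ɤ", "o"))
# [['tse', 'tso', 'tsɤ'], ['bok', 'bɤk']]
```

Keeping both the original and the substituted candidates lets the ASR decoder, rather than the rule itself, decide which surface form the speaker actually produced.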
|
"FIGREF8": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "probability of b i and s j , respectively.", |
|
"uris": null |
|
}, |
|
"FIGREF10": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Syllable error rate (SER) under uni-gram, and General-Sau-Net, SI w/ adaptation. See text in subsection 3.2 for notations.", |
|
"uris": null |
|
}, |
|
"FIGREF12": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "The recognition result (Syllable error rate) v.s. the number of ranked rules sorted according to different measures", |
|
"uris": null |
|
}, |
|
"FIGREF13": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "The recognition result (syllable error rate) vs. the perplexity sorted according to different measures, including mutual-information (MI), joint-probability (JP), and conditional-probability (CP) as well as the Baseline criterion The recognition result (syllable error rate) under four kinds of net according to different measures, including mutual-information (MI), joint-probability (JP), and conditional-probability (CP) as well as the Baseline criterion. to Phonetic Transcription with Integration of 251Automatic Speech Recognition and Grapheme-to-Phoneme for Spoken Buddhist Sutra", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"content": "<table><tr><td>Chinese text of Buddhist Sutra</td><td>\u5730\u85cf\u83e9\u85a9\u672c\u9858\u7d93\uff1a\u5982\u662f\u6211\u805e\u3002</td></tr><tr><td/><td>\u4e00\u6642\u4f5b\u5728\u5fc9\uf9dd\u5929\uff0c\u70ba\u6bcd\uf96f\u6cd5\u3002</td></tr><tr><td>Transcription of Taiwanese</td><td>t\u00e8 ts\u00f2\u014b p'\u00f2 s\u00e1 t p\u00fan g\u00f9an k\u00ed\u014b: z\u00f9 s\u012b \u014b\u00f3 b\u00f9n</td></tr><tr><td>Pronunciation</td><td>\u00ed t s\u012d h\u00fa t ts\u00e0i t\u0264\u0304 l\u00ec t\u00eden, u\u00ec bi\u0264\u0302 s\u00faa t hu\u0101 t</td></tr><tr><td>English translation in meaning</td><td>Sutra of Earth Treasure: Thus I heard, once the</td></tr><tr><td/><td>Buddha was in Dao Li Heaven to expound the</td></tr><tr><td/><td>Dharma to his mother</td></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"content": "<table><tr><td/><td>ForSDAT-01</td><td/><td colspan=\"2\">Partial ForSDAT-02</td></tr><tr><td>Utterance</td><td>92158</td><td/><td colspan=\"2\">19731</td></tr><tr><td>Number of People</td><td colspan=\"2\">100 (male: 50, female: 50)</td><td colspan=\"2\">131 (male: 72, female: 59)</td></tr><tr><td>Number of Syllable</td><td>179730</td><td/><td colspan=\"2\">45865</td></tr><tr><td colspan=\"2\">Number of Distinct Triphones 1356</td><td/><td>1194</td></tr><tr><td>Number of Total Triphones</td><td>555731</td><td/><td colspan=\"2\">104894</td></tr><tr><td>Time (hr)</td><td>22.43</td><td/><td>7.2</td></tr><tr><td colspan=\"4\">Table 3. TBS (Taiwanese Buddhist Sutra) speech corpus.</td></tr><tr><td colspan=\"2\">Buddhist Corpus Category Utterance</td><td colspan=\"2\"># of Syllable</td><td>Time (min)</td></tr><tr><td>Adaptation</td><td>31</td><td>359</td><td/><td>2.62</td></tr><tr><td>Test</td><td>502</td><td>5909</td><td/><td>43.23</td></tr><tr><td>Other</td><td>1086</td><td>12147</td><td/><td>179.88</td></tr><tr><td>Total</td><td>1619</td><td>18415</td><td/><td>225.73</td></tr></table>" |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"content": "<table><tr><td/><td>Pronunciation 1</td><td>Pronunciation 2</td><td>Pronunciation 3</td><td>\u2026\u2026</td></tr><tr><td>\u65e5(sun)</td><td>g\u00ed p</td><td>l\u00ed p</td><td>z\u00ed p</td><td>\u2026\u2026</td></tr><tr><td>\u706b(fire)</td><td>x\u00ea</td><td>x\u00f9e</td><td/><td/></tr><tr><td>\u52a0(add)</td><td>k\u00e1</td><td>k\u00e9</td><td>k\u00fae</td><td>\u2026\u2026</td></tr><tr><td>\u53e9(knock)</td><td>k'\u00e1u</td><td>k\u00ec\u0264</td><td>k'\u014d k</td><td>\u2026\u2026</td></tr><tr><td>\uf91c(egg)</td><td>l\u014b\u0304</td><td>l\u00fban</td><td>n\u014b\u0304</td><td>\u2026\u2026</td></tr><tr><td>\u5750(sit)</td><td>ts\u00e9</td><td>ts\u0264\u0304</td><td>ts\u016be</td><td>\u2026\u2026</td></tr></table>" |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"content": "<table><tr><td>Variation types</td><td>Examples</td></tr><tr><td>Bai-du-in / Wen-du-in</td><td>si\u014b s\u1ebd/s\u0129</td></tr><tr><td>Dialectal difference</td><td>z l/g</td></tr><tr><td/><td>\u0264 \u2194\u0254 o</td></tr><tr><td/><td>n l</td></tr><tr><td/><td>b m</td></tr><tr><td/><td>\u0129\u0169 \u0129\u00f5</td></tr><tr><td>Personal pronunciation error</td><td>g {}</td></tr><tr><td/><td>b {}</td></tr><tr><td/><td>h {}</td></tr></table>" |
|
}, |
|
"TABREF7": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"content": "<table><tr><td/><td colspan=\"14\">triphone b i to triphone s j , P is the number of surface-form and</td></tr><tr><td/><td colspan=\"2\">base-form, i N</td><td>= \u2211</td><td>j</td><td>i j n</td><td>,</td><td>M</td><td>j</td><td>= \u2211</td><td>i</td><td>i j n</td><td>and</td><td>N</td><td>i j = \u2211 \u2211</td><td>ij n</td></tr><tr><td/><td>bh-er</td><td>i-ng</td><td/><td/><td colspan=\"2\">i-n</td><td/><td/><td colspan=\"2\">\u2026\u2026</td><td/><td>s j</td><td colspan=\"2\">\u2026\u2026 bh-o</td><td>a-n</td><td>a-m</td></tr><tr><td>bh-er</td><td>237</td><td>0</td><td/><td/><td>0</td><td/><td/><td/><td colspan=\"2\">\u2026\u2026</td><td/><td>n 1j</td><td colspan=\"2\">\u2026\u2026</td><td>30</td><td>0</td><td>0</td><td>267</td></tr><tr><td>i-ng</td><td>0</td><td colspan=\"2\">1273</td><td/><td colspan=\"2\">84</td><td/><td/><td colspan=\"2\">\u2026\u2026</td><td/><td>n 2j</td><td colspan=\"2\">\u2026\u2026</td><td>0</td><td>0</td><td>0</td><td>1373</td></tr><tr><td>\u2026</td><td>\u2026</td><td>\u2026</td><td/><td/><td colspan=\"2\">\u2026</td><td/><td/><td/><td/><td/><td>\u2026</td><td/><td>\u2026</td><td>\u2026</td><td>\u2026</td></tr><tr><td>b i</td><td>n i1</td><td>n i2</td><td/><td/><td colspan=\"2\">n i3</td><td/><td/><td colspan=\"2\">\u2026\u2026</td><td/><td>n ij</td><td colspan=\"2\">\u2026\u2026 n i,p-2 n i,p-1</td><td>n iP</td><td>N i</td></tr><tr><td>\u2026</td><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"2\">\u2026\u2026</td><td/><td/><td colspan=\"2\">\u2026\u2026</td></tr><tr><td>a-m</td><td>0</td><td>0</td><td/><td/><td>0</td><td/><td/><td/><td colspan=\"2\">\u2026\u2026</td><td/><td>n Pj</td><td colspan=\"2\">\u2026\u2026</td><td>0</td><td>35</td><td>834</td></tr><tr><td/><td>241</td><td colspan=\"2\">1315</td><td colspan=\"3\">1102</td><td/><td/><td/><td/><td/><td>M j</td><td/><td>107 1873</td><td>870</td><td>N</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |