{
"paper_id": "O04-3001",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:00:58.705683Z"
},
"title": "Toward Constructing A Multilingual Speech Corpus for Taiwanese (Min-nan), Hakka, and Mandarin",
"authors": [
{
"first": "Ren-Yuan",
"middle": [],
"last": "Lyu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Chang Gung University",
"location": {
"settlement": "Taoyuan",
"country": "Taiwan"
}
},
"email": "rylyu@mail.cgu.edu.tw"
},
{
"first": "Min-Siong",
"middle": [],
"last": "Liang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Chang Gung University",
"location": {
"settlement": "Taoyuan",
"country": "Taiwan"
}
},
"email": ""
},
{
"first": "Yuang-Chin",
"middle": [],
"last": "Chiang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing Hua University",
"location": {
"addrLine": "Hsin-chu",
"country": "Taiwan"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The Formosa speech database (ForSDat) is a multilingual speech corpus collected at Chang Gung University and sponsored by the National Science Council of Taiwan. It is expected that a multilingual speech corpus will be collected, covering the three most frequently used languages in Taiwan: Taiwanese (Min-nan), Hakka, and Mandarin. This 3-year project has the goal of collecting a phonetically abundant speech corpus of more than 1,800 speakers and hundreds of hours of speech. Recently, the first version of this corpus containing speech of 600 speakers of Taiwanese and Mandarin was finished and is ready to be released. It contains about 49 hours of speech and 247,000 utterances.",
"pdf_parse": {
"paper_id": "O04-3001",
"_pdf_hash": "",
"abstract": [
{
"text": "The Formosa speech database (ForSDat) is a multilingual speech corpus collected at Chang Gung University and sponsored by the National Science Council of Taiwan. It is expected that a multilingual speech corpus will be collected, covering the three most frequently used languages in Taiwan: Taiwanese (Min-nan), Hakka, and Mandarin. This 3-year project has the goal of collecting a phonetically abundant speech corpus of more than 1,800 speakers and hundreds of hours of speech. Recently, the first version of this corpus containing speech of 600 speakers of Taiwanese and Mandarin was finished and is ready to be released. It contains about 49 hours of speech and 247,000 utterances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "To design a speaker independent speech recognition system, it is essential to collect a largescale speech database. Taiwan (also called Formosa historically), which has become famous for its IT industry, is basically a multilingual society. People living in Taiwan usually speak at least two of the three major languages, including Taiwanese (also called Min-nan in the linguistics literature), Hakka and Mandarin, which are all members of the Chinese language family. In the past several decades, most of the researchers studying natural language processing, speech recognition and speech synthesis in Taiwan have devoted themselves to research on Mandarin speech. Several speech corpora of Mandarin speech have, thus, been collected and distributed [Wang et al., 2000; Godfrey, 1994] . However, little has been done on the other two languages used in daily life. In this paper, we describe a governmentsponsored project which aims to collect a large-scale multilingual speech corpus, namely, the Formosa Speech Database (ForSDat), covering these three languages used in Taiwan. The construction of ForSDat is a 3-year project, the goal of which is to collect hundreds of hours of speech from up to 1,800 speakers. So far, we have finished about one-thrid of what the project is expected to achieve. This paper is organized as follows. Section 1 is the introduction. Section 2 describes the Formosa Phonetic Alphabet (ForPA), which is being used to transcribe all the speech data and the pronunciation lexicons. Section 3 discusses the phonetically balanced word sheets used to record speech utterances. Section 4 reports the software tools used for corpus collection. Section 5 describes the information obtained about speakers. Section 6 provides information about the database information. Section 7 discusses data validation, and section 8 is a conclusion.",
"cite_spans": [
{
"start": 751,
"end": 770,
"text": "[Wang et al., 2000;",
"ref_id": "BIBREF0"
},
{
"start": 771,
"end": 785,
"text": "Godfrey, 1994]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "One of the preliminary jobs involved in constructing a speech corpus is to build up a pronunciation lexicon. We have set up several pronunciation lexicons composed of more than 60,000 words for Taiwanese, more than 70,000 words for Mandarin and more than 20,000 words for Hakka. Each item in the lexicons contains a Chinese character string and a string of phonetic symbols encoded in the Formosa Phonetic Alphabet (ForPA), which will be described in the following paragraphs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Phonetic Alphabet and the Pronunciation Lexicon",
"sec_num": "2."
},
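The paragraph above describes each lexicon item as a Chinese character string paired with a ForPA phonetic string. A minimal sketch of such an entry and a loading routine is given below; the tab-separated file layout and the field names are our assumptions for illustration, not the project's actual lexicon format.

```python
from typing import List, NamedTuple

class LexiconEntry(NamedTuple):
    """One item of a pronunciation lexicon: a Chinese word and its ForPA transcription."""
    word: str    # Chinese character string
    forpa: str   # ForPA phonetic symbols, syllables joined by "_"

def load_lexicon(path: str) -> List[LexiconEntry]:
    """Read an assumed tab-separated lexicon file: one <word> TAB <ForPA transcription> per line."""
    entries = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                word, forpa = line.split("\t", 1)
                entries.append(LexiconEntry(word, forpa))
    return entries

# An entry modeled on Table 4 of the paper:
example = LexiconEntry("觀世音菩薩", "guan1_se3_im1_po5_sat7")
```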
{
"text": "Many symbolic systems have been developed for labeling the sounds of languages used throughout the world. One of the most popular systems is the International Phonetic Alphabet (IPA). Since many IPA symbols are not defined in the ASCII code set and are not easy to manipulate, many ASCII-coded IPA symbolic sets have been proposed in the literature. Two popular systems are SAMPA [Wells, 2003] and WorldBet [Hieronymus, 1994] . It has claimed that one can select parts of these phone sets for a specific language. However, both ASCIIcoded phonetic systems have many symbols that are difficult to read, such as \"@\"or \"&\". In addition, since these systems are designed for all the languages used around the world, they are too complex to be applied to some local languages, like those that will be addressed here.",
"cite_spans": [
{
"start": 380,
"end": 393,
"text": "[Wells, 2003]",
"ref_id": "BIBREF9"
},
{
"start": 407,
"end": 425,
"text": "[Hieronymus, 1994]",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Formosa Phonetic Alphabet (ForPA)",
"sec_num": "2.1"
},
{
"text": "The most widely known phonetic symbol sets used to transcribe Mandarin Chinese are the Mandarin Phonetic Alphabet (MPA, also called Zhu-in-fu-hao) and Pinyin (Han-yu-pinyin), which have been officially used in Taiwan and Mainland China, respectively, for many years. However, both systems are inadequate for application to the other members of the Chinese language family, like Taiwanese (Min-nan) and Hakka. Among the phonetic systems useful for Taiwanese and Hakka, there are Church Romanized Writing (CR, also call Peh-e-ji, \u300c\u767d\u8a71\u5b57\u300d) [Chiung 2001 ] for Taiwanese and the Taiwan Language Phonetic Alphabet (TLPA) [Ang 2002 ] for Taiwanese and Hakka. Because the same phonemes are represented using different symbols in Pinyin, CR and TLPA, it is confusing to learn these phonetic systems simultaneously. For example, the syllable \"pa(\u516b)\" in TLPA and \"pa(\u8db4)\" in CR may be confused with each other because the phoneme /p/ is pronounced differently in the two systems.",
"cite_spans": [
{
"start": 535,
"end": 547,
"text": "[Chiung 2001",
"ref_id": "BIBREF14"
},
{
"start": 613,
"end": 622,
"text": "[Ang 2002",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Formosa Phonetic Alphabet (ForPA)",
"sec_num": "2.1"
},
{
"text": "Therefore, it is necessary to design a more suitable phoneme set for multilingual speech data collection and labeling [Zu, 2002] [Lyu, 2000] . The whole phone set for the three major languages used in Taiwan is listed in Table 1 for four phonetic systems: MPA, Pinyin, IPA, and the newly proposed ForPA. Table 1 also lists examples of syllables and characters which contain the target phonemes.",
"cite_spans": [
{
"start": 118,
"end": 128,
"text": "[Zu, 2002]",
"ref_id": "BIBREF2"
},
{
"start": 129,
"end": 140,
"text": "[Lyu, 2000]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 221,
"end": 228,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 304,
"end": 311,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Formosa Phonetic Alphabet (ForPA)",
"sec_num": "2.1"
},
{
"text": "It is known that phonemes can be defined in many different ways, depending on the level of detail desired. The labeling philosophy adopted in ForPA is that when faced with various choices, we prefer not to divide a phoneme into distinct allophones, except in cases where the sound is clearly different to the ear or the spectrogram is clearly different to the eye. Since labeling is often performed by engineering students and researchers (as opposed to professional phoneticians), it is generally safer to keep the number of units as small as possible, assuming that the recognizer will be able to learn any finer distinctions that might exist within any context. Generally speaking, ForPA might be considered as a subset of IPA, but it is more suitable for application to the languages used in Taiwan. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formosa Phonetic Alphabet (ForPA)",
"sec_num": "2.1"
},
{
"text": "Before producing word sheets for speakers to utter, a complete pronunciation lexicon needs to be prepared. A lexicon has been collected in this project to meet the requirement. This lexicon, called the Formosa Lexicon (ForLex), was adapted from three other lexicons: the CKIP Mandarin lexicon , Gang's Taiwanese lexicon, and Syu's Hakka lexicon [CKIP 2003 ] [Syu 2001 ]. Some statistical information about the lexicon was listed in Table 2 . ",
"cite_spans": [
{
"start": 345,
"end": 355,
"text": "[CKIP 2003",
"ref_id": null
},
{
"start": 358,
"end": 367,
"text": "[Syu 2001",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 432,
"end": 439,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Formosa Lexicon (ForLex): A Pronunciation lexicon composed of Taiwanese, Hakka and Mandarin",
"sec_num": "2.2"
},
{
"text": "Based on the three pronunciation lexicons transcribed in ForPA, we extracted sets of distinct syllables and inter-syllabic bi-phones from the three languages. The statistics of the phonetic units considered here are listed in Table 3 . In order to collect speech data related to the coarticulation effect of continuous speech, we extracted phonetically abundant word sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 226,
"end": 233,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "The process of producing phonetically balanced word sheets",
"sec_num": "3."
},
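To make the extraction above concrete, the sketch below collects the distinct base-syllables and inter-syllabic bi-phones from ForPA word transcriptions. It assumes syllables are joined by "_" with a trailing tone digit (as in Table 4) and that a syllable-to-phone table is available; the syllable_phones mapping is a placeholder, not part of the released corpus.

```python
from typing import Dict, Iterable, List, Set, Tuple

def extract_units(transcriptions: Iterable[str],
                  syllable_phones: Dict[str, List[str]]) -> Tuple[Set[str], Set[Tuple[str, str]]]:
    """Collect distinct base-syllables and inter-syllabic bi-phones from ForPA transcriptions."""
    base_syllables: Set[str] = set()
    inter_biphones: Set[Tuple[str, str]] = set()
    for trans in transcriptions:
        syls = trans.split("_")                         # "guan1_se3_im1" -> ["guan1", "se3", "im1"]
        bases = [s.rstrip("0123456789") for s in syls]  # strip tone digits to get base-syllables
        base_syllables.update(bases)
        for left, right in zip(bases, bases[1:]):
            # inter-syllabic bi-phone: last phone of the left syllable, first phone of the right one
            inter_biphones.add((syllable_phones[left][-1], syllable_phones[right][0]))
    return base_syllables, inter_biphones
```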
{
"text": "Therefore, the chosen phonetic units were not only base-syllables, phones, and RCD phones, but also Initial-Finals, RCD Initial-Finals and inter-syllabic RCD phones. The process of selecting such a word set is actually a set-covering optimization problem [Shen et al., 1999] , which is NP-hard. Here, we adopted a simple greedy heuristic approximate solution [Cormen, 2001 ].",
"cite_spans": [
{
"start": 255,
"end": 274,
"text": "[Shen et al., 1999]",
"ref_id": "BIBREF5"
},
{
"start": 359,
"end": 372,
"text": "[Cormen, 2001",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The process of producing phonetically balanced word sheets",
"sec_num": "3."
},
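The greedy heuristic mentioned above can be sketched as follows: repeatedly pick the word whose transcription covers the most still-uncovered phonetic units until everything coverable is covered. This is an illustrative re-implementation of the textbook greedy set-cover approximation, not the selection code actually used in the project.

```python
from typing import Dict, List, Set

def greedy_balanced_words(word_units: Dict[str, Set[str]]) -> List[str]:
    """Greedy set cover: word_units maps each candidate word to the set of phonetic units
    (phones, RCD phones, Initial-Finals, base-syllables, inter-syllabic bi-phones, ...) it contains."""
    uncovered: Set[str] = set().union(*word_units.values())
    selected: List[str] = []
    while uncovered:
        # pick the word covering the largest number of still-uncovered units
        best = max(word_units, key=lambda w: len(word_units[w] & uncovered))
        gain = word_units[best] & uncovered
        if not gain:        # safety net: nothing left that any word can cover
            break
        selected.append(best)
        uncovered -= gain
    return selected
```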
{
"text": "First, we set the requirements of the word set as to cover the following phonetic units:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The process of producing phonetically balanced word sheets",
"sec_num": "3."
},
{
"text": "Base-syllables and Inter-syllabic RCD phones. Accordingly, the selected word set could cover all the phones, Initial-Finals, RCD phones, RCD Initial-Finals, Base-syllables and Intersyllabic RCD phones. In this way, we could obtain several sets of words for our balance-word data sheets. All the statistics of the phonetic units considered here are listed in Table 3 . [Liang 2003 ]. ",
"cite_spans": [
{
"start": 368,
"end": 379,
"text": "[Liang 2003",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 358,
"end": 365,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "The process of producing phonetically balanced word sheets",
"sec_num": "3."
},
{
"text": "The process of producing data sheets is depicted in Fig.2 . Before we produced the data sheets, we defined the sheets' coverage rate. The coverage rate of the sheets was defined as the total number of base-syllables (or inter-syllabic phones) over the number of all possible distinct base-syllables (or inter-syllabic phones). The format of the data sheet is partially shown in Table 4 . In terms of Taiwanese sheets, although we produced 364 balanced-word sets in total, we only used sets whose coverage rates exceeded 50%. Because the variation in the numbers of syllables or words in some sets was very high, we merged those sets and then re-segmented them to produce data sheets. Finally, each sheet contained about 200 syllables. The numbers of data sheets and total words were 446 and 37,275, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 57,
"text": "Fig.2",
"ref_id": "FIGREF0"
},
{
"start": 378,
"end": 385,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Data sheets",
"sec_num": "3.1"
},
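Following the definition above, the coverage rate of a sheet set is the number of distinct units it contains divided by the number of all possible distinct units. A minimal sketch of that computation (argument names are ours):

```python
from typing import Iterable, Set

def coverage_rate(sheet_units: Iterable[str], all_units: Set[str]) -> float:
    """Coverage rate = distinct units appearing in the sheets / all possible distinct units."""
    return len(set(sheet_units) & all_units) / len(all_units)

# For example, a Taiwanese sheet set covering 500 of the 832 base-syllables in Table 3
# would have a coverage rate of about 0.60, i.e. above the 50% threshold used for selection.
```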
{
"text": "As for Miaulik-Hakka sheets, all the balanced-word sets were concatenated in sequence and then segmented into data sheets, each of which contained 70 words. Finally, we got 340 data sheets, which consisted of 23,837 words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data sheets",
"sec_num": "3.1"
},
{
"text": "For Mandarin, one phonetically-rich set was segmented equally into ten sheets, and every sheet consisted of roughly 300 words. In addition, all the tonal-syllables were segmented into ten equal-size data sheets. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data sheets",
"sec_num": "3.1"
},
{
"text": "The telephone system is set up in the Multi-media Signal Process Laboratory at Chang Gung University. The speakers dial into the laboratory using a handset telephone. Before recording, we give the speakers prompt sheets. The input signal is in format of 8K sampling rate with 8bits \u00b5-law compression. The speakers utter words while reading the prompt sheet, and supervised prompt speech is played to help the speakers follow the prompt speech to finish the recording. After recording, all speech data are saved in a unique directory. Figure 3 shows the recording process carried out using the telephone system.",
"cite_spans": [],
"ref_spans": [
{
"start": 534,
"end": 542,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "The telephone recording system",
"sec_num": "4.1"
},
{
"text": "When we record a waveform into a computer, it is not convenient to type the file name necessary for saving it. Therefore, we use a good tool (DQS3.1) [Chiang 2002 ] to record speech. If we create a script in a specific form for this software, we can record the waveform easily and get a labeled file, which contains information of transcription using ForPA. Then, we simply set up the system on a notebook computer and take it wherever we want to record speech.",
"cite_spans": [
{
"start": 150,
"end": 162,
"text": "[Chiang 2002",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The microphone recording system",
"sec_num": "4.2"
},
{
"text": "We employ several part-time assistants to recruit speakers around Taiwan. Each speaker is asked to record one sheet and receives a remuneration after finishing recording. Each parttime assistant receives a remuneration when they recruits a speaker.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speaker recruiting",
"sec_num": "5."
},
{
"text": "After a recording is finished, we ask the speakers to provide us with their profiles. This is useful for arranging speech data later. The user can also design experiments according to these profiles (see Fig. 4 ). The profile of a speaker includes the following attributes:",
"cite_spans": [],
"ref_spans": [
{
"start": 204,
"end": 210,
"text": "Fig. 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Profiles of speakers",
"sec_num": "5.1"
},
{
"text": "i. the name and gender of the speaker;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Profiles of speakers",
"sec_num": "5.1"
},
{
"text": "ii. the age and birthplace of the speaker;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Profiles of speakers",
"sec_num": "5.1"
},
{
"text": "iii. the location of the speaker and time; iv. the number of years of education of the speaker. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Profiles of speakers",
"sec_num": "5.1"
},
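The profile attributes listed in items i-iv above can be kept in a simple record. The sketch below is only an illustration; the field names are ours, not the schema of the project's ACCESS database.

```python
from dataclasses import dataclass

@dataclass
class SpeakerProfile:
    """Speaker profile collected after each recording session (cf. items i-iv above)."""
    name: str
    gender: str            # "female" or "male"
    age: int
    birthplace: str
    location: str          # where the recording took place
    recording_time: str    # when the recording took place
    years_of_education: int
```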
{
"text": "We save the utterance in a binary file. If the speech is recorded using a microphone, we save it as a 16KHz/16bits PCM file and a corresponding label file that contains the phonetic transcription for a word. Otherwise, we save the utterances as a 8KHz/8bits \u00b5 -law file if the speech data were obtained over a telephone.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech data format",
"sec_num": "5.2"
},
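For the microphone channel, a 16 kHz/16-bit PCM utterance can be written with Python's standard wave module, as sketched below; telephone data would instead be stored as 8 kHz/8-bit µ-law samples. The file name used here is hypothetical.

```python
import wave

def save_pcm_utterance(path: str, samples: bytes) -> None:
    """Write 16 kHz / 16-bit mono PCM samples to a WAV file (microphone channel)."""
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)       # mono
        wav.setsampwidth(2)       # 16 bits = 2 bytes per sample
        wav.setframerate(16000)   # 16 kHz sampling rate
        wav.writeframes(samples)

# save_pcm_utterance("blwr00000.wav", pcm_bytes)
# the matching label file (e.g. blwr00000.lab) would hold the ForPA transcription
```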
{
"text": "The database has been collected over both microphone and telephone channels, namely, ForSDat-TW01, ForSDat-MD01 and ForSDat-TW02, respectively. The tag \"TW01\" means that a portion of the database was collected in 2001 in Taiwanese. In the other hand, the tag Taiwanese (Min-nan), Hakka, and Mandarin \"M0\" means that the recording channel used was a microphone and gender was female, and so on. Every speaker has a unique serial number and speech data, which contain a transcription of waveforms made in the early stage and are stored in a unique folder named according to the serial number. The database structure is shown in Fig.5 . All the statistics of the database are listed in Table 5 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 626,
"end": 631,
"text": "Fig.5",
"ref_id": "FIGREF4"
},
{
"start": 683,
"end": 690,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Database information",
"sec_num": "6."
},
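As described above and in Fig. 5, a tag such as "TW02-T1" encodes the language and collection year together with the recording channel and speaker gender. A small parser for that naming convention, inferred from the text and the figure caption rather than taken from the corpus tools, might look like this:

```python
def parse_corpus_tag(tag: str) -> dict:
    """Parse a tag like 'TW01-M0': TW01 = Taiwanese collected in 2001,
    M/T = microphone/telephone channel, 0/1 = female/male speaker."""
    part, channel_gender = tag.split("-")
    language = {"TW": "Taiwanese", "MD": "Mandarin"}[part[:2]]
    year = 2000 + int(part[2:])
    channel = {"M": "microphone", "T": "telephone"}[channel_gender[0]]
    gender = {"0": "female", "1": "male"}[channel_gender[1]]
    return {"language": language, "year": year, "channel": channel, "gender": gender}

# parse_corpus_tag("TW02-T1")
# -> {'language': 'Taiwanese', 'year': 2002, 'channel': 'telephone', 'gender': 'male'}
```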
{
"text": "After the speakers have finished recording, the speech data need to be validated. This step can guarantee that the speech data will be useful for training the acoustic models of the speech recognizer. Although the data sheets are designed to be as readable as possible and we provide prompting speech for speakers, the utterances still are not compatible with the prompt. We thus validate the speech data using a specially designed software tool, which has the user interface shown in Fig.6 and the functions described in the following subsections. ",
"cite_spans": [],
"ref_spans": [
{
"start": 485,
"end": 490,
"text": "Fig.6",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Database validation",
"sec_num": "7."
},
{
"text": "We browse all the waveforms using the validation tool and check whether the following problems occur:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 1: pre-processing",
"sec_num": "7.1"
},
{
"text": "1. the voice is cut off;i.e., the speakers pronounce too fast;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 1: pre-processing",
"sec_num": "7.1"
},
{
"text": "2. the voice file is empty;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 1: pre-processing",
"sec_num": "7.1"
},
{
"text": "3. there are other sounds mixed into the waveform, such as the voices of other people or the sounds of vehicles; 4. the speakers laughed when the waveform was being recorded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 1: pre-processing",
"sec_num": "7.1"
},
{
"text": "If any one of the above problems are found, the speech file is considered unusable. If the total number of unusable files exceeds 10% of all the files in the directory, the directory is considered unusable. The speaker will then be asked to record the work sheet again.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 1: pre-processing",
"sec_num": "7.1"
},
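The 10% rule above translates directly into code: a recording directory is rejected when more than 10% of its files have been marked unusable. A minimal sketch, assuming each file has already received a boolean flag from the browsing step:

```python
from typing import Mapping

def directory_is_usable(file_flags: Mapping[str, bool], max_bad_ratio: float = 0.10) -> bool:
    """file_flags maps each waveform file name to True (usable) or False (unusable).
    The directory is kept only if the unusable fraction does not exceed max_bad_ratio."""
    if not file_flags:
        return False
    unusable = sum(1 for ok in file_flags.values() if not ok)
    return unusable / len(file_flags) <= max_bad_ratio
```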
{
"text": "Other problems may also occur. For example, two speakers may record speech data Taiwanese (Min-nan), Hakka, and Mandarin inturns in one work sheet, etc. These directories are also considered unusable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 1: pre-processing",
"sec_num": "7.1"
},
{
"text": "After the speech data is pre-processed, we validate it to determine whether the labels that consist of phonetic transcriptions correspond to the speech data. We use two methods to achieve this goal. First, we use HTK [Steven, 2002] to perform forced-alignment automatically on an utterance using all possible syllable combinations. We keep the highest scores for combinations to transcribe the speech. Secondly, we use the TTS (text-to-speech) technique to synthesize all the labels that were transcribed using HTK and then we transcribe the speech manually using more appropriate phonetic symbols. Finally, we can construct a relational database using ACCESS to record all the profiles of the speakers (see Fig.3 ) and",
"cite_spans": [
{
"start": 217,
"end": 231,
"text": "[Steven, 2002]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 708,
"end": 713,
"text": "Fig.3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Step 2: phonetic transcription by means of forced alignment",
"sec_num": "7.2"
},
{
"text": "what they recorded. Therefore, we can query the speech database using the SQL language to find the waveforms transcribed using the specific phones or syllables or even query who recorded the specific-phone waveforms. This step is on-going and will be finished soon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 2: phonetic transcription by means of forced alignment",
"sec_num": "7.2"
},
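Once the speaker profiles and transcriptions are stored in a relational database, queries like the ones described above are straightforward. The sketch below uses SQLite rather than ACCESS, and the two-table schema (speakers, utterances) with its column names is an assumption made for illustration.

```python
import sqlite3

def speakers_of_phone(db_path: str, phone: str) -> list:
    """Find the speakers and waveforms whose ForPA transcription contains a given phone or syllable."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        """SELECT DISTINCT s.name, u.wav_path
           FROM speakers AS s
           JOIN utterances AS u ON u.speaker_id = s.speaker_id
           WHERE u.forpa LIKE ?""",
        (f"%{phone}%",),
    ).fetchall()
    conn.close()
    return rows

# speakers_of_phone("forsdat.db", "giann1")  # who recorded waveforms containing the syllable 'giann1'?
```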
{
"text": "Version 1.0 of this corpus containing the speech of 600 speakers of Taiwanese (Min-nan) and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8."
},
{
"text": "Mandarin Chinese has been finished and is ready to be released. We have collected the speech of 1,773 people, including 49.47 hours of speech and 247,027 utterances. As work on this project continues, more Hakka and Mandarin speech data will be collected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Mat-2000 -design, collection, and validation of a mandarin 2,000-speaker telephone speech database",
"authors": [
{
"first": "H",
"middle": [
"C"
],
"last": "Wang",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Seide",
"suffix": ""
},
{
"first": "C",
"middle": [
"Y"
],
"last": "Tseng",
"suffix": ""
},
{
"first": "L",
"middle": [
"S"
],
"last": "Lee",
"suffix": ""
}
],
"year": 2000,
"venue": "International Conference on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, H. C., F. Seide, C.Y. Tseng and L.S. Lee, \"Mat-2000 -design, collection, and validation of a mandarin 2,000-speaker telephone speech database,\" International Conference on Spoken Language Processing 2000, Beijing, China, 2000.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "International Committee for Coordination and Standardisation of Speech Databases Workshop 94",
"authors": [
{
"first": "J",
"middle": [],
"last": "Godfrey",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Godfrey, J., \"Polyphone: Second anniversary report,\" International Committee for Co- ordination and Standardisation of Speech Databases Workshop 94, Yokohama, Japan, 1994.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A super phonetic system and multi-dialect Chinese speech corpus for speech recognition",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Zu",
"suffix": ""
}
],
"year": 2002,
"venue": "International Conference on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zu, Y., \"A super phonetic system and multi-dialect Chinese speech corpus for speech recognition,\" International Conference on Spoken Language Processing 2002, Denver, USA, 2002.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A bi-lingual Mandarin/Taiwanese (Min-nan), Large Vocabulary",
"authors": [
{
"first": "R",
"middle": [
"Y"
],
"last": "Lyu",
"suffix": ""
}
],
"year": 2000,
"venue": "Continuous speech recognition system based on the Tong-yong phonetic alphabet (TYPA),\" International Conference on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lyu, R. Y., \"A bi-lingual Mandarin/Taiwanese (Min-nan), Large Vocabulary, Continuous speech recognition system based on the Tong-yong phonetic alphabet (TYPA),\" International Conference on Spoken Language Processing 2000, Beijing, China, 2000.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An efficient algorithm to select phonetically balanced scripts for constructing corpus",
"authors": [
{
"first": "M",
"middle": [
"S"
],
"last": "Liang",
"suffix": ""
},
{
"first": "R",
"middle": [
"Y"
],
"last": "Lyu",
"suffix": ""
},
{
"first": "Y",
"middle": [
"C"
],
"last": "Chiang",
"suffix": ""
}
],
"year": 2003,
"venue": "IEEE International Conference on Natural Language Processing and Knowledge Engineering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang, M. S., R. Y. Lyu and Y. C. Chiang, \"An efficient algorithm to select phonetically balanced scripts for constructing corpus,\" IEEE International Conference on Natural Language Processing and Knowledge Engineering 2003, Beijing, China, 2003.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "automatic selection of phonetically distributed sentence sets for speaker adaptation with application to large vocabulary mandarin speech recognition",
"authors": [
{
"first": "J",
"middle": [
"L"
],
"last": "Shen",
"suffix": ""
},
{
"first": "H",
"middle": [
"M"
],
"last": "Wang",
"suffix": ""
},
{
"first": "R",
"middle": [
"Y"
],
"last": "Lyu",
"suffix": ""
},
{
"first": "L",
"middle": [
"S"
],
"last": "Lee",
"suffix": ""
}
],
"year": 1999,
"venue": "Computer speech and language",
"volume": "13",
"issue": "1",
"pages": "79--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shen, J. L., H. M. Wang, R. Y. Lyu and L. S. Lee, \"automatic selection of phonetically distributed sentence sets for speaker adaptation with application to large vocabulary mandarin speech recognition,\" Computer speech and language, vol. 13, no. 1,pp. 79-97, Jan. 1999.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Chapter 37: Approximation Algorithm",
"authors": [
{
"first": "T",
"middle": [
"H"
],
"last": "Cormen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ect",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "974--978",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cormen, T. H. ect, \"Chapter 37: Approximation Algorithm\", Introduction to Algorithm, pp. 974-978, 2001.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Lyu, the speech recording system which are developed by Dr",
"authors": [
{
"first": "Y",
"middle": [
"C"
],
"last": "Chiang",
"suffix": ""
},
{
"first": "R",
"middle": [
"Y"
],
"last": "",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chiang, Y. C., and R. Y. Lyu, the speech recording system which are developed by Dr. Yuang-Chin Chiang at Nation Tsing Hua University, 2002.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The HTK book version 3.2",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Steven",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven, Y., \"The HTK book version 3.2\", Cambridge University Engineering Department, 2002.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "SAMPA (Speech Assessment Methods Phonetic Alphabet",
"authors": [
{
"first": "J",
"middle": [],
"last": "Wells",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wells, J., SAMPA (Speech Assessment Methods Phonetic Alphabet), http://www.phon. ucl.ac.uk/home/sampa/home.htm, April, 2003.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Chinese Knowledge Information Processing",
"authors": [],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "CKIP, Chinese Knowledge Information Processing, http://rocling.iis.sinica.edu.tw/ CKIP/, 2003.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Hakka dictionary of Taiwan",
"authors": [
{
"first": "J",
"middle": [
"C"
],
"last": "Syu",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Syu, J. C., \"Hakka dictionary of Taiwan\", Nantian Bookstore published, 2001.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "ASCII phonetic symbols for the world's languages: Worldbet",
"authors": [
{
"first": "J",
"middle": [],
"last": "Hieronymus",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hieronymus, J., \"ASCII phonetic symbols for the world's languages: Worldbet,\" AT&T Bell Laboratories, Technical Memo, 1994.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Taiwan Language Phonetic Alphabet(TLPA)",
"authors": [
{
"first": "U",
"middle": [],
"last": "Ang",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ang, U., Taiwan Language Phonetic Alphabet(TLPA), Taiwan Languages and Literature Society, http://www.tlls.org.tw/, 2002.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Romanization and Language Planning in Taiwan",
"authors": [
{
"first": "W",
"middle": [
"V T"
],
"last": "Chiung",
"suffix": ""
}
],
"year": 2001,
"venue": "The Linguistic Association of Korea Journal",
"volume": "9",
"issue": "1",
"pages": "15--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chiung, W. V. T., \"Romanization and Language Planning in Taiwan,\" The Linguistic Association of Korea Journal 9(1), pp. 15-43, 2001.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "The process of producing data sheets.",
"type_str": "figure",
"num": null
},
"FIGREF2": {
"uris": null,
"text": "The telephone recording system.",
"type_str": "figure",
"num": null
},
"FIGREF3": {
"uris": null,
"text": "A portion of a speaker's profile in the database.",
"type_str": "figure",
"num": null
},
"FIGREF4": {
"uris": null,
"text": "The structure of database for Taiwanese and Mandarin. (TW01: Taiwanese database collected in 2001; M0: the microphone channel was used and the gender was female, T1: the telephone channel was used and the gender was male; and so on. There is a transcription file for each unique speaker.)",
"type_str": "figure",
"num": null
},
"FIGREF6": {
"uris": null,
"text": "The software tool for validation.",
"type_str": "figure",
"num": null
},
"TABREF0": {
"num": null,
"type_str": "table",
"text": "",
"html": null,
"content": "<table/>"
},
"TABREF1": {
"num": null,
"type_str": "table",
"text": "",
"html": null,
"content": "<table><tr><td/><td>1-Syl</td><td>2-Syl</td><td>3-Syl</td><td>4-Syl</td><td>5-Syl</td><td/></tr><tr><td>Gang</td><td>8027</td><td>44846</td><td>12129</td><td>1823</td><td>161</td><td/></tr><tr><td>Syu</td><td>7322</td><td>9161</td><td>4948</td><td>2382</td><td>21</td><td/></tr><tr><td>CKIP</td><td>6863</td><td>39733</td><td>8277</td><td>9074</td><td>435</td><td/></tr><tr><td/><td>6-Syl</td><td>7-Syl</td><td>8-Syl</td><td>9-Syl</td><td>10-Syl</td><td>Total</td></tr><tr><td>Gang</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>66986</td></tr><tr><td>Syu</td><td>3</td><td>0</td><td>0</td><td>0</td><td>0</td><td>23837</td></tr><tr><td>CKIP</td><td>223</td><td>125</td><td>52</td><td>2</td><td>8</td><td/></tr></table>"
},
"TABREF2": {
"num": null,
"type_str": "table",
"text": "",
"html": null,
"content": "<table><tr><td>Language</td><td>Base syllable</td><td>Phones</td><td>Within-syllabic bi-phones</td><td>Inter-syllabic bi-phones</td></tr><tr><td>T</td><td>832</td><td>53</td><td>410</td><td>716</td></tr><tr><td>H</td><td>683</td><td>53</td><td>327</td><td>696</td></tr><tr><td>M</td><td>429</td><td>45</td><td>208</td><td>234</td></tr><tr><td>T\u222aH</td><td>1134</td><td>70</td><td>583</td><td>1036</td></tr><tr><td>T\u222aM</td><td>1055</td><td>64</td><td>486</td><td>809</td></tr><tr><td>H\u222aM</td><td>939</td><td>71</td><td>435</td><td>797</td></tr><tr><td>T\u222aH\u222aM</td><td>1326</td><td>78</td><td>600</td><td>1105</td></tr></table>"
},
"TABREF3": {
"num": null,
"type_str": "table",
"text": "",
"html": null,
"content": "<table><tr><td>Filename</td><td>Text</td><td>Transcription in ForPA</td></tr><tr><td>blwr00000</td><td>\u89c0\u4e16\u97f3\u83e9\u85a9</td><td>guan1_se3_im1_po5_sat7</td></tr><tr><td>blwr00001</td><td>\u9a5a ga \u523a\u6fc0\u8457</td><td>giann1_ga2_ci3_gik7_diorh6</td></tr><tr><td>blwr00002</td><td>\u85e5\u6aa2\u5be6\u9a57\u5ba4</td><td>iorh6_giam4_sit6_ghiam2_sik7</td></tr><tr><td>blwr00003</td><td>\u85dd\u8853\u5de5\u4f5c\u8005</td><td>ghe2_sut6_gang1_zok7_zia4</td></tr></table>"
},
"TABREF4": {
"num": null,
"type_str": "table",
"text": "Two kinds of database collection systems are being used to create ForSDat. They are microphone and telephone systems, respectively.",
"html": null,
"content": "<table><tr><td>Lexicon</td><td>Balanced-Word Algorithm</td><td>Coverage rate &gt; 50%</td><td/><td colspan=\"2\">Balanced word sets</td><td/></tr><tr><td/><td>Division by</td><td/><td/><td/><td/><td/></tr><tr><td>Data sheet</td><td>200</td><td>Merge</td><td>Set1</td><td>Set2</td><td>........</td><td>SetN</td></tr><tr><td/><td>syllables</td><td/><td/><td/><td/><td/></tr></table>"
},
"TABREF5": {
"num": null,
"type_str": "table",
"text": "",
"html": null,
"content": "<table><tr><td/><td>Name</td><td colspan=\"3\">Channel Gender Quantity</td><td>Train(hr)</td><td>Test (hr)</td></tr><tr><td/><td>TW01-M0</td><td/><td>Female</td><td>50</td><td>5.92</td><td>0.29</td></tr><tr><td/><td>TW01-M1</td><td/><td>Male</td><td>50</td><td>5.44</td><td/></tr><tr><td>ForSDAT</td><td>MD01-M0 MD01-M1</td><td>MIC</td><td>Female Male</td><td>50 50</td><td>5.65 5.42</td><td>0.27</td></tr><tr><td/><td>TW02-M0</td><td/><td>Female</td><td>233</td><td>10.10</td><td>0.70</td></tr><tr><td/><td>TW02-M1</td><td/><td>Male</td><td>277</td><td>11.66</td><td/></tr><tr><td/><td>TW02-T0</td><td>TEL</td><td>Female</td><td>580</td><td>29.21</td><td>0.95</td></tr><tr><td/><td>TW02-T1</td><td/><td>Male</td><td>412</td><td>19.37</td><td/></tr></table>"
}
}
}
}