{
"paper_id": "O03-3001",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:01:02.871514Z"
},
"title": "A Corpus-based Chinese Syllable-to-Character System",
"authors": [
{
"first": "Chien-Pang",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Chiao Tung University",
"location": {
"addrLine": "1001 Ta Hsueh Rd",
"settlement": "Hsinchu",
"country": "Taiwan 30050 R.O.C"
}
},
"email": ""
},
{
"first": "Tyne",
"middle": [],
"last": "Liang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Chiao Tung University",
"location": {
"addrLine": "1001 Ta Hsueh Rd",
"settlement": "Hsinchu",
"country": "Taiwan 30050 R.O.C"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "One of the popular input systems is based on Chinese phonetic symbols. Designing such kind of a syllable-to-character (STC) input system involves two major issues, namely, fault tolerance handling and homonym resolution. In this paper, the fault tolerance mechanism is constructed on the basis of a user-defined confusing set and a modified bucket indexing scheme is incorporated so as to satisfy real-time requirement. Meanwhile the homonym resolution is handled by binding force and heuristic selection rules. Both the system performance and tolerance ability are justified with real corpus in terms of searching speed and character conversion accuracy rate. Experimental results show that the proposed sche me can achieve 93.54% accuracy for zero-error syllable inputs and 80.13% for zero-tone syllable inputs. Furthermore both robustness and tolerance of the proposed system are proved for high input error rates.",
"pdf_parse": {
"paper_id": "O03-3001",
"_pdf_hash": "",
"abstract": [
{
"text": "One of the popular input systems is based on Chinese phonetic symbols. Designing such kind of a syllable-to-character (STC) input system involves two major issues, namely, fault tolerance handling and homonym resolution. In this paper, the fault tolerance mechanism is constructed on the basis of a user-defined confusing set and a modified bucket indexing scheme is incorporated so as to satisfy real-time requirement. Meanwhile the homonym resolution is handled by binding force and heuristic selection rules. Both the system performance and tolerance ability are justified with real corpus in terms of searching speed and character conversion accuracy rate. Experimental results show that the proposed sche me can achieve 93.54% accuracy for zero-error syllable inputs and 80.13% for zero-tone syllable inputs. Furthermore both robustness and tolerance of the proposed system are proved for high input error rates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Among various kinds of Chinese input methods, the most popular one is based on phonetic symbols. This is because most of Chinese-speaking users are taught to use phonetic symbols in their elementary schools when they learn Chinese. However a syllable-to-character (STC) system is inherently associated with the serious homonym and similarly-pronounced phoneme problems. This is because a single syllable may correspond to several Chinese characters and there are indeed several Mandarin syllables which are sounded similarly. So it is not easy for users or acoustic recognizer to distinguish them when they are used. We call t hese syllables as confusing syllables. For example, syllable (shih4) and (szu4) are sounded similarly in speaking and listening, and a user might treat (shih4) as szu4at typing or pronouncing. Thus robust fault tolerance ability of a STC system has to be concerned so as to improve the phoneme-to-character conversion accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In recent years, various approaches have been proposed to construct a Chinese STC system either for speech input or keyboard input. For speech input, Chang [1994] used vector quantization to cluster words into classes when training Hidden Markov model so that words in the same class share the model's parameters. Contrast to the character N-gram based Markov model, a word N-gram based Markov model was proposed by Yang [1998] . Though Markov-based models are easy for implementation, they require large training corpus and large storage for large numbers of parameters.",
"cite_spans": [
{
"start": 150,
"end": 162,
"text": "Chang [1994]",
"ref_id": "BIBREF7"
},
{
"start": 416,
"end": 427,
"text": "Yang [1998]",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Furthermore, the parameters of Markov model are needed to be fixed, so they reflect the characters of training corpus only. Rather than using Markov model, Lin [1995] used mutual information to find the relation between base syllables and applied",
"cite_spans": [
{
"start": 156,
"end": 166,
"text": "Lin [1995]",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Heuristic Divide-and-Conquer Maximum Match (H-DCMM) Algorithm to detect prosodic-segment in a sentence. To train the robustness of prosodic-segment detection, a segmental K-means algorithm is also used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As for syllable-based keyboard input, Gie [1990] used a hand-crafted dictionary for matching syllables of phrases and a set of impression rules for homonym selection.",
"cite_spans": [
{
"start": 38,
"end": 48,
"text": "Gie [1990]",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In Gie [1991] , homonyms f or new phrases are furtherly dealt by using a dictionary and occurrence frequencies. On the other hand, Lai [2000] used maximum likelihood ratio and good-tuning estimation to handle characters with multiple syllables. Lin [2002] combined N-gram model and selection rules for dealing with multiple PingIn codes. Unlike statistical approaches, context sensitive method proposed by Hsu [1995] was applied in a Chinese STC system called \"Going.\" The system relies heavily on semantic pattern matching which can reduce the huge amount of data processing required for homophonic character selection. The conversion accuracy rate is close to 96%. In [Tsai and Hsu 2002] , a semantically-oriented approach was also presented by using both no un-verb event-frame word-pairs and s tatistical calculation. The experimental results show that their overall syllable-to-word accuracy can be 96.5%.",
"cite_spans": [
{
"start": 3,
"end": 13,
"text": "Gie [1991]",
"ref_id": "BIBREF13"
},
{
"start": 131,
"end": 141,
"text": "Lai [2000]",
"ref_id": "BIBREF15"
},
{
"start": 249,
"end": 255,
"text": "[2002]",
"ref_id": null
},
{
"start": 406,
"end": 416,
"text": "Hsu [1995]",
"ref_id": "BIBREF14"
},
{
"start": 670,
"end": 689,
"text": "[Tsai and Hsu 2002]",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper a corpus-based STC system to support high tolerance is presented and it can be used as a keyboard input method as well as a post-processor incorporated with an acoustic system. To support high tolerance ability, we used a bucket-based searching mechanism so that the searching time of confusing syllable is reduced. The presented homonym resolution is based on binding force informatio n and selection rules. Various tests are implemented to justify the system performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In zero-tolerance test, our character conversion accuracy is 93.54% out of 1052 characters. For zero-tone testing, the character conversion accuracy is 80.13%. In input syllables with 20% and 40% confusing set member replacement, the character conversion accuracy is 83.08% and 78.23% respectively. The feasibility and robustness of fault tolerance handling to a STC system are also proved by the experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The outline of the paper is as follows. Section 2 introduces the preliminary background of Chinese syllable structure. Section 3 describes the system architecture and section 4 presents the proposed searching mechanism. Section 5 explains our selection module and section 6 reports various experimental tests. Finally Section 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "gives the conclusion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "According to [Wu 1998 ], a general Mandarin syllable structure contains four parts consonant, head of dip hthong, vowel and tone. There are twenty-one consonants, sixteen vowels, and five tones. Since users usually pronounce head of diphthong and vowel simultaneously, so the syllable structure can be simplified to combine head of diphthong and vowel such as and [Chen 1998 ]. ",
"cite_spans": [
{
"start": 13,
"end": 21,
"text": "[Wu 1998",
"ref_id": "BIBREF22"
},
{
"start": 364,
"end": 374,
"text": "[Chen 1998",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mandarin syllable and Confusing set 2.1 Sets of syllables in Mandarin",
"sec_num": "2"
},
{
"text": "The confusing sets are the groups of syllables, which are recognized to be the same by the human or the acoustic recognizer. For example, (fei1) and (hui1) are confusing syllables for many Chinese-speaking people in Taiwan.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Confusing set",
"sec_num": "2.2"
},
{
"text": "Suppose Table 2 is the statistical results from an acoustic recognizer. Then the confusing sets of phonemes can be found by using the find-connected-components algorithm [Thomas 1998 ] in which phonemes are vertices of a graph and the confusing sets are those edges whose recognition probabilities are greater than a threshold. For example, two confusing sets of phonemes, { } and { , } are generated from Table 2 when their probabilities are greater than a given threshold at 25%. ",
"cite_spans": [
{
"start": 170,
"end": 182,
"text": "[Thomas 1998",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 8,
"end": 15,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 406,
"end": 413,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Confusing set",
"sec_num": "2.2"
},
{
"text": "The confusing sets of syllable are obtained by using Cartesian product on two confusing sets of consonants and vowels, (an example shown in Table 3 ). Then a bucket B( ) will contain the grams from C( ) of consonant confusing set and V( ) of vowel confusing set. Fig. 1 is an example of bucket of bigram syllable confusing set, and its corresponding bucket is B(08140607). ",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 147,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 263,
"end": 269,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Bucket of confusing set",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "V(01) Nil V(06) V(11) V(02) , V(07) V(12) V(03) V(08) V(13) V(04) V(09) V(14) V(05) V(10) V(15) C(08) V(14) C(06) V(07) , ,",
"eq_num": ", , , ,"
}
],
"section": "Bucket of confusing set",
"sec_num": "2.3"
},
{
"text": "), and another syllable sequence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SylSeq1=c 1 v 1 c 2 v 2 c 3 v 3 which belongs to B(N C1 N V1 N C2 N V2 N C3 N V3",
"sec_num": null
},
{
"text": "SylSeq2=c 1 'v 1 'c 2 'v 2 'c 3 'v 3 ' which belongs to B(N C1 'N V1 'N C2 'N V2 'N C3 'N V3 ').",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SylSeq1=c 1 v 1 c 2 v 2 c 3 v 3 which belongs to B(N C1 N V1 N C2 N V2 N C3 N V3",
"sec_num": null
},
{
"text": "SylSeq2 has two base syllable distance from SylSeq1 if there exists any two mismatch pairs of consonant or vowel confusing sets, like N C1 N C1 ' and N V2 N V2 '. Similarly, there will be K-distance if there are K mismatch pairs between SylSeq1 and SylSeq2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SylSeq1=c 1 v 1 c 2 v 2 c 3 v 3 which belongs to B(N C1 N V1 N C2 N V2 N C3 N V3",
"sec_num": null
},
{
"text": "To find the grams with minimum base syllable distance from a given gram, we start to find the bucket first which the grams belong to. Our searching is done with the string matching algorithm proposed by Du and Chang [1994] . We start from the buckets with zero syllable distance. If there is no such gram in these buckets, we increase base syllable distance by 1. The maximum distance is defined to be 2 in this paper. We use index structure to memorize these buckets for every base syllable distance.",
"cite_spans": [
{
"start": 203,
"end": 222,
"text": "Du and Chang [1994]",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bucket index",
"sec_num": "4.2"
},
{
"text": "Let [ , ] ( , ) denote a extension bucket index. Symbol and are the buckets whose errors are at any position except and ; symbol is the base syllable distance and \u2208{1, 2}; symbol represents bigram ( =2) or trigram bucket index ( = 3). For example, extension bucket index [1, 2] (1,3) is a trigram index with one base syllable distance, and contains the buckets whose errors are at any position except the first and second ones. Therefore, [1, 2] (1,3) contains the following buckets:",
"cite_spans": [
{
"start": 4,
"end": 9,
"text": "[ , ]",
"ref_id": null
},
{
"start": 271,
"end": 274,
"text": "[1,",
"ref_id": null
},
{
"start": 275,
"end": 277,
"text": "2]",
"ref_id": "BIBREF4"
},
{
"start": 439,
"end": 442,
"text": "[1,",
"ref_id": null
},
{
"start": 443,
"end": 445,
"text": "2]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bucket index",
"sec_num": "4.2"
},
{
"text": "B(O 1 O 2 XO 4 O 5 O 6 ), B(O 1 O 2 O 3 XO 5 O 6 ), B(O 1 O 2 O 3 O 4 XO 6 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bucket index",
"sec_num": "4.2"
},
{
"text": ", and In fact extension bucket index [1, 2] (1,3) and [5, 6] (1,3) together will include all buckets with one base syllable distance. The combination of extension bucket index set which contains all the buckets is called a covering extension bucket index.",
"cite_spans": [
{
"start": 37,
"end": 40,
"text": "[1,",
"ref_id": null
},
{
"start": 41,
"end": 43,
"text": "2]",
"ref_id": "BIBREF4"
},
{
"start": 54,
"end": 57,
"text": "[5,",
"ref_id": null
},
{
"start": 58,
"end": 60,
"text": "6]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bucket index",
"sec_num": "4.2"
},
{
"text": "Similarly, extension bucket index [1, 2] (1,3) and [5, 6] (1, 3) are the members of trigram covering extension bucket index with one base syllable distance. Thus, there exists more than one solution in finding covering extension bucket index. In fact, finding the covering extension bucket index is a NP-complete problem [Garey and Johnson 1979] .",
"cite_spans": [
{
"start": 34,
"end": 37,
"text": "[1,",
"ref_id": null
},
{
"start": 38,
"end": 40,
"text": "2]",
"ref_id": "BIBREF4"
},
{
"start": 51,
"end": 54,
"text": "[5,",
"ref_id": null
},
{
"start": 55,
"end": 57,
"text": "6]",
"ref_id": null
},
{
"start": 58,
"end": 61,
"text": "(1,",
"ref_id": null
},
{
"start": 62,
"end": 64,
"text": "3)",
"ref_id": null
},
{
"start": 321,
"end": 345,
"text": "[Garey and Johnson 1979]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bucket index",
"sec_num": "4.2"
},
{
"text": "Since the length of syllable sequence is short and the number of errors is small, it is easy to find the covering extension bucket index. Thus, searching buckets can be done in real time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bucket index",
"sec_num": "4.2"
},
{
"text": "The designed selection module is based on sliding window whose size is set to be five in the proposed system. Let C(S i-4 ), C(S i-3 ), C(S i-2 ), and C(S i-1 ) be the characters in front of C(S i ) at inputting syllable S i . Then the ranking scheme shown as equation 1is used to rank mono grams C(S i ), bigrams C(S i-1 )C(S i ) and trigrams",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection Module",
"sec_num": "5"
},
{
"text": "C(S i-2 )C(S i-1 )C(S i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection Module",
"sec_num": "5"
},
{
"text": "which exist in the gram database and each type of the grams with the top values will be treated as our candidate outputs and will be placed at corresponding positions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection Module",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\uf8f3 \uf8f2 \uf8f1 \u00d7 = bigram is g if g BF g P trigram or monogram is g if g P g Rank ) ( ) ( ) ( ) (",
"eq_num": "(1)"
}
],
"section": "Selection Module",
"sec_num": "5"
},
{
"text": "In (Eq. 1) p(g ) is the occurrence probability of g in the training corpus and the BF(g)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection Module",
"sec_num": "5"
},
{
"text": "is the binding force for two characters C i, C i+1 composing bigram g [Sproat 1990 ] and it is calculated as following equation:",
"cite_spans": [
{
"start": 70,
"end": 82,
"text": "[Sproat 1990",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selection Module",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ") ( ) ( ) ( log ) ( 1 1 2 1 + + + = i i i i i i C P C P C C P C C BF",
"eq_num": "(2)"
}
],
"section": "Selection Module",
"sec_num": "5"
},
{
"text": "The n selection rules applied to select the candidate grams are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection Module",
"sec_num": "5"
},
{
"text": "The experiment s were implemented to justify the system feasibility and tolerance ability. Our training data includes CKIP word database which contains 78,410 words from length 1 to length 9 and Chinatimes News on the website One experiment is to measure the response time of searching a word in a database.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "6"
},
{
"text": "A database without bucket indexing 'no-bucket' is compared with 'bucket 9x15 ' which consists of nigh consonant and fifteen vowel confusing sets as listed in Table 3 of Section 2. The searching time of the databases with bucket indexing mechanism is less than one second. Table 4 shows the best case of searching time and there B(50K) means 50K bigrams, T(11K) means 11K trigram and so on. sentences randomly selected from the testing data and each sentence has 10.5 characters in average. We use two commercial STC systems for comparison, namely",
"cite_spans": [],
"ref_spans": [
{
"start": 158,
"end": 165,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 272,
"end": 279,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "6"
},
{
"text": "Microsoft IME 2002a ( XP), and Going 6.5 ( ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "6"
},
{
"text": "We compare the accuracy in various tolerance rates which is defined as Eq. 3. In this experiment, we disabled the system-defined confusing phonemes of MS 2002a, because its confusing mechanism and sets are quite different from ours. (3) On the other hand experiments to investigate the correlation between tolerance rate and positions were also implemented. The tolerance position is selected by testing users randomly. Both Table 6 and Table 7 show that the proposed STC system indeed supports robust fault tolerance ability. ",
"cite_spans": [],
"ref_spans": [
{
"start": 425,
"end": 432,
"text": "Table 6",
"ref_id": "TABREF8"
},
{
"start": 437,
"end": 444,
"text": "Table 7",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "6"
},
{
"text": "In this paper a high tolerant STC system useful for traditional Chinese input was presented. The proposed fault tolerance mechanism is constructed on the basis of a user-defined confusing set and a modified bucket indexing scheme is incorporated so as to satisfy real-time requirement. Meanwhile the homonym resolution is handled by binding force and heuristic selection rules. The performance of the presented system is also justified and compared with various tests. However the drawbacks with the proposed system are its lack of semantic and syntactic checking at output selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Hence errors like \" ( )\", \" ( ) \" will occur. So how to strengthen the selection module with more linguistic reasoning will be our next step to design an intelligent STC system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "-2 )C(S i-1 ) exists in gram database, then if it has overlapping C(S i-1 ) or C(S i-2 )C(S i-1 ) with C(S i-2 )C(S i-1 )C(S i ), then output C(S i-2 )C(S i-1 )C(S i )",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "1. If either C(S i-4 )C(S i-3 )C(S i-2 ) or C(S i-3 )C(S i-2 )C(S i-1 ) exists in gram database, then if it has overlapping C(S i-1 ) or C(S i-2 )C(S i-1 ) with C(S i-2 )C(S i-1 )C(S i ), then output C(S i-2 )C(S i-1 )C(S i ), otherwise abort C(S i-2 )C(S i-1 )C(S i )",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "-2 ) nor C(S i-3 )C(S i-2 )C(S i-1 ) is in trigram database, then 1.2.1 if both BF(C(S i-3 )C(S i-2 )) and BF(C(S i-2 )C(S i-1 )) is less than a threshold",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "2. If neither C(S i-4 )C(S i-3 )C(S i-2 ) nor C(S i-3 )C(S i-2 )C(S i-1 ) is in trigram database, then 1.2.1 if both BF(C(S i-3 )C(S i-2 )) and BF(C(S i-2 )C(S i-1 )) is less than a threshold, then output C(S i-2 )C(S i-1 )C(S i )",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "-2 )) or BF(C(S i-2 )C(S i-1 )) is greater than a threshold, and there exists overlapping C(S i-1 ) or C(S i-2 )C(S i-1 ) with",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "2.2 if either BF(C(S i-3 )C(S i-2 )) or BF(C(S i-2 )C(S i-1 )) is greater than a threshold, and there exists overlapping C(S i-1 ) or C(S i-2 )C(S i-1 ) with",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "-2 )) or BF(C(S i-2 )C(S i-1 )) is greater than a threshold but without any overlapping C(S i-1 ) or C(S i-2 )C(S i-1 ) with",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "2.3 if either BF(C(S i-3 )C(S i-2 )) or BF(C(S i-2 )C(S i-1 )) is greater than a threshold but without any overlapping C(S i-1 ) or C(S i-2 )C(S i-1 ) with",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "S i ) in database or C(S i-2 )C(S i-1 )C(S i ) is aborted, then 2.1. if C(S i-3 )C(S i-2 )C(S i-1 ) exists in database, then check if C(S i-3 )C(S i-2 )C(S i-1 ) has overlapping C(S i-1 ) with candidate C(S i-1 )C(S i )",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "If there is no C(S i-2 )C(S i-1 )C(S i ) in database or C(S i-2 )C(S i-1 )C(S i ) is aborted, then 2.1. if C(S i-3 )C(S i-2 )C(S i-1 ) exists in database, then check if C(S i-3 )C(S i-2 )C(S i-1 ) has overlapping C(S i-1 ) with candidate C(S i-1 )C(S i ), then output C(S i-1 )C(S i ), otherwise output the candidate C(S i )",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "-1 ) is not in database but C(S i-2 )C(S i-1 ) is, then check: if C(S i-2 )C(S i-1 ) has overlapping C(S i-1 ) with the candidate C(S i-1 )C(S i ) then output C(S i-1 )C(S i ), else if BF",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "2.2. if C(S i-3 )C(S i-2 )C(S i-1 ) is not in database but C(S i-2 )C(S i-1 ) is, then check: if C(S i-2 )C(S i-1 ) has overlapping C(S i-1 ) with the candidate C(S i-1 )C(S i ) then output C(S i-1 )C(S i ), else if BF (C(S i-2 )C(S i-1 )) threshold then output candidate C(S i-1 )C(S i );",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "-1 )), then output candidate C(S i )",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "else if BF (C(S i-1 )C(S i )) BF(C(S i-2 )C(S i-1 )), then output candidate C(S i ).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A Word-Class-Based Chinese Language Model and its Adaptation for Mandrain Speech Recognition",
"authors": [
{
"first": "T",
"middle": [
"Z"
],
"last": "Chang",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chang T. Z. 1994. A Word-Class-Based Chinese Language Model and its Adaptation for Mandrain Speech Recognition, Master Thesis, National Taiwan University.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Neural Network-based Continuous Mandarin Speech Recognition System",
"authors": [
{
"first": "J",
"middle": [
"T"
],
"last": "Chen",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen J. T. 1998. Neural Network-based Continuous Mandarin Speech Recognition System, Master Thesis, National Chiao Tung University.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Chinese Knowledge Information Processing Group (CKIP) Corpus 3.0",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chinese Knowledge Information Processing Group (CKIP) Corpus 3.0. http://godel.iis.sinica.edu.tw/CKIP/",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An Approach to Designing Very Fast Approximate String Matching Algorithms",
"authors": [
{
"first": "M",
"middle": [
"W"
],
"last": "Du",
"suffix": ""
},
{
"first": "S",
"middle": [
"C"
],
"last": "Chang",
"suffix": ""
}
],
"year": 1994,
"venue": "IEEE Transactions on Knowledge and Data Engineering",
"volume": "6",
"issue": "",
"pages": "620--633",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Du M. W. and Chang S. C. 1994. An Approach to Designing Very Fast Approximate String Matching Algorithms, IEEE Transactions on Knowledge and Data Engineering, 6, 4, 620-633.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Computers and Intractability. A Guide to the Theory of NP-Completeness",
"authors": [
{
"first": "M",
"middle": [
"R"
],
"last": "Garey",
"suffix": ""
},
{
"first": "D",
"middle": [
"S"
],
"last": "Johnson",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Garey M. R. and Johnson D. S. 1979. Computers and Intractability. A Guide to the Theory of NP-Completeness, Freeman, San Francisco.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A Phonetic Chinese Input System Based on Impression Principle",
"authors": [
{
"first": "C",
"middle": [
"X"
],
"last": "Gie",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gie C. X. 1990. A Phonetic Chinese Input System Based on Impression Principle, Master Thesis, National Taiwan University.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A Phonetic Input System for Chinese Characters Using A Word Dictionary and Statistics",
"authors": [
{
"first": "T",
"middle": [
"H"
],
"last": "Gie",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gie T. H. 1991. A Phonetic Input System for Chinese Characters Using A Word Dictionary and Statistics, Master Thesis, National Taiwan University.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Chinese Parsing in a Phoneme-to-Character Conversion System based on Semantic Pattern Matching",
"authors": [
{
"first": "W",
"middle": [
"L"
],
"last": "Hsu",
"suffix": ""
}
],
"year": 1995,
"venue": "International Journal on Computer Processing of Chinese and Oriental Languages",
"volume": "40",
"issue": "",
"pages": "227--236",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hsu W. L. 1995. Chinese Parsing in a Phoneme-to-Character Conversion System based on Semantic Pattern Matching, International Journal on Computer Processing of Chinese and Oriental Languages, 40, 227-236.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The Preliminary Study of phonetic symbol-to-Chinese character Conversion",
"authors": [
{
"first": "S",
"middle": [
"C"
],
"last": "Lai",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lai S. C. 2000. The Preliminary Study of phonetic symbol-to-Chinese character Conversion, Master Thesis, National Tsing Hua University.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Golden Mandarin ( ) -A Real Time Mandarin Dictation Machine for Chinese Language with Vary Large Vocabulary",
"authors": [
{
"first": "L",
"middle": [
"S"
],
"last": "Lee",
"suffix": ""
},
{
"first": "C",
"middle": [
"Y"
],
"last": "Tseng",
"suffix": ""
},
{
"first": "H",
"middle": [
"Y"
],
"last": "Gu",
"suffix": ""
},
{
"first": "F",
"middle": [
"H"
],
"last": "Liu",
"suffix": ""
},
{
"first": "C",
"middle": [
"H"
],
"last": "Chang",
"suffix": ""
},
{
"first": "Y",
"middle": [
"H"
],
"last": "Lin",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "S",
"middle": [
"L"
],
"last": "Tu",
"suffix": ""
},
{
"first": "S",
"middle": [
"H"
],
"last": "Hsieh",
"suffix": ""
},
{
"first": "C",
"middle": [
"H"
],
"last": "Chen",
"suffix": ""
}
],
"year": 1993,
"venue": "IEEE Transactions on Speech and Audio Proceeding",
"volume": "1",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee L. S., Tseng C.Y., Gu H. Y., Liu F. H., Chang C. H., Lin Y. H., Lee Y., Tu S. L., Hsieh S. H. and Chen C. H. 1993. Golden Mandarin ( ) -A Real Time Mandarin Dictation Machine for Chinese Language with Vary Large Vocabulary, IEEE Transactions on Speech and Audio Proceeding, 1, 2.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Prosodic-Segment Based Chinese Language Processing for Continous Mandarin Speech Recognition with very large Vocabulary",
"authors": [
{
"first": "S",
"middle": [
"W"
],
"last": "Lin",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin S. W. 1995. Prosodic-Segment Based Chinese Language Processing for Continous Mandarin Speech Recognition with very large Vocabulary, Master Thesis, National Taiwan University.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A Mandarin Input System Compatible With Multiple Pinyin Methods",
"authors": [
{
"first": "J",
"middle": [
"X"
],
"last": "Lin",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin J. X. 2002. A Mandarin Input System Compatible With Multiple Pinyin Methods, Master Thesis, National Chung Hsing University.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Applying an NVEF Word-Pair Identifier to the Chinese Syllable-To -Word Conversion Problem",
"authors": [
{
"first": "J",
"middle": [
"L"
],
"last": "Tsai",
"suffix": ""
},
{
"first": "W",
"middle": [
"L"
],
"last": "Hsu",
"suffix": ""
}
],
"year": 2002,
"venue": "The 19 th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsai J. L. and Hsu W. L. 2002. Applying an NVEF Word-Pair Identifier to the Chinese Syllable-To -Word Conversion Problem, The 19 th International Conference on Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A Statistic Method for Finding Word Boundaries in Chinese Text",
"authors": [
{
"first": "R",
"middle": [],
"last": "Sproat",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Shih",
"suffix": ""
}
],
"year": 1990,
"venue": "Computer Process of Chinese and O riental Languages",
"volume": "4",
"issue": "",
"pages": "336--349",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sproat R. and Shih C. 1990. A Statistic Method for Finding Word Boundaries in Chinese Text, Computer Process of Chinese and O riental Languages, 4, 336-349.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Introduction To Algorithms",
"authors": [
{
"first": "H",
"middle": [
"C"
],
"last": "Thomas",
"suffix": ""
},
{
"first": "E",
"middle": [
"L"
],
"last": "Charles",
"suffix": ""
},
{
"first": "L",
"middle": [
"R"
],
"last": "Ronald",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "440--443",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas H. C., Charles E. L. and Ronald L. R. 1998. Introduction To Algorithms, McGraw-Hill Book Company, New York St. Louis San Francisco Montreal Toronto, 440-443.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A Bucket Indexing Scheme for Error Tolerant Chinese Phrase Matching",
"authors": [
{
"first": "X",
"middle": [
"X"
],
"last": "Wu",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. X. Wu. 1998. A Bucket Indexing Scheme for Error Tolerant Chinese Phrase Matching. Master Thesis, National Chiao-Tung University.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Further Studies for Practical Chinese Language Modeling",
"authors": [
{
"first": "K",
"middle": [
"C"
],
"last": "Yang",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang K. C. 1998. Further Studies for Practical Chinese Language Modeling, Master Thesis, National Taiwan University.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "a bucket example of B(08140607).",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "shows the system flowchart c ontaining foreground process and background process. In the background process, we used news documents collected from the Chinatimes website ( http://news.chinatimes.com/) in March 2001 and segmented these documents into grams. Next, the Mandarin syllables for each gram were generated by our syllable generation method. We also encoded the grams by its confusing set which is obtained from acoustic statistic data. Then grams with confusing set information are stored into gram database.The foreground process consists of fault tolerance matching module and selection module. Fault tolerance matching module encodes the phonetic symbol sequence and searches the corresponding grams that have minimum error distance with phonetic symbol sequence in the corpus database. Then corresponding unigrams, bigrams, and trigrams will be searched and passed into selection module. Selection module is constructed on the basis of selection rules to decide the output gram. The binding force information is calculation is done with CKIP word database contains 78,410 Mandarin words and their corresponding syllables. Finally, the output gram will replace the characters of character sequence from the tail.Let N C,' N V denote a confusing set number for consonant/vowel confusing set respectively. We define base syllable to be a syllable without tone. Then a bucket B(N C N V ) will contain those grams having the syllable confusing set N C N V .A base syllable distance is the number of different consonant or vowel confusing set pairs between two base syllables. Suppose a base syllable sequence The system architecture.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "http://news.chinatimes.com/) containing 6,582 articles in March 2001. The testing data are collected from Chinetimes News on the website containing 7,828 articles in April 2001. The system development and testing environment is Windows 98 on P 450mHz PC with 128MG Ram.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table/>",
"type_str": "table",
"text": "",
"html": null,
"num": null
},
"TABREF1": {
"content": "<table><tr><td>(a) Consonants</td><td/><td/><td/><td/><td/></tr><tr><td/><td colspan=\"2\">01 Nil 02</td><td>03</td><td>04</td><td>05</td></tr><tr><td/><td>06</td><td>07</td><td>08</td><td>09</td><td>10</td></tr><tr><td/><td>11</td><td>12</td><td>13</td><td>14</td><td>15</td></tr><tr><td/><td>16</td><td>17</td><td>18</td><td>19</td><td>20</td></tr><tr><td/><td>21</td><td>22</td><td/><td/><td/></tr><tr><td>(b) Vowels</td><td/><td/><td/><td/><td/></tr><tr><td>01</td><td>Nil</td><td>02</td><td>03</td><td>04</td><td>05</td></tr><tr><td>06</td><td/><td>07</td><td>08</td><td>09</td><td>10</td></tr><tr><td>11</td><td/><td>12</td><td>13</td><td>14</td><td>15</td></tr><tr><td>16</td><td/><td>17</td><td>18</td><td>19</td><td>20</td></tr><tr><td>21</td><td/><td>22</td><td>23</td><td>24</td><td>25</td></tr><tr><td>26</td><td/><td>27</td><td>28</td><td>29</td><td>30</td></tr><tr><td>31</td><td/><td>32</td><td>33</td><td>34</td><td>35</td></tr><tr><td>36</td><td/><td>37</td><td>38</td><td>39</td><td/></tr></table>",
"type_str": "table",
"text": "Consonants and vowels.",
"html": null,
"num": null
},
"TABREF2": {
"content": "<table><tr><td colspan=\"3\">Phoneme Result Prob. Result Prob. Result Prob.</td></tr><tr><td>0.75</td><td>0.2</td><td>0.05</td></tr><tr><td>0.6</td><td>0.4</td><td/></tr><tr><td>0.7</td><td>0.3</td><td/></tr></table>",
"type_str": "table",
"text": "An example of acoustic data.",
"html": null,
"num": null
},
"TABREF3": {
"content": "<table><tr><td colspan=\"4\">(a) Confusing sets of consonant</td><td/><td/><td/><td/><td/></tr><tr><td>C(01)</td><td>Nil,</td><td>,</td><td>,</td><td>C(04)</td><td>,</td><td>,</td><td>C(07)</td><td>,</td></tr><tr><td>C(02)</td><td>,</td><td/><td/><td>C(05)</td><td>,</td><td/><td>C(08)</td><td>,</td></tr><tr><td>C(03)</td><td>,</td><td/><td/><td>C(06)</td><td>,</td><td>,</td><td>C(09)</td><td>,</td></tr><tr><td colspan=\"4\">(b) Confusing sets of vowel</td><td/><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"text": "An Example of confusing sets.",
"html": null,
"num": null
},
"TABREF5": {
"content": "<table><tr><td/><td colspan=\"4\">B(50K)+T(11K) B(100K)+T(210K) B(200K)+T(410K) B(400K)+T(1350K)</td></tr><tr><td>No bucket</td><td>0.2</td><td>0.77</td><td>1.69</td><td>15.2</td></tr><tr><td>Bucket 9x15</td><td>0.02</td><td>0.03</td><td>0.03</td><td>0.04</td></tr><tr><td colspan=\"5\">Experiments are also implemented for various tolerance tests. There are 100</td></tr></table>",
"type_str": "table",
"text": "Best case of searching time (seconds)",
"html": null,
"num": null
},
"TABREF6": {
"content": "<table><tr><td>Tolerance</td><td>Rate</td><td>=</td><td>\u2211</td><td>Number</td><td>Sentence Sentence a in characters of Numner of g sin confu ed by ble replac er's sylla Total Number of charact</td><td>sets</td><td>in</td><td>a</td><td>Sentence</td></tr></table>",
"type_str": "table",
"text": "shows the testing results with respect to different the accuracy among fo ur systems.",
"html": null,
"num": null
},
"TABREF7": {
"content": "<table><tr><td>Tolerance rate</td><td>0%</td><td>20%</td><td>30%</td><td>40%</td></tr><tr><td>9x15</td><td>83.94%</td><td>83.08%</td><td>81.46%</td><td>78.23%</td></tr><tr><td>Going6.5</td><td>94.30%</td><td>67.97%</td><td>57.80%</td><td>45.34%</td></tr><tr><td>MS 2002a</td><td>94.87%</td><td>69.30%</td><td>56.18%</td><td>43.44%</td></tr></table>",
"type_str": "table",
"text": "the character accuracy of 100 testing sentences.",
"html": null,
"num": null
},
"TABREF8": {
"content": "<table><tr><td/><td colspan=\"3\">Tolerance at Consonant Tolerance at Vowel Tolerance at Any Position</td></tr><tr><td>Tolerance rate = 20%</td><td>91.77</td><td>92.89</td><td>94.33</td></tr><tr><td>Tolerance rate = 35%</td><td>89.3</td><td>85.76</td><td>86.6</td></tr><tr><td>Tolerance rate = 45%</td><td>86.42</td><td>86.27</td><td>86.22</td></tr></table>",
"type_str": "table",
"text": "Character accuracy rate of bucket 9x15 using 30 training sentences.",
"html": null,
"num": null
},
"TABREF9": {
"content": "<table><tr><td/><td colspan=\"3\">Tolerance at Consonant Tolerance at Vowel Tolerance at Any Position</td></tr><tr><td>Tolerance rate = 20%</td><td>87.34</td><td>89.73</td><td>85.93</td></tr><tr><td>Tolerance rate = 30%</td><td>86.4</td><td>85.78</td><td>85.35</td></tr><tr><td>Tolerance rate = 40%</td><td>85.57</td><td>85.99</td><td>83.38</td></tr></table>",
"type_str": "table",
"text": "Character accuracy rate of bucket 9x15 using 30 testing sentences.",
"html": null,
"num": null
}
}
}
}