{ "paper_id": "O15-2003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:10:18.432969Z" }, "title": "A Study on Chinese Spelling Check Using Confusion Sets and N-gram Statistics", "authors": [ { "first": "Chuan-Jie", "middle": [], "last": "Lin", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Wei-Cheng", "middle": [], "last": "Chu", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper proposes an automatic method to build a Chinese spelling check system. Confusion sets were expanded by using two language resources, Shuowen Jiezi and the Four-Corner codes, which improved the coverage of the confusion sets. Nine scoring functions that utilize the frequency data in the Google Ngram Datasets were proposed, where the idea of smoothing was also adopted. Thresholds were also decided in an automatic way. The final system performed far better than our baseline system in the CSC 2013 Evaluation Task.", "pdf_parse": { "paper_id": "O15-2003", "_pdf_hash": "", "abstract": [ { "text": "This paper proposes an automatic method to build a Chinese spelling check system. Confusion sets were expanded by using two language resources, Shuowen Jiezi and the Four-Corner codes, which improved the coverage of the confusion sets. Nine scoring functions that utilize the frequency data in the Google Ngram Datasets were proposed, where the idea of smoothing was also adopted. Thresholds were also decided in an automatic way. The final system performed far better than our baseline system in the CSC 2013 Evaluation Task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Automatic spelling check is a basic and important technique in building NLP systems. It has been studied since the 1960s, when Blair (1960) and Damerau (1964) made the first attempts to solve the spelling error problem in English. 
Spelling errors in English can be grouped into two classes: non-word spelling errors and real-word spelling errors.", "cite_spans": [ { "start": 120, "end": 132, "text": "Blair (1960)", "ref_id": "BIBREF1" }, { "start": 137, "end": 151, "text": "Damerau (1964)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "A non-word spelling error occurs when the written string cannot be found in a dictionary, such as in \"fly fron* Paris\". The typical approach is finding a list of candidates from a large dictionary by edit distance or phonetic similarity (Mitton, 1996; Deorowicz & Ciura, 2005; Carlson & Fette, 2007; Chen et al., 2007; Mitton, 2008; Whitelaw et al., 2009) .", "cite_spans": [ { "start": 237, "end": 251, "text": "(Mitton, 1996;", "ref_id": "BIBREF17" }, { "start": 252, "end": 276, "text": "Deorowicz & Ciura, 2005;", "ref_id": "BIBREF11" }, { "start": 277, "end": 299, "text": "Carlson & Fette, 2007;", "ref_id": "BIBREF2" }, { "start": 300, "end": 318, "text": "Chen et al., 2007;", "ref_id": "BIBREF7" }, { "start": 319, "end": 332, "text": "Mitton, 2008;", "ref_id": "BIBREF18" }, { "start": 333, "end": 355, "text": "Whitelaw et al., 2009)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "A real-word spelling error occurs when one word is mistakenly used for another word, such as in \"fly form* Paris\". 
Typical approaches include using confusion sets (Golding & Roth, 1999; Carlson et al., 2001) , contextual information (Verberne, 2002; Islam & Inkpen, 2009) , and others (Pirinen & Linden, 2010; Amorim & Zampieri, 2013) .", "cite_spans": [ { "start": 162, "end": 184, "text": "(Golding & Roth, 1999;", "ref_id": "BIBREF12" }, { "start": 185, "end": 206, "text": "Carlson et al., 2001)", "ref_id": "BIBREF4" }, { "start": 232, "end": 248, "text": "(Verberne, 2002;", "ref_id": "BIBREF22" }, { "start": 249, "end": 270, "text": "Islam & Inkpen, 2009)", "ref_id": "BIBREF13" }, { "start": 284, "end": 308, "text": "(Pirinen & Linden, 2010;", "ref_id": "BIBREF21" }, { "start": 309, "end": 333, "text": "Amorim & Zampieri, 2013)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The spelling error problem in Chinese is quite different. Because there is no word delimiter in a Chinese sentence and almost every Chinese character can be considered as a one-character word, most of the errors are real-word errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Although an illegal-character error can happen when writing by hand, i.e. the written symbol is not a legal Chinese character and thus is not collected in a dictionary, such an error cannot happen in a digital document because only legal Chinese characters can be typed or displayed on a computer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "The spelling error problem in Chinese is defined as follows: given a sentence, find the locations of misused characters that result in wrong words, and propose the correct characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "There have been many attempts to solve the spelling error problem in Chinese (Chang, 1994; Zhang et al., 2000; Cucerzan & Brill, 2004; Li et al., 2006; Liu et al., 2008) . Among them, lists of visually and phonologically similar characters play an important role in Chinese spelling check (Liu et al., 2011) .", "cite_spans": [ { "start": 77, "end": 90, "text": "(Chang, 1994;", "ref_id": "BIBREF5" }, { "start": 91, "end": 110, "text": "Zhang et al., 2000;", "ref_id": "BIBREF27" }, { "start": 111, "end": 134, "text": "Cucerzan & Brill, 2004;", "ref_id": "BIBREF9" }, { "start": 135, "end": 151, "text": "Li et al., 2006;", "ref_id": "BIBREF14" }, { "start": 152, "end": 169, "text": "Liu et al., 2008)", "ref_id": "BIBREF15" }, { "start": 289, "end": 307, "text": "(Liu et al., 2011)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Two Chinese spelling check evaluation projects have been held: Chinese Spelling Check Evaluation at SIGHAN Bake-off 2013 (Wu et al., 2013) and CLP-2014 Chinese Spelling Check Evaluation (Yu et al., 2014) , including error detection and error correction subtasks. The tasks are organized based on some research works (Wu et al., 2010; Chen et al., 2011; Liu et al., 2011) . Our baseline system participated in both tasks. 
This paper describes an extended system based on the Chinese Spelling Check (shortened as CSC hereafter) 2013 and 2014 datasets.", "cite_spans": [ { "start": 121, "end": 138, "text": "(Wu et al., 2013)", "ref_id": "BIBREF25" }, { "start": 186, "end": 203, "text": "(Yu et al., 2014)", "ref_id": "BIBREF26" }, { "start": 316, "end": 333, "text": "(Wu et al., 2010;", "ref_id": "BIBREF24" }, { "start": 334, "end": 352, "text": "Chen et al., 2011;", "ref_id": "BIBREF8" }, { "start": 353, "end": 370, "text": "Liu et al., 2011)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "This paper is organized as follows. Section 2 introduces our baseline system developed during the CSC 2013 and 2014 tasks. We sought new resources to expand the confusion sets, as described in Section 3. New scoring functions and threshold decision methods, which use Google Ngram frequencies to estimate the likelihood of passages, are defined in Section 4. Section 5 shows experimental results with discussions, and Section 6 concludes this paper. Figure 1 shows the architecture of our Chinese spelling checking system. A sentence under consideration is first word-segmented. Candidates of spelling errors are replaced by similar characters one by one. The newly created sentences are word-segmented again. They are sorted according to sentence generation probabilities measured by a word or POS bigram model. If a replacement results in a better sentence, a spelling error is reported.", "cite_spans": [], "ref_spans": [ { "start": 445, "end": 453, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In CSC tasks, the set of similar characters is called a confusion set. 
More information about confusion sets is given in Section 2.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Architecture", "sec_num": "2.1" }, { "text": "There are two kinds of spelling-error candidates in our system: one-character words and two-character words. Their replacement procedures are different, as described in Sections 2.3 and 2.4. Section 2.5 introduces two rules for filtering out unlikely replacements. N-gram probability models in our baseline system are described in Section 2.6. The procedure to decide locations of errors is given in Section 2.7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Study on Chinese Spelling Check Using Confusion Sets and N-gram Statistics", "sec_num": null }, { "text": "In the SIGHAN7 Bake-off 2013 Chinese Spelling Check task, the organizers provided six kinds of confusion sets: 4 sets of phonologically similar characters and 2 sets of visually similar characters. The four sets of phonologically similar characters include characters with the same pronunciation in the same tone (\u540c\u97f3\u540c\u8abf, shortened as SPST hereafter), characters with the same pronunciation but in different tones (\u540c\u97f3\u7570\u8abf, shortened as SPDT hereafter), characters with similar pronunciations in the same tone (\u8fd1\u97f3\u540c\u8abf, shortened as DPST hereafter), and characters with similar pronunciations but in different tones (\u8fd1\u97f3\u7570\u8abf, shortened as DPDT hereafter). For example, phonologically similar characters to the character \u60c5 (whose pronunciation is [qing2] and meaning is 'feeling') are: There are two confusion sets of visually-similar characters. The first one is the set of characters with the same radical (\u90e8\u9996) and the same number of strokes (\u7b46\u5283) (\u540c\u90e8\u9996\u540c\u7b46\u756b\u6578, shortened as RStrk hereafter). For example, the radical of the character \u60c5 is \u5fc3 (shown as \u5fc4 inside the character) with 11 strokes. 
Characters belonging to the radical \u5fc3 with 11 strokes are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion Sets", "sec_num": "2.2" }, { "text": "RStrk: \u60cb\u60a8\u6089\u60c7\u60c6\u60a0\u60a3\u60e6\u60da\u60bc\u60bd\u60d8\u60b8\u60df\u60dc\u60bb\u60b4\u60b5\u607f\u60d5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion Sets", "sec_num": "2.2" }, { "text": "The second visually-similar-character set collects characters with similar Cangjie codes (\u5009\u9821\u78bc, shortened as CJie hereafter). Cangjie is a well-known code map of Chinese characters. Each Chinese character is encoded by a combination of at most 5 codes representing basic strokes in its visual structure. Characters that have similar Cangjie codes are likely visually similar. Liu et al. (2011) considered the information of surface structure and stroke similarity to create this confusion set. For example, the Cangjie code of the character \u60c5 ([qing2], 'feeling') is PQMB, where \"P \u5fc4\" denotes its radical part (\u5fc4) and \"QMB \u30ad\u4e00\u6708\" denotes its body part (\u9752). So its similar characters are: CJie: ", "cite_spans": [ { "start": 372, "end": 389, "text": "Liu et al. (2011)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Confusion Sets", "sec_num": "2.2" }, { "text": "\u6e05[EQMB] \u6674[AQMB] \u5029", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion Sets", "sec_num": "2.2" }, { "text": "After doing word segmentation on the original sentence, every one-character word is considered as a candidate where an error may occur. These candidates are replaced one by one by similar characters in their confusion sets to see if a new sentence is more acceptable. \"\u537b\", \"\u7279\" and \"\u7e8c\" are one-character words so they are candidates of spelling errors. The confusion set of the character \"\u537b\" includes \u8173\u6b32\u53e9\u5378... 
and the confusion set of the character \"\u7279\" includes \u6301\u6642\u6043\u5cd9\u4f8d... Replacing these one-character words with similar characters one by one will produce the following new sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "One-Character Word Replacement", "sec_num": "2.3" }, { "text": "Our observation of the training sets finds that some errors occur in two-character words, which means that a string containing an incorrect character is also a legal word. Examples are \"\u8eab\u624b\" ([shen1-shou3], 'skills') versus \"\u751f\u624b\" ([sheng1-shou3], 'amateur'), and \"\u4eba\u54e1\" ([ren2-yuan2], 'member') vs. \"\u4eba\u7de3\" ([ren2-yuan2], 'relation').", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Two-Character Word Replacement", "sec_num": "2.4" }, { "text": "To handle such kinds of spelling errors, we created confusion sets for all known words by the following method. The resource for creating the word-level confusion sets is the Academia Sinica Balanced Corpus (ASBC for short hereafter, cf. Chen et al., 1996) .", "cite_spans": [ { "start": 229, "end": 247, "text": "Chen et al., 1996)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Two-Character Word Replacement", "sec_num": "2.4" }, { "text": "For each word appearing in ASBC, each character in the word is substituted with its similar characters one by one. If a newly created word also appears in ASBC, it is collected into the confusion set of this word. Take the word \"\u4eba\u54e1\" as an example. After replacing \"\u4eba\" or \"\u54e1\" with their similar characters, new strings \u4ec1\u54e1, \u58ec\u54e1, \u2026, \u4eba\u7de3, and \u4eba\u97fb are looked up in ASBC. 
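The dictionary-filtered substitution just described can be sketched in a few lines of Python. This is a toy illustration rather than the authors' implementation; the names lexicon and char_confusion are hypothetical stand-ins for the ASBC word list and the character-level confusion sets.

```python
# Build word-level confusion sets: substitute each character of every
# known two-character word with its similar characters, and keep only
# the substitutions that are themselves known words.
def build_word_confusion(lexicon, char_confusion):
    confusion = {}
    for word in lexicon:
        if len(word) != 2:
            continue
        candidates = set()
        for i, ch in enumerate(word):
            for similar in char_confusion.get(ch, ()):
                new_word = word[:i] + similar + word[i + 1:]
                if new_word != word and new_word in lexicon:
                    candidates.add(new_word)
        if candidates:
            confusion[word] = candidates
    return confusion
```

For instance, with a toy lexicon containing 人員, 人緣, and 人猿, and a character confusion entry mapping 員 to 緣, 猿, and 韻, the word 人員 receives the confusion set {人緣, 人猿}; 人韻 is dropped because it is not a known word.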
Among them, only \u4eba\u7de3, \u4eba\u733f, \u4eba\u6587, and \u4eba\u4fd1 are legal words and are thus collected into \u4eba\u54e1's confusion set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Two-Character Word Replacement", "sec_num": "2.4" }, { "text": "For each two-character word, if it has a confusion set, similar words in the set are substituted for the original word one by one to see if a new sentence is more acceptable. Consider an example where \"\u6559\u5ba4\", \"\u53ea\u8981\", and \"\u4eba\u54e1\" are multi-character words with confusion sets. By replacing \u6559\u5ba4 with \u6559\u58eb, \u6559\u5e2b\u2026, replacing \u53ea\u8981 with \u7947\u8981, \u53ea\u6709, and replacing \u4eba\u54e1 with \u4eba\u7de3, \u4eba\u733f\u2026, the following new sentences will be generated. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Two-Character Word Replacement", "sec_num": "2.4" }, { "text": "Two filter rules are applied before error detection in order to discard apparently incorrect replacements. The rules are defined as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering Rules", "sec_num": "2.5" }, { "text": "If a replacement results in a person name, discard it. Our word segmentation system performs named entity recognition at the same time. If the replacing similar character can be considered as a Chinese family name, the subsequent characters might be merged into a person name. As most of the spelling errors do not occur in personal names, we simply ignore these replacements. Take C1-1701-2 as an example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule 1: No error in person names", "sec_num": null }, { "text": "...\u6bcf \u4f4d \u7522 \u9f61 \u5a66\u5973...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule 1: No error in person names", "sec_num": null }, { "text": "\"\u9b4f\" is phonologically similar to \"\u4f4d\" and is a Chinese family name. 
The newly created sentence is segmented as ...\u6bcf \u9b4f\u7522\u9f61(PERSON) \u5a66\u5973...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(every QF pregnancy age woman 'every woman in the age of pregnancy')", "sec_num": null }, { "text": "where \"\u9b4f\u7522\u9f61\" is recognized as a person name so this replacement is discarded.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(every Chan-Ling Wei woman: nonsense)", "sec_num": null }, { "text": "For the one-character replacement, if the replaced (original) character is a personal pronoun (\u4f60 'you' \u6211 'I' \u4ed6 'he/she') or a number from 1 to 10 (\u4e00\u4e8c\u4e09\u56db\u4e94\u516d\u4e03\u516b\u4e5d\u5341), discard the replacement. We assume that a writer seldom misspells such words. Take B1-0122-2 as an example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule 2: Stopword filtering", "sec_num": null }, { "text": "...\u6211 \u6703 \u5728 \u4e8c \u865f \u51fa\u53e3 \u7b49 \u4f60...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule 2: Stopword filtering", "sec_num": null }, { "text": "Although \"\u4e8c\" is a one-character word, it is in our stoplist; therefore, no replacement is performed on this word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(I will at two number exit wait you 'I will wait for you at Exit No. 2')", "sec_num": null }, { "text": "A basic hypothesis is that a correct replacement will generate a \"better\" sentence which has a higher probability than the original one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-Gram Probabilities", "sec_num": "2.6" }, { "text": "The likelihood of a passage being understandable can be estimated as a sentence generation probability by language models. We tried smoothed word-unigram, word-bigram, and POS-bigram models in our baseline system. The training corpus used to build language models is ASBC. 
As usual, we use log probabilities instead.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-Gram Probabilities", "sec_num": "2.6" }, { "text": "Besides applying rules in which the probabilities were compared directly, we also treated them as features to train an SVM classifier that guessed whether a replacement was correct or not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-Gram Probabilities", "sec_num": "2.6" }, { "text": "In our system, error detection and correction greatly rely on sentence generation probabilities. Therefore, all the newly created sentences should also be word-segmented. If a new sentence results in a better word segmentation, it is very likely that the original character is misused and this replacement is correct. But if no replacement is better than the original sentence, it is reported as \"no error\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Detection", "sec_num": "2.7" }, { "text": "The details of our error detection algorithm are given here. The original sentence is first divided into several sub-sentences by six sentence-delimiting punctuation marks: comma, period, exclamation mark, question mark, colon, and semicolon. The following steps are performed on each sub-sentence, referred to as the original passage hereafter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Detection", "sec_num": "2.7" }, { "text": "Divide the original sentence into several passages by the sentence-delimiting punctuation marks 2. Perform word segmentation on the original passages 3. 
Measure the likelihood of the original passages by language models", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "For each one-character word in each original passage", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": "(1) Skip the word if it is a person name or a stopword (filtering rules)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": "(2) Replace the word with its similar characters in the confusion sets to generate un-segmented passages, one new passage for one similar character", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": "(3) Perform word segmentation on the new passages", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": "For each two-character word in each original passage", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.", "sec_num": null }, { "text": "(1) If the word appears in the two-character confusion set, replace the word with its similar words in the two-character confusion sets to generate un-segmented passages, one new passage for one similar word If no new passage has a higher score than its original passage, report \"no error\" in this original passage 8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.", "sec_num": null }, { "text": "Consider only the new passage with the highest score", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.", "sec_num": null }, { "text": "(1) If its score compared to the original one is not higher than a pre-defined threshold, report \"no error\" in this original passage", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.", "sec_num": null }, { "text": "(2) Otherwise, report the location and the similar character (or locations of similar characters in a two-syllable similar word) of the 
replacement which generates this new passage", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.", "sec_num": null }, { "text": "In our experience, the confusion sets provided by the task organizers do not cover all the errors. The error coverage of the confusion sets is depicted in Table 1 , where TR means training set and TS means test set. The first 9 rows show the coverage of each confusion set, where set 0 to set 5 have been explained in Section 2.2. We can see that the SPST confusion set alone covers 70% of the errors in CSC 2013 datasets but only about half of the errors in CSC 2014 datasets. The second important confusion set is CJie, which covers 30% to 40% of the errors.", "cite_spans": [], "ref_spans": [ { "start": 155, "end": 162, "text": "Table 1", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Confusion Set Expansion", "sec_num": "3." }, { "text": "The last 10 rows of Table 1 show the coverage of the unions of confusion sets. The union of set 0~5 covers 94.59% of the errors. The union of set 0~3+5 has the same coverage as the union of set 0~5, which suggests that RStrk can be ignored.", "cite_spans": [], "ref_spans": [ { "start": 20, "end": 27, "text": "Table 1", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Confusion Set Expansion", "sec_num": "3." }, { "text": "In order to achieve better coverage, we used two resources to expand the confusion sets. One is Shuowen Jiezi and the other is the Four-Corner Encoding System. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion Set Expansion", "sec_num": "3." }, { "text": "Shuowen Jiezi 1 (\u8aaa\u6587\u89e3\u5b57) is a dictionary of Chinese characters. Xu Shen (\u8a31\u614e), author of this dictionary, analyzed the characters according to the six lexicographical categories (\u516d\u66f8). 
One major category is phono-semantic compound characters (\u5f62\u8072), which were created by combining a radical (\u5f62\u7b26) with a phonetic component (\u8072\u7b26). Characters with the same phonetic component were collected to expand the confusion sets, because they are by definition phonologically and visually similar. For example, the following characters share the same phonetic component \"\u5bfa\" ([si4] , 'temple') and thus become confusion candidates (their actual pronunciations are given in brackets):", "cite_spans": [ { "start": 549, "end": 555, "text": "([si4]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Confusion Set from Shuowen Jiezi", "sec_num": "3.1" }, { "text": "SWen: \u4f8d[si4]\u6301[chi2]\u6043[shi4]\u7279[te4]\u6642[shi2]...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion Set from Shuowen Jiezi", "sec_num": "3.1" }, { "text": "It happens that a phonetic component might not be atomic, which means it also has its own phonetic component. For example, \u6f54's phonetic component is \u7d5c, but \u7d5c's phonetic component is \u4e2f. We tried two creation methods. The first set was created by collecting characters with the same phonetic component (referred to as SWen1), and the second one was the closure of SWen1 (referred to as SWen2). ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion Set from Shuowen Jiezi", "sec_num": "3.1" }, { "text": "The Four-Corner System 2 (\u56db\u89d2\u865f\u78bc) is an encoding system for Chinese characters. Digits 0~9 represent some typical shapes in character strokes. A Chinese character is encoded into 4 digits which represent the shapes found in its 4 corners. We collected characters with the same Four-Corner code to expand the confusion sets, because they are by definition visually similar. 
For example, the following characters are all encoded as 6080 in the Four-Corner System (shortened as Cor4 hereafter):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion Set from the Four-Corner System", "sec_num": "3.2" }, { "text": "Cor4: \u53ea\u56da\u8c9d\u8db3\u7085\u662f\u54e1\u7570\u8cb7\u5713\u571a Set 6 in Table 1 represents Cor4. Unfortunately, unions including Cor4 do not cover more errors than set 0~3+5+7. It is hard to say whether the Four-Corner System is helpful or not.", "cite_spans": [], "ref_spans": [ { "start": 27, "end": 34, "text": "Table 1", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Confusion Set from the Four-Corner System", "sec_num": "3.2" }, { "text": "To make a larger two-character confusion set, unigrams in the Chinese Google Ngram dataset were used instead of ASBC. However, some issues had to be handled before dataset creation; they are discussed in Section 3.3.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Two-Character Confusion Set Expansion", "sec_num": "3.3" }, { "text": "Chinese Web 5-gram 3 is a dataset released by Google Inc., containing unigrams to 5-grams collected from webpages on the World Wide Web. Frequencies of these ngrams are also provided. Some examples from the Chinese Web 5-gram dataset are given here: There are several issues with regard to using the Chinese Web 5-gram dataset in this task. First, the Chinese Web 5-gram dataset includes both Traditional and Simplified Chinese ngrams, but our experimental datasets are written in Traditional Chinese. To make full use of this dataset, we decided to translate every Simplified Chinese word into Traditional Chinese. Our translation method was simply table-lookup on the Simplified-to-Traditional Chinese word mappings provided by Wikipedia 4 . 
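The table-lookup conversion can be sketched as follows. The mapping table here is a toy stand-in for the Wikipedia word mappings, and merge_frequencies shows how formerly distinct ngrams that coincide after conversion have their frequency counts combined into a single entry.

```python
# Convert ngram tokens from Simplified to Traditional Chinese by table
# lookup; tokens missing from the table are kept unchanged, so the
# conversion is only as good as the mapping table.
def to_traditional(tokens, s2t_table):
    return tuple(s2t_table.get(tok, tok) for tok in tokens)

# Ngrams that become identical after conversion are merged into one
# entry whose frequency is the sum of the original counts.
def merge_frequencies(ngram_freqs, s2t_table):
    merged = {}
    for tokens, freq in ngram_freqs.items():
        key = to_traditional(tokens, s2t_table)
        merged[key] = merged.get(key, 0) + freq
    return merged
```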
Note that the translation may not be perfect.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Google Ngram Dataset Preprocessing", "sec_num": "3.3.1" }, { "text": "After translation, some ngrams become identical, such as \u96fb\u8996 and \u7535\u89c6 ('television') and all the Chinese Google Ngrams shown in the previous examples. Identical words are combined into one entry and their frequencies are merged.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Google Ngram Dataset Preprocessing", "sec_num": "3.3.1" }, { "text": "The two-character confusion set in our baseline system was trained from ASBC. We tried to use the unigram set in the Chinese Web 5-gram dataset to create a larger two-character confusion set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion Set Expansion by Google Ngram", "sec_num": "3.3.2" }, { "text": "The procedure is the same as in the baseline system development: collect all the two-character words in the Chinese Web unigram set, replace each character by its similar characters, and collect all the new strings that also appear in the Chinese Web unigram set as the original word's two-character confusion set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion Set Expansion by Google Ngram", "sec_num": "3.3.2" }, { "text": "In the CSC 2014 training data, there are cases in which both characters in a two-character word are misused, such as \u4e5f\u662f ([ye3-shi4], 'also') vs. \u591c\u5e02 ([ye4-shi4], 'night market'). We also performed this kind of replacement and collected legal similar words into the two-character confusion set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion Set Expansion by Google Ngram", "sec_num": "3.3.2" }, { "text": "In the CSC tasks held in 2013 and 2014, we tried a bigram probability model to predict errors in sentences. 
The language generation model was trained from the Academia Sinica Balanced Corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Passage Likelihood Scoring", "sec_num": "4." }, { "text": "We found that the volume and vocabulary of ASBC were not large enough, so we turned to the Chinese Google Ngram dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion Sets and N-gram Statistics", "sec_num": null }, { "text": "Given a sentence (word-segmented, with or without errors) S = {w_1, w_2, \u2026, w_m}, let Gram(S, n) be the set of all n-grams contained in the sentence S, i.e. Gram(S, n) = {(w_i, w_{i+1}, \u2026, w_{i+n-1}) | 1 \u2264 i \u2264 m-n+1}. We define the Google Ngram Frequency gnf(g) of an n-gram to be its frequency count provided in the Chinese Web 5-gram dataset. If it does not appear in that dataset, its value is defined as 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ngram Scoring Functions", "sec_num": "4.1" }, { "text": "Five scoring functions GS_*(S) were used to measure the likelihood of a sentence. Equation 1 is the definition of the raw frequency score GS_raw(S), which sums up the frequencies of all n-grams. Equations 2 and 3 give the definitions of the log frequency scores GS_logn(S, n) and GS_log(S), which sum up the logarithms of the frequencies of all n-grams. Because large frequencies tend to dominate the scores and lead to bias, logarithmic values can provide a more moderate scoring. 
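Under these definitions, the raw and log frequency scores of Equations 1 to 3 can be sketched as follows; gnf is assumed here to be a lookup table from ngram tuples to Google Ngram frequency counts, with unseen ngrams treated as 0.

```python
import math

def grams(words, n):
    # All n-grams of the word-segmented passage, as tuples.
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

def gs_raw(words, gnf):
    # Equation 1: sum of the raw frequencies of all 2- to 5-grams.
    return sum(gnf.get(g, 0) for n in range(2, 6) for g in grams(words, n))

def gs_log(words, gnf):
    # Equations 2 and 3: sum of log frequencies over all 2- to 5-grams;
    # ngrams unseen in the dataset are skipped (their log score is 0).
    total = 0.0
    for n in range(2, 6):
        for g in grams(words, n):
            f = gnf.get(g, 0)
            if f > 0:
                total += math.log(f)
    return total
```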
Note that we skip the ngrams which do not appear in the Chinese Web 5-gram dataset when calculating the log frequency score (in other words, their log scores are set to 0).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ngram Scoring Functions", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "GS_{raw}(S) = \\sum_{n=2}^{5} \\sum_{g \\in Gram(S,n)} gnf(g) \\quad (1) \\qquad GS_{logn}(S, n) = \\sum_{g \\in Gram(S,n)} \\log gnf(g) \\quad (2) \\qquad GS_{log}(S) = \\sum_{n=2}^{5} GS_{logn}(S, n)", "eq_num": "(3)" } ], "section": "Ngram Scoring Functions", "sec_num": "4.1" }, { "text": "Matching of a higher-order ngram is more desirable than matching of a lower-order one. To favor higher-order ngrams, we define the third scoring function, the length-weighted log frequency score GS_len(S), which multiplies the log frequency score by n.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ngram Scoring Functions", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "GS_{len}(S) = \\sum_{n=2}^{5} n \\times \\sum_{g \\in Gram(S,n)} \\log gnf(g)", "eq_num": "(4)" } ], "section": "Ngram Scoring Functions", "sec_num": "4.1" }, { "text": "We further tried two average scores, where the scores of the same n are averaged before summation. 
Equations 5 and 6 illustrate the logarithmic and length-weighted versions, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ngram Scoring Functions", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "GS_{logav}(S) = \\sum_{n=2}^{5} \\frac{1}{|Gram(S,n)|} \\left( \\sum_{g \\in Gram(S,n)} \\log gnf(g) \\right) \\quad (5) \\qquad GS_{lenav}(S) = \\sum_{n=2}^{5} \\frac{n}{|Gram(S,n)|} \\left( \\sum_{g \\in Gram(S,n)} \\log gnf(g) \\right)", "eq_num": "(6)" } ], "section": "Chuan-Jie Lin", "sec_num": null }, { "text": "We also tried a smoothing-like function to handle zero frequencies. If an n-gram does not appear in the Chinese Web 5-gram dataset, its log score is set to a negative constant ε. The smoothed log frequency score gnf'(g) is defined in Equation 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chuan-Jie Lin", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "gnf'(g) = \\begin{cases} \\varepsilon & \\text{if } gnf(g) = 0 \\\\ \\log gnf(g) & \\text{otherwise} \\end{cases}", "eq_num": "(7)" } ], "section": "Chuan-Jie Lin", "sec_num": null }, { "text": "Figure 1 demonstrates the detailed information and the steps of computing the values of two of the scoring functions, the log frequency score and the length-weighted log frequency score, with and without smoothing, using the first passage of B1-0143-1 as an example. 
As we can see, the smoothed length-weighted log frequency score can successfully identify the correct answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chuan-Jie Lin", "sec_num": null }, { "text": "A replacement is considered to be \"correct\" if the score of the generated new passage is higher than the original's to a certain degree. As described in Section 2.7, a pre-defined threshold is used to ensure that the new passage is far better than the original passage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Threshold Learning", "sec_num": "4.2" }, { "text": "In CSC 2013 and 2014, this threshold was set by consulting classification rules learned by decision tree. In this paper, we try to observe the efficiency of thresholds in a more systematical way as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Threshold Learning", "sec_num": "4.2" }, { "text": "Two kinds of thresholds were considered. The first one is for the score difference of the scores of the new passage and the original passage. Because the new passage must have a higher score than the original one, this value is always positive. The second one is for the ratio of the score difference to the original passage's score. Because scores may be negative, we take its absolute value instead, i.e. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Threshold Learning", "sec_num": "4.2" }, { "text": "(Rpl1) = (log(gnf(\u4f60 \u9084)) + log(gnf(\u9084 \u8a18\u5f97))+\u2026+ log(gnf(\u8ab2 \u55ce)) + (log(gnf(\u4f60 \u9084 \u8a18\u5f97)) +\u2026+ log(gnf(\u7684 \u8ab2 \u55ce))+ (log(gnf(\u4f60 \u9084 \u8a18\u5f97 \u6211\u5011)) +\u2026+ log(gnf(\u6a23 \u7684 \u8ab2 \u55ce)) + (", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Threshold Learning", "sec_num": "4.2" }, { "text": "A threshold is trained in the steps as follows. 
Under a scoring function, all replacements are sorted according to the score difference (or ratio). Largest values are ranked higher. Since each replacement is known to be \"correct\" or \"incorrect\", precision, recall, and F-score at each rank can be decided. Choose the difference (or ratio) which achieves the highest F-score as the threshold.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion Sets and N-gram Statistics", "sec_num": null }, { "text": "Best F-scores under different scoring functions, smoothing strategies, and training data are shown in Table 2 (a) and 2(b), where the first columns represent scoring functions introduced in Section 4.1. Meanings of labels in the second rows are as follows:", "cite_spans": [], "ref_spans": [ { "start": 102, "end": 109, "text": "Table 2", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Confusion Sets and N-gram Statistics", "sec_num": null }, { "text": "OL: no smoothing, at most one error report at one location OP: no smoothing, at most one error report at one passage ML: smoothing, at most one error report at one location MP: smoothing, at most one error report at one passage As we can see in Table 2 , smoothing and logarithm did improve the performance. Using thresholds of score differences was better than using thresholds of ratios. Among the 9 scoring functions, length-weighted log frequency score GS len outperformed other functions. However, averaging at each n level harmed the performance.", "cite_spans": [], "ref_spans": [ { "start": 245, "end": 252, "text": "Table 2", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Confusion Sets and N-gram Statistics", "sec_num": null }, { "text": "To our surprise, bigram model GS logn (2) was not very useful. However, 4-gram model GS logn (4) alone could achieve pretty good performance. Moreover, the characteristics of CSC 2013 training set and CSC 2014 training set are quite different. 
F-cores on CSC 2014 data sets were much lower.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion Sets and N-gram Statistics", "sec_num": null }, { "text": "Four benchmarks are used to evaluate our systems: the training set and test set in Chinese Spelling Check Evaluation at SIGHAN Bake-off 2013 (Wu et al., 2013) , and the training set and test set in CLP-2014 Chinese Spelling Check Evaluation (Yu et al., 2014) . They are referred to as CSC 2013 and 2014 datasets in this paper. Number of topics and errors containing in these datasets are listed in Table 3 . ", "cite_spans": [ { "start": 141, "end": 158, "text": "(Wu et al., 2013)", "ref_id": "BIBREF25" }, { "start": 241, "end": 258, "text": "(Yu et al., 2014)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 398, "end": 405, "text": "Table 3", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Datasets", "sec_num": "5.1" }, { "text": "There are two subtasks in CSC Task: error detection and error correction. Error detection subtask evaluates the correctness of detected error locations. Error correction subtask evaluates the correctness of locations and proposed corrections. Note that the unit of \"correctness\" is topic. It only counts the topics whose errors are all successfully corrected with no false alarm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.2" }, { "text": "All combinations of system settings have been evaluated on all the datasets. Table 4 shows the runs achieving the best F1-scores according to each subtask, dataset, and scoring functions. The labels of system settings are defined as follows (cf. 
Section 3.2):", "cite_spans": [], "ref_spans": [ { "start": 77, "end": 84, "text": "Table 4", "ref_id": "TABREF12" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "5.3" }, { "text": "Ranking and threshold setting diff: ranking by the score difference ratio: ranking by the score ratio Almost all results support similar conclusions as we made in Section 4.2: the best system uses the smoothed length-weighted log frequency score, ranking by score differences without threshold (GS len ,diff,M,N). Thresholds are not helpful except on CSC 2014 test set. By observing the text in the benchmarks, it seems that the sentences in CSC 2014 datasets were written by non-Chinese-native speakers. It means that (1) even the corrected sentences may not be natural enough, so ngram model cannot predict successfully; (2) some errors are so common that appear in many sentences, so hand-crafted rules may be more successful.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "5.3" }, { "text": "In this paper, we proposed two resources to expand confusion sets which improved the error coverage up to 97.17% in CSC training set. We also proposed a method to build a larger two-character confusion set. Nine scoring functions using Google Ngram frequency information were also introduced. Among them, length-weighted log frequency score greatly improved our baseline system on CSC 2013 datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "Although that the methods proposed in this paper do not perform well enough on CSC 2014 datasets, we still think that our method can cooperate with hand-crafted rules (as top CSC systems did in CSC 2014), which becomes our future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." 
}, { "text": "\u56db\u89d2\u865f\u78bc\u5217\u8868 http://code.web.idv.hk/misc/four.php 3 https://catalog.ldc.upenn.edu/LDC2010T06", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://zh.wikipedia.org/wiki/Wikipedia:\u7e41\u7c21\u8655\u7406", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Effective Spell Checking Methods Using Clustering Algorithms. Recent Advances in Natural Language Processing", "authors": [ { "first": "R", "middle": [ "C" ], "last": "De Amorim", "suffix": "" }, { "first": "M", "middle": [], "last": "Zampieri", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "7--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "de Amorim, R.C., & Zampieri, M. (2013). Effective Spell Checking Methods Using Clustering Algorithms. Recent Advances in Natural Language Processing, 7-13.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A program for correcting spelling errors", "authors": [ { "first": "C", "middle": [], "last": "Blair", "suffix": "" } ], "year": 1960, "venue": "Information and Control", "volume": "3", "issue": "", "pages": "60--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Blair, C. (1960). A program for correcting spelling errors. Information and Control, 3, 60-67.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Memory-Based Context-Sensitive Spelling Correction at Web Scale", "authors": [ { "first": "A", "middle": [], "last": "Carlson", "suffix": "" }, { "first": "I", "middle": [], "last": "Fette", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 6th International Conference on Machine Learning and Applications", "volume": "", "issue": "", "pages": "166--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlson, A., & Fette, I. (2007). 
Memory-Based Context-Sensitive Spelling Correction at Web Scale. In Proceedings of the 6th International Conference on Machine Learning and Applications, 166-171.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Scaling up context-sensitive text correction", "authors": [ { "first": "A", "middle": [], "last": "Carlson", "suffix": "" }, { "first": "J", "middle": [], "last": "Rosen", "suffix": "" }, { "first": "D", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 13th Innovative Applications of Artificial Intelligence Conference", "volume": "", "issue": "", "pages": "45--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlson, A., Rosen, J., & Roth, D. (2001). Scaling up context-sensitive text correction. In Proceedings of the 13th Innovative Applications of Artificial Intelligence Conference, 45-50.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A pilot study on automatic chinese spelling error correction", "authors": [ { "first": "C", "middle": [ "H" ], "last": "Chang", "suffix": "" } ], "year": 1994, "venue": "Journal of Chinese Language and Computing", "volume": "4", "issue": "", "pages": "143--149", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang, C.H. (1994). A pilot study on automatic chinese spelling error correction. 
Journal of Chinese Language and Computing, 4, 143-149.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Sinica Corpus: Design Methodology for Balanced Corpora", "authors": [ { "first": "K", "middle": [ "J" ], "last": "Chen", "suffix": "" }, { "first": "C", "middle": [ "R" ], "last": "Huang", "suffix": "" }, { "first": "L", "middle": [ "P" ], "last": "Chang", "suffix": "" }, { "first": "H", "middle": [ "L" ], "last": "Hsu", "suffix": "" } ], "year": 1996, "venue": "Proceeding of the 11th Pacific Asia Conference on Language, Information and Computation", "volume": "", "issue": "", "pages": "167--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, K.J., Huang, C.R., Chang, L.P., & Hsu, H.L. (1996). Sinica Corpus: Design Methodology for Balanced Corpora. In Proceeding of the 11th Pacific Asia Conference on Language, Information and Computation, 167-176.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Improving Query Spelling Correction Using Web Search Results", "authors": [ { "first": "Q", "middle": [], "last": "Chen", "suffix": "" }, { "first": "M", "middle": [], "last": "Li", "suffix": "" }, { "first": "M", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 2007 Conference on Empirical Methods in Natural Language (EMNLP-2007)", "volume": "", "issue": "", "pages": "181--189", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, Q., Li, M., & Zhou, M. (2007). Improving Query Spelling Correction Using Web Search Results. 
In Proceedings of the 2007 Conference on Empirical Methods in Natural Language (EMNLP-2007), 181-189.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Improve the detection of improperly used Chinese characters in students' essays with error model", "authors": [ { "first": "Y", "middle": [ "Z" ], "last": "Chen", "suffix": "" }, { "first": "S", "middle": [ "H" ], "last": "Wu", "suffix": "" }, { "first": "P", "middle": [ "C" ], "last": "Yang", "suffix": "" }, { "first": "T", "middle": [], "last": "Ku", "suffix": "" }, { "first": "G", "middle": [ "D" ], "last": "Chen", "suffix": "" } ], "year": 2011, "venue": "International Journal of Continuing Engineering Education and Life-Long Learning", "volume": "21", "issue": "1", "pages": "103--116", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, Y.Z., Wu, S.H., Yang, P.C., Ku, T., & Chen, G.D. (2011). Improve the detection of improperly used Chinese characters in students' essays with error model. International Journal of Continuing Engineering Education and Life-Long Learning, 21(1), 103-116.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Spelling correction as an iterative process that exploits the collective knowledge of web users", "authors": [ { "first": "S", "middle": [], "last": "Cucerzan", "suffix": "" }, { "first": "E", "middle": [], "last": "Brill", "suffix": "" } ], "year": 2004, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "293--300", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cucerzan, S., & Brill, E. (2004). Spelling correction as an iterative process that exploits the collective knowledge of web users. 
In Proceedings of EMNLP, 293-300.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A technique for computer detection and correction of spelling errors", "authors": [ { "first": "F", "middle": [], "last": "Damerau", "suffix": "" } ], "year": 1964, "venue": "Communications of the ACM", "volume": "7", "issue": "", "pages": "171--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Damerau, F. (1964). A technique for computer detection and correction of spelling errors. Communications of the ACM, 7, 171-176.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Correcting Spelling Errors by Modelling Their Causes", "authors": [ { "first": "S", "middle": [], "last": "Deorowicz", "suffix": "" }, { "first": "M", "middle": [ "G" ], "last": "Ciura", "suffix": "" } ], "year": 2005, "venue": "International Journal of Applied Mathematics and Computer Science", "volume": "15", "issue": "2", "pages": "275--285", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deorowicz, S., & Ciura, M. G. (2005). Correcting Spelling Errors by Modelling Their Causes. International Journal of Applied Mathematics and Computer Science, 15(2), 275-285.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A winnow-based approach to context-sensitive spelling correction", "authors": [ { "first": "A", "middle": [], "last": "Golding", "suffix": "" }, { "first": "D", "middle": [], "last": "Roth", "suffix": "" } ], "year": 1999, "venue": "Machine Learning", "volume": "34", "issue": "", "pages": "107--130", "other_ids": {}, "num": null, "urls": [], "raw_text": "Golding, A., & Roth, D. (1999). A winnow-based approach to context-sensitive spelling correction. 
Machine Learning, 34(1-3), 107-130.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Real-word spelling correction using googleweb 1t 3-grams", "authors": [ { "first": "A", "middle": [], "last": "Islam", "suffix": "" }, { "first": "D", "middle": [], "last": "Inkpen", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1241--1249", "other_ids": {}, "num": null, "urls": [], "raw_text": "Islam, A., & Inkpen, D. (2009). Real-word spelling correction using googleweb 1t 3-grams. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP-2009), 1241-1249.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Exploring distributional similarity based models for query spelling correction", "authors": [ { "first": "M", "middle": [], "last": "Li", "suffix": "" }, { "first": "Y", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "M", "middle": [ "H" ], "last": "Zhu", "suffix": "" }, { "first": "M", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "1025--1032", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, M., Zhang, Y., Zhu, M.H., & Zhou, M. (2006). Exploring distributional similarity based models for query spelling correction. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, 1025-1032.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Professor or screaming beast? 
Detecting words misuse in Chinese", "authors": [ { "first": "W", "middle": [], "last": "Liu", "suffix": "" }, { "first": "B", "middle": [], "last": "Allison", "suffix": "" }, { "first": "L", "middle": [], "last": "Guthrie", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu, W., Allison, B., & Guthrie, L. (2008). Professor or screaming beast? Detecting words misuse in Chinese. The 6th edition of the Language Resources and Evaluation Conference.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Visually and phonologically similar characters in incorrect Chinese words: Analyses, identification, and applications", "authors": [ { "first": "C", "middle": [ "L" ], "last": "Liu", "suffix": "" }, { "first": "M", "middle": [ "H" ], "last": "Lai", "suffix": "" }, { "first": "K", "middle": [ "W" ], "last": "Tien", "suffix": "" }, { "first": "Y", "middle": [ "H" ], "last": "Chuang", "suffix": "" }, { "first": "S", "middle": [ "H" ], "last": "Wu", "suffix": "" }, { "first": "C", "middle": [ "Y" ], "last": "Lee", "suffix": "" } ], "year": 2011, "venue": "ACM Transactions on Asian Language Information Processing", "volume": "10", "issue": "2", "pages": "1--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu, C.L., Lai, M.H., Tien, K.W., Chuang, Y.H., Wu, S.H., & Lee, C.Y. (2011). Visually and phonologically similar characters in incorrect Chinese words: Analyses, identification, and applications. ACM Transactions on Asian Language Information Processing, 10(2), 1-39.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "English Spelling and the Computer", "authors": [ { "first": "R", "middle": [], "last": "Mitton", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitton, R. (1996). English Spelling and the Computer. 
Harlow, Essex: Longman Group.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Ordering the Suggestions of a Spellchecker Without Using Context", "authors": [ { "first": "R", "middle": [], "last": "Mitton", "suffix": "" } ], "year": 2008, "venue": "Natural Language Engineering", "volume": "15", "issue": "2", "pages": "173--192", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitton, R. (2008). Ordering the Suggestions of a Spellchecker Without Using Context. Natural Language Engineering, 15(2), 173-192.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Confusion Sets and N-gram Statistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Confusion Sets and N-gram Statistics", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Creating and weighting hunspell dictionaries as finite-state automata. Investigationes Linguisticae", "authors": [ { "first": "T", "middle": [], "last": "Pirinen", "suffix": "" }, { "first": "K", "middle": [], "last": "Linden", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pirinen, T., & Linden, K. (2010). Creating and weighting hunspell dictionaries as finite-state automata. Investigationes Linguisticae, 21.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Context-sensitive spell checking based on word trigram probabilities", "authors": [ { "first": "S", "middle": [], "last": "Verberne", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Verberne, S. (2002). 
Context-sensitive spell checking based on word trigram probabilities, Master thesis, University of Nijmegen.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Using the Web for Language Independent Spellchecking and Autocorrection", "authors": [ { "first": "C", "middle": [], "last": "Whitelaw", "suffix": "" }, { "first": "B", "middle": [], "last": "Hutchinson", "suffix": "" }, { "first": "G", "middle": [ "Y" ], "last": "Chung", "suffix": "" }, { "first": "G", "middle": [], "last": "Ellis", "suffix": "" } ], "year": 2009, "venue": "Proceedings Of Conference On Empirical Methods In Natural Language Processing", "volume": "", "issue": "", "pages": "890--899", "other_ids": {}, "num": null, "urls": [], "raw_text": "Whitelaw, C., Hutchinson, B., Chung, G.Y., & Ellis, G. (2009). Using the Web for Language Independent Spellchecking and Autocorrection. In Proceedings Of Conference On Empirical Methods In Natural Language Processing (EMNLP-2009), 890-899.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Reducing the False Alarm Rate of Chinese Character Error Detection and Correction", "authors": [ { "first": "S", "middle": [ "H" ], "last": "Wu", "suffix": "" }, { "first": "Y", "middle": [ "Z" ], "last": "Chen", "suffix": "" }, { "first": "P", "middle": [ "C" ], "last": "Yang", "suffix": "" }, { "first": "T", "middle": [], "last": "Ku", "suffix": "" }, { "first": "C", "middle": [ "L" ], "last": "Liu", "suffix": "" } ], "year": 2010, "venue": "Proceedings of CIPS-SIGHAN Joint Conference on Chinese Language Processing", "volume": "", "issue": "", "pages": "54--61", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu, S.H., Chen, Y.Z., Yang, P.C., Ku, T., & Liu, C.L. (2010). Reducing the False Alarm Rate of Chinese Character Error Detection and Correction. 
In Proceedings of CIPS-SIGHAN Joint Conference on Chinese Language Processing (CLP 2010), 54-61.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Chinese Spelling Check Evaluation at SIGHAN Bake-off 2013", "authors": [ { "first": "S", "middle": [ "H" ], "last": "Wu", "suffix": "" }, { "first": "C", "middle": [ "L" ], "last": "Liu", "suffix": "" }, { "first": "L", "middle": [ "H" ], "last": "Lee", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 7th SIGHAN Workshop on Chinese Language Processing (SIGHAN'13)", "volume": "", "issue": "", "pages": "35--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu, S.H., Liu, C.L., & Lee, L.H. (2013). Chinese Spelling Check Evaluation at SIGHAN Bake-off 2013. In Proceedings of the 7th SIGHAN Workshop on Chinese Language Processing (SIGHAN'13), 35-42.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Overview of SIGHAN 2014 Bake-off for Chinese Spelling Check", "authors": [ { "first": "L", "middle": [ "C" ], "last": "Yu", "suffix": "" }, { "first": "L", "middle": [ "H" ], "last": "Lee", "suffix": "" }, { "first": "Y", "middle": [ "H" ], "last": "Tseng", "suffix": "" }, { "first": "H", "middle": [ "H" ], "last": "Chen", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 3rd CIPS-SIGHAN Joint Conference on Chinese Language Processing", "volume": "", "issue": "", "pages": "126--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu, L.C., Lee, L.H., Tseng, Y.H., & Chen, H.H. (2014). Overview of SIGHAN 2014 Bake-off for Chinese Spelling Check. 
In Proceedings of the 3rd CIPS-SIGHAN Joint Conference on Chinese Language Processing (CLP'14), 126-132.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Automatic detecting/correcting errors in Chinese text by an approximate word-matching algorithm", "authors": [ { "first": "L", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "M", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "C", "middle": [ "N" ], "last": "Huang", "suffix": "" }, { "first": "H", "middle": [ "H" ], "last": "Pan", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 38th Annual Meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "248--254", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang, L., Zhou, M., Huang, C.N., & Pan, H.H. (2000). Automatic detecting/correcting errors in Chinese text by an approximate word-matching algorithm. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, 248-254.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "Architecture of NTOU Chinese Spelling Check System" }, "FIGREF1": { "num": null, "uris": null, "type_str": "figure", "text": "Take ID=00058 in the Bakeoff 2013 CSC Datasets as an example. The original sentence is ... \u5728\u6559\u5ba4\u88e1\u53ea\u8981\u4eba\u54e1\u597d... and it is segmented as ... \u5728 \u6559\u5ba4 \u88e1 \u53ea\u8981 \u4eba\u54e1 \u597d..." }, "FIGREF2": { "num": null, "uris": null, "type_str": "figure", "text": "... \u5728\u6559\u58eb\u88e1\u53ea\u8981\u4eba\u54e1\u597d... ... \u5728\u6559\u5e2b\u88e1\u53ea\u8981\u4eba\u54e1\u597d... ... \u5728\u6559\u5ba4\u88e1\u7947\u8981\u4eba\u54e1\u597d... ... \u5728\u6559\u5ba4\u88e1\u53ea\u8981\u4eba\u7de3\u597d... (correct) ... \u5728\u6559\u5ba4\u88e1\u53ea\u8981\u4eba\u733f\u597d..." 
}, "FIGREF3": { "num": null, "uris": null, "type_str": "figure", "text": "English meaning: \u5728 in, \u6559\u5ba4 classroom, \u6559\u58eb priest, \u6559\u5e2b teacher, \u88e1 inside, \u53ea\u8981 as-long-as, \u7947\u8981 as-long-as (variant), \u4eba\u54e1 member, \u4eba\u7de3 relations, \u4eba\u733f ape, \u597d good) (Original Sentence: in classroom inside as-long-as member good 'as long as there are good members in the classroom\u2026') (Correct sentence: \u5728\u6559\u5ba4\u88e1\u53ea\u8981\u4eba\u7de3\u597d 'in the classroom, as long as you have good relations with the others...') A Study on Chinese Spelling Check Using 29 Confusion Sets and N-gram Statistics" }, "FIGREF4": { "num": null, "uris": null, "type_str": "figure", "text": "(b) Details of Scoring Steps A Study on Chinese Spelling Check Using 39" }, "FIGREF5": { "num": null, "uris": null, "type_str": "figure", "text": "The metrics are evaluated in both levels by the following metrics: Accuracy = (TP + TN) / (TP + TN + FP + FN) Precision = TP / (TP + FP) Recall = TP / (TP+ FN) F1-Score = 2 * Precision * Recall / (Precision + Recall)" }, "FIGREF6": { "num": null, "uris": null, "type_str": "figure", "text": "most one error in one topic, no threshold Q: at most one error in one topic, filtered by threshold P: at most one error in one passage, filtered by threshold L: at most one error at each location, filtered by threshold More precisely," }, "TABREF3": { "content": "
...嬰兒個數卻持續下滑... (correct)
...嬰兒個數卻時續下滑...
......
(English meaning: 嬰兒 infant, 個數 number, 卻 but, 腳 foot, 欲 desire,
特 particular, 續 continue, 持續 keep, 時 time, 下滑 decrease)
(Original sentence: infant number but special continue decrease
'but the number of infants particularly continues to decrease')
(Correct sentence: 嬰兒個數卻持續下滑 'but the number of infants keeps decreasing')
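The one-character replacement process illustrated above can be sketched in a few lines. This is a minimal sketch, not the authors' implementation; the `confusion_sets` dictionary, mapping a character to its visually or phonologically similar characters, is a hypothetical input format.

```python
def generate_candidates(sentence, confusion_sets):
    """Yield (position, candidate) pairs, where each candidate replaces
    exactly one character of `sentence` with a member of that
    character's confusion set (hypothetical dict format)."""
    for i, ch in enumerate(sentence):
        for similar in confusion_sets.get(ch, ()):
            yield i, sentence[:i] + similar + sentence[i + 1:]
```

Each candidate passage is then re-segmented and scored against the original, as described in Section 4.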
", "type_str": "table", "html": null, "num": null, "text": "..\u5b30\u5152\u500b\u6578\u8173\u7279\u7e8c\u4e0b\u6ed1... ...\u5b30\u5152\u500b\u6578\u6b32\u7279\u7e8c\u4e0b\u6ed1..." }, "TABREF5": { "content": "
Confusion Set    TR2013   TS2013   TR2014   TS2014
set0: SPST       70.09    72.13    47.92    47.41
set1: SPDT       15.10    17.50    46.52    47.03
set2: DPST        3.70     4.99     5.15     4.68
set3: DPDT        3.70     4.67     8.41     7.71
set4: RStrk       9.12     3.17     0.38     0.88
set5: CJie       40.46    36.18    29.72    31.10
set6: Cor4       14.81     6.89     1.84     1.52
set7: SWen1      17.09    19.24    11.48    12.64
set8: SWen2      18.23    19.64    11.91    12.90
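Coverage figures like those in the table above measure how often the gold correction of an error is contained in the confusion set of the erroneous character. A minimal sketch, assuming each error is given as a (wrong character, correct character) pair:

```python
def coverage(confusion_sets, errors):
    """Percentage of gold errors (wrong, correct) whose correct
    character appears in the confusion set of the wrong character."""
    covered = sum(1 for wrong, correct in errors
                  if correct in confusion_sets.get(wrong, ()))
    return 100.0 * covered / len(errors) if errors else 0.0
```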
", "type_str": "table", "html": null, "num": null, "text": "" }, "TABREF6": { "content": "
", "type_str": "table", "html": null, "num": null, "text": "SWen1 and SWen2. Although they alone do not provide good coverage, unions including SWen sets can cover up to 97.15% errors in CSC 2013 Training set.Closure set only cover one more error in CSC 2014 Training set. In order not to introduce too much noise, the closure SWen set is not recommended." }, "TABREF8": { "content": "
B1-0143-1: 妳還記得我們在高中在已樣的課嗎
Org,  Segmented: 妳 還 記得 我們 在 高中 在 已 樣 的 課 嗎
Rpl1, 妳→你, Segmented: 你 還 記得 我們 在 高中 在 已 樣 的 課 嗎
Rpl2, 樣→聽, Segmented: 妳 還 記得 我們 在 高中 在 已 聽 的 課 嗎
Rpl3, 已→一, Segmented: 妳 還 記得 我們 在 高中 在 一樣 的 課 嗎 (correct)

(English meanings:
Org:  'Do you still remember that we were in the patterned class in high school?'
Rpl1: 'Do you still remember that we were in the patterned class in high school?'
Rpl2: 'Do you still remember that we were in the listened class in high school?'
Rpl3: 'Do you still remember that we were in the same class in high school?')

List of scores:
                  GS log    GS len    GS' log   GS' len
Org               201.304   499.469    1.304    -290.531
Rpl1              221.456   575.321   31.456    -164.679
Rpl2              227.394   572.386   57.394    -127.614
Rpl3 (correct)    203.263   513.261   43.263    -126.739

Scoring details:
GS log (Org) = (log(gnf(妳 還)) + log(gnf(還 記得)) + … + log(gnf(課 嗎)))
             + (log(gnf(妳 還 記得)) + … + log(gnf(的 課 嗎)))
             + (log(gnf(妳 還 記得 我們)) + … + log(gnf(樣 的 課 嗎)))
             + (log(gnf(妳 還 記得 我們 在)) + … + log(gnf(已 樣 的 課 嗎)))
             = (12.729 + 15.962 + 13.536 + … + 15.003 + 14.807 + 0)
             + (10.014 + 12.486 + 10.620 + … + 0 + 0)
             + (6.798 + 9.696 + 5.472 + 0 + … + 0 + 0)
             + (0 + 4.357 + 0 + … + 0)
             = 135.124 + 39.857 + 21.967 + 4.357 = 201.304

Google Ngram Information:
Bigram     gnf        log      Trigram          gnf       log
妳 還      337282     12.729   妳 還 記得       22344     10.014
你 還      27319449   17.123   你 還 記得       1127456   13.935
還 記得    8552177    15.962   還 記得 我們     264628    12.486
記得 我們  756252     13.536   記得 我們 在     40942     10.620
我們 在    24371694   17.009   在 高中 在       843        6.737
在 高中    838050     13.639   在 已 聽         61         4.111
高中 在    100156     11.514   在 一樣 的       19422      9.874
在 已      1193110    13.992   已 聽 的         1991       7.596
在 一樣    41218      10.627   聽 的 課         8342       9.029
已 樣      1025        6.932
已 聽      121888     11.710   Trigrams with gnf(.) = 0:
樣 的      3280256    15.003   我們 在 高中, 高中 在 已, 高中 在 一樣,
聽 的      5830567    15.579   在 已 樣, 已 樣 的, 樣 的 課,
一樣 的    35523054   17.386   一樣 的 課, 的 課 嗎
的 課      2695074    14.807
課 嗎      0          ---

4-gram with gnf(.) > 0    gnf     log     5-gram with gnf(.) > 0     gnf    log
妳 還 記得 我們           896      6.798  你 還 記得 我們 在         2846   7.954
你 還 記得 我們           43508   10.680  還 記得 我們 在 高中       78     4.357
還 記得 我們 在           16260    9.696
記得 我們 在 高中         238      5.472

Figure 1. (a) Examples of Google Ngram Information in Scoring
", "type_str": "table", "html": null, "num": null, "text": "| (score new -score org ) / score org |. \u59b3 you(female), \u4f60 you, \u9084 still, \u8a18\u5f97 remember, \u6211\u5011 we, \u5728 in, \u9ad8\u4e2d high-school, \u5df2 already, \u6a23 pattern, \u807d listen, \u4e00\u6a23 same, \u7684 DE, \u8ab2 class, \u55ce Qpunc)" }, "TABREF10": { "content": "
| F-score | Difference OL | Difference OP | Difference ML | Difference MP | Ratio OL | Ratio OP | Ratio ML | Ratio MP |
|---|---|---|---|---|---|---|---|---|
| GS raw | 3.23 | 3.23 | --- | --- | 3.39 | 2.36 | --- | --- |
| GS logn(2) | 0.43 | 0.43 | 1.11 | 1.18 | 0.55 | 0.61 | 0.76 | 0.94 |
| GS logn(3) | 10.74 | 10.27 | 22.25 | 22.22 | 6.18 | 7.49 | 12.68 | 17.09 |
| GS logn(4) | 15.16 | 15.28 | 33.81 | 33.12 | 10.85 | 12.09 | 17.85 | 19.59 |
| GS logn(5) | 10.28 | 9.63 | 21.38 | 21.96 | 9.79 | 9.66 | 11.50 | 13.02 |
| GS log | 6.67 | 6.74 | 33.78 | 35.78 | 3.36 | 4.19 | 20.69 | 25.87 |
| GS logav | 26.60 | 28.25 | 30.92 | 33.16 | 20.32 | 25.62 | 24.58 | 30.35 |
| GS len | 9.93 | 9.86 | 42.75 | 44.06 | 4.83 | 5.50 | 25.52 | 31.34 |
| GS lenav | 27.38 | 28.34 | 30.06 | 33.74 | 19.53 | 24.51 | 26.05 | 29.34 |
| Dataset | #Topics | #Errors |
|---|---|---|
| CSC 2013 Training | 350 | 351 |
| CSC 2013 Test | 1000 | 1464 |
| CSC 2014 Training | 3434 | 5280 |
| CSC 2014 Test | 531 | 791 |

Tables 4(a)~4(d) show the experimental results of error detection evaluated on the CSC 2013 training set, CSC 2013 test set, CSC 2014 training set, and CSC 2014 test set, respectively. Tables 4(e)~4(h) show the experimental results of error correction evaluated on the CSC 2013 training set, CSC 2013 test set, CSC 2014 training set, and CSC 2014 test set, respectively.
(a) Error-Detection, CSC 2013 Training Set

| Scoring | System | P | R | F | Acc |
|---|---|---|---|---|---|
| GS raw | ratio,O,N | 100.00 | 7.71 | 14.32 | 7.71 |
| GS logn(2) | diff,M,N | 100.00 | 9.71 | 17.71 | 9.71 |
| GS logn(3) | diff,M,N | 100.00 | 30.00 | 46.15 | 30.00 |
| GS logn(4) | diff,M,N | 100.00 | 30.00 | 46.15 | 30.00 |
| GS logn(5) | diff,M,N | 100.00 | 18.57 | 31.33 | 18.57 |
| GS log | diff,M,N | 100.00 | 42.00 | 59.15 | 42.00 |
| GS logav | diff,M,N | 100.00 | 37.71 | 54.77 | 37.71 |
| GS len | diff,M,N | 100.00 | 46.57 | 63.55 | 46.57 |
| GS lenav | diff,M,N | 100.00 | 36.00 | 52.94 | 36.00 |

(b) Error-Detection, CSC 2013 Test Set

| Scoring | System | P | R | F | Acc |
|---|---|---|---|---|---|
| GS raw | ratio,O,N | 100.00 | 4.80 | 9.16 | 4.80 |
| GS logn(2) | diff,M,N | 100.00 | 5.10 | 9.71 | 5.10 |
| GS logn(3) | diff,M,N | 100.00 | 18.40 | 31.08 | 18.40 |
| GS logn(4) | diff,M,N | 100.00 | 18.20 | 30.80 | 18.20 |
| GS logn(5) | diff,M,Q | 100.00 | 11.90 | 21.27 | 11.90 |
| GS log | diff,M,N | 100.00 | 25.90 | 41.14 | 25.90 |