{ "paper_id": "O13-5002", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:03:48.677591Z" }, "title": "Integrating Dictionary and Web N-grams for Chinese Spell Checking", "authors": [ { "first": "Jian-Cheng", "middle": [], "last": "Wu", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "Hsun-Wen", "middle": [], "last": "Chiu", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "chiuhsunwen@gmail.com" }, { "first": "Jason", "middle": [ "S" ], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "jason.jschang@gmail.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Chinese spell checking is an important component of many NLP applications, including word processors, search engines, and automatic essay rating. Nevertheless, compared to spell checkers for alphabetical languages (e.g., English or French), Chinese spell checkers are more difficult to develop because there are no word boundaries in the Chinese writing system and errors may be caused by various Chinese input methods. In this paper, we propose a novel method for detecting and correcting Chinese typographical errors. Our approach involves word segmentation, detection rules, and phrase-based machine translation. The error detection module detects errors by segmenting words and checking word and phrase frequency based on compiled and Web corpora. The phonological or morphological typographical errors found then are corrected by running a decoder based on the statistical machine translation model (SMT). The results show that the proposed system achieves significantly better accuracy in error detection and more satisfactory performance in error correction than the state-of-the-art systems.", "pdf_parse": { "paper_id": "O13-5002", "_pdf_hash": "", "abstract": [ { "text": "Chinese spell checking is an important component of many NLP applications, including word processors, search engines, and automatic essay rating. Nevertheless, compared to spell checkers for alphabetical languages (e.g., English or French), Chinese spell checkers are more difficult to develop because there are no word boundaries in the Chinese writing system and errors may be caused by various Chinese input methods. In this paper, we propose a novel method for detecting and correcting Chinese typographical errors. Our approach involves word segmentation, detection rules, and phrase-based machine translation. The error detection module detects errors by segmenting words and checking word and phrase frequency based on compiled and Web corpora. The phonological or morphological typographical errors found then are corrected by running a decoder based on the statistical machine translation model (SMT). The results show that the proposed system achieves significantly better accuracy in error detection and more satisfactory performance in error correction than the state-of-the-art systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Chinese spell checking is a task involving automatically detecting and correcting typographical errors (typos), roughly corresponding to misspelled words in English. In this paper, we define typos as Chinese characters that are misused due to shape or phonological similarity. 
Liu et al. (2011) show that people tend to unintentionally generate typos that sound similar (e.g., *\u63aa\u6298 [cuo zhe] and \u632b\u6298 [cuo zhe]) or look similar (e.g., *\u56fa\u96e3 [gu nan] and \u56f0\u96e3 [kun nan]). On the other hand, typos also abound in Web text (e.g., in forums). Data-driven, statistical spell checking approaches appear to be more robust and perform better. Statistical methods tend to use a large monolingual corpus to create a language model to validate correction hypotheses. Considering \"\u5fc3\u662f\" [xin shi], the two characters \"\u5fc3\" [xin] and \"\u662f\" [shi] form a bigram with high frequency in a monolingual corpus, so we may determine that \"\u5fc3\u662f\" [xin shi] is not a typo after all.", "cite_spans": [ { "start": 277, "end": 294, "text": "Liu et al. (2011)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this paper, we propose a model that combines rule-based and statistical approaches to detect errors and generate the most appropriate corrections in Chinese text. Once an error is identified by the rule-based detection model, we use the statistical machine translation (SMT) model (Koehn, 2010) to provide the most appropriate correction. Rule-based models tend to ignore context, so we use SMT to deal with this problem. Our model treats spelling correction as a kind of translation, where typos are translated into correctly spelled words according to the translation probability and language model probability. Consider the same case \"\u5fc3\u662f\u5f88\u91cd\u8981\u7684\u3002\" [xin shi hen zhong yao de]. The string \"\u5fc3\u662f\" [xin shi] would not be incorrectly replaced with \"\u5fc3\u4e8b\" [xin shi] because we would consider \"\u5fc3\u662f\" [xin shi] to be highly probable, according to the language model. The rest of the paper is organized as follows. We present related work in the next section. Then, we describe the proposed model for automatically detecting and correcting spelling errors in Section 3. Section 4 and Section 5 present the experimental data, results, and performance analysis. We conclude in Section 6.", "cite_spans": [ { "start": 282, "end": 295, "text": "(Koehn, 2010)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Chinese spell checking is a task involving automatically detecting and correcting typos in a given Chinese sentence. Previous work typically takes the approach of combining a confusion set and a language model. A rule-based approach depends on dictionary knowledge and a confusion set, a collection of visually and phonologically similar characters. On the other hand, statistics-based methods usually use a language model, which is generated from a reference corpus. A statistical language model assigns a probability to a sentence of words by means of n-gram probabilities to compute the likelihood of a corrected sentence. Chang (1995) proposed a system that replaces each character in the sentence based on the confusion set and estimates the probability of all modified sentences according to a bigram language model built from a newspaper corpus before comparing the probability before and after substitution. 
They used a confusion set consisting of pairs of characters with similar shapes that were collected by comparing the original text and its OCR results. Similarly, Zhuang et al. (2004) proposed an effective approach that uses OCR to generate a confusion set. In addition, Zhuang et al. (2004) also used a multi-knowledge based statistical language model, the n-gram language model, and Latent Semantic Analysis. Nevertheless, the experiments by Zhuang et al. (2004) seem to show that the simple n-gram model performs the best.", "cite_spans": [ { "start": 731, "end": 743, "text": "Chang (1995)", "ref_id": "BIBREF0" }, { "start": 1183, "end": 1203, "text": "Zhuang et al. (2004)", "ref_id": "BIBREF17" }, { "start": 1297, "end": 1317, "text": "Zhuang et al. (2004)", "ref_id": "BIBREF17" }, { "start": 1470, "end": 1490, "text": "Zhuang et al. (2004)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2." }, { "text": "In recent years, Chinese spell checkers have incorporated word segmentation. The method proposed by Huang et al. (2007) incorporates the Sinica Word Segmentation System (Ma & Chen, 2003) to detect typos. With a character-based bigram language model and the rule-based methods of dictionary knowledge and confusion sets, the method determines whether a word is a typo or not. Many more systems use word segmentation to detect errors. For example, in Hung and Wu (2009), the given sentence is segmented using a bigram language model. In addition, the method also uses a confusion set and common error templates manually edited and provided by the Ministry of Education in Taiwan (MOE, 1996). Chen and Wu (2010) modified the system proposed by Hung and Wu (2009) by combining statistics-based methods with an automatically generated template matching module to detect and correct typos based on the language model. Closer to our method, Wu et al. (2010) adopted the noisy channel model, a framework used both in spell checkers and in machine translation systems. The system combined a statistics-based method and template matching with the help of a dictionary and a confusion set. They also used word segmentation to detect errors, but they did not use an existing word segmentation system, as Huang et al. (2007) did, because such a system might regard a typo as a new word. Instead, they used a backward longest-first approach to segment sentences with an online dictionary sponsored by MOE (MOE, 2007), and templates with a confusion set provided by Liu et al. (2009). The system also treated Chinese spell checking as a kind of translation, combining the template module and translation module to achieve higher precision or recall.", "cite_spans": [ { "start": 100, "end": 119, "text": "Huang et al. (2007)", "ref_id": "BIBREF3" }, { "start": 169, "end": 186, "text": "(Ma & Chen, 2003)", "ref_id": "BIBREF9" }, { "start": 466, "end": 484, "text": "Hung and Wu (2009)", "ref_id": null }, { "start": 695, "end": 706, "text": "(MOE, 1996)", "ref_id": "BIBREF12" }, { "start": 709, "end": 727, "text": "Chen and Wu (2010)", "ref_id": "BIBREF16" }, { "start": 760, "end": 778, "text": "Hung and Wu (2009)", "ref_id": null }, { "start": 950, "end": 966, "text": "Wu et al. (2010)", "ref_id": "BIBREF16" }, { "start": 1297, "end": 1316, "text": "Huang et al. (2007)", "ref_id": "BIBREF3" }, { "start": 1474, "end": 1489, "text": "MOE (MOE, 2007)", "ref_id": "BIBREF11" }, { "start": 1541, "end": 1558, "text": "Liu et al. 
(2009)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2." }, { "text": "In our system, we also treat the Chinese spell checking problem as machine translation, but we use a different method of handling word segmentation to detect typos and translation 20 Jian-cheng Wu et al. model , where typos are translated into correctly spelled words.", "cite_spans": [ { "start": 194, "end": 209, "text": "Wu et al. model", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2." }, { "text": "In this section, we describe our solution to the problem of Chinese spell checking. In the error detection phase, the given Chinese sentence is segmented into words. (Section 3.1) The detection module then identifies and marks the words that may be typos. (Section 3.2) In the error correction phase, we use the statistical machine translation (SMT) model to translate the sentences containing typos into correct ones (Section 3.3). In the rest of this section, we describe our solution to this problem in more detail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3." }, { "text": "Unlike English text, in which sentences are sequences of words delimited by spaces, Chinese texts are represented as strings of Chinese characters (called Hanzi) with word delimiters. Therefore, word segmentation is a pre-processing step required for many Chinese NLP applications. In this study, we also perform word segmentation to reduce the search space and probability of false alarms. After segmentation, sequences of two or more singleton words are considered likely to contain an error. Nevertheless, over-segmentation might lead to falsely identified errors, which we will describe in Section 3.2. Considering the sentence\"\u9664\u4e86\u8981\u6709 \u8d85\u4e16\u4e4b\u624d\uff0c\u4e5f\u8981\u6709\u5805\u5b9a\u7684\u610f\u5fd7\"[chu le yao you chao shi zhi cai, ye yao you jian ding de yi zhi], the sentence is segmented into\"\u9664\u4e86/\u8981/\u6709/\u8d85\u4e16/\u4e4b/\u624d/ \uff0c/\u4e5f/\u8981/\u6709/\u5805\u5b9a/\u7684/\u610f\u5fd7.\"The part\"\u8d85\u4e16\u4e4b\u624d\"[chao shi zhi cai] of the sentence is over-segmented and runs the risk of being identified as containing a typo. To solve the problem of over-segmentation, we used additional lexical items to reduce the chance of generating false alarms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modified Chinese Word Segmentation System", "sec_num": "3.1" }, { "text": "Motivated by the observation that a typo often causes over-segmentation in the form of a sequence of single-character words, we target the sequences of single-character words as candidates for typos. To identify the points of typos, we take all n-grams consisting of single-character words in the segmented sentence into consideration. In addition to a Chinese dictionary, we also include a list of web-based n-grams to reduce false alarms due to the limited coverage of the dictionary. When a sequence of singleton words is not found in the dictionary or in the web-based character n-grams, we regard the n-gram as containing a typo. 
For example,\"\u68ee\u6797 \u7684 \u82b3 \u591a \u7cbe\"[sen lin de fang duo jing] is segmented into consecutive singleton words: bigrams such as \"\u7684 \u82b3\"[de fang], and\"\u82b3 \u591a\"[fang duo] and trigrams such as\"\u7684 \u82b3 \u591a\"[de fang duo] and\"\u82b3 \u591a \u7cbe\"[fang duo jing] are all considered as candidates for typos since those n-grams are not found in the reference list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Detection", "sec_num": "3.2" }, { "text": "Integrating Dictionary and Web N-grams for Chinese Spell Checking 21", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Detection", "sec_num": "3.2" }, { "text": "Once we generate a list of candidates of typos, we attempt to correct typos using a statistical machine translation model to translate typos into correct words. When given a candidate, we first generate all correction hypotheses by replacing each character of the candidate typo with similar characters, one character at a time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Correction", "sec_num": "3.3" }, { "text": "Take the candidate\"\u6c23\u4efd\"[qi fen] as example, the model generates all translation hypotheses according to a visually and phonologically confusion set. Table 1 shows some translation hypotheses. The translation hypotheses then are validated (or pruned from the viewpoint of SMT) using the dictionary. The translation probability tp is a probability indicating how likely a typo is to be translated into a correct word. tp of each correction translation is calculated using the following formula: ", "cite_spans": [], "ref_spans": [ { "start": 148, "end": 155, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Error Correction", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "10 = 0 ( ) ( , )", "eq_num": "log (" } ], "section": "Error Correction", "sec_num": "3.3" }, { "text": "\uf067 \uf0e6 \uf0f6 \uf03d \uf02a \uf0e7 \uf0f7 \uf02d \uf0e8 \uf0f8 (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Correction", "sec_num": "3.3" }, { "text": "where freq(trans) is the frequency of translation, freq(candi) is the frequency of the candidate, and \u03b3 is the weight of different error types: visual or phonological.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Correction", "sec_num": "3.3" }, { "text": "Take\"\u6c23\u4efd\"[qi fen] from\"\u4e0d/\u4e00\u6a23/\u7684/\u6c23/\u4efd\"[bu/yi yang/de/qi/fen] for instance, the translations with non-zero tp after filtering are shown in Table 2 . Only two translations are possible for this candidate:\"\u6c23\u61a4\"[qi fen] and\"\u6c23\u6c1b\"[qi fen]. We use a simple, publicly available decoder written in Python to correct potential spelling errors found in the detection module. The decoder reads one Chinese sentence at a time and attempts to \"translate\" the sentence into a correctly spelled one. The decoder translates monotonically without reordering the Chinese words and phrases using two models -the translation probability model and the language model. These two models read from a data directory containing two text files containing a translation model in GIZA++ (Och & Ney, 2003) format and a language model in SRILM (Stolcke et al., 2011) format. 
These two models are stored in memory for quick access.", "cite_spans": [ { "start": 750, "end": 767, "text": "(Och & Ney, 2003)", "ref_id": "BIBREF13" }, { "start": 805, "end": 827, "text": "(Stolcke et al., 2011)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 133, "end": 140, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Error Correction", "sec_num": "3.3" }, { "text": "The decoder invokes the two modules to load the translation and language models and decodes the input sentences, storing the results as output. The decoder computes the probability of the output sentences according to the models. It works by summing over all possible ways that the model could have generated the corrected sentence from the input sentence. Although, in general, covering all possible corrections in the translation and language models is intractable, a majority of error instances can be \"translated\" effectively via the translation model and the language model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Correction", "sec_num": "3.3" }, { "text": "Our systems were designed to provide wide-coverage spell checking for Chinese. As such, we trained our systems using a dictionary, a compiled corpus, and Web-scale n-grams. We evaluated our systems at the sentence level. Finally, we used a human-annotated dataset to evaluate the quality of error detection and correction. In this section, we first present the details of the data sources used in training (Section 4.1). Then, Section 4.2 describes the test data. Section 4.3 describes the systems evaluated and compared. The evaluation metrics for the performance of the systems are reported in Section 4.4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setting", "sec_num": "4." }, { "text": "To train our model, we used several corpora, including the Sinica Chinese Balanced Corpus, TWWaC (Taiwan Web as Corpus), a Chinese dictionary, and a confusion set. We describe the data sets in more detail below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sources", "sec_num": "4.1" }, { "text": "The \"Academia Sinica Balanced Corpus of Modern Chinese,\" or \"Sinica Corpus,\" is the first balanced Chinese corpus with part-of-speech tags (Huang et al., 1996). The current size of the corpus is about 5 million words. Texts are segmented according to the word segmentation standard proposed by the ROC Computational Linguistics Society. Each segmented word is tagged with its part of speech. We used the corpus to generate the frequencies of bigrams, trigrams, and 4-grams for training the translation model and to train the n-gram language model.", "cite_spans": [ { "start": 135, "end": 155, "text": "(Huang et al., 1996)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Sinica Corpus", "sec_num": null }, { "text": "We used TWWaC to obtain more language information. TWWaC is a corpus gathered from the Web under the .tw domain, containing 1,817,260 Web pages that consist of 30 billion Chinese characters. We used the corpus to generate the frequencies of all character n-grams for n = 2, 3, 4 (keeping those with frequency higher than 10). Table 3 shows the n-gram statistics for the Sinica Corpus and TWWaC. 
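The Web n-gram lists can be built with a simple counting pass, sketched below (our own illustration; build_ngram_table is a hypothetical name, and corpus I/O is omitted):

from collections import Counter

def build_ngram_table(sentences, ns=(2, 3, 4), min_freq=10):
    '''Count character n-grams for n = 2, 3, 4 and keep those whose
    frequency is higher than the threshold, as described above.'''
    counts = Counter()
    for sent in sentences:
        for n in ns:
            for i in range(len(sent) - n + 1):
                counts[sent[i:i + n]] += 1
    return {g: c for g, c in counts.items() if c > min_freq}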
", "cite_spans": [], "ref_spans": [ { "start": 313, "end": 320, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "TWWaC (Taiwan Web as Corpus)", "sec_num": null }, { "text": "From the dictionaries and related books published by the Ministry of Education (MOE) of Taiwan, we obtained two lists: a list of 64,326 distinct Chinese words (MOE, 1997)1 and a list of 48,030 distinct Chinese idioms.2 We combined the lists into a Chinese dictionary for validating words with lengths of 2 to 17 characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Words and Idioms in a Chinese Dictionary", "sec_num": null }, { "text": "After analyzing erroneous Chinese words, Liu et al. (2011) found that more than 70% of typos involve a phonologically similar character, about 50% involve a morphologically similar character, and almost 30% are both phonologically and morphologically similar. We used these ratios as the weights for the translation probabilities. In this study, we used two confusion sets generated by Liu et al. (2011) and provided by SIGHAN 7 Bake-off 2013: Chinese Spelling Check Shared Task as a full confusion set, based on a loose similarity relation.", "cite_spans": [ { "start": 41, "end": 58, "text": "Liu et al. (2011)", "ref_id": "BIBREF8" }, { "start": 376, "end": 393, "text": "Liu et al. (2011)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Confusion Set", "sec_num": null }, { "text": "In order to improve performance, we expanded the sets slightly and also removed some loosely similar relations. For example, we removed all relations based on non-identical phonological similarity. After that, we added similar characters based on similar phonemes in Chinese phonetics, such as \"\u3123\uff0c\u3125\" [en, eng], \"\u3124\uff0c\u3122\" [ang, an], \"\u3115\uff0c\u3119\" [shi, si], and so on. We also modified the similar-shape set: we compared characters by their Cangjie codes (\u5009\u9821\u78bc) and required strong shape similarity. Two characters differing from each other by at most one symbol in Cangjie code were considered strongly similar and were retained. For example, \"\u5fb5\" [zheng] and \"\u5fae\" [wei] are strongly similar in shape, since their corresponding codes, \"\u7af9\u4eba\u5c71\u571f\u5927\" and \"\u7af9\u4eba\u5c71\u5c71\u5927\", differ in only one place.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion Set", "sec_num": null },
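{ "text": "This \"at most one symbol\" test can be sketched as follows (a minimal illustration under this reading of the rule; strongly_similar_shape is a hypothetical name, and the Cangjie code lookup is assumed to be given):

def strongly_similar_shape(code1, code2):
    '''True if two Cangjie code sequences differ by at most one symbol
    (one substitution, insertion, or deletion).'''
    if abs(len(code1) - len(code2)) > 1:
        return False
    if len(code1) == len(code2):
        return sum(a != b for a, b in zip(code1, code2)) <= 1
    longer, shorter = sorted((code1, code2), key=len, reverse=True)
    # Lengths differ by one: try deleting each symbol of the longer code.
    return any(longer[:i] + longer[i + 1:] == shorter
               for i in range(len(longer)))

For instance, the codes of \u5fb5 and \u5fae above differ by a single substitution, so the pair is retained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion Set", "sec_num": null },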

{ "text": "We used the official dataset from SIGHAN 7 Bake-off 2013: Chinese Spelling Check to evaluate our systems. This dataset contains two parts: 350 sentences with errors and 350 sentences without errors, extracted from student essays covering various common errors. The dataset was released in XML format with information on the sentences, error positions, typos, and corrections. A sample (with the XML markup omitted) is shown below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Test Data", "sec_num": "4.2" }, { "text": "
\u6211\u770b\u904e\u8a31\u591a\u52c7\u6562\u7684\u4eba\uff0c\u4e0d\u6015\u63aa\u6298\u5730\u596e\u9b25\uff0c\u9019\u7a2e\u7cbe\u795e\u503c\u5f97\u6211\u5011\u5b78\u7fd2\u3002
Error: \u63aa\u6298  Correction: \u632b\u6298
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Test Data", "sec_num": "4.2" }, { "text": "We found that all of the sentences with errors contain exactly one typo and that most errors were similar to the correct characters in either pronunciation or shape. Therefore, the confusion set was suitable for error correction. We extracted the sentences, with and without errors, and the correct answers from the XML format. In this data, more than 80% of errors involved characters with identical pronunciation, almost 20% involved characters with similar shape, and 40% involved both phonological and visual similarity. Hence, we focused on detecting and correcting these two common types of errors in our study.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Test Data", "sec_num": "4.2" }, { "text": "Recall that we propose a system to detect and correct typos in Chinese based broadly on statistical machine translation. We experimented with different resources as kinds of language models to detect typos: dictionary entries, a compiled corpus, and a Web corpus. The four detection systems evaluated are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems Compared", "sec_num": "4.3" }, { "text": "-Dictionary (DICT): A dictionary is used to detect unregistered words as errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems Compared", "sec_num": "4.3" }, { "text": "-Corpus (CORPUS): A word list from a reference corpus is used to detect unseen words as errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems Compared", "sec_num": "4.3" }, { "text": "-Web corpus (WEB): Character n-grams from a Web corpus are used to detect unseen n-grams as errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems Compared", "sec_num": "4.3" }, { "text": "-Dictionary and Web corpus (WEB+DICT): A dictionary combined with character n-grams from a Web corpus is used to detect unregistered words as errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems Compared", "sec_num": "4.3" }, { "text": "To correct typos, we used a character confusion set to transform the detected typos and generate the \"translation\" hypotheses with translation probabilities. These hypotheses were pruned using a Chinese dictionary before running the MT decoder in order to reduce the load on the decoder. The scope of this confusion set and the weights associated with the translation probability clearly influence the performance of our system. We evaluated and compared four different confusion set and weight settings. The four correction systems evaluated are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems Compared", "sec_num": "4.3" }, { "text": "-Full confusion set (FULL+WT): A broad confusion set with loosely similar relations in character sound and shape was used to generate mappings from a detected typo to its corrections. Different weights were used in modeling the probability of sound-based and shape-based mappings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems Compared", "sec_num": "4.3" }, { "text": "-Confusion set with identical sound (SND+WT): A broad confusion set with identical-sound and loosely similar shape relations was used to generate mappings. 
Different weights were used in modeling the probability of sound-based and shape-based mappings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems Compared", "sec_num": "4.3" }, { "text": "-Restricted confusion set with identical sound and strongly similar shape (SND+SHP): A broad confusion set with identical-sound and strongly similar shape relations was used to generate mappings. Sound and shape were given the same weight.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems Compared", "sec_num": "4.3" }, { "text": "-Restricted confusion set with different weights (SND+SHP+WT): A broad confusion set with identical-sound and strongly similar shape relations was used to generate mappings. Different weights were used in modeling the probability of sound-based and shape-based mappings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems Compared", "sec_num": "4.3" }, { "text": "To assess the effectiveness of the proposed system, we experimented with our system on the test data. We also exploited several language resources, including TWWaC, the Sinica Corpus, a Chinese dictionary, and the confusion set, in the proposed system to detect and correct errors. The Chinese word segmentation system produces the word segmentation result with the help of a Chinese dictionary to improve the proposed system. To evaluate our system, we used the precision rate and recall rate, which are defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Precision = C / S \\quad (2) \\qquad Recall = C / N", "eq_num": "(3)" } ], "section": "Evaluation Metrics", "sec_num": "4.4" }, { "text": "where N is the number of error characters, S is the number of characters translated by the proposed system, and C is the number of characters translated correctly by the proposed system. We also compute the corresponding F-score as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.4" }, { "text": "F-score = 2 \u00d7 Precision \u00d7 Recall / (Precision + Recall) (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.4" },
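{ "text": "Equations (2)-(4) can be computed directly from the three counts, as in the short sketch below (our own illustration; evaluate is a hypothetical name):

def evaluate(n_errors, n_translated, n_correct):
    '''n_errors = N, n_translated = S, n_correct = C in the text above.'''
    precision = n_correct / n_translated
    recall = n_correct / n_errors
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, f_score", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.4" },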
{ "text": "In this section, we report the results of the experimental evaluation using the methodology described in the previous section. We evaluated detection, as well as correction, for several systems with different language resources and settings. In this evaluation, we tested our systems on the 350 sentences containing at least one typo, provided in SIGHAN Bake-off 2013: Chinese Spelling Check. Table 4 shows the precision, recall, and F-score for the four detection systems, while Table 5 shows the same metrics for the four correction systems.", "cite_spans": [], "ref_spans": [ { "start": 390, "end": 397, "text": "Table 4", "ref_id": "TABREF4" }, { "start": 473, "end": 480, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5." }, { "text": "As can be seen in Table 4, using the Web corpus (WEB) achieves higher precision than the dictionary (DICT) or the compiled corpus (CORPUS), with slightly lower recall. Using the dictionary (DICT) leads to the highest recall but slightly lower precision. By combining the dictionary and Web corpus (WEB+DICT), we achieve the best precision, recall, and F-score. 
Table 5 shows that using the full confusion set with loosely similar sound and shape relations leads to the lowest recall and precision in error correction (FULL+WT). By restricting the sound confusion to identical sound and the shape confusion to strongly similar shape, we can improve precision dramatically, with a small increase in recall (SND+WT and SND+SHP).", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 25, "text": "Table 4", "ref_id": "TABREF4" }, { "start": 357, "end": 364, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5." }, { "text": "We can further improve precision and recall by applying different weights in modeling the probability of sound-based and shape-based hypotheses (SND+SHP+WT). Since typos are more often related to sound confusion than to shape confusion, giving a higher weight to sound confusion indeed leads to further improvement in both precision and recall. Previous works typically have used only a language model to correct errors, but we compute both the language model probability and the translation probability, resulting in more effective error correction. For this reason, we were placed among the top-scoring systems in the SIGHAN Bake-off 2013.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5." }, { "text": "To test how rarely the system produces false alarms, we also tested our systems on a dataset with an additional 350 sentences without typos. The best performing system (SND+SHP+WT) obtained a precision rate of .91, a recall rate of .56, and an F-score of .69 in correction. The results show that this system is very robust, maintaining a high precision rate in different situations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5." }, { "text": "The recall of our system is limited by the dictionary that we used to correct typos. For example, the typo \"\u4e03\u5f48\u5834\" [qi tan chang], which is detected by the model, is not corrected to \"\u6f06\u5f48\u5834\" [qi tan chang] because the latter is a new term not found in the Chinese dictionary we used. To correct such errors, we could use Web-based character n-grams, which are more likely to contain such new terms or productive compounds not found in a dictionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5." }, { "text": "Many avenues exist for future research and improvement of our system. For example, new terms can be automatically discovered and added to the Chinese dictionary to improve both detection and correction performance. Part-of-speech tagging can be performed to provide more information for error detection. Named entities can be recognized in order to avoid false alarms. A supervised statistical classifier can be used to model the translation probability more accurately. Additionally, an interesting direction to explore is using Web n-grams in addition to a Chinese dictionary for correcting typos. Yet another direction of research would be to consider errors related to missing or redundant characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6." }, { "text": "In summary, we have proposed a novel method for Chinese spell checking. Our approach involves error detection and correction based on the phrasal statistical machine translation framework. 
The error detection module detects errors by segmenting words and checking word and phrase frequencies based on a compiled dictionary and Web corpora. The phonological or morphological spelling errors found are then corrected by running a decoder based on the statistical machine translation (SMT) model. The experimental results show that the proposed system achieves significantly better accuracy in error detection and more satisfactory performance in error correction than state-of-the-art systems, outperforming previous works.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6." }, { "text": "1 Chinese Dictionary: http://www.edu.tw/files/site_content/m0001/pin/biau2.htm?open 2 Chinese Idioms: http://dict.idioms.moe.edu.tw/cydic/index.htm", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A new approach for automatic Chinese spelling correction", "authors": [ { "first": "C.-H", "middle": [], "last": "Chang", "suffix": "" } ], "year": 1995, "venue": "Proceedings of Natural Language Processing Pacific Rim Symposium", "volume": "", "issue": "", "pages": "278--283", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang, C.-H. (1995). A new approach for automatic Chinese spelling correction. In Proceedings of Natural Language Processing Pacific Rim Symposium, 278-283.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Improve the detection of improperly used Chinese characters with noisy channel model and detection template", "authors": [ { "first": "Y.-Z", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, Y.-Z. (2010). Improve the detection of improperly used Chinese characters with noisy channel model and detection template. Master thesis, Chaoyang University of Technology.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Segmentation standard for Chinese natural language processing", "authors": [ { "first": "C.-R", "middle": [], "last": "Huang", "suffix": "" }, { "first": "K.-J", "middle": [], "last": "Chen", "suffix": "" }, { "first": "L.-L", "middle": [], "last": "Chang", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 1996 International Conference on Computational Linguistics (COLING 96)", "volume": "2", "issue": "", "pages": "1045--1048", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, C.-R., Chen, K.-J., & Chang, L.-L. (1996). Segmentation standard for Chinese natural language processing. 
In Proceedings of the 1996 International Conference on Computational Linguistics (COLING 96), 2, 1045-1048.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Error detection and correction based on Chinese phonemic alphabet in Chinese text", "authors": [ { "first": "C.-M", "middle": [], "last": "Huang", "suffix": "" }, { "first": "M.-C", "middle": [], "last": "Wu", "suffix": "" }, { "first": "C.-C", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 4th International Conference on Modeling Decisions for Artificial Intelligence (MDAI IV)", "volume": "", "issue": "", "pages": "463--476", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, C.-M., Wu, M.-C., & Chang, C.-C. (2007). Error detection and correction based on Chinese phonemic alphabet in Chinese text. In Proceedings of the 4th International Conference on Modeling Decisions for Artificial Intelligence (MDAI IV), 463-476.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Automatic Chinese character error detecting system based on n-gram language model and pragmatics knowledge base", "authors": [ { "first": "T.-H", "middle": [], "last": "Hung", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hung, T.-H. (2009). Automatic Chinese character error detecting system based on n-gram language model and pragmatics knowledge base. Master thesis, Chaoyang University of Technology.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A rule based Chinese spelling and grammar detection system utility", "authors": [ { "first": "Y", "middle": [], "last": "Jiang", "suffix": "" } ], "year": 2012, "venue": "International Conference on System Science and Engineering (ICSSE)", "volume": "", "issue": "", "pages": "437--440", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiang, Y., et al. (2012). A rule based Chinese spelling and grammar detection system utility. 2012 International Conference on System Science and Engineering (ICSSE), 437-440, 30 June-2 July 2012.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Statistical Machine Translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koehn, P. (2010). Statistical Machine Translation. United Kingdom: Cambridge University Press.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Phonological and logographic influences on errors in written Chinese words", "authors": [ { "first": "C.-L", "middle": [], "last": "Liu", "suffix": "" }, { "first": "K.-W", "middle": [], "last": "Tien", "suffix": "" }, { "first": "M.-H", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Y.-H", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "S.-H", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Seventh Workshop on Asian Language Resources (ALR7), the Forty Seventh Annual Meeting of the Association for Computational Linguistics (ACL'09", "volume": "", "issue": "", "pages": "84--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu, C.-L., Tien, K.-W., Lai, M.-H., Chuang, Y.-H., & Wu, S.-H. (2009). Phonological and logographic influences on errors in written Chinese words. 
In Proceedings of the Seventh Workshop on Asian Language Resources (ALR7), the Forty Seventh Annual Meeting of the Association for Computational Linguistics (ACL'09), 84-91.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Visually and phonologically similar characters in incorrect Chinese words: Analyses, identification, and applications", "authors": [ { "first": "C.-L", "middle": [], "last": "Liu", "suffix": "" }, { "first": "M.-H", "middle": [], "last": "Lai", "suffix": "" }, { "first": "K.-W", "middle": [], "last": "Tien", "suffix": "" }, { "first": "Y.-H", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "S.-H", "middle": [], "last": "Wu", "suffix": "" }, { "first": "C.-Y", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2011, "venue": "ACM Transactions on Asian Language Information Processing", "volume": "10", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu, C.-L., Lai, M.-H., Tien, K.-W., Chuang, Y.-H., Wu, S.-H., & Lee, C.-Y. (2011). Visually and phonologically similar characters in incorrect Chinese words: Analyses, identification, and applications. ACM Transactions on Asian Language Information Processing, 10(2), Article 10, 39 pages.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Introduction to CKIP Chinese word segmentation system for the first international Chinese Word Segmentation Bakeoff", "authors": [ { "first": "W.-Y", "middle": [], "last": "Ma", "suffix": "" }, { "first": "K.-J", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2003, "venue": "Proceedings of ACL, Second SIGHAN Workshop on Chinese Language Processing", "volume": "17", "issue": "", "pages": "168--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ma, W.-Y., & Chen, K.-J. (2003). Introduction to CKIP Chinese word segmentation system for the first international Chinese Word Segmentation Bakeoff. In Proceedings of ACL, Second SIGHAN Workshop on Chinese Language Processing, 17, 168-171.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "MOE word frequency table", "authors": [ { "first": "Moe", "middle": [], "last": "", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "MOE. (1997). MOE word frequency table. Taiwan: Ministry of Education.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "MOE Dictionary new edition. Taiwan: Ministry of Education", "authors": [ { "first": "Moe", "middle": [], "last": "", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "MOE. (2007). MOE Dictionary new edition. Taiwan: Ministry of Education.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Common errors in Chinese writings. Taiwan: Ministry of Education", "authors": [ { "first": "Moe", "middle": [], "last": "", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "MOE. (1996). Common errors in Chinese writings. Taiwan: Ministry of Education.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Och, F. J., & Ney, H. 
(2003). A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1), 19-51.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A hybrid approach to automatic Chinese text checking and error correction", "authors": [ { "first": "F", "middle": [], "last": "Ren", "suffix": "" }, { "first": "H", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Q", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2001, "venue": "2001 IEEE International Conference on Systems, Man, and Cybernetics", "volume": "3", "issue": "", "pages": "1693--1698", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ren, F., Shi, H., & Zhou, Q. (2001). A hybrid approach to automatic Chinese text checking and error correction. 2001 IEEE International Conference on Systems, Man, and Cybernetics, 3, 1693-1698, 07-10 Oct. 2001.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "SRILM at Sixteen: Update and Outlook", "authors": [ { "first": "A", "middle": [], "last": "Stolcke", "suffix": "" }, { "first": "J", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "W", "middle": [], "last": "Wang", "suffix": "" }, { "first": "V", "middle": [], "last": "Abrash", "suffix": "" } ], "year": 2011, "venue": "Proceedings of IEEE Automatic Speech Recognition and Understanding Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stolcke, A., Zheng, J., Wang, W., & Abrash, V. (2011). SRILM at Sixteen: Update and Outlook. In Proceedings of IEEE Automatic Speech Recognition and Understanding Workshop, Dec. 2011.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Reducing the false alarm rate of Chinese character error detection and correction", "authors": [ { "first": "S.-H", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Y.-X", "middle": [], "last": "Chen", "suffix": "" }, { "first": "P.-C", "middle": [], "last": "Yang", "suffix": "" }, { "first": "T", "middle": [], "last": "Ku", "suffix": "" }, { "first": "C.-L", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2010, "venue": "Proceedings of CIPS-SIGHAN Joint Conference on Chinese Language Processing", "volume": "", "issue": "", "pages": "54--61", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu, S.-H., Chen, Y.-X., Yang, P.-C., Ku, T., & Liu, C.-L. (2010). Reducing the false alarm rate of Chinese character error detection and correction. In Proceedings of CIPS-SIGHAN Joint Conference on Chinese Language Processing (CLP 2010), 54-61, 28-29 Aug. 2010.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A Chinese OCR spelling check approach based on statistical language models", "authors": [ { "first": "L", "middle": [], "last": "Zhuang", "suffix": "" }, { "first": "T", "middle": [], "last": "Bao", "suffix": "" }, { "first": "X", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "C", "middle": [], "last": "Wang", "suffix": "" }, { "first": "S", "middle": [], "last": "Naoi", "suffix": "" } ], "year": 2004, "venue": "IEEE International Conference on Systems, Man and Cybernetics", "volume": "5", "issue": "", "pages": "4727--4732", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhuang, L., Bao, T., Zhu, X., Wang, C., & Naoi, S. (2004). A Chinese OCR spelling check approach based on statistical language models. 2004 IEEE International Conference on Systems, Man and Cybernetics, 5, 4727-4732, 10-13 Oct. 
2004.", "links": null } }, "ref_entries": { "TABREF0": { "type_str": "table", "content": "
Candidate: \u6c23\u4efd
Replaced character | Translations
\u6c23 | \u6c7d\u4efd \u6ce3\u4efd \u5668\u4efd \u5951\u4efd \u4f01\u4efd \u61a9\u4efd \u8a16\u4efd \u6c2e\u4efd \u8fc4\u4efd \u7ca5\u4efd
\u4efd | \u6c23\u5206 \u6c23\u5fff \u6c23\u61a4 \u6c23\u7cde \u6c23\u596e \u6c23\u5429 \u6c23\u626e \u6c23\u6c7e \u6c23\u82ac \u6c23\u6c1b
", "num": null, "html": null, "text": "" }, "TABREF2": { "type_str": "table", "content": "
Translations | Frequency | LM probability | tp
\u6c23\u61a4 | 48 | -4.96 | -1.20
\u6c23\u6c1b | 473 | -3.22 | -1.11
", "num": null, "html": null, "text": "" }, "TABREF3": { "type_str": "table", "content": "
N-gram | Sinica Corpus Types | TWWaC Types
2-gram | 66,778 | 2,848,193
3-gram | 45,382 | 13,745,743
4-gram | 12,294 | 17,191,359
", "num": null, "html": null, "text": "" }, "TABREF4": { "type_str": "table", "content": "
System | Precision | Recall | F-score
DICT | .91 | .52 | .66
CORPUS | .90 | .46 | .61
WEB | .93 | .47 | .63
WEB+DICT | .95 | .56 | .71
Table 5. The comparison of the correction experiment.
System | Precision | Recall | F-score
FULL+WT | .53 | .51 | .52
SND+WT | .74 | .57 | .65
SND+SHP | .90 | .55 | .68
SND+SHP+WT | .95 | .56 | .70
", "num": null, "html": null, "text": "" } } } }