{ "paper_id": "O03-3006", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:02:01.264672Z" }, "title": "Unsupervised Word Segmentation Without Dictionary", "authors": [ { "first": "Jason", "middle": [ "S" ], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": { "addrLine": "101, Kuangfu Road", "postCode": "300", "settlement": "Hsinchu", "country": "Taiwan, ROC" } }, "email": "jschang@cs.nthu.edu.tw" }, { "first": "Tracy", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "Chiao Tung University", "location": { "addrLine": "1001, Ta Hsueh Road, Hsinchu, 300", "country": "Taiwan, ROC" } }, "email": "tracylin@faculity.nctu.edu.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "O03-3006", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "This prototype system demonstrates a novel method of word segmentation based on corpus statistics. Since the central technique we used is unsupervised training based on a large corpus, we refer to this approach as unsupervised word segmentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The unsupervised approach is general in scope and can be applied to both Mandarin Chinese and Taiwanese. In this prototype, we illustrate its use in word segmentation of Taiwanese Bible written in Hanzi and Romanized characters. Basically, it involves:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Computing mutual information, MI, between Hanzi and Romanized characters A and B. If A and B have a relatively high MI, we lean toward treating AB as a word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Using a greedy method to form words of 2 to 4 characters in the input sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Building an N-gram model from the results of first-round word segmentation Segmenting words based on the N-gram model Iterating between the above two steps: building N-gram and word segmentation Computing mutual information. Using mutual information is motivated by the observation of previous work by Hank and Church (1990) and Sproat and Shih (1990) . If A and B have a relatively high MI that is over a certain threshold, we prefer to identify AB as a word over those having lower MI values. In the experiment with Taiwanese Bible, the system identified Hanzi and Romanized syllables. Out of those, we obtained pairs of consecutive single or double Hanzi characters and Romanized syllables. So those pairs are commonly known as character bigrams, trigrams, and fourgrams. We differed from the common N-gram calculation and treated those as pairs of character sequence in order to apply mutual information statistics. When successive words were formed, they could not contradict with the words determined previously. For instance, given the input \"\u5a66\u4ec1\u4eba\u5c0d\u86c7\u8b1b\uff1a \u300c\u5712\u5167\u6a39\uf9e8\u7684\u679c\u5b50\uf9c6\u901a\u98df,\" we looked up the table storing MI statistics and obtained the information shown in Table2. First, we formed words of two characters. Based on the information in Table 2 , the system formed the words, \u5a66 \u4ec1, \u679c\u5b50, \u901a\u98df, \u5712\u5167, \u6a39\uf9e8. 
", "cite_spans": [ { "start": 311, "end": 324, "text": "Church (1990)", "ref_id": "BIBREF0" }, { "start": 329, "end": 351, "text": "Sproat and Shih (1990)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 1233, "end": 1240, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Admittedly, there are limits to how far distributional regularity based on MI can be exploited for word segmentation, and many errors remained in the first-round word segmentation results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For instance, for the input \"\u6211\u7948\u79b1\u8036\u548c\u83ef\u8b1b\uff1a \u300e\u4e3b\u8036\u548c\u83ef\u554a \u2026 ,\" the system produced the segmentation of \"\u6211 / \u7948\u79b1 / \u8036\u548c\u83ef / \u8b1b / \uff1a / \u300e / \u4e3b\u8036 / \u548c\u83ef / \u554a /.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Building the N-gram model. We adjusted the raw word counts with Good-Turing smoothing, where N_i denotes the number of distinct words occurring exactly i times: r_0 = N_1 / N_0, r_i = (i+1) N_{i+1} / N_i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "After the adjustment step, we obtained the probability for the unigram model as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "P(W) = r' / N", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "where r' is the smoothed count of W and N is the total number of words in the corpus. For instance, the counts obtained after the first-round MI-based segmentation are shown in Table 3. Word segmentation based on the N-gram model. We proceeded to redo the word segmentation task on the same corpus with the aim of rectifying the errors made in the previous stage. This was done with the standard dynamic programming procedure of the Viterbi algorithm, finding the segmentation S that satisfies the following optimality condition:", "cite_spans": [], "ref_spans": [ { "start": 125, "end": 134, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "", "sec_num": null }, { "text": "S = argmax_{W_1 .. W_n} \u220f_{i=1}^{n} P(W_i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For the example of \"\u6211\u7948\u79b1\u8036\u548c\u83ef\u8b1b\uff1a \u300e\u4e3b\u8036\u548c\u83ef\u554a \u2026 \" given earlier, the system is likely to produce the correct segmentation \"\u6211 / \u7948\u79b1 / \u8036\u548c\u83ef / \u8b1b / \uff1a / \u300e / \u4e3b / \u8036\u548c\u83ef / \u554a /\u2026 .\" Table 5. Probabilities for various segmentations of \"\u4e3b\u8036\u548c\u83ef\", listing P(W_1), P(W_2), P(W_3), and P(S) for each candidate segmentation S. For S = \u4e3b, \u8036\u548c\u83ef: P(W_1) = 0.0018672506 and P(W_2) = 0.0099990557.
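As an illustration of this second stage, the following sketch pairs the Good-Turing adjustment of unigram counts with Viterbi segmentation in log space. It is a minimal reconstruction of the formulas above, not the prototype's code; the helper names, the four-character cap on word length, and the floor probability for unseen single characters are assumptions.

import math
from collections import Counter

def good_turing_probs(counts):
    # counts maps each word to its raw frequency r; n_i[i] below is
    # N_i, the number of distinct words occurring exactly i times.
    n_i = Counter(counts.values())
    total = sum(counts.values())
    probs = {}
    for word, r in counts.items():
        # r* = (r + 1) N_{r+1} / N_r, falling back to r when N_{r+1} = 0.
        r_star = (r + 1) * n_i[r + 1] / n_i[r] if n_i[r + 1] else r
        probs[word] = r_star / total  # P(W) = r' / N
    return probs

def viterbi_segment(sentence, probs, max_len=4):
    # best[i] holds the highest log-probability over segmentations of
    # sentence[:i], together with the segmentation achieving it.
    best = [(0.0, [])] + [(float('-inf'), None)] * len(sentence)
    for i in range(1, len(sentence) + 1):
        for j in range(max(0, i - max_len), i):
            word = sentence[j:i]
            # Unseen single characters receive a small floor probability
            # (an assumption) so that a path always exists.
            p = probs.get(word, 1e-9 if len(word) == 1 else 0.0)
            if p > 0.0 and best[j][0] + math.log(p) > best[i][0]:
                best[i] = (best[j][0] + math.log(p), best[j][1] + [word])
    return best[len(sentence)][1]

In the full procedure, the corpus is re-segmented with viterbi_segment, the word counts are re-collected, and good_turing_probs is re-applied, iterating the two steps until the segmentation stabilizes.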
Our demonstration prototype sheds new light on the extensively studied problem of word segmentation. The prototype illustrates:", "cite_spans": [], "ref_spans": [ { "start": 154, "end": 161, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "", "sec_num": null }, { "text": "It is possible to achieve high-precision word segmentation for a sufficiently large corpus without a dictionary, rivaling human annotation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The heuristic MI-based approach by Sproat can be extended effectively to handle words longer than two characters. A more theoretically sound approach, based on an N-gram model and unsupervised learning with an EM-like algorithm, can bring about higher performance than the heuristic approach based on mutual information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Unsupervised, self-organized word segmentation can provide an objective view of word segmentation. It should be considered as a quantitative, corpus-dependent method when setting up a segmentation standard or benchmark.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "High-precision segmentation of Hanzi text can be achieved by unsupervised training on a reasonably sized corpus. Unsupervised word segmentation represents an innovative way to acquire lexical units from a large corpus based on lexical distributional regularity. The word segmentation algorithm is the standard Viterbi algorithm and is independent of the particular N-gram model trained on the corpus, making it easy to change domains. The approach is useful in a wide range of areas and lends itself to customization for a particular user or task. For example, the results can be used to prepare a concordance, or as a first step in many natural language processing systems such as machine translation, information retrieval, or text-to-speech. Finally, the model explored here can serve as a basis for self-organized word segmentation and for the alignment of a bilingual Chinese-English corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We acknowledge the support for this study through grants from the Ministry of Education, Taiwan (MOE EX-91-E-FA06-4-4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Word Association Norms, Mutual Information, and Lexicography", "authors": [ { "first": "K", "middle": [], "last": "Church", "suffix": "" }, { "first": "P", "middle": [], "last": "Hanks", "suffix": "" } ], "year": 1990, "venue": "Computational Linguistics", "volume": "16", "issue": "1", "pages": "22--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Church, K. and P. Hanks, \"Word Association Norms, Mutual Information, and Lexicography,\" Computational Linguistics, 16:1, 1990, pp. 22-29.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Chinese Word Segmentation", "authors": [ { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "" } ], "year": 1998, "venue": "First International Conference on Language Resources & Evaluation: Proceedings", "volume": "", "issue": "", "pages": "417--420", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sproat, R., \"Chinese Word Segmentation,\" First International Conference on Language Resources & Evaluation: Proceedings, 1998, pp. 
417-420.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A statistical method for finding word boundaries in Chinese text", "authors": [ { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "" }, { "first": "Chilin", "middle": [], "last": "Shih", "suffix": "" } ], "year": 1990, "venue": "Computer Processing of Chinese & Oriental Languages", "volume": "4", "issue": "4", "pages": "336--351", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sproat, R. and C. Shih, \"A Statistical Method for Finding Word Boundaries in Chinese Text,\" Computer Processing of Chinese & Oriental Languages, 4:4, 1990, pp. 336-351.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A stochastic finite-state word-segmentation algorithm for Chinese", "authors": [ { "first": "R", "middle": [], "last": "Sproat", "suffix": "" }, { "first": "Chilin", "middle": [], "last": "Shih", "suffix": "" }, { "first": "William", "middle": [], "last": "Gale", "suffix": "" }, { "first": "Nancy", "middle": [], "last": "Chang", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "3", "pages": "377--404", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sproat, R., C. Shih, W. Gale, and N. Chang, \"A Stochastic Finite-State Word-Segmentation Algorithm for Chinese,\" Computational Linguistics, 22:3, 1996, pp. 377-404.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Iterating between building the N-gram model and word segmentation. The improved word segmentation will obviously bring about a better N-gram model for segmentation. Subsequently, the improved N-gram model will help produce segmentation results of higher accuracy. The process of improvement usually converges quickly after a couple of iterations.", "type_str": "figure", "uris": null }, "TABREF0": { "text": "", "html": null, "content": "
", "type_str": "table", "num": null }, "TABREF2": { "text": "", "html": null, "content": "
Good-Turing estimates for unigrams: Adjusted frequencies and probabilities
r    N_r    r*    P_GT(.)
", "type_str": "table", "num": null } } } }