{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:33:46.506339Z" }, "title": "Context-Aware Word Segmentation for Chinese Real-World Discourse", "authors": [ { "first": "Kaiyu", "middle": [], "last": "Huang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Dalian University of Technology", "location": { "country": "China" } }, "email": "huangdg@dlut.edu.cn" }, { "first": "Junpeng", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Dalian University of Technology", "location": { "country": "China" } }, "email": "" }, { "first": "Jingxiang", "middle": [], "last": "Cao", "suffix": "", "affiliation": { "laboratory": "", "institution": "Dalian University of Technology", "location": { "country": "China" } }, "email": "" }, { "first": "Degen", "middle": [], "last": "Huang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Dalian University of Technology", "location": { "country": "China" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Previous neural approaches have achieved significant progress on Chinese word segmentation (CWS) as a sentence-level task, but they suffer from limitations in real-world scenarios. In this paper, we address this issue with a context-aware method and optimize the solution at the document level. We propose a three-step strategy to improve the performance of discourse-level CWS. First, the method utilizes an auxiliary segmenter to remedy the limitations of the pre-segmenter. Then a context-aware algorithm computes the confidence of each split, and the maximum probability path is reconstructed via this algorithm. In addition, to evaluate performance on discourse, we build a new benchmark consisting of the latest news and Chinese medical articles. 
Extensive experiments on this benchmark show that our proposed method achieves competitive performance in a document-level real-world scenario for CWS.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Previous neural approaches have achieved significant progress on Chinese word segmentation (CWS) as a sentence-level task, but they suffer from limitations in real-world scenarios. In this paper, we address this issue with a context-aware method and optimize the solution at the document level. We propose a three-step strategy to improve the performance of discourse-level CWS. First, the method utilizes an auxiliary segmenter to remedy the limitations of the pre-segmenter. Then a context-aware algorithm computes the confidence of each split, and the maximum probability path is reconstructed via this algorithm. In addition, to evaluate performance on discourse, we build a new benchmark consisting of the latest news and Chinese medical articles. Extensive experiments on this benchmark show that our proposed method achieves competitive performance in a document-level real-world scenario for CWS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Downstream tasks in Chinese natural language processing (NLP) leverage word-level information to construct architectures. In recent years, some approaches have gradually replaced word-level information in order to alleviate segmentation errors and word sparsity problems. However, this may lose all other word-level information (e.g., part-of-speech tags and dependency parses). Chinese word segmentation (CWS) is therefore still essential for downstream Chinese NLP tasks. Xue (2003) formalized the CWS task as a sequence labeling problem. 
CWS performance has improved significantly via statistical machine learning (Zhao and Kit, 2008; Zhao et al., 2010) and neural network methods (Cai et al., 2017; Zhou et al., 2017; Yang et al., 2017; Ma et al., 2018) . In particular, recent years have also seen a new supervised learning paradigm that applies BERT and other pre-training models to sequence labeling problems (Huang et al., 2019; Tian et al., 2020) . Various fine-tuning methods can improve in-domain CWS performance significantly and easily. With pre-training methods, previous approaches perform almost at human level, with an error rate of nearly 2%. Moreover, unlike statistical machine learning methods, neural methods do not rely on hand-crafted feature engineering. However, recent state-of-the-art neural methods suffer from two limitations as follows: 1) Effective neural network methods and fine-tuning methods based on pre-training models need large annotated corpora to train. Performance drops sharply in low-resource scenarios.", "cite_spans": [ { "start": 465, "end": 475, "text": "Xue (2003)", "ref_id": "BIBREF9" }, { "start": 619, "end": 639, "text": "(Zhao and Kit, 2008;", "ref_id": "BIBREF16" }, { "start": 640, "end": 658, "text": "Zhao et al., 2010)", "ref_id": "BIBREF15" }, { "start": 686, "end": 704, "text": "(Cai et al., 2017;", "ref_id": "BIBREF0" }, { "start": 705, "end": 723, "text": "Zhou et al., 2017;", "ref_id": "BIBREF18" }, { "start": 724, "end": 742, "text": "Yang et al., 2017;", "ref_id": "BIBREF11" }, { "start": 743, "end": 759, "text": "Ma et al., 2018)", "ref_id": "BIBREF6" }, { "start": 917, "end": 937, "text": "(Huang et al., 2019;", "ref_id": "BIBREF2" }, { "start": 938, "end": 956, "text": "Tian et al., 2020)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2) Neural network methods handle short sentences well but struggle with long sentences and document-level texts, 
especially for Chinese word segmentation. Although some neural architectures, e.g., the long short-term memory (LSTM) network, try to alleviate this problem, performance still drops dramatically as the maximum sentence length grows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Both limitations are clearly reflected in real-world document-level texts. Many Chinese tasks, e.g., text classification and machine translation, are discourse processing tasks in most cases, so the effectiveness of real-world document-level CWS affects the performance of these tasks. Real-world texts are highly time-sensitive. Not only domain shift but also time validity leads to the issue of out-of-vocabulary (OOV) words. Even though previous research alleviates the issues of cross-domain CWS, the time-sensitivity issue is hard to solve with similar approaches. Most methods are data-driven approaches to improving cross-domain CWS performance. For instance, previous works incorporate domain dictionaries and pre-trained embeddings into neural network methods (Zhao et al., 2018; Ye et al., 2019) . Annotating a small amount of training data is the simplest and most effective method. However, large annotated corpora cannot be updated immediately to cover the latest news and dialogues, owing to the high cost of manual annotation. Similarly, training an effective pre-training model is time-consuming, so it is impractical to update the model and annotated corpus with the latest data. Furthermore, previous effective methods often lack robustness for discourse CWS, as shown in Figure 1 . The word \"Gao Shan\" occurs twice in the discourse in contexts with similar semantics. However, it is segmented into different splits. 
Because of the limit on the maximum length of input sequences, segmentation consistency is not guaranteed.", "cite_spans": [ { "start": 824, "end": 842, "text": "Zhao et al., 2018;", "ref_id": "BIBREF17" }, { "start": 843, "end": 859, "text": "Ye et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 1379, "end": 1387, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous works on in-domain and cross-domain CWS are almost always character-level and sentence-level. Li and Xue (2014) propose an effective method for patent domain CWS by integrating document-level features, but it is still a sentence-level optimization. Yan et al. (2017) utilize multiple constraint rules to alleviate the issues in specific domains. This paper proposes a context-aware unsupervised method to alleviate the above issues for Chinese word segmentation, instead of adopting multiple constraint rules, and it is not limited to a specific domain. The method has a global receptive field over the entire discourse. It directly utilizes document-level information to improve CWS performance in real-world discourse scenarios. In particular, words that recur in the discourse are rejudged by our proposed method, and uncertain words are also reconsidered.", "cite_spans": [ { "start": 97, "end": 114, "text": "Li and Xue (2014)", "ref_id": "BIBREF4" }, { "start": 258, "end": 275, "text": "Yan et al. (2017)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The method consists of three steps: a word-lattice based pre-segmenter, a rejudged module, and a context-aware algorithm. First, the pre-segmenter achieves high performance on in-vocabulary words. Then the rejudged module selects the uncertain splits as potential out-of-vocabulary words. 
Finally, the core context-aware algorithm utilizes document-level information to screen the uncertain splits. The sequence labeling task is to find the maximum probability path, and a new path is reconstructed through the three steps. To evaluate our method, we build a new benchmark of document-level texts for CWS. It contains the latest news and Chinese medical articles. Extensive experiments show that our proposed method is effective on discourse and achieves competitive performance for real-world CWS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To sum up, our main contributions are threefold: 1) To the best of our knowledge, our proposed method is the first to adopt a document-level unsupervised learning algorithm for CWS in a real-world scenario. It takes only the information of the current discourse itself, so there is no need to constantly maintain external resources. 2) The method acts on the global field of the discourse and can effectively alleviate the issue of segmentation inconsistency. 3) We propose a new benchmark to evaluate CWS performance in real-world discourse scenarios. 1 2 Methodology Figure 2 shows the entire process of our proposed method. It consists of three steps. In the first two steps, we utilize a pre-segmenter and an auxiliary segmenter to segment the sentences. The two segmenters generate two segmentation results R and R' respectively (Figure 2 : The process of the context-aware method). This assigns different reliability to the segmentation splits. The pre-segmenter and the auxiliary segmenter are both sentence-level CWS methods. In the third step, the context-aware algorithm leverages document-level information to determine the final boundaries of these splits. It is a global optimization that revises the results of the prior steps. 
The final segmentation result R o is obtained by this optimization.", "cite_spans": [], "ref_spans": [ { "start": 611, "end": 619, "text": "Figure 2", "ref_id": null }, { "start": 856, "end": 864, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The word-lattice based method has been studied for a long time and has gradually been replaced by neural methods. However, the method retains high performance on in-vocabulary words (Kudo et al., 2004) . It depends on the robustness of the word lattice. Given an unsegmented sentence S, the method builds an undirected graph with a system lattice. The graph consists of nodes and edges. Each node represents a split that could be a word, and the edges are paths with transition probabilities. CWS is thus transformed into the task of searching for the maximum probability path in the undirected graph. The word-lattice based model is essentially a statistical machine learning method, and the transition probabilities are inevitably trained with a hand-crafted feature template. To simplify this process, we adopt a simple baseline feature template that consists of the current word itself and a sliding window with a front and back distance of two. This process does not need much knowledge; the effort and cost are similar to those of building an embedding layer in neural methods.", "cite_spans": [ { "start": 175, "end": 194, "text": "(Kudo et al., 2004)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Pre-segmenter", "sec_num": "2.1" }, { "text": "The pre-segmenter can provide high-accuracy segmentation of in-vocabulary words. However, its capability to recognize out-of-vocabulary words is weak. We leverage an auxiliary segmenter to rejudge the uncertain splits from the first step for their potential to become words. 
In order to capture the flexible lexical features of characters, the auxiliary segmenter employs a BERT fine-tuning paradigm for character-level CWS. Given the same unsegmented sentence S as in the first step, the character sequence of the sentence is mapped onto corresponding Algorithm 1 The document-level context-aware optimization Input:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Auxiliary Segmenter", "sec_num": "2.2" }, { "text": "The pre-segmentation result of characters with", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Auxiliary Segmenter", "sec_num": "2.2" }, { "text": "labels R = (t 1 , t 2 , t 3 \u2022 \u2022 \u2022t n );", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Auxiliary Segmenter", "sec_num": "2.2" }, { "text": "The auxiliary segmentation result of characters with labels R' = (t' 1 , t'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Auxiliary Segmenter", "sec_num": "2.2" }, { "text": "2 , t' 3 \u2022 \u2022 \u2022t' n ); \u2203R u \u2282 R, \u2203R' u \u2282 R' .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Auxiliary Segmenter", "sec_num": "2.2" }, { "text": "All continuous splits that meet the conditions (t i is the tag S and t' i is not the tag S or E) are called rejudged units R u ; Threshold value \u03bb; Output:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Auxiliary Segmenter", "sec_num": "2.2" }, { "text": "Final segmentation result R o ; 1: Number = 0 2: for t i in R u do 3:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Auxiliary Segmenter", "sec_num": "2.2" }, { "text": "Take the maximum co-occurrence frequency f i of the front and back characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Auxiliary Segmenter", "sec_num": "2.2" }, { "text": "p i = log 10 (f i ) + \u03a3 j (p ij * log p ij ) + P i 5: if p i < \u03bb then 6: Remove R i u from R u 7: end if 8: if f i == f i\u22121 then 9: 
N i = Number 10:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4:", "sec_num": null }, { "text": "else 11:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4:", "sec_num": null }, { "text": "N i = Number + 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4:", "sec_num": null }, { "text": "12:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4:", "sec_num": null }, { "text": "Number = Number + 1 13:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4:", "sec_num": null }, { "text": "end if 14: end for 15: Update the R i u which have the same N i embeddings. Then the embeddings are encoded by the pre-trained BERT encoder. The pre-trained model is transferred to the CWS task through a linear transfer layer. In the end, the marginal probability P i of each character is computed through a Softmax layer. We only extract the marginal probabilities of the characters instead of obtaining a final segmentation result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4:", "sec_num": null }, { "text": "The pre-segmenter is more sensitive to word-level splits that are in the system lattice. It does not conjecture uncertain boundaries, which leads to a weakness on out-of-vocabulary words: OOV words are incorrectly segmented into continuous single-character words. For instance, the word \"\u6838 \u82f7\u9178(nucleotide)\" is not in the lattice and is labeled as \"SSS\", which represents three continuous single-character words. In contrast, the auxiliary segmenter, which uses a character-based method, is good at conjecturing the dependence between adjacent characters. The characters \"\u6838(core)\", \"\u82f7(glycosides)\" and \"\u9178(acid)\" may be predicted as non-single labels by the auxiliary segmenter, because \"\u6838(core)\" and \"\u9178(acid)\" have a wide range of probabilities as word boundaries in such structures, e.g., \"\u6838\u7cd6(ribose)\" and \"\u6838\u9178(nucleic acid)\". 
Inspired by this idea, the rejudged module focuses on some local paths of the pre-segmentation result R, and we utilize a context-aware algorithm to determine the edges of these paths. The algorithm is a global optimization rather than a way of integrating document-level information into a sentence-level optimization. The algorithm is shown as Alg. 1, where R o represents the final segmentation result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-aware Algorithm", "sec_num": "2.3" }, { "text": "3 Datasets and Experiments", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-aware Algorithm", "sec_num": "2.3" }, { "text": "To evaluate CWS performance on discourse, we propose a new benchmark consisting of two domains (Chinese daily news and Chinese medical articles, respectively), as shown in Table 1 . A fine-grained segmentation criterion is adopted, which is close to the Peking University (PKU) criterion. This fine-grained criterion is effective for machine translation and prepositional phrase recognition. The training data comes from the People's Daily of January 1998, which is temporally distant from the two test sets. The size of the new benchmark is shown in Table 1 . In this paper, we adopt precision (P), recall (R), and F value to evaluate each method. In addition, the recall on in-vocabulary words (R iv ) and out-of-vocabulary words (R oov ) is considered for evaluation. We choose the median of the range of the marginal probability (p i \u2208 [0, 1]) as the threshold \u03bb = 0.5. 
The hyper-parameter values in our proposed method are set empirically following previous related work (Ma et al., 2018) .", "cite_spans": [ { "start": 949, "end": 966, "text": "(Ma et al., 2018)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 184, "end": 192, "text": "Table 1", "ref_id": null }, { "start": 549, "end": 556, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Datasets and Settings", "sec_num": "3.1" }, { "text": "We conduct comprehensive experiments on the new benchmark. We compare context-aware segmentation with multiple previously proposed methods:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Main Result", "sec_num": "3.2" }, { "text": "\u2022 Jieba: a popular Chinese word segmentation tool with a domain-specific dictionary. We integrate a medical domain lattice into the tool.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Main Result", "sec_num": "3.2" }, { "text": "\u2022 LSTM: an effective and concise model used in Ma et al. (2018) . In order to improve performance on OOV words, we also integrate pre-trained embeddings into the base model.", "cite_spans": [ { "start": 47, "end": 63, "text": "Ma et al. (2018)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Main Result", "sec_num": "3.2" }, { "text": "\u2022 BERT: a pre-trained model with fine-tuning, used similarly to Cui et al. (2019) .", "cite_spans": [ { "start": 64, "end": 81, "text": "Cui et al. (2019)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Main Result", "sec_num": "3.2" }, { "text": "\u2022 FA-CWS: a fast and accurate neural method with greedy search by Cai et al. (2017) . The pre-trained embedding is based on word2vec.", "cite_spans": [ { "start": 69, "end": 86, "text": "Cai et al. (2017)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Main Result", "sec_num": "3.2" }, { "text": "\u2022 Lattice-LSTM: a lattice based LSTM with subword encoding proposed by .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Main Result", "sec_num": "3.2" }, { "text": "In addition, to verify the effectiveness of document-level optimization, we compare our proposed method with the pre-segmenter and the sentence-level method. The sentence-level method does not utilize any document-level information, and its input is a sentence instead of the discourse. The results on Chinese daily news and Chinese medical articles are shown in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 363, "end": 370, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Main Result", "sec_num": "3.2" }, { "text": "From Table 2 , it is observed that our proposed method improves performance via document-level optimization for Chinese discourse, compared with methods using character-level and sentence-level optimization. In addition, the context-aware method does not rely on any external resources. It extracts only document-level information from the discourse itself and is not domain-limited, which distinguishes it from previous document segmentation research. The method is practical for common domains, and it is robust when dealing with real-world scenarios. Compared with previous state-of-the-art character-level and sentence-level works, an obvious improvement is achieved by our proposed method. Furthermore, owing to external resources, the R oov values of \"LSTM\" and \"BERT\" are high when pre-trained embeddings are adopted. \"Jieba\" utilizes a medical domain dictionary and achieves competitive performance on medical domain segmentation. 
Our proposed method leverages the information of the test discourse itself to achieve comparable performance.", "cite_spans": [], "ref_spans": [ { "start": 5, "end": 12, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Main Result", "sec_num": "3.2" }, { "text": "Existing methods have a potential weakness in dealing with the OOV issue on the new benchmarks. The context-aware method can alleviate this issue for Chinese discourse. In fact, the factors that directly affect the performance of downstream NLP tasks are the keywords in a discourse. These words are likely to be OOV words and occur frequently in a document. For instance, the word \"\u841d \u8389(Lolita)\" occurs more than 10 times in the news about Japanese culture. This word is hard for the pre-segmenter to segment because it is not in the lattice, and it is segmented into two single-character words. The auxiliary segmenter pays attention to recognizing this continuous split, and the context-aware algorithm then recalls the splits as one word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case Study", "sec_num": "3.3" }, { "text": "However, it is inevitable that some in-vocabulary words will be affected by the context-aware processing. The value R iv may drop a little in Table 2 . For instance, the in-vocabulary word \"\u9ad8\u5c71(high mountain)\" and the name \"\u9ad8(Gao)/ \u5c71(Shan)\" occur in the discourse at the same time. The two splits represent a common noun and a Chinese person name respectively. Both of them are in the lattice. The Chinese name \"\u9ad8(Gao)/ \u5c71(Shan)\" may be incorrectly segmented as one word at the context-aware step. To alleviate this issue, a feasible way is to integrate syntactic knowledge into the model. 
We will explore this idea in future work.", "cite_spans": [], "ref_spans": [ { "start": 143, "end": 150, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Case Study", "sec_num": "3.3" }, { "text": "In this paper, we propose an intuitive context-aware method to alleviate segmentation inconsistency in discourse. Time-sensitivity and domain knowledge are taken into account via document-level information. The method is explainable and unsupervised. In summary, our proposed method is empirical but does not use rigid constraint rules. In addition, a new discourse benchmark is built for the evaluation of document-level Chinese word segmentation. The distribution of words in the benchmark is natural. However, the scale of the benchmark is still limited; we will expand it to make it more reliable, and we will try to integrate such knowledge into popular neural models in the future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "Our code is available at https://github.com/koukaiu/dlut-nihao", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank the reviewers for their helpful comments and suggestions to improve the quality of the paper. 
The authors gratefully acknowledge the financial support provided by the National Key Research and Development Program of China (2020AAA0108004) and the National Natural Science Foundation of China (No. U1936109 and No. 61672127).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Fast and accurate neural word segmentation for Chinese", "authors": [ { "first": "Deng", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Zhisong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Xin", "suffix": "" }, { "first": "Yongjian", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Feiyue", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "608--615", "other_ids": { "DOI": [ "10.18653/v1/P17-2096" ] }, "num": null, "urls": [], "raw_text": "Deng Cai, Hai Zhao, Zhisong Zhang, Yuan Xin, Yongjian Wu, and Feiyue Huang. 2017. Fast and accurate neural word segmentation for Chinese. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 608-615, Vancouver, Canada. 
Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Pre-training with whole word masking for chinese bert", "authors": [ { "first": "Yiming", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Ziqing", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Shijin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Guoping", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.08101" ] }, "num": null, "urls": [], "raw_text": "Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, and Guoping Hu. 2019. Pre-training with whole word masking for chinese bert. arXiv preprint arXiv:1906.08101.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Toward fast and accurate neural chinese word segmentation with multi-criteria learning", "authors": [ { "first": "Weipeng", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Xingyi", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Kunlong", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Taifeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Chu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1903.04190" ] }, "num": null, "urls": [], "raw_text": "Weipeng Huang, Xingyi Cheng, Kunlong Chen, Taifeng Wang, and Wei Chu. 2019. Toward fast and accurate neural chinese word segmentation with multi-criteria learning. 
arXiv preprint arXiv:1903.04190.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Applying conditional random fields to Japanese morphological analysis", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "Kaoru", "middle": [], "last": "Yamamoto", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "230--237", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taku Kudo, Kaoru Yamamoto, and Yuji Matsumoto. 2004. Applying conditional random fields to Japanese morphological analysis. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 230-237, Barcelona, Spain.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Effective document-level features for chinese patent word segmentation", "authors": [ { "first": "Si", "middle": [], "last": "Li", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "199--205", "other_ids": {}, "num": null, "urls": [], "raw_text": "Si Li and Nianwen Xue. 2014. Effective document-level features for chinese patent word segmentation. 
In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 199-205.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Is word segmentation necessary for deep learning of chinese representations?", "authors": [ { "first": "Xiaoya", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yuxian", "middle": [], "last": "Meng", "suffix": "" }, { "first": "Xiaofei", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Qinghong", "middle": [], "last": "Han", "suffix": "" }, { "first": "Arianna", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3242--3252", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoya Li, Yuxian Meng, Xiaofei Sun, Qinghong Han, Arianna Yuan, and Jiwei Li. 2019. Is word segmentation necessary for deep learning of chinese representations? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3242-3252.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "State-of-the-art chinese word segmentation with bilstms", "authors": [ { "first": "Ji", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" }, { "first": "David", "middle": [], "last": "Weiss", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4902--4908", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ji Ma, Kuzman Ganchev, and David Weiss. 2018. State-of-the-art chinese word segmentation with bilstms. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4902-4908.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Glyce: Glyph-vectors for chinese character representations", "authors": [ { "first": "Yuxian", "middle": [], "last": "Meng", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xiaoya", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ping", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Fan", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Muyu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Qinghong", "middle": [], "last": "Han", "suffix": "" }, { "first": "Xiaofei", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "2742--2753", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuxian Meng, Wei Wu, Fei Wang, Xiaoya Li, Ping Nie, Fan Yin, Muyu Li, Qinghong Han, Xiaofei Sun, and Jiwei Li. 2019. Glyce: Glyph-vectors for chinese character representations. 
In Advances in Neural Information Processing Systems, pages 2742-2753.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Joint chinese word segmentation and part-of-speech tagging via two-way attentions of auto-analyzed knowledge", "authors": [ { "first": "Yuanhe", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Song", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Ao", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Xiaojun", "middle": [], "last": "Quan", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yonggang", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8286--8296", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuanhe Tian, Yan Song, Xiang Ao, Fei Xia, Xiaojun Quan, Tong Zhang, and Yonggang Wang. 2020. Joint chinese word segmentation and part-of-speech tagging via two-way attentions of auto-analyzed knowledge. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8286-8296.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Chinese word segmentation as character tagging", "authors": [ { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" } ], "year": 2003, "venue": "Special Issue on Word Formation and Chinese Language Processing", "volume": "8", "issue": "", "pages": "29--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nianwen Xue. 2003. Chinese word segmentation as character tagging.
International Journal of Computational Linguistics & Chinese Language Processing, Volume 8, Number 1, February 2003: Special Issue on Word Formation and Chinese Language Processing, pages 29-48.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Domain-specific chinese word segmentation with document-level optimization", "authors": [ { "first": "Qian", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Chenlin", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Shoushan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Fen", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Zekai", "middle": [], "last": "Du", "suffix": "" } ], "year": 2017, "venue": "National CCF Conference on Natural Language Processing and Chinese Computing", "volume": "", "issue": "", "pages": "353--365", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qian Yan, Chenlin Shen, Shoushan Li, Fen Xia, and Zekai Du. 2017. Domain-specific chinese word segmentation with document-level optimization. In National CCF Conference on Natural Language Processing and Chinese Computing, pages 353-365. Springer.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Neural word segmentation with rich pretraining", "authors": [ { "first": "Jie", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Dong", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "839--849", "other_ids": { "DOI": [ "10.18653/v1/P17-1078" ] }, "num": null, "urls": [], "raw_text": "Jie Yang, Yue Zhang, and Fei Dong. 2017. Neural word segmentation with rich pretraining. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 839-849, Vancouver, Canada.
Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Subword encoding in lattice lstm for chinese word segmentation", "authors": [ { "first": "Jie", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shuailong", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2720--2725", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jie Yang, Yue Zhang, and Shuailong Liang. 2019. Subword encoding in lattice lstm for chinese word segmentation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2720-2725.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2726-2735.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Neural networks incorporating dictionaries for chinese word segmentation", "authors": [ { "first": "Qi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xiaoyu", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jinlan", "middle": [], "last": "Fu", "suffix": "" } ], "year": 2018, "venue": "Thirty-Second AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qi Zhang, Xiaoyu Liu, and Jinlan Fu. 2018. Neural networks incorporating dictionaries for chinese word segmentation. In Thirty-Second AAAI Conference on Artificial Intelligence.
ACM Transactions on Asian Language Information Processing, 9(2):1-32.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Unsupervised segmentation helps supervised learning of character tagging for word segmentation and named entity recognition", "authors": [ { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Chunyu", "middle": [], "last": "Kit", "suffix": "" } ], "year": 2008, "venue": "The Sixth SIGHAN Workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "106--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hai Zhao and Chunyu Kit. 2008. Unsupervised segmentation helps supervised learning of character tagging for word segmentation and named entity recognition. In The Sixth SIGHAN Workshop on Chinese Language Processing, pages 106-111, Hyderabad, India.
In IJCAI, pages 4602-4608.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Neural system combination for machine translation", "authors": [ { "first": "Long", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wenpeng", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Jiajun", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chengqing", "middle": [], "last": "Zong", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "378--384", "other_ids": { "DOI": [ "10.18653/v1/P17-2060" ] }, "num": null, "urls": [], "raw_text": "Long Zhou, Wenpeng Hu, Jiajun Zhang, and Chengqing Zong. 2017. Neural system combination for machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 378-384, Vancouver, Canada. Association for Computational Linguistics.
CorporaWord Character OOV Rate
Train People's Daily in Jan. 1998 1.2M2.0M
Test     Chinese Daily News        42K    63K    6.8%
         Chinese medical articles  40K    66K    27.7%
Table 1: The sizes of the new benchmark
Method           Chinese Daily News                   Chinese Medical Articles
                 P      R      F      R_iv   R_oov    P      R      F      R_iv   R_oov
Jieba87.76 79.85 83.62 80.15 75.85 85.42 79.60 82.41 78.63 82.13
LSTM93.12 91.68 92.40 91.80 91.04 86.29 88.49 87.38 90.88 81.66
BERT93.55 92.49 93.02 93.05 89.17 86.32 87.69 87.00 90.60 79.36
FA-CWS91.53 90.83 91.18 93.10 60.24 86.94 85.69 86.31 90.13 60.24
Lattice-LSTM     92.86  91.61  92.23  93.13  71.28    79.92  83.70  81.77  89.62  60.58
pre-segmenter    90.53  94.77  92.60  97.70  54.69    80.34  89.42  84.64  97.20  67.17
sentence-level   95.22  96.14  95.68  97.45  88.52    83.82  90.84  87.19  97.63  71.45
ours             96.14  96.15  96.15  97.02  91.12    87.30  92.14  89.66  97.16  77.77
" }, "TABREF1": { "html": null, "text": "The result of the new benchmark. The highest values are bold.", "type_str": "table", "num": null, "content": "" } } } }