{ "paper_id": "I17-1019", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:37:42.081104Z" }, "title": "Addressing Domain Adaptation for Chinese Word Segmentation with Global Recurrent Structure", "authors": [ { "first": "Shen", "middle": [], "last": "Huang", "suffix": "", "affiliation": { "laboratory": "Key Laboratory of Computational Linguistics", "institution": "Peking University", "location": { "postCode": "100871", "settlement": "Beijing", "country": "P.R.China" } }, "email": "huangshenno1@pku.edu.cn" }, { "first": "Xu", "middle": [], "last": "Sun", "suffix": "", "affiliation": { "laboratory": "Key Laboratory of Computational Linguistics", "institution": "Peking University", "location": { "postCode": "100871", "settlement": "Beijing", "country": "P.R.China" } }, "email": "xusun@pku.edu.cn" }, { "first": "Houfeng", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "Key Laboratory of Computational Linguistics", "institution": "Peking University", "location": { "postCode": "100871", "settlement": "Beijing", "country": "P.R.China" } }, "email": "wanghf@pku.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Boundary features are widely used in traditional Chinese Word Segmentation (CWS) methods as they can utilize unlabeled data to help improve the Out-of-Vocabulary (OOV) word recognition performance. Although various neural network methods for CWS have achieved performance competitive with state-of-the-art systems, these methods, constrained by the domain and size of the training corpus, do not work well in domain adaptation. In this paper, we propose a novel BLSTMbased neural network model which incorporates a global recurrent structure designed for modeling boundary features dynamically. Experiments show that the proposed structure can effectively boost the performance of Chinese Word Segmentation, especially OOV-Recall, which brings benefits to domain adaptation. We achieved state-of-the-art results on 6 domains of CNKI articles, and competitive results to the best reported on the 4 domains of SIGHAN Bakeoff 2010 data.", "pdf_parse": { "paper_id": "I17-1019", "_pdf_hash": "", "abstract": [ { "text": "Boundary features are widely used in traditional Chinese Word Segmentation (CWS) methods as they can utilize unlabeled data to help improve the Out-of-Vocabulary (OOV) word recognition performance. Although various neural network methods for CWS have achieved performance competitive with state-of-the-art systems, these methods, constrained by the domain and size of the training corpus, do not work well in domain adaptation. In this paper, we propose a novel BLSTMbased neural network model which incorporates a global recurrent structure designed for modeling boundary features dynamically. Experiments show that the proposed structure can effectively boost the performance of Chinese Word Segmentation, especially OOV-Recall, which brings benefits to domain adaptation. We achieved state-of-the-art results on 6 domains of CNKI articles, and competitive results to the best reported on the 4 domains of SIGHAN Bakeoff 2010 data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Since Chinese writing system does not have explicit word delimiters, word segmentation becomes an essential first step for further Chinese language processing. In recent years, Chinese Word Segmentation (CWS) has experienced great advancement. 
One mainstream method is to regard the word segmentation task as a sequence labeling problem (Xue, 2003; Peng et al., 2004) where each character is assigned a tag indicating its position in the word. This method has proven effective as it turns word segmentation into a structured discriminative learning task which can be handled by supervised learning algorithms such as Maximum Entropy (ME) (Berger et al., 1996) and Conditional Random Fields (CRF) (Lafferty et al., 2001). Furthermore, rich features can be incorporated into these systems to improve their performance, and most state-of-the-art systems are still based on feature-based models.", "cite_spans": [ { "start": 333, "end": 344, "text": "(Xue, 2003;", "ref_id": "BIBREF23" }, { "start": 345, "end": 363, "text": "Peng et al., 2004)", "ref_id": "BIBREF18" }, { "start": 639, "end": 660, "text": "(Berger et al., 1996)", "ref_id": "BIBREF0" }, { "start": 697, "end": 719, "text": "(Lafferty et al., 2001", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, neural network models have been drawing increasing attention in Natural Language Processing (NLP) tasks. They significantly reduce feature engineering effort and have achieved competitive or state-of-the-art results in many NLP tasks. Collobert et al. (2011) developed a general neural network architecture for sequence labeling tasks. Following this work, many neural network models (Zheng et al., 2013; Pei et al., 2014; Chen et al., 2015a,b) have been applied to CWS and some approached state-of-the-art performance.", "cite_spans": [ { "start": 235, "end": 258, "text": "Collobert et al. (2011)", "ref_id": "BIBREF5" }, { "start": 384, "end": 404, "text": "(Zheng et al., 2013;", "ref_id": "BIBREF31" }, { "start": 405, "end": 422, "text": "Pei et al., 2014;", "ref_id": "BIBREF17" }, { "start": 423, "end": 444, "text": "Chen et al., 2015a,b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, these neural network models, as well as other supervised methods, do not work well in domain adaptation. In recent years, manually annotated training corpora have mostly come from the news domain. When the domain shifts to others such as literature or medicine, where there are many domain-related words that rarely appear elsewhere, Out-of-Vocabulary (OOV) word recognition becomes an important problem. Moreover, different domains mean different language usage and contexts, so the In-Vocabulary (IV) word segmentation performance is also affected. As a result, CWS accuracy can drop sharply on cross-domain corpora. For example, consider the sentence \"\u4e09\u805a\u6c30\u80fa(melamine) / \u81f4(lead to) / \u5a74\u5e7c\u513f(baby) / \u6ccc\u5c3f\u7cfb(urinary tract) / \u7ed3\u77f3(stones)\". Here the word \"\u4e09\u805a\u6c30\u80fa(melamine)\" is a chemical name that often appears in medicine-related domains while seldom appearing in others. It is a four-Chinese-character word whose characters stand for 'three', 'gather', 'cyanide' and 'amine'; the four characters are mutually unrelated. A supervised CWS system trained on a news-domain corpus would therefore face great difficulty segmenting this word correctly. Several approaches have been proposed to address the domain adaptation problem for CWS.
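Before surveying these approaches, it helps to fix one concrete statistic: the Accessor Variety (AV) of a string is the number of distinct characters that precede (left AV) or succeed (right AV) it in a corpus (see Section 3). A minimal sketch of collecting such counts; the corpus argument and the restriction to bigrams are illustrative assumptions, not the exact feature templates of the cited work:

from collections import defaultdict

def accessor_variety(corpus, n=2):
    # corpus: iterable of unsegmented sentences (strings).
    # Returns, for every n-gram, the number of distinct characters
    # seen immediately to its left and to its right.
    left, right = defaultdict(set), defaultdict(set)
    for sent in corpus:
        for i in range(len(sent) - n + 1):
            s = sent[i:i + n]
            if i > 0:
                left[s].add(sent[i - 1])       # distinct preceding characters
            if i + n < len(sent):
                right[s].add(sent[i + n])      # distinct succeeding characters
    return {s: (len(left[s]), len(right[s])) for s in set(left) | set(right)}

Strings flanked by many distinct characters tend to be words, which is the intuition the boundary features below exploit.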
One major family proposed to compose boundary features by fitting the relevance of consecutive characters using Accessor Variety (AV) (Feng et al., 2004a,b), or Chi-square Statistics (Chi2) (Chang and Han, 2010). Combining the boundary features with other hand-crafted features, these methods were shown to achieve better performance on OOV words.", "cite_spans": [ { "start": 1379, "end": 1401, "text": "(Feng et al., 2004a,b)", "ref_id": null }, { "start": 1436, "end": 1457, "text": "(Chang and Han, 2010)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Inspired by these models, we propose a novel BLSTM-based neural network model which incorporates a global recurrent structure designed to model boundary features dynamically. This structure can learn to utilize the target domain corpus and extract the correlation or irrelevance between characters, which is reminiscent of discrete boundary features such as Accessor Variety (AV).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The contributions of this paper are twofold:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 First, we propose a global recurrent structure and incorporate it in the BLSTM-based neural network model for CWS. The structure can capture correlations between characters, and thus is especially effective for segmenting OOV words and enhancing the performance of CWS on non-news domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Second, we obtain results competitive with the best reported in the literature on the SIGHAN Bakeoff 2010 data, which is a benchmark dataset for cross-domain CWS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We regard the Chinese word segmentation task as a character-based sequence labeling problem, labeling each character with a tag from {S, B, E, M}. These tags indicate the position of the character in the segmented word: B, M and E represent the Begin, Middle and End of a multi-character word respectively, while S represents a single-character word. Figure 1 illustrates the general BLSTM architecture for Chinese word segmentation. ", "cite_spans": [], "ref_spans": [ { "start": 352, "end": 360, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "BLSTM Architecture for Chinese Word Segmentation", "sec_num": "2" }, { "text": "The output of the embedding layer is a concatenation of three parts: character embeddings, bigram embeddings and boundary feature embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embeddings", "sec_num": "2.1" }, { "text": "We adopt the local window approach, which assumes that the tag of a character largely depends on its neighboring characters. For each character c_i in a given input sentence c_{1:n}, the context characters c_{[i\u2212w/2:i+w/2]} and their corresponding bigrams are chosen to be fed into the networks, where w is the context window size.
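A minimal sketch of this extraction for one position; the padding symbol used beyond the sentence boundary is an illustrative assumption:

def context_window(sent, i, w=5):
    # Pad so positions near the boundary still see w characters.
    pad = w // 2
    padded = '#' * pad + sent + '#' * pad            # '#' is an assumed padding symbol
    chars = [padded[i + k] for k in range(w)]        # c_{i-2} .. c_{i+2}
    bigrams = [padded[i + k:i + k + 2] for k in range(w - 1)]  # 4 in-window bigrams
    return chars, bigrams

chars, bigrams = context_window('\u4e09\u805a\u6c30\u80fa\u81f4\u7ed3\u77f3', 0)

For w = 5 this yields 5 characters and 4 bigrams per position, matching the 5d + 4d terms in the concatenation dimension below.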
As most CWS methods do, we set w = 5 in our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embeddings", "sec_num": "2.1" }, { "text": "Given a character set V of size |V|, each character c \u2208 V is mapped into a d-dimensional embedding space as Emb_c(c) \u2208 R^d by a lookup table M_c \u2208 R^{d\u00d7|V|}. Similarly, each bigram b \u2208 {c_1 c_2 | c_1 \u2208 V, c_2 \u2208 V} is mapped into a d-dimensional embedding space as Emb_b(b) \u2208 R^d by a lookup table M_b \u2208 R^{d\u00d7|V|\u00d7|V|}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embeddings", "sec_num": "2.1" }, { "text": "The boundary feature embeddings are hidden vectors computed from the current bigrams and the whole bigram history, which will be explained in detail in Section 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embeddings", "sec_num": "2.1" }, { "text": "The three kinds of embeddings of the context characters c_{[i\u22122:i+2]} and their corresponding bigrams are then concatenated into a single vector x_i \u2208 R^{H_1}, where H_1 = 5d + 4d + 4d_{bf}, and d_{bf} is the number of hidden units output as the boundary feature embeddings. Then, this vector x_i is fed into the BLSTM layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embeddings", "sec_num": "2.1" }, { "text": "Following the embedding layer is a one-layer BLSTM network (Graves and Schmidhuber, 2005). By combining hidden states from two separate LSTM layers, it can incorporate long-range contextual information from both directions. The LSTM cell is implemented as follows (Graves et al., 2013):", "cite_spans": [ { "start": 60, "end": 90, "text": "(Graves and Schmidhuber, 2005)", "ref_id": "BIBREF10" }, { "start": 271, "end": 292, "text": "(Graves et al., 2013)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Bidirectional LSTM Network", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "i_t = \u03c3(W_{xi} x_t + W_{hi} h_{t\u22121} + W_{ci} c_{t\u22121} + b_i), f_t = \u03c3(W_{xf} x_t + W_{hf} h_{t\u22121} + W_{cf} c_{t\u22121} + b_f), c_t = f_t \u2299 c_{t\u22121} + i_t \u2299 tanh(W_{xc} x_t + W_{hc} h_{t\u22121} + b_c), o_t = \u03c3(W_{xo} x_t + W_{ho} h_{t\u22121} + W_{co} c_t + b_o), h_t = o_t \u2299 tanh(c_t)", "eq_num": "(1)" } ], "section": "Bidirectional LSTM Network", "sec_num": "2.2" }, { "text": "where \u03c3 is the logistic sigmoid function, and i, f, o and c are the input gate, forget gate, output gate and the cell respectively, all of which have the same dimension as the hidden output h. The subscripts of the weight matrices indicate their roles; for instance, W_{xi} is the input gate weight matrix for input x.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bidirectional LSTM Network", "sec_num": "2.2" }, { "text": "The output of the BLSTM layer is the concatenation of a forward hidden sequence \u2192h and a backward hidden sequence \u2190h, which is fed to the decoding layer, a linear transformation with no non-linear function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bidirectional LSTM Network", "sec_num": "2.2" }, { "text": "f(t_i|c_{[i\u2212w/2:i+w/2]}) = W_d(\u2192h_i \u2295 \u2190h_i) + b_d (2) where W_d \u2208 R^{|T|\u00d7H_2}, b_d \u2208 R^{|T|}.
H_2 is the number of hidden units of the BLSTM layer output, and f(t_i|c_{[i\u2212w/2:i+w/2]}) \u2208 R^{|T|} is the score vector over the possible tags. For Chinese word segmentation, we set T = {S, B, E, M}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bidirectional LSTM Network", "sec_num": "2.2" }, { "text": "To model the correlations between tags in neighborhoods and jointly decode the best chain of tags for a given sentence, a transition score A_{ij} is introduced to measure the probability of jumping from tag i \u2208 T to tag j \u2208 T (Collobert et al., 2011). For an input sentence c_{1:n} with a tag sequence t_{1:n}, a sentence-level score can be formulated as follows:", "cite_spans": [ { "start": 224, "end": 248, "text": "(Collobert et al., 2011)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Tag Inference", "sec_num": "2.3" }, { "text": "s(c_{1:n}, t_{1:n}, \u03b8) = \u2211_{i=1}^{n} (A_{t_{i\u22121} t_i} + f_\u03b8(t_i|c_{[i\u22122:i+2]})) (3) where f_\u03b8(t_i|c_{[i\u22122:i+2]})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tag Inference", "sec_num": "2.3" }, { "text": "indicates the score for the ith tag computed by the neural network described above with parameters \u03b8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tag Inference", "sec_num": "2.3" }, { "text": "Chinese word segmentation is essentially a task of resolving the relevance of consecutive characters. Lacking knowledge of such relevance, recognizing out-of-domain words has been the bottleneck of domain adaptation in CWS. Boundary features such as Accessor Variety (AV) (Feng et al., 2004a,b), Mutual Information (Sun and Xu, 2011) and Chi-square Statistics (Chi2) (Chang and Han, 2010) are designed precisely to fit such relevance. A significant advantage of boundary features is that they can compute the correlation of characters from large-scale corpora, annotated or not, to boost OOV word recognition performance. As a result, they are especially effective for cross-domain CWS.", "cite_spans": [ { "start": 279, "end": 301, "text": "(Feng et al., 2004a,b)", "ref_id": null }, { "start": 323, "end": 341, "text": "(Sun and Xu, 2011)", "ref_id": "BIBREF20" }, { "start": 375, "end": 396, "text": "(Chang and Han, 2010)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Global Recurrent Structure", "sec_num": "3" }, { "text": "In this paper, we propose five novel global recurrent structures to generate embeddings that mimic the boundary features for further computation, requiring minimal pre-processing and feature engineering. The structures are designed to capture the intuition that nearby sentences in a single-domain corpus often share certain words. Thus the correlation of characters within or across certain words can be learned, and the correlations involving OOV words notably enhance domain adaptation for CWS. GRS-1 The basic structure (GRS-1) is illustrated in Figure 2 . It resembles LSTM-2 (Chen et al., 2015b) when incorporated into the BLSTM model.
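For reference, a minimal NumPy sketch of one step of the LSTM cell in equation (1); the peephole weights W_{ci}, W_{cf} and W_{co} are diagonal in Graves et al. (2013) but treated as ordinary matrices here, and all weight shapes are illustrative assumptions:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W):
    # One step of equation (1); W maps names like 'xi', 'hi', 'ci', 'bi'
    # to weight matrices and bias vectors of compatible shapes.
    i = sigmoid(W['xi'] @ x + W['hi'] @ h_prev + W['ci'] @ c_prev + W['bi'])
    f = sigmoid(W['xf'] @ x + W['hf'] @ h_prev + W['cf'] @ c_prev + W['bf'])
    c = f * c_prev + i * np.tanh(W['xc'] @ x + W['hc'] @ h_prev + W['bc'])
    o = sigmoid(W['xo'] @ x + W['ho'] @ h_prev + W['co'] @ c + W['bo'])
    h = o * np.tanh(c)
    return h, c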
However, the difference is that in NLP settings common recurrent networks reset their hidden states every time they process a new sentence, while the hidden states in our structure are never reset:", "cite_spans": [ { "start": 564, "end": 584, "text": "(Chen et al., 2015b)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 532, "end": 540, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Global Recurrent Structure", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h_{k+1,0}, c_{k+1,0} = h_{k,n_k}, c_{k,n_k}", "eq_num": "(4)" } ], "section": "Global Recurrent Structure", "sec_num": "3" }, { "text": "where h_{k,i} and c_{k,i} are the hidden state and cell vector of the kth sentence at the ith step, and n_k is the length of the kth sentence. For simplicity, in the following we omit the subscript k and always refer to the current sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global Recurrent Structure", "sec_num": "3" }, { "text": "As a result of such a warm-start mechanism, our structure can to some extent record the history information of recent sentences, and some information may persist in the structure for a long time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global Recurrent Structure", "sec_num": "3" }, { "text": "Here we choose the LSTM cell as it can learn to keep relatively long-term memory. We follow equation (1) to implement it, and in the basic structure we directly take h_i as the boundary feature embedding for the bigram b_i = c_i c_{i+1}, where the input is the concatenation of the embeddings of a bigram and its corresponding characters", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global Recurrent Structure", "sec_num": "3" }, { "text": "Emb_b(b_i) \u2295 Emb_c(c_i) \u2295 Emb_c(c_{i+1}). Emb_{bf}(b_i) = h_i (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global Recurrent Structure", "sec_num": "3" }, { "text": "where h_i is the output of the recurrent network at the ith step. We also propose four more variants of the structure, shown in Figure 3 . GRS-2 To better fit the boundary features, we add a fully-connected hidden layer after the recurrent network. The boundary feature embeddings are calculated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global Recurrent Structure", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Emb_{bf}(b_i) = \u03c3(W_{bf} h_i + b_{bf})", "eq_num": "(6)" } ], "section": "Global Recurrent Structure", "sec_num": "3" }, { "text": "where \u03c3 is the logistic sigmoid function. GRS-3 Considering that the hidden states are noisy and contain much information about other words, we want the hidden values to be more relevant to the current bigram, so a gate is introduced to the structure.
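Before formalizing this gate, a minimal sketch of the warm start of equation (4) shared by all GRS variants, reusing the lstm_step sketch from Section 2.2; the zero initialization of the very first state is an assumption:

import numpy as np

class GlobalRecurrentState:
    def __init__(self, hidden_size):
        # Assumed initialization for the very first sentence (k = 0).
        self.h = np.zeros(hidden_size)
        self.c = np.zeros(hidden_size)

    def run_sentence(self, inputs, W):
        # inputs: one concatenated embedding per bigram position of the
        # current sentence; lstm_step is the cell sketched in Section 2.2.
        outputs = []
        for x in inputs:
            self.h, self.c = lstm_step(x, self.h, self.c, W)
            outputs.append(self.h)   # Emb_bf(b_i) = h_i in GRS-1 (equation (5))
        return outputs               # self.h, self.c persist into the next call (equation (4))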
The boundary feature embeddings are calculated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global Recurrent Structure", "sec_num": "3" }, { "text": "E_i = Emb_c(c_i) \u2295 Emb_c(c_{i+1}) \u2295 Emb_b(b_i), g(b_i) = \u03c3(W_g E_i + b_g), Emb_{bf}(b_i) = g(b_i) h_i (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global Recurrent Structure", "sec_num": "3" }, { "text": "where \u2295 is the symbol for concatenation. GRS-4 GRS-4 is a combination of GRS-2 and GRS-3, obtained by adding a fully-connected hidden layer after the gated output. GRS-5 GRS-5 is a more elaborate version which tries to mimic the Accessor Variety (AV) criterion. The AV criterion describes the number of distinct characters that precede or succeed a certain string s. For simplicity, we only focus on strings of length 2, in other words, bigrams. Therefore, we substitute the input of GRS-4 with a bigram and its preceding character to fit its left AV, and similarly with a bigram and its succeeding character to fit its right AV. Finally, we simply concatenate the two embeddings as the final boundary feature embeddings (strictly speaking, they are trigram boundary feature embeddings):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global Recurrent Structure", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E^L_i = Emb_c(c_{i\u22121}) \u2295 Emb_b(b_i), E^R_i = Emb_c(c_{i+1}) \u2295 Emb_b(b_{i\u22121}), g^L(tri_i) = \u03c3(W^L_g E^L_i + b^L_g), g^R(tri_i) = \u03c3(W^R_g E^R_i + b^R_g), Emb^L_{bf}(tri_i) = \u03c3(W^L_{bf} g^L(tri_i) h^L_i + b^L_{bf}), Emb^R_{bf}(tri_i) = \u03c3(W^R_{bf} g^R(tri_i) h^R_i + b^R_{bf}), Emb_{bf}(tri_i) = Emb^L_{bf}(tri_i) \u2295 Emb^R_{bf}(tri_i)", "eq_num": "(8)" } ], "section": "Global Recurrent Structure", "sec_num": "3" }, { "text": "where tri_i = c_{i\u22121} c_i c_{i+1} and the other values have the same meanings as above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global Recurrent Structure", "sec_num": "3" }, { "text": "Instead of using the Max-Margin criterion (Taskar et al., 2005) adopted by previous neural network models for CWS (Zheng et al., 2013; Pei et al., 2014; Chen et al., 2015a,b), we directly maximize the log-probability of the correct tag sequence, following Lample et al. (2016):", "cite_spans": [ { "start": 42, "end": 63, "text": "(Taskar et al., 2005)", "ref_id": "BIBREF21" }, { "start": 114, "end": 134, "text": "(Zheng et al., 2013;", "ref_id": "BIBREF31" }, { "start": 135, "end": 152, "text": "Pei et al., 2014;", "ref_id": "BIBREF17" }, { "start": 153, "end": 174, "text": "Chen et al., 2015a,b)", "ref_id": null }, { "start": 263, "end": 283, "text": "Lample et al. (2016)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "log(p(y|X)) = s(X, y) \u2212 log(\u2211_{\u1ef9\u2208Y_X} e^{s(X,\u1ef9)}) = s(X, y) \u2212 logadd_{\u1ef9\u2208Y_X} s(X,\u1ef9)", "eq_num": "(9)" } ], "section": "Training", "sec_num": "4" }, { "text": "Figure 3: Four variants of the global recurrent structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4" }, { "text": "where Y_X represents all possible tag sequences for a sentence X.
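The log-partition term in equation (9) can be computed exactly with the standard forward recursion over the transition scores A and the position-wise scores f. A minimal sketch, where the initial transition score is omitted for simplicity; replacing logsumexp with max plus backtracking yields the Viterbi decoder of equation (10) below:

import numpy as np
from scipy.special import logsumexp

def log_partition(emissions, A):
    # emissions[t, j]: score of tag j at position t; A[i, j]: transition score.
    alpha = emissions[0]
    for t in range(1, len(emissions)):
        # alpha[j] = logsumexp_i(alpha[i] + A[i, j]) + emissions[t, j]
        alpha = logsumexp(alpha[:, None] + A, axis=0) + emissions[t]
    return logsumexp(alpha)

def sentence_log_prob(emissions, A, tags):
    # Equation (9): log p(y|X) = s(X, y) - log Z.
    score = emissions[0, tags[0]]
    for t in range(1, len(tags)):
        score += A[tags[t - 1], tags[t]] + emissions[t, tags[t]]
    return score - log_partition(emissions, A)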
At decoding time, we predict the output sequence that obtains the maximum score:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y^* = argmax_{\u1ef9\u2208Y_X} s(X,\u1ef9)", "eq_num": "(10)" } ], "section": "Training", "sec_num": "4" }, { "text": "The optimal sequence can be computed using dynamic programming. We use Adam (Kingma and Ba, 2014) to maximize the objective function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4" }, { "text": "Data. We use the PKU corpus drawn from the news domain for source-domain training. The PKU dataset is provided by SIGHAN Bakeoff 2005 (Emerson, 2005). We regard a random 90% of the training sentences as the training set and the remaining 10% as the development set. We also use the test part of the PKU dataset to measure the in-domain segmentation ability of our models. Following Liu et al. (2014)'s settings, our domain adaptation experiments are performed on the four testing sets from the SIGHAN Bakeoff 2010 (Zhao and Liu, 2010), whose domains cover finance, computer, medicine and literature. In addition, we manually annotate six more corpora from non-news domains as testing sets, covering the finance, medicine, geology, agriculture, material and weather domains, extracted from abstracts of papers in CNKI 1 . These data are annotated following the guideline proposed by Yu et al. (2001). The OOV rates of these data are relatively high because they are more academic. Statistics of the training and testing data are shown in Table 1 .", "cite_spans": [ { "start": 114, "end": 133, "text": "SIGHAN Bakeoff 2005", "ref_id": null }, { "start": 134, "end": 148, "text": "(Emerson, 2005", "ref_id": "BIBREF6" }, { "start": 516, "end": 536, "text": "(Zhao and Liu, 2010)", "ref_id": "BIBREF30" }, { "start": 890, "end": 906, "text": "Yu et al. (2001)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 1049, "end": 1056, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "All datasets are pre-processed by replacing Chinese idioms and continuous runs of English letters and digits each with a unique token.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "We use word2vec 2 to pre-train character embeddings on the training corpus. The bigram embeddings are initialized with the average of the corresponding two characters' embeddings. Discrete Boundary Features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embeddings.", "sec_num": null }, { "text": "The discrete boundary features used in Section 5.3 are extracted from the datasets mentioned above and the Chinese Gigaword corpus 3 , following the methods in Sun and Xu (2011). Hyper-parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embeddings.", "sec_num": null }, { "text": "The hyper-parameters are tuned according to the experimental results.
The detailed values are shown in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 103, "end": 110, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Embeddings.", "sec_num": null }, { "text": "We evaluate the baseline BLSTM model and our five proposed structures with the parameter settings in Table 2 on the six domains from the CNKI dataset. The results are shown in Table 3 . The BLSTM+GRS-4 model, with a gate and an additional fully-connected hidden layer, achieves the best performance across all domains. Surprisingly, the most elaborate structure, GRS-5, seems to be of no help to the CWS task.", "cite_spans": [], "ref_spans": [ { "start": 101, "end": 108, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 163, "end": 170, "text": "Table 3", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Model Selection", "sec_num": "5.2" }, { "text": "To examine whether OOV recognition can benefit from GRS, we also look into the IV and OOV recalls on the PKU dataset. Table 4 and Table 5 show that the proposed GRS can effectively improve the segmentation performance on OOV words, which empirically demonstrates its domain adaptation ability. BLSTM-2, similar to LSTM-2 (Chen et al., 2015b) , is an architecture comprised of two stacked bidirectional LSTM hidden layers. GRS-4 is short for the BLSTM+GRS-4 model. ", "cite_spans": [ { "start": 326, "end": 346, "text": "(Chen et al., 2015b)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 131, "end": 138, "text": "Table 4", "ref_id": "TABREF4" }, { "start": 143, "end": 150, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Model Selection", "sec_num": "5.2" }, { "text": "In this section, we compare our BLSTM+GRS-4 model with previous state-of-the-art methods. Experimental results on the four test domains from SIGHAN Bakeoff 2010 are shown in Table 6. We also attempt to integrate discrete boundary features into the models. In our experiments, we choose the Accessor Variety (AV) (Feng et al., 2004a,b), a feature widely used in traditional Chinese word segmentation. Our F-scores and OOV recalls are competitive with those reported by Liu et al. (2014) and Jiang et al. (2013). However, following Liu et al. (2014)'s setting, we choose the PKU dataset as the training corpus, while Jiang et al. (2013)'s model is trained on a different corpus, so the results are not directly comparable. The results demonstrate the effectiveness of the global recurrent structure on OOV recognition and overall segmentation, comparable to the BLSTM model that directly incorporates discrete AV features. Adding discrete AV features to our model does not yield a notable improvement, which also confirms that our model already has a certain domain adaptation ability.", "cite_spans": [ { "start": 312, "end": 334, "text": "(Feng et al., 2004a,b)", "ref_id": null }, { "start": 478, "end": 497, "text": "Jiang et al. (2013)", "ref_id": "BIBREF11" }, { "start": 586, "end": 605, "text": "Jiang et al. (2013)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Final Results", "sec_num": "5.3" }, { "text": "Models PKU MSRA
(Zheng et al., 2013) 92.8 93.9
(Pei et al., 2014) 95.2 97.2
(Chen et al., 2015a) 96.4 97.6
(Chen et al., 2015b) 96.5 97.4
(Chen et al., 2015a)* 94.5 95.4
(Chen et al., 2015b)* 94.8 95.6
(Cai and Zhao, 2016) 95.5 96.5
(Zhang et al., 2016) 95.7 97.7
BLSTM 95.9 97.0
This work 95.9 97.1
Table 7 : Comparison of our model with previous neural models on the PKU and MSRA datasets.
Results with * are from runs of their released implementation (Cai and Zhao, 2016).", "cite_spans": [ { "start": 9, "end": 29, "text": "(Zheng et al., 2013)", "ref_id": "BIBREF31" }, { "start": 40, "end": 58, "text": "(Pei et al., 2014)", "ref_id": "BIBREF17" }, { "start": 69, "end": 89, "text": "(Chen et al., 2015a)", "ref_id": "BIBREF3" }, { "start": 100, "end": 120, "text": "(Chen et al., 2015b)", "ref_id": "BIBREF4" }, { "start": 163, "end": 189, "text": "(Chen et al., 2015b)* 94.8", "ref_id": null }, { "start": 226, "end": 246, "text": "(Zhang et al., 2016)", "ref_id": "BIBREF28" }, { "start": 447, "end": 467, "text": "(Cai and Zhao, 2016)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 293, "end": 300, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Models", "sec_num": null }, { "text": "Table 6 : Experimental results of the baseline BLSTM model, the best-performing BLSTM+GRS-4 model, models with discrete AV features and models proposed by others on the SIGHAN Bakeoff 2010 data.", "cite_spans": [], "ref_spans": [ { "start": 152, "end": 159, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Models", "sec_num": null }, { "text": "We compare the in-domain experimental results on the PKU and MSRA datasets with previous neural models, as shown in Table 7 . The baseline BLSTM model with no modification or augmentation achieves competitive results, while the GRS does little to help the in-domain Chinese word segmentation task.", "cite_spans": [], "ref_spans": [ { "start": 37, "end": 44, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Models", "sec_num": null }, { "text": "We collect and analyze the errors on the Medicine corpus from SIGHAN Bakeoff 2010, because its results are the worst among the four domains. We calculate accuracies of individual OOV words, where accuracies are simply treated as 0 or 1 for further counting, and categorize them according to their frequencies in the testing corpus. Statistics are shown in Figure 4 . From the trendlines we can infer that in our proposed GRS more occurrences yield higher accuracy, while common BLSTM models can rarely benefit from this. This conforms to the intuition behind our model, which can utilize correlation information from the testing corpora. Accordingly, our model performs better as the testing corpus grows, as long as the OOV words occur more often.", "cite_spans": [], "ref_spans": [ { "start": 373, "end": 381, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.4" }, { "text": "Although the trendline of our model is promising, there are some OOV words that occur frequently but are wrongly segmented. Some examples are listed in Table 8 . Errors involving \"\u80be\u810f\"(kidney) and \"\u7ef4\u751f\u7d20C\"(vitamin C) are typical examples of combination ambiguity: there are words containing \"\u80be\u810f\", such as \"\u80be\u810f\u75c5\u5b66\"(nephrology). Likewise, \"\u7ef4\u751f\u7d20\"(vitamin) is a frequent word that confuses our model. \"\u7532\u578bH1N1\u6d41\u611f\"(influenza A(H1N1)) reveals another severe problem that most CWS systems confront when processing a mix of Chinese characters and digits, punctuation or letters from other languages. The commonly used method of treating consecutive digits or letters as a single token indeed boosts the performance on corpora where most characters are Chinese.
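A minimal sketch of this kind of normalization, as applied in our pre-processing (Section 5.1); the placeholder token is an illustrative choice, not the one actually used:

import re

def normalize(sent):
    # Collapse each run of ASCII letters or digits into one placeholder,
    # e.g. \u7532\u578bH1N1\u6d41\u611f becomes \u7532\u578b<X>\u6d41\u611f.
    return re.sub('[A-Za-z0-9]+', '<X>', sent)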
However, as the proportion of non-Chinese characters increases, this treatment becomes a problem that should be reconsidered carefully. Table 8 : Some examples of wrongly segmented OOV words with high frequency. ", "cite_spans": [], "ref_spans": [ { "start": 153, "end": 160, "text": "Table 8", "ref_id": null }, { "start": 885, "end": 892, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.4" }, { "text": "Word segmentation has been pursued with considerable effort in the Chinese NLP community. One mainstream method regards the word segmentation task as a sequence labeling problem (Xue, 2003; Peng et al., 2004). Recently, researchers have tended to explore neural network based approaches (Collobert et al., 2011; Zheng et al., 2013; Qi et al., 2014) to reduce the effort of feature engineering. Pei et al. (2014) used a neural tensor model to capture the complicated interactions between tags and context characters. Experiments in that paper also show that bigram embeddings are of great benefit. To incorporate complicated combinations and long-term dependency information of the context characters, a gated recursive model (Chen et al., 2015a) and an LSTM model (Chen et al., 2015b) were used respectively. Moreover, Xu and Sun (2016) proposed a dependency-based gated recursive model which merges the benefits of the two models above. Coincidentally, Cai and Zhao (2016) and Zhang et al. (2016) both addressed the lack of word-based features in previous neural CWS models. Cai and Zhao (2016) proposed a novel gated combination neural network which thoroughly eliminates context windows and can utilize the complete segmentation history. Zhang et al. (2016) proposed a transition-based neural model which replaces manually designed discrete features with neural features. Domain adaptation for Chinese word segmentation had been widely explored before neural CWS models were proposed. Jiang et al. (2013) utilized the web text (160K Wikipedia) to improve segmentation accuracies on several domains. Zhang et al. (2014) studied type-supervised domain adaptation for Chinese segmentation by making use of domain-specific tag dictionaries and only unlabeled target-domain data. Liu et al. (2014) proposed a variant CRF model to leverage both fully and partially annotated data transformed consistently from different sources of free annotations.", "cite_spans": [ { "start": 180, "end": 191, "text": "(Xue, 2003;", "ref_id": "BIBREF23" }, { "start": 192, "end": 210, "text": "Peng et al., 2004)", "ref_id": "BIBREF18" }, { "start": 290, "end": 314, "text": "(Collobert et al., 2011;", "ref_id": "BIBREF5" }, { "start": 315, "end": 334, "text": "Zheng et al., 2013;", "ref_id": "BIBREF31" }, { "start": 335, "end": 351, "text": "Qi et al., 2014)", "ref_id": "BIBREF19" }, { "start": 394, "end": 411, "text": "Pei et al. (2014)", "ref_id": "BIBREF17" }, { "start": 722, "end": 742, "text": "(Chen et al., 2015a)", "ref_id": "BIBREF3" }, { "start": 758, "end": 778, "text": "(Chen et al., 2015b)", "ref_id": "BIBREF4" }, { "start": 813, "end": 830, "text": "Xu and Sun (2016)", "ref_id": "BIBREF22" }, { "start": 948, "end": 967, "text": "Cai and Zhao (2016)", "ref_id": "BIBREF1" }, { "start": 972, "end": 991, "text": "Zhang et al. (2016)", "ref_id": "BIBREF28" }, { "start": 1088, "end": 1107, "text": "Cai and Zhao (2016)", "ref_id": "BIBREF1" }, { "start": 1249, "end": 1268, "text": "Zhang et al. (2016)", "ref_id": "BIBREF28" }, { "start": 1494, "end": 1513, "text": "Jiang et al. 
(2013)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Some researches which focus on making use of unlabeled data for word segmentation also do help to domain adaption. Zhao and Kit (2008) and Zhang et al. (2013a) improved segmentation performance by mutual information between characters, collected from large unlabeled data. Li and Sun (2009) used punctuation information in a large raw corpus to learn a segmentation model, and achieve better recognition of OOV words. Sun and Xu (2011) explored several statistical features derived from both unlabeled data to help improve character-based word segmentation. Zhang et al. (2013b) proposed a semi-supervised approach that dynamically extracts representations of label distributions from both in-domain corpora and out-of-domain corpora.", "cite_spans": [ { "start": 115, "end": 134, "text": "Zhao and Kit (2008)", "ref_id": "BIBREF29" }, { "start": 139, "end": 159, "text": "Zhang et al. (2013a)", "ref_id": "BIBREF25" }, { "start": 273, "end": 290, "text": "Li and Sun (2009)", "ref_id": "BIBREF15" }, { "start": 418, "end": 435, "text": "Sun and Xu (2011)", "ref_id": "BIBREF20" }, { "start": 558, "end": 578, "text": "Zhang et al. (2013b)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "In this paper, we propose a novel global recurrent structure to model dynamic boundary features and incorporate it in the BLSTM-based neural network model for Chinese Word Segmentation. The structure can capture correlations between characters, and thus is especially effective for segmenting OOV words and enhancing the performance of CWS on non-news domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Perspectives", "sec_num": "7" }, { "text": "The proposed global recurrent structure is not limited to the Chinese word segmentation task. It can be easily adapted to other sequence labeling problems that may benefit from the history information carried in the structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Perspectives", "sec_num": "7" }, { "text": "Although the structure is effective in this task, it's admittedly hard to train a stable model. As our future work, we would like to try some pretraining methods to handle this problem. And we plan to apply our method to other natural language processing tasks, such as Name Entity Recognition (NER). Also, the hybrid model is a great idea to try and we will do it later.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Perspectives", "sec_num": "7" }, { "text": "http://www.cnki.net/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://word2vec.googlecode.com/ 3 https://catalog.ldc.upenn.edu/LDC2003T05", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Our work is supported by National Natural Science Foundation of China (No.61370117 & No.61433015). 
The corresponding author of this paper is Houfeng Wang.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A maximum entropy approach to natural language processing", "authors": [ { "first": "Adam", "middle": [ "L" ], "last": "Berger", "suffix": "" }, { "first": "Vincent", "middle": [ "J" ], "last": "Della Pietra", "suffix": "" }, { "first": "Stephen", "middle": [ "A" ], "last": "Della Pietra", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "1", "pages": "39--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam L. Berger, Vincent J. Della Pietra, and Stephen A. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics 22(1):39-71.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neural word segmentation learning for Chinese", "authors": [ { "first": "Deng", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deng Cai and Hai Zhao. 2016. Neural word segmentation learning for Chinese. CoRR abs/1606.04300.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Enhancing domain portability of Chinese segmentation model using chi-square statistics and bootstrapping", "authors": [ { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Dongxu", "middle": [], "last": "Han", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "789--798", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baobao Chang and Dongxu Han. 2010. Enhancing domain portability of Chinese segmentation model using chi-square statistics and bootstrapping. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 789-798.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Gated recursive neural network for Chinese word segmentation", "authors": [ { "first": "Xinchi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Chenxi", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "1744--1753", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinchi Chen, Xipeng Qiu, Chenxi Zhu, and Xuanjing Huang. 2015a. Gated recursive neural network for Chinese word segmentation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers).
Association for Computational Linguistics, Beijing, China, pages 1744-1753.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Sentence modeling with gated recursive neural network", "authors": [ { "first": "Xinchi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Chenxi", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Shiyu", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "793--798", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Shiyu Wu, and Xuanjing Huang. 2015b. Sentence modeling with gated recursive neural network. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 793-798.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "The Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research 12:2493-2537.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The second international Chinese word segmentation bakeoff", "authors": [ { "first": "Thomas", "middle": [], "last": "Emerson", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Second SIGHAN Workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "123--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Emerson. 2005. The second international Chinese word segmentation bakeoff. In Proceedings of the Second SIGHAN Workshop on Chinese Language Processing. pages 123-133.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Accessor variety criteria for Chinese word extraction", "authors": [ { "first": "Haodi", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiaotie", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Weimin", "middle": [], "last": "Zheng", "suffix": "" } ], "year": 2004, "venue": "Computational Linguistics", "volume": "30", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haodi Feng, Kang Chen, Xiaotie Deng, and Weimin Zheng. 2004a. Accessor variety criteria for Chinese word extraction.
Computational Linguistics, Volume 30, Number 1, March 2004.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Unsupervised segmentation of Chinese corpus using accessor variety", "authors": [ { "first": "Haodi", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Chunyu", "middle": [], "last": "Kit", "suffix": "" }, { "first": "Xiaotie", "middle": [], "last": "Deng", "suffix": "" } ], "year": 2004, "venue": "Natural Language Processing -IJCNLP 2004, First International Joint Conference", "volume": "", "issue": "", "pages": "694--703", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haodi Feng, Kang Chen, Chunyu Kit, and Xiaotie Deng. 2004b. Unsupervised segmentation of Chinese corpus using accessor variety. In Natural Language Processing -IJCNLP 2004, First International Joint Conference, Hainan Island, China, March 22-24, 2004, Revised Selected Papers. pages 694-703.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Speech recognition with deep recurrent neural networks", "authors": [ { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" }, { "first": "Abdel-rahman", "middle": [], "last": "Mohamed", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Graves, Abdel-rahman Mohamed, and Geoffrey E. Hinton. 2013. Speech recognition with deep recurrent neural networks. CoRR abs/1303.5778.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Framewise phoneme classification with bidirectional LSTM and other neural network architectures", "authors": [ { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 2005, "venue": "Neural Networks", "volume": "18", "issue": "", "pages": "602--610", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Graves and J\u00fcrgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks 18:602-610.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Discriminative learning with natural annotations: Word segmentation as a case study", "authors": [ { "first": "Wenbin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Meng", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Yajuan", "middle": [], "last": "L\u00fc", "suffix": "" }, { "first": "Yating", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2013, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenbin Jiang, Meng Sun, Yajuan L\u00fc, Yating Yang, and Qun Liu. 2013. Discriminative learning with natural annotations: Word segmentation as a case study. In ACL.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [ "P" ], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2014.
Adam: A method for stochastic optimization. CoRR abs/1412.6980.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "John", "middle": [ "D" ], "last": "Lafferty", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Fernando", "middle": [ "C N" ], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Eighteenth International Conference on Machine Learning. ICML '01", "volume": "", "issue": "", "pages": "282--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning. ICML '01, pages 282-289.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Neural architectures for named entity recognition", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Sandeep", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Kazuya", "middle": [], "last": "Kawakami", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2016, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In HLT-NAACL.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Punctuation as implicit annotations for Chinese word segmentation", "authors": [ { "first": "Zhongguo", "middle": [], "last": "Li", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2009, "venue": "Computational Linguistics", "volume": "35", "issue": "", "pages": "505--512", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhongguo Li and Maosong Sun. 2009. Punctuation as implicit annotations for Chinese word segmentation. Computational Linguistics 35:505-512.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Domain adaptation for CRF-based Chinese word segmentation using free annotations", "authors": [ { "first": "Yijia", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Fan", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yijia Liu, Yue Zhang, Wanxiang Che, Ting Liu, and Fan Wu. 2014. Domain adaptation for CRF-based Chinese word segmentation using free annotations.
In EMNLP.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Max-margin tensor neural network for Chinese word segmentation", "authors": [ { "first": "Wenzhe", "middle": [], "last": "Pei", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Ge", "suffix": "" }, { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "293--303", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenzhe Pei, Tao Ge, and Baobao Chang. 2014. Max-margin tensor neural network for Chinese word segmentation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Baltimore, Maryland, pages 293-303.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Chinese Segmentation and New Word Detection using Conditional Random Fields", "authors": [ { "first": "Fuchun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Fangfang", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2004, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "562--571", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fuchun Peng, Fangfang Feng, and Andrew McCallum. 2004. Chinese Segmentation and New Word Detection using Conditional Random Fields. In Proceedings of COLING. pages 562-571.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Deep learning for character-based information extraction", "authors": [ { "first": "Yanjun", "middle": [], "last": "Qi", "suffix": "" }, { "first": "G", "middle": [], "last": "Sujatha", "suffix": "" }, { "first": "Ronan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2014, "venue": "ECIR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yanjun Qi, Sujatha G. Das, Ronan Collobert, and Jason Weston. 2014. Deep learning for character-based information extraction. In ECIR.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Enhancing Chinese word segmentation using unlabeled data", "authors": [ { "first": "Weiwei", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Jia", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2011, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weiwei Sun and Jia Xu. 2011. Enhancing Chinese word segmentation using unlabeled data. In EMNLP.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Learning structured prediction models: a large margin approach", "authors": [ { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" }, { "first": "Vassil", "middle": [], "last": "Chatalbashev", "suffix": "" }, { "first": "Daphne", "middle": [], "last": "Koller", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Guestrin", "suffix": "" } ], "year": 2005, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ben Taskar, Vassil Chatalbashev, Daphne Koller, and Carlos Guestrin. 2005. Learning structured prediction models: a large margin approach.
In ICML.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Dependency-based gated recursive neural network for Chinese word segmentation", "authors": [ { "first": "Jingjing", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2016, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingjing Xu and Xu Sun. 2016. Dependency-based gated recursive neural network for Chinese word segmentation. In ACL.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Chinese Word Segmentation as Character Tagging", "authors": [ { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "8", "issue": "1", "pages": "29--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nianwen Xue. 2003. Chinese Word Segmentation as Character Tagging. Computational Linguistics 8(1):29-48.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Processing norms of modern Chinese corpus", "authors": [ { "first": "Shiwen", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Jianming", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Xuefeng", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Huiming", "middle": [], "last": "Duan", "suffix": "" }, { "first": "Shiyong", "middle": [], "last": "Kang", "suffix": "" }, { "first": "Honglin", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Qiang", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Weidong", "middle": [], "last": "Zhan", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shiwen Yu, Jianming Lu, Xuefeng Zhu, Huiming Duan, Shiyong Kang, Honglin Sun, Hui Wang, Qiang Zhao, and Weidong Zhan. 2001. Processing norms of modern Chinese corpus. Technical report.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Improving Chinese word segmentation on micro-blog using rich punctuations", "authors": [ { "first": "Longkai", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Li", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zhengyan", "middle": [], "last": "He", "suffix": "" }, { "first": "Houfeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ni", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2013, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Longkai Zhang, Li Li, Zhengyan He, Houfeng Wang, and Ni Sun. 2013a. Improving Chinese word segmentation on micro-blog using rich punctuations. In ACL.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Exploring representations from unlabeled data with co-training for Chinese word segmentation", "authors": [ { "first": "Longkai", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Houfeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Mairgup", "middle": [], "last": "Mansur", "suffix": "" } ], "year": 2013, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Longkai Zhang, Houfeng Wang, Xu Sun, and Mairgup Mansur. 2013b. Exploring representations from unlabeled data with co-training for Chinese word segmentation.
In EMNLP.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Type-supervised domain adaptation for joint segmentation and POS-tagging", "authors": [ { "first": "Meishan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2014, "venue": "EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meishan Zhang, Yue Zhang, Wanxiang Che, and Ting Liu. 2014. Type-supervised domain adaptation for joint segmentation and POS-tagging. In EACL.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Transition-based neural word segmentation", "authors": [ { "first": "Meishan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Guohong", "middle": [], "last": "Fu", "suffix": "" } ], "year": 2016, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meishan Zhang, Yue Zhang, and Guohong Fu. 2016. Transition-based neural word segmentation. In ACL.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "An empirical comparison of goodness measures for unsupervised Chinese word segmentation with a unified framework", "authors": [ { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Chunyu", "middle": [], "last": "Kit", "suffix": "" } ], "year": 2008, "venue": "IJCNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hai Zhao and Chunyu Kit. 2008. An empirical comparison of goodness measures for unsupervised Chinese word segmentation with a unified framework. In IJCNLP.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "The CIPS-SIGHAN CLP2010 Chinese word segmentation bakeoff", "authors": [ { "first": "Hongmei", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hongmei Zhao and Qun Liu. 2010. The CIPS-SIGHAN CLP2010 Chinese word segmentation bakeoff.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Deep learning for Chinese word segmentation and POS tagging", "authors": [ { "first": "Xiaoqing", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Hanyang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Tianyu", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "647--657", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoqing Zheng, Hanyang Chen, and Tianyu Xu. 2013. Deep learning for Chinese word segmentation and POS tagging. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Seattle, Washington, USA, pages 647-657.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "General architecture for Chinese word segmentation." }, "FIGREF1": { "num": null, "uris": null, "type_str": "figure", "text": "Global recurrent structure." 
}, "FIGREF2": { "num": null, "uris": null, "type_str": "figure", "text": "OOV word recognition accuracies on the Medicine corpus." }, "TABREF1": { "type_str": "table", "text": "Settings of the hyper-parameters.", "content": "", "html": null, "num": null }, "TABREF4": { "type_str": "table", "text": "IV and OOV recalls on the PKU development data.", "content": "
Methods | IV Recall | OOV Recall
BLSTM   | 96.35     | 82.67
BLSTM-2 | 96.11     | 82.01
GRS-4   | 96.25     | 83.96
", "html": null, "num": null }, "TABREF5": { "type_str": "table", "text": "IV and OOV recalls on the PKU test data.", "content": "", "html": null, "num": null }, "TABREF7": { "type_str": "table", "text": "Experimental results of the baseline BLSTM model and our proposed structures on the PKU test data and six domains from the CNKI dataset.", "content": "
Method    | Finance F | Finance Roov | Computer F | Computer Roov | Medicine F | Medicine Roov | Literature F | Literature Roov | Avg-F | Avg-Roov
BLSTM     | 94.70     | 86.02        | 92.17      | 81.84         | 91.34      | 73.51         | 92.51        | 73.80           | 92.68 | 78.79
BLSTM+AV  | 95.77     | 90.91        | 93.57      | 82.82         | 92.50      | 83.12         | 93.79        | 84.60           | 93.91 | 85.36
GRS-4     | 95.81     | 91.21        | 93.99      | 83.81         | 92.26      | 83.27         | 94.33        | 81.30           | 94.10 | 84.90
GRS-4+AV  | 95.77     | 91.02        | 93.20      | 83.97         | 91.80      | 82.17         | 93.50        | 82.01           | 93.57 | 84.77
Liu2014   | 95.54     | 88.53        | 93.93      | 87.53         | 92.47      | 78.28         | 92.49        | 76.84           | 93.61 | 82.80
Jiang2013 | 93.16     | -            | 91.19      | -             | 93.34      | -             | 93.53        | -               | 92.80 | -
", "html": null, "num": null } } } }