{ "paper_id": "I17-1017", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:40:16.555332Z" }, "title": "Convolutional Neural Network with Word Embeddings for Chinese Word Segmentation", "authors": [ { "first": "Chunqi", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Chinese Academy of Sciences", "location": {} }, "email": "chqiwang@126.com" }, { "first": "Bo", "middle": [], "last": "Xu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Chinese Academy of Sciences", "location": {} }, "email": "xubo@ia.ac.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Character-based sequence labeling framework is flexible and efficient for Chinese word segmentation (CWS). Recently, many character-based neural models have been applied to CWS. While they obtain good performance, they have two obvious weaknesses. The first is that they heavily rely on manually designed bigram feature, i.e. they are not good at capturing n-gram features automatically. The second is that they make no use of full word information. For the first weakness, we propose a convolutional neural model, which is able to capture rich n-gram features without any feature engineering. For the second one, we propose an effective approach to integrate the proposed model with word embeddings. We evaluate the model on two benchmark datasets: PKU and MSR. Without any feature engineering, the model obtains competitive performance-95.7% on PKU and 97.3% on MSR. Armed with word embeddings, the model achieves state-of-the-art performance on both datasets-96.5% on PKU and 98.0% on MSR, without using any external labeled resource.", "pdf_parse": { "paper_id": "I17-1017", "_pdf_hash": "", "abstract": [ { "text": "Character-based sequence labeling framework is flexible and efficient for Chinese word segmentation (CWS). Recently, many character-based neural models have been applied to CWS. While they obtain good performance, they have two obvious weaknesses. The first is that they heavily rely on manually designed bigram feature, i.e. they are not good at capturing n-gram features automatically. The second is that they make no use of full word information. For the first weakness, we propose a convolutional neural model, which is able to capture rich n-gram features without any feature engineering. For the second one, we propose an effective approach to integrate the proposed model with word embeddings. We evaluate the model on two benchmark datasets: PKU and MSR. Without any feature engineering, the model obtains competitive performance-95.7% on PKU and 97.3% on MSR. Armed with word embeddings, the model achieves state-of-the-art performance on both datasets-96.5% on PKU and 98.0% on MSR, without using any external labeled resource.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Unlike English and other western languages, most east Asian languages, including Chinese, are written without explicit word delimiters. However, most natural language processing (NLP) applications are word-based. Therefore, word segmentation is an essential step for processing those languages. CWS is often treated as a characterbased sequence labeling task (Xue et al., 2003; Peng et al., 2004) . Figure 1 gives an intuitive explaination. 
Linear models, such as Maximum Entropy (ME) (Berger et al., 1996) and Conditional Random Fields (CRF) (Lafferty et al., 2001) , have been widely used for sequence labeling tasks. However, they often depend heavily on well-designed hand-crafted features.", "cite_spans": [ { "start": 359, "end": 377, "text": "(Xue et al., 2003;", "ref_id": "BIBREF39" }, { "start": 378, "end": 396, "text": "Peng et al., 2004)", "ref_id": "BIBREF25" }, { "start": 485, "end": 506, "text": "(Berger et al., 1996)", "ref_id": "BIBREF3" }, { "start": 543, "end": 566, "text": "(Lafferty et al., 2001)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 399, "end": 407, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, neural networks have been widely used for NLP tasks. Collobert et al. (2011) proposed a unified neural architecture for various sequence labeling tasks. Instead of exploiting handcrafted input features carefully optimized for each task, their system learns internal representations automatically. As for CWS, there are a series of works, which share the main idea with Collobert et al. (2011) but vary in the network architecture. In particular, feed-forward neural network (Zheng et al., 2013) , tensor neural network (Pei et al., 2014) , recursive neural network (Chen et al., 2015a) , long-short term memory (LSTM) (Chen et al., 2015b) , as well as the combination of LSTM and recursive neural network (Xu and Sun, 2016) have been used to derive contextual representations from input character sequences, which are then fed to a prediction layer.", "cite_spans": [ { "start": 63, "end": 86, "text": "Collobert et al. (2011)", "ref_id": "BIBREF7" }, { "start": 379, "end": 402, "text": "Collobert et al. (2011)", "ref_id": "BIBREF7" }, { "start": 484, "end": 504, "text": "(Zheng et al., 2013)", "ref_id": "BIBREF45" }, { "start": 529, "end": 547, "text": "(Pei et al., 2014)", "ref_id": "BIBREF24" }, { "start": 575, "end": 595, "text": "(Chen et al., 2015a)", "ref_id": "BIBREF5" }, { "start": 628, "end": 648, "text": "(Chen et al., 2015b)", "ref_id": "BIBREF6" }, { "start": 715, "end": 733, "text": "(Xu and Sun, 2016)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Despite of the great success of above models, they have two weaknesses. The first is that they are not good at capturing n-gram features automatically. Experimental results show that their models perform badly when no bigram feature is explicitly used. One of the strengths of neural networks is the ability to learn features automatically. However, this strength has not been well exploited in their works. The second is that they make no use of full word information. Full word information has shown its effectiveness in word-based CWS systems (Andrew, 2006; Zhang and Clark, 2007; Sun et al., 2009) . Recently, Liu et al. (2016) ; utilized word embeddings to boost performance of word-based CWS models. However, for character-based CWS models, word information is not easy to be integrated.", "cite_spans": [ { "start": 546, "end": 560, "text": "(Andrew, 2006;", "ref_id": "BIBREF1" }, { "start": 561, "end": 583, "text": "Zhang and Clark, 2007;", "ref_id": "BIBREF43" }, { "start": 584, "end": 601, "text": "Sun et al., 2009)", "ref_id": "BIBREF33" }, { "start": 614, "end": 631, "text": "Liu et al. 
(2016)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For the first weakness, we propose a convolutional neural model, which is also character-based. Previous works have shown that convolutional layers have the ablity to capture rich n-gram features (Kim et al., 2016) . We use stacked convolutional layers to derive contextual representations from input sequence, which are then fed into a CRF layer for sequence-level prediction. For the second weakness, we propose an effective approach to incorporate word embeddings into the proposed model. The word embeddings are learned from large auto-segmented data. Hence, this approach belongs to the category of semi-supervised learning.", "cite_spans": [ { "start": 196, "end": 214, "text": "(Kim et al., 2016)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We evaluate our model on two benchmark datasets: PKU and MSR. Experimental results show that even without the help of explicit n-gram feature, our model is capable of capturing rich ngram information automatically, and obtains competitive performance -95.7% on PKU and 97.3% on MSR (F score). Furthermore, armed with word embeddings, our model achieves state-of-the-art performance on both datasets -96.5% on PKU and 98.0% on MSR, without using any external labeled resource. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we introduce the architecture from bottom to top.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Architecture", "sec_num": "2" }, { "text": "The first step to process a sentence by deep neural networks is often to transform words or characters into embeddings (Bengio et al., 2003; Collobert et al., 2011) . This transformation is done by lookup table operation. A character lookup table M char \u2208 R |V char |\u00d7d (where |V char | denotes the size of the character vocabulary and d denotes the dimension of embeddings) is associated with all 1 The tensorflow (Abadi et al., 2016) implementation and related resources can be found at https://github. com/chqiwang/convseg. characters. Given a sentence S = (c 1 , c 2 , ..., c L ), after the lookup table operation, we obtain a matrix X \u2208 R L\u00d7d where the i'th row is the character embedding of c i .", "cite_spans": [ { "start": 119, "end": 140, "text": "(Bengio et al., 2003;", "ref_id": "BIBREF2" }, { "start": 141, "end": 164, "text": "Collobert et al., 2011)", "ref_id": "BIBREF7" }, { "start": 398, "end": 399, "text": "1", "ref_id": null }, { "start": 415, "end": 435, "text": "(Abadi et al., 2016)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Lookup Table", "sec_num": "2.1" }, { "text": "Besides the character, other features can be easily incorporated into the model (we shall see word feature in section 3). We associate to each feature a lookup table (some features may share the same lookup table) and the final representation is calculated as the concatenation of all corresponding feature embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lookup Table", "sec_num": "2.1" }, { "text": "Many neural network models have been explored for CWS. However, experimental results show that they are not able to capure n-gram information automatically (Pei et al., 2014; Chen et al., 2015a,b) . To achieve good performance, n-gram feature must be used explicitly. 
To overcome this weakness, we use convolutional layers (Waibel et al., 1989) to encode contextual information. Convolutional neural networks (CNNs) have shown great effectiveness in computer vision tasks (Krizhevsky et al., 2012; Simonyan and Zisserman, 2014; He et al., 2016) . Recently, Zhang et al. (2015) applied character-level CNNs to the text classification task. They showed that CNNs tend to outperform traditional n-gram models as the dataset grows larger. Kim et al. (2016) also observed that a character-level CNN learns to differentiate between different types of n-grams (prefixes, suffixes and others) automatically.", "cite_spans": [ { "start": 156, "end": 174, "text": "(Pei et al., 2014;", "ref_id": "BIBREF24" }, { "start": 175, "end": 196, "text": "Chen et al., 2015a,b)", "ref_id": null }, { "start": 323, "end": 344, "text": "(Waibel et al., 1989)", "ref_id": "BIBREF35" }, { "start": 476, "end": 501, "text": "(Krizhevsky et al., 2012;", "ref_id": "BIBREF18" }, { "start": 502, "end": 531, "text": "Simonyan and Zisserman, 2014;", "ref_id": "BIBREF28" }, { "start": 532, "end": 548, "text": "He et al., 2016)", "ref_id": "BIBREF12" }, { "start": 561, "end": 580, "text": "Zhang et al. (2015)", "ref_id": "BIBREF42" }, { "start": 733, "end": 750, "text": "Kim et al. (2016)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Convolutional Layer", "sec_num": "2.2" }, { "text": "Our network is quite simple: only convolutional layers are used (no pooling layers). The gated linear unit (GLU) (Dauphin et al., 2016) is used as the non-linear unit in our convolutional layer; it has been shown to surpass the rectified linear unit (ReLU) on the language modeling task. For simplicity, GLU can also be replaced by ReLU at a slight cost in performance (with roughly the same number of network parameters). Figure 2 shows the structure of a convolutional layer with GLU. Formally, we define the number of input channels as N, the number of output channels as M, the length of the input as L and the kernel width as k. The convolutional layer can be written as", "cite_spans": [], "ref_spans": [ { "start": 424, "end": 432, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Convolutional Layer", "sec_num": "2.2" }, { "text": "F(X) = (X * W + b) \u2297 \u03c3(X * V + c)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convolutional Layer", "sec_num": "2.2" }, { "text": "where * denotes the one-dimensional convolution operation, X \u2208 R^{L\u00d7N} is the input of this layer, W \u2208 R^{k\u00d7N\u00d7M}, b \u2208 R^{M}, V \u2208 R^{k\u00d7N\u00d7M} and c \u2208 R^{M} are parameters to be learned, \u03c3 is the sigmoid function and \u2297 represents the element-wise product. We make F(X) \u2208 R^{L\u00d7M} by augmenting the input X with paddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convolutional Layer", "sec_num": "2.2" }, { "text": "A succession of convolutional layers is stacked to capture long-distance information. From the perspective of each character, information flows in a pyramid. Figure 3 shows a network with three convolutional layers stacked. 
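To make the layer above concrete, the following is a minimal NumPy sketch of a single gated convolutional layer under the shape conventions just defined; it is an illustrative re-implementation (the released code uses TensorFlow), and all names in it are ours rather than the authors'.

```python
import numpy as np

def glu_conv_layer(X, W, b, V, c):
    # X: (L, N) input; W, V: (k, N, M) kernels; b, c: (M,) biases.
    L, N = X.shape
    k, _, M = W.shape
    pad = k // 2
    Xp = np.pad(X, ((pad, pad), (0, 0)))      # zero-pad so the output length stays L
    A = np.empty((L, M))
    B = np.empty((L, M))
    for i in range(L):
        window = Xp[i:i + k]                  # (k, N) slice centred on position i
        A[i] = np.tensordot(window, W, axes=([0, 1], [0, 1])) + b
        B[i] = np.tensordot(window, V, axes=([0, 1], [0, 1])) + c
    return A * (1.0 / (1.0 + np.exp(-B)))     # (X*W+b) gated element-wise by sigma(X*V+c)

# Toy check: 5 characters, 4 input channels, 8 output channels, kernel width 3.
X = np.random.randn(5, 4)
W, V = np.random.randn(3, 4, 8), np.random.randn(3, 4, 8)
b, c = np.zeros(8), np.zeros(8)
print(glu_conv_layer(X, W, b, V, c).shape)    # -> (5, 8)
```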
On the topmost layer, a linear transformation is applied to map the output of this layer to unnormalized label scores E \u2208 R^{L\u00d7C}, where C is the number of label types.", "cite_spans": [], "ref_spans": [ { "start": 159, "end": 167, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Convolutional Layer", "sec_num": "2.2" }, { "text": "For sequence labeling tasks, it is often beneficial to explicitly consider the correlations between adjacent labels (Collobert et al., 2011) .", "cite_spans": [ { "start": 116, "end": 140, "text": "(Collobert et al., 2011)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "CRF Layer", "sec_num": "2.3" }, { "text": "Correlations between adjacent labels can be modeled as a transition matrix T \u2208 R^{C\u00d7C}. Given a sentence S = (c_1, c_2, ..., c_L), we have corresponding scores E \u2208 R^{L\u00d7C} given by the convolutional layers. For a label sequence y = (y_1, y_2, ..., y_L), we define its unnormalized score to be", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CRF Layer", "sec_num": "2.3" }, { "text": "s(S, y) = \\sum_{i=1}^{L} E_{i, y_i} + \\sum_{i=1}^{L-1} T_{y_i, y_{i+1}}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CRF Layer", "sec_num": "2.3" }, { "text": "Then the probability of the label sequence is defined as P(y|S) = e^{s(S, y)} / \\sum_{y' \\in Y} e^{s(S, y')}, where Y is the set of all valid label sequences. This takes the form of a linear-chain CRF (Lafferty et al., 2001) . The final loss of the proposed model is defined as the negative log-likelihood of the ground-truth label sequence y^*: L(S, y^*) = \u2212 log P(y^*|S).", "cite_spans": [ { "start": 174, "end": 197, "text": "(Lafferty et al., 2001)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "CRF Layer", "sec_num": "2.3" }, { "text": "During training, the loss function is minimized by backpropagation. At test time, the Viterbi algorithm is applied to quickly find the label sequence with maximum probability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CRF Layer", "sec_num": "2.3" }, { "text": "Character-based CWS models have the advantage of being flexible and efficient. However, full word information is not easily incorporated into them. There is another family of CWS models: word-based models. Models belonging to this category utilize not only character-level information but also word-level information (Zhang and Clark, 2007; Andrew, 2006; Sun et al., 2009) . Recent works have shown that word embeddings learned from large auto-segmented data lead to great improvements in word-based CWS systems (Liu et al., 2016) . We propose an effective approach to integrate word embeddings with our character-based model. The integration brings two benefits. On the one hand, full word information can be used. 
On the other hand, large unlabeled data can be better exploited.", "cite_spans": [ { "start": 300, "end": 323, "text": "(Zhang and Clark, 2007;", "ref_id": "BIBREF43" }, { "start": 324, "end": 337, "text": "Andrew, 2006;", "ref_id": "BIBREF1" }, { "start": 338, "end": 355, "text": "Sun et al., 2009)", "ref_id": "BIBREF33" }, { "start": 494, "end": 512, "text": "(Liu et al., 2016)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Integration with Word Embeddings", "sec_num": "3" }, { "text": "To use word embeddings, we design a set of word features, which are listed in Table 1 . Given a sentence (c_1, c_2, ..., c_L), the features for position i are all the character n-grams of length 1 to 4 that include the current character c_i, namely: length 1, c_i; length 2, c_{i-1} c_i and c_i c_{i+1}; length 3, c_{i-2} c_{i-1} c_i, c_{i-1} c_i c_{i+1} and c_i c_{i+1} c_{i+2}; length 4, c_{i-3} c_{i-2} c_{i-1} c_i, c_{i-2} c_{i-1} c_i c_{i+1}, c_{i-1} c_i c_{i+1} c_{i+2} and c_i c_{i+1} c_{i+2} c_{i+3}. Since only the words that include the current character c_i are considered as word features, the number of features can be controlled in a reasonable range. We also restrict the maximum length of words to 4, since few Chinese words contain more than 4 characters. Note that the feature space is still tremendous (O(N^4), where N is the number of characters). We associate to the word features a lookup table M_word. Then the final representation of c_i is defined as", "cite_spans": [], "ref_spans": [ { "start": 78, "end": 85, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Integration with Word Embeddings", "sec_num": "3" }, { "text": "R(c_i) = M_char[c_i] \u2295 M_word[c_i] \u2295 M_word[c_{i-1} c_i] \u2295 \u2022\u2022\u2022 \u2295 M_word[c_i c_{i+1} c_{i+2} c_{i+3}]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integration with Word Embeddings", "sec_num": "3" }, { "text": "where \u2295 denotes the concatenation operation. Since the maximum length of word features is set to 4, the feature space is extremely large (O(N^4)). A key step is to shrink the feature space so that the memory cost can be confined within a feasible scope; at the same time, the problem of data sparsity can be eased. The solution is as follows. Given the unlabeled data D_un and a teacher CWS model, we segment D_un with the teacher model and get the auto-segmented data, from which a vocabulary V_word is built. To better exploit the auto-segmented data, we adopt the off-the-shelf tool word2vec 3 (Mikolov et al., 2013) to pretrain the word embeddings. The whole procedure is summarized in the following steps:", "cite_spans": [ { "start": 572, "end": 594, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Integration with Word Embeddings", "sec_num": "3" }, { "text": "1. Train a teacher model that does not rely on word features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integration with Word Embeddings", "sec_num": "3" }, { "text": "2. 
Segment unlabeled data D with the teacher model and get the auto-segmented data D .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integration with Word Embeddings", "sec_num": "3" }, { "text": "3. Build a vocabulary V word from D . Replace all words not appear in V word with UNK.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integration with Word Embeddings", "sec_num": "3" }, { "text": "4. Pretrain word embeddings on D using word2vec.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integration with Word Embeddings", "sec_num": "3" }, { "text": "5. Train the student model with word feature using the pretrained word embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integration with Word Embeddings", "sec_num": "3" }, { "text": "Note that no external labeled data is used in this procedure. We do not perform any preprocessing for these datasets, such as replacing continuous digits and English characters with a single token. Dropout Dropout (Srivastava et al., 2014 ) is a very efficient method for preventing overfit, especially when the dataset is small. We apply dropout to our model on all convolutional layers and embedding layers. The dropout rate is fixed to 0.2. Hyper-parameters For both datasets, we use the same set of hyper-parameters, which are presented in Table 2 . For all convolutional layers, we just use the same number of channels. Following the practice of designing very deep CNN in computer vision (Simonyan and Zisserman, 2014), we use a small kernal width, i.e. 3, for all convolutional layers. To avoid computational inefficiency, we use a relatively small dimension, i.e. 50, for word embeddings. Pretraining Character embeddings and word embeddings are pretrained on unlabeled or autosegmented data by word2vec. Since the pretrained embeddings are not task-oriented, they are finetuned during supervised training by normal backpropagation. 5 Optimization Adam algorithm (Kingma and Ba, 2014) is applied to optimize our model. We use default parameters given in the original paper Table 3 : Performance of our models and previous state-of-the-art models. Note that (Chen et al., 2015a,b; Xu and Sun, 2016) used a external Chinese idiom dictionary. To make the comparison fair, we mark them with * . Chen et al. (2015a,b) ; Cai and Zhao (2016) ; Xu and Sun (2016) also preprocessed the datasets by replacing the conitinous English character and digits with a unique token. We mark them with .", "cite_spans": [ { "start": 214, "end": 238, "text": "(Srivastava et al., 2014", "ref_id": "BIBREF29" }, { "start": 1141, "end": 1142, "text": "5", "ref_id": null }, { "start": 1365, "end": 1387, "text": "(Chen et al., 2015a,b;", "ref_id": null }, { "start": 1388, "end": 1405, "text": "Xu and Sun, 2016)", "ref_id": "BIBREF38" }, { "start": 1499, "end": 1520, "text": "Chen et al. (2015a,b)", "ref_id": null }, { "start": 1523, "end": 1542, "text": "Cai and Zhao (2016)", "ref_id": "BIBREF4" }, { "start": 1545, "end": 1562, "text": "Xu and Sun (2016)", "ref_id": "BIBREF38" } ], "ref_spans": [ { "start": 544, "end": 551, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 1281, "end": 1288, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Integration with Word Embeddings", "sec_num": "3" }, { "text": "and we set batch size to 100. For both datasets, we train no more than 100 epoches. 
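As a compact summary of this setup, the sketch below collects the stated training settings into one place; it is a hedged illustration with invented names, not the authors' actual configuration code.

```python
# Illustrative names only; values are the settings stated in the text.
train_config = {
    'optimizer': 'adam',        # Kingma and Ba (2014), default hyper-parameters
    'dropout_rate': 0.2,        # applied to all convolutional and embedding layers
    'batch_size': 100,
    'max_epochs': 100,          # upper bound on training epochs
    'kernel_width': 3,          # the same small kernel for every convolutional layer
    'word_embedding_dim': 50,
}

def select_best_checkpoint(dev_f1_by_epoch):
    # Model selection: keep the checkpoint with the highest development-set F score.
    best = max(dev_f1_by_epoch, key=dev_f1_by_epoch.get)
    return best, dev_f1_by_epoch[best]

print(select_best_checkpoint({1: 94.2, 2: 95.1, 3: 94.9}))  # -> (2, 95.1)
```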
The final models are chosen by their performance on the development set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integration with Word Embeddings", "sec_num": "3" }, { "text": "Weight normalization (Salimans and Kingma, 2016) is applied for all convolutional layers to accelerate the training procedure and obvious acceleration is observed. Table 3 gives the performances of our models, as well as previous state-of-the-art models. Two proposed models are shown in the table:", "cite_spans": [ { "start": 21, "end": 48, "text": "(Salimans and Kingma, 2016)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 164, "end": 171, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Integration with Word Embeddings", "sec_num": "3" }, { "text": "\u2022 CONV-SEG It is our preliminary model without word embeddings. Character embeddings are pretrained on large unlabeled data. \u2022 WE-CONV-SEG On the basis of CONV-SEG, word embeddings are used. We use CONV-SEG as the teacher model (see section 3). Our preliminary model CONV-SEG achieves competitive performance without any feature engineering. Armed with word embeddings, WE-CONV-SEG obtains state-of-the-art performance on both PKU and MSR datasets without using any external labeled data. WE-CONV-SEG outperforms state-of-the-art neural model gle token and thus their model obtains excellent score on PKU dataset. However, WE-CONV-SEG achieves the same performance on PKU and outperforms their model on MSR, without any data preprocessing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Main Results", "sec_num": "4.2" }, { "text": "We also observe that WE-CONV-SEG converges much faster compared to CONV-SEG. Figure 4 presents the learning curves of the two models. It takes 10 to 20 epoches for WE-CONV-SEG to converge while it takes more than 60 epoches for CONV-SEG to converge.", "cite_spans": [], "ref_spans": [ { "start": 77, "end": 85, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Main Results", "sec_num": "4.2" }, { "text": "Network depth shows great influence on the performance of deep neural networks. A too shallow network may not fit the training data very well while a too deep network may overfit or is hard to train. We evaluate the performance of the proposed model with varying depth. Figure 5 shows ", "cite_spans": [], "ref_spans": [ { "start": 270, "end": 278, "text": "Figure 5", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Network Depth", "sec_num": "4.3" }, { "text": "PKU MSR without pretraining 94.7 96.7 with pretraining 95.7 97.3 Table 4 : Test performances with or without pretraining character embeddings. \"without pretraining\" means that the character embeddings are randomly initialized.", "cite_spans": [], "ref_spans": [ { "start": 65, "end": 72, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Model Options", "sec_num": null }, { "text": "the results. It is obvious that five convolutional layers is a good choise for both datasets. When we increase the depth from 1 to 5, the performance is improved significantly. However, when we increase depth from 5 to 7, even to 11 and 15, the performance is almost unchanged. This phenomenon implies that CWS rarely relies on context larger than 11 6 . 
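Concretely, by the context-size formula in footnote 6, a kernel width of k = 3 with d = 5 stacked layers covers (3 − 1) × 5 + 1 = 11 characters, while d = 7 layers would already cover (3 − 1) × 7 + 1 = 15 characters.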
With more training data, deeper networks may perform better.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Options", "sec_num": null }, { "text": "Previous works have shown that pretraining character embeddings boost the performance of neural CWS models significantly (Pei et al., 2014; Chen et al., 2015a,b; Cai and Zhao, 2016) . We verify this and get a consistent conclusion. Table 4 shows the performances with or without pretraining. Our model obtains significant improvements (+1.0 on PKU and +0.6 on MSR) with pretrained character embeddings.", "cite_spans": [ { "start": 121, "end": 139, "text": "(Pei et al., 2014;", "ref_id": "BIBREF24" }, { "start": 140, "end": 161, "text": "Chen et al., 2015a,b;", "ref_id": null }, { "start": 162, "end": 181, "text": "Cai and Zhao, 2016)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 232, "end": 239, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Pretraining Character Embeddings", "sec_num": "4.4" }, { "text": "Models PKU MSR PKU MSR (Cai and Zhao, 2016) 95.5 96.5 -- (Zheng et al., 2013) 92.8 \u2021 93.9 \u2021 -- (Pei et al., 2014) 94.0 94.9 -- (Chen et al., 2015a) 94.5 \u2020 95.4 \u2020 96.1 * 96.2 * (Chen et al., 2015b) 94. Table 5 : The first/second group summarize results of models without/with bigram feature. The number in the parentheses is the absolute improvement given by explicit bigram feature. Results with * used external dictionary. Results with \u2020 come from Cai and Zhao (2016) . Results with \u2021 come from Pei et al. (2014) . marks word-based models.", "cite_spans": [ { "start": 23, "end": 43, "text": "(Cai and Zhao, 2016)", "ref_id": "BIBREF4" }, { "start": 57, "end": 77, "text": "(Zheng et al., 2013)", "ref_id": "BIBREF45" }, { "start": 95, "end": 113, "text": "(Pei et al., 2014)", "ref_id": "BIBREF24" }, { "start": 127, "end": 147, "text": "(Chen et al., 2015a)", "ref_id": "BIBREF5" }, { "start": 176, "end": 196, "text": "(Chen et al., 2015b)", "ref_id": "BIBREF6" }, { "start": 449, "end": 468, "text": "Cai and Zhao (2016)", "ref_id": "BIBREF4" }, { "start": 496, "end": 513, "text": "Pei et al. (2014)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 201, "end": 208, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Pretraining Character Embeddings", "sec_num": "4.4" }, { "text": "In this section, we test the ability of our model in capturing n-gram features. Since unigram is indispensable and trigram is beyond memory limit, we only consider bigram.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-gram Features", "sec_num": "4.5" }, { "text": "Bigram feature has shown to play a vital role in character-based neural CWS models (Pei et al., 2014; Chen et al., 2015a,b) . Without bigram feature, previous models perform badly. Table 5 gives a summarization. Without bigram feature, our model outperforms previous character-based models in a large margin (+0.9 on PKU and +1.7 on MSR). 
Compared with the word-based model (Cai and Zhao, 2016) , the improvements are also significant (+0.2 on PKU and +0.8 on MSR).", "cite_spans": [ { "start": 83, "end": 101, "text": "(Pei et al., 2014;", "ref_id": "BIBREF24" }, { "start": 102, "end": 123, "text": "Chen et al., 2015a,b)", "ref_id": null }, { "start": 370, "end": 390, "text": "(Cai and Zhao, 2016)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 181, "end": 188, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "N-gram Features", "sec_num": "4.5" }, { "text": "Then we arm our model with bigram features. The bigram feature we use is the same as that of Pei et al. (2014) ; Chen et al. (2015a,b) . The dimension of the bigram embeddings is set to 50. Following Pei et al. (2014) ; Chen et al. (2015a,b) , the bigram embeddings are initialized by the average of the corresponding pretrained character embeddings. The resulting model is named AVEBE-CONV-SEG and its performance is shown in Table 5 . Unexpectedly, the performance of AVEBE-CONV-SEG is worse than that of the preliminary model CONV-SEG, which uses no bigram feature (-0.3 on PKU and -0.2 on MSR). This result is dramatically inconsistent with previous works, in which performance is significantly improved by this method. We also observe that the training cost of AVEBE-CONV-SEG is much lower than that of CONV-SEG. Hence we conclude that the inconsistency is caused by overfitting. A reasonable conjecture is that CONV-SEG already captures abundant bigram features automatically, and therefore the model tends to overfit when bigram features are explicitly added.", "cite_spans": [ { "start": 103, "end": 124, "text": "Chen et al. (2015a,b)", "ref_id": null }, { "start": 185, "end": 202, "text": "Pei et al. (2014)", "ref_id": "BIBREF24" }, { "start": 205, "end": 226, "text": "Chen et al. (2015a,b)", "ref_id": null } ], "ref_spans": [ { "start": 405, "end": 412, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "N-gram Features", "sec_num": "4.5" }, { "text": "A practical way to overcome overfitting is to introduce prior knowledge. We introduce prior knowledge by using bigram embeddings directly pretrained on large unlabeled data, which is similar to prior work. We convert the unlabeled text to a bigram sequence and then apply word2vec to pretrain the bigram embeddings directly. The resulting model is named W2VBE-CONV-SEG, and its performance is also shown in Table 5 . This method leads to substantial improvements (+0.5 on PKU and +0.4 on MSR) over AVEBE-CONV-SEG. However, compared to CONV-SEG, there are only slight gains (+0.2 on PKU and MSR).", "cite_spans": [], "ref_spans": [ { "start": 400, "end": 407, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "N-gram Features", "sec_num": "4.5" }, { "text": "All the above observations verify that the proposed network has a considerable advantage in capturing n-gram features, at least bigram features, automatically.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-gram Features", "sec_num": "4.5" }, { "text": "Word embeddings lead to significant improvements over the strong baseline model CONV-SEG. The improvements come from the teacher model and the large unlabeled data. A natural question is how much unlabeled data can lead to significant improvements. We study this by halving the unlabeled data. Figure 6 presents the results. As the unlabeled data becomes smaller, the performance remains unchanged at first and then degrades. 
This demonstrates that the mass of unlabeled data is a key factor to achieve high performance. However, even with only 68MB unlabeled data, we can still observe remarkable improvements (+0.4 on PKU and MSR). We also observe that MSR dataset is more robust to the size of unlabeled data than PKU dataset. We conjecture that this is because MSR training set is larger than PKU training set 7 .", "cite_spans": [], "ref_spans": [ { "start": 294, "end": 302, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Word Embeddings", "sec_num": "4.6" }, { "text": "We also study how the teacher's performance influence the student. We train other two mod- 7 There are 2M words in MSR training set but only 1M words in PKU training set. Figure 6 : Test performances with varying size of unlabeled data for pretraining word embeddings. With full size, the model is WE-CONV-SEG. With the 0 size, the model degenerates to CONV-SEG.", "cite_spans": [ { "start": 91, "end": 92, "text": "7", "ref_id": null } ], "ref_spans": [ { "start": 171, "end": 179, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Word Embeddings", "sec_num": "4.6" }, { "text": "teacher student PKU MSR PKU MSR WE-CONV-SEG 95.7 97.4 96.5 98.0 worse teacher 95.4 97.1 96.4 97.9 better teacher 96.5 98.0 96.5 98.0 Table 6 : Performances of student models and teacher models. A previous trained model maybe reused in following so that there are some els that use different teacher models. One of them uses a worse teacher and the other uses a better teacher. The results are shown in Table 6 . As expected, the worse teacher indeed creates a worse student, but the effect is marginal (-0.1 on PKU and MSR). And the better teacher brings no improvements. These facts demonstrate that the student's performance is relatively insensitive to the teacher's ability as long as the teacher is not too weak. Not only the pretrained word embeddings, we also build a vocabulary V word from the large autosegmented data. Both of them should have positive impacts on the improvements. To figure out their contributions quantitatively, we train a contrast model, where the pretrained word embeddings are not used but the word features and the vocabulary are persisted, i.e. the word embeddings are randomly initialized. The results are shown in Table 7 . According to the results, we conclude that the pretrained word embeddings and the vocabulary have roughly equal contributions to the final Models PKU MSR WE-CONV-SEG 96.5 98.0 -word emb 96.1 97.6 -word feature 95.7 97.3 Table 7 : Performances of our models with different word feature options. \"-word emb\" denotes the model in which word features and the vocabulary are used but the pretrained word embeddings are not. \"-word feature\" denotes the model that uses no word feature, i.e. CONV-SEG.", "cite_spans": [], "ref_spans": [ { "start": 133, "end": 140, "text": "Table 6", "ref_id": null }, { "start": 402, "end": 409, "text": "Table 6", "ref_id": null }, { "start": 1150, "end": 1157, "text": "Table 7", "ref_id": null }, { "start": 1380, "end": 1387, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Models", "sec_num": null }, { "text": "improvements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": null }, { "text": "CWS has been studied with considerable efforts in NLP commutinity. Xue et al. (2003) firstly modeled CWS as a character-based sequence labeling problem. 
They used a sliding-window maximum entropy classifier to tag Chinese characters into one of four position tags, and then converted these tags into a segmentation using rules. Following their work, Peng et al. (2004) applied conditional random fields (CRFs) to this task. Our proposed model is also a neural sequence labeling model. The difference from the above models lies in that CNNs are used to encode contextual information. CNNs have been successfully applied to many NLP tasks, such as text classification (Kalchbrenner et al., 2014; Kim, 2014; Zhang et al., 2015; Conneau et al., 2016) , language modeling (Kim et al., 2016; Pham et al., 2016) , and machine translation (Meng et al., 2015; Kalchbrenner et al., 2016; Gehring et al., 2016) . Experimental results show that the convolutional layers are capable of capturing more n-gram features than previously introduced networks. Collobert et al. (2011) also proposed a CNN-based sequence labeling model. However, our model is significantly different from theirs, since theirs adopts max-pooling to encode the whole sentence into a fixed-size vector and uses position embeddings to indicate which word is to be tagged, while ours does not. Our model is more efficient due to the shared structure in the lower layers. Contemporaneous with this work, Strubell et al. (2017) applied dilated CNNs to named entity recognition.", "cite_spans": [ { "start": 67, "end": 84, "text": "Xue et al. (2003)", "ref_id": "BIBREF39" }, { "start": 349, "end": 367, "text": "Peng et al. (2004)", "ref_id": "BIBREF25" }, { "start": 604, "end": 631, "text": "(Kalchbrenner et al., 2014;", "ref_id": "BIBREF14" }, { "start": 632, "end": 642, "text": "Kim, 2014;", "ref_id": "BIBREF15" }, { "start": 643, "end": 662, "text": "Zhang et al., 2015;", "ref_id": "BIBREF42" }, { "start": 663, "end": 684, "text": "Conneau et al., 2016)", "ref_id": "BIBREF8" }, { "start": 705, "end": 723, "text": "(Kim et al., 2016;", "ref_id": "BIBREF16" }, { "start": 724, "end": 742, "text": "Pham et al., 2016;", "ref_id": "BIBREF26" }, { "start": 765, "end": 784, "text": "(Meng et al., 2015;", "ref_id": "BIBREF22" }, { "start": 785, "end": 811, "text": "Kalchbrenner et al., 2016;", "ref_id": null }, { "start": 812, "end": 833, "text": "Gehring et al., 2016)", "ref_id": "BIBREF11" }, { "start": 971, "end": 994, "text": "Collobert et al. (2011)", "ref_id": "BIBREF7" }, { "start": 1378, "end": 1400, "text": "Strubell et al. (2017)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "The integration with word embeddings is inspired by word-based CWS models (Andrew, 2006; Zhang and Clark, 2007; Sun et al., 2009) . Most recently, Liu et al. (2016) ; Cai and Zhao (2016) proposed word-based neural models for CWS. In particular, Liu et al. (2016) utilized word embeddings learned from large auto-segmented data, which leads to significant improvements. Different from their word-based models, we integrate word embeddings with the proposed character-based model. Similar to this work, Wang et al. (2011) and Zhang et al. (2013) also enhanced character-based CWS systems by utilizing auto-segmented data. However, they did not use word embeddings, but only statistical features. Sun (2010) and Wang et al. 
(2014) combined character-based and wordbased CWS model via bagging and dual decomposition respectively and achieved better performance than single model.", "cite_spans": [ { "start": 74, "end": 88, "text": "(Andrew, 2006;", "ref_id": "BIBREF1" }, { "start": 89, "end": 111, "text": "Zhang and Clark, 2007;", "ref_id": "BIBREF43" }, { "start": 112, "end": 129, "text": "Sun et al., 2009)", "ref_id": "BIBREF33" }, { "start": 149, "end": 166, "text": "Liu et al. (2016)", "ref_id": "BIBREF20" }, { "start": 169, "end": 188, "text": "Cai and Zhao (2016)", "ref_id": "BIBREF4" }, { "start": 248, "end": 265, "text": "Liu et al. (2016)", "ref_id": "BIBREF20" }, { "start": 504, "end": 522, "text": "Wang et al. (2011)", "ref_id": "BIBREF37" }, { "start": 527, "end": 546, "text": "Zhang et al. (2013)", "ref_id": "BIBREF40" }, { "start": 700, "end": 710, "text": "Sun (2010)", "ref_id": "BIBREF31" }, { "start": 715, "end": 733, "text": "Wang et al. (2014)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "In this paper, we address the weaknesses of character-based CWS models. We propose a novel neural model for CWS. The model utilizes stacked convolutional layers to derive contextual representations from input sequence, which are then fed to a CRF layer for prediction. The model is capable to capture rich n-gram features automatically. Furthermore, we propose an effective approach to integrate the proposed model with word embeddings, which are pretrained on large auto-segmented data. Evaluation on two benchmark datasets shows that without any feature engineering, much better performance than previous models (also without feature engineering) is obtained. Armed with word embeddings, our model achieves state-of-the-art performance on both datasets, without using any external labeled data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "The threshold of frequency is set to 5, which is the default setting of word2vec.3 https://code.google.com/p/word2vec", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.sogou.com/labs/resource/ ca.php5 We also try to use fixed word embeddings as do but no significant difference is observed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Context size is calculated by (k \u2212 1) \u00d7 d + 1, where k and d denotes the kernel size and the number of convolutional layers, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work is supported by the National Key Research & Development Plan of China (No.2013CB329302).Thanks anonymous reviewers for their valuable suggestions. 
Thanks Wang Geng, Zhen Yang and Yuanyuan Zhao for their useful discussions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Tensorflow: Large-scale machine learning on heterogeneous distributed systems", "authors": [ { "first": "Mart\u00edn", "middle": [], "last": "Abadi", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Barham", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Brevdo", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Citro", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" }, { "first": "Matthieu", "middle": [], "last": "Devin", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1603.04467" ] }, "num": null, "urls": [], "raw_text": "Mart\u00edn Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. 2016. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A hybrid markov/semi-markov conditional random field for sequence segmentation", "authors": [ { "first": "Galen", "middle": [], "last": "Andrew", "suffix": "" } ], "year": 2006, "venue": "Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "465--472", "other_ids": {}, "num": null, "urls": [], "raw_text": "Galen Andrew. 2006. A hybrid markov/semi-markov conditional random field for sequence segmenta- tion. In Conference on Empirical Methods in Nat- ural Language Processing, pages 465-472.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A neural probabilistic language model", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Ducharme", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Janvin", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "6", "pages": "1137--1155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, R Ducharme, jean, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. Journal of Machine Learning Re- search, 3(6):1137-1155.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A maximum entropy approach to natural language processing", "authors": [ { "first": "L", "middle": [], "last": "Adam", "suffix": "" }, { "first": "Vincent", "middle": [ "J" ], "last": "Berger", "suffix": "" }, { "first": "Stephen", "middle": [ "A" ], "last": "Della Pietra", "suffix": "" }, { "first": "", "middle": [], "last": "Della Pietra", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "1", "pages": "39--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam L Berger, Vincent J. Della Pietra, and Stephen A. Della Pietra. 1996. A maximum entropy ap- proach to natural language processing. 
Computa- tional Linguistics, 22(1):39-71.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Neural word segmentation learning for chinese", "authors": [ { "first": "Deng", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2016, "venue": "Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "409--420", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deng Cai and Hai Zhao. 2016. Neural word segmenta- tion learning for chinese. In Meeting of the Associa- tion for Computational Linguistics, pages 409-420.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Gated recursive neural network for chinese word segmentation", "authors": [ { "first": "Xinchi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Chenxi", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2015, "venue": "ACL (1)", "volume": "", "issue": "", "pages": "1744--1753", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinchi Chen, Xipeng Qiu, Chenxi Zhu, and Xuan- jing Huang. 2015a. Gated recursive neural network for chinese word segmentation. In ACL (1), pages 1744-1753.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Long short-term memory neural networks for chinese word segmentation", "authors": [ { "first": "Xinchi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Chenxi", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Pengfei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2015, "venue": "Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1197--1206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Pengfei Liu, and Xuanjing Huang. 2015b. Long short-term mem- ory neural networks for chinese word segmenta- tion. In Conference on Empirical Methods in Nat- ural Language Processing, pages 1197-1206.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "1", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert, Jason Weston, L Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. 
Journal of Machine Learning Research, 12(1):2493-2537.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Very deep convolutional networks for natural language processing", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Lecun", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1606.01781" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Holger Schwenk, Lo\u00efc Barrault, and Yann Lecun. 2016. Very deep convolutional net- works for natural language processing. arXiv preprint arXiv:1606.01781.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Language modeling with gated convolutional networks", "authors": [ { "first": "N", "middle": [], "last": "Yann", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Dauphin", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Fan", "suffix": "" }, { "first": "David", "middle": [], "last": "Auli", "suffix": "" }, { "first": "", "middle": [], "last": "Grangier", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. 2016. Language modeling with gated convolutional networks.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The second international chinese word segmentation bakeoff", "authors": [ { "first": "Thomas", "middle": [ "Emerson" ], "last": "", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the fourth SIGHAN workshop on Chinese language Processing", "volume": "133", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Emerson. 2005. The second international chi- nese word segmentation bakeoff. In Proceedings of the fourth SIGHAN workshop on Chinese language Processing, volume 133.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A convolutional encoder model for neural machine translation", "authors": [ { "first": "Jonas", "middle": [], "last": "Gehring", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Yann", "middle": [ "N" ], "last": "Dauphin", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonas Gehring, Michael Auli, David Grangier, and Yann N. Dauphin. 2016. A convolutional encoder model for neural machine translation.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Deep residual learning for image recognition", "authors": [ { "first": "Kaiming", "middle": [], "last": "He", "suffix": "" }, { "first": "Xiangyu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shaoqing", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2016, "venue": "The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. 
In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A convolutional neural network for modelling sentences", "authors": [ { "first": "Nal", "middle": [], "last": "Kalchbrenner", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Grefenstette", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "655--665", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nal Kalchbrenner, Edward Grefenstette, and Phil Blun- som. 2014. A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 655-665, Baltimore, Maryland. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1746--1751", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar. Association for Computational Lin- guistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Character-aware neural language models", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "David", "middle": [], "last": "Sontag", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16", "volume": "", "issue": "", "pages": "2741--2749", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoon Kim, Yacine Jernite, David Sontag, and Alexan- der M. Rush. 2016. Character-aware neural lan- guage models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16, pages 2741-2749. AAAI Press.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Imagenet classification with deep convolutional neural networks", "authors": [ { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "1097--1105", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hin- ton. 2012. 
Imagenet classification with deep convo- lutional neural networks. pages 1097-1105.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "John", "middle": [ "D" ], "last": "Lafferty", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Fernando", "middle": [ "C N" ], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "282--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "John D. Lafferty, Andrew Mccallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling se- quence data. pages 282-289.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Exploring segment representations for neural segmentation models", "authors": [ { "first": "Yijia", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Jiang", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yijia Liu, Wanxiang Che, Jiang Guo, Bing Qin, and Ting Liu. 2016. Exploring segment representations for neural segmentation models.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Feature-based neural language model and chinese word segmentation", "authors": [ { "first": "Mairgup", "middle": [], "last": "Mansur", "suffix": "" }, { "first": "Wenzhe", "middle": [], "last": "Pei", "suffix": "" }, { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2013, "venue": "IJCNLP", "volume": "", "issue": "", "pages": "1271--1277", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mairgup Mansur, Wenzhe Pei, and Baobao Chang. 2013. Feature-based neural language model and chi- nese word segmentation. In IJCNLP, pages 1271- 1277.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Encoding source language with convolutional neural network for machine translation", "authors": [ { "first": "Fandong", "middle": [], "last": "Meng", "suffix": "" }, { "first": "Zhengdong", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Mingxuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Wenbin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1503.01838" ] }, "num": null, "urls": [], "raw_text": "Fandong Meng, Zhengdong Lu, Mingxuan Wang, Hang Li, Wenbin Jiang, and Qun Liu. 2015. En- coding source language with convolutional neural network for machine translation. 
arXiv preprint arXiv:1503.01838.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Maxmargin tensor neural network for chinese word segmentation", "authors": [ { "first": "Wenzhe", "middle": [], "last": "Pei", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Ge", "suffix": "" }, { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2014, "venue": "Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "293--303", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenzhe Pei, Tao Ge, and Baobao Chang. 2014. Max- margin tensor neural network for chinese word seg- mentation. In Meeting of the Association for Com- putational Linguistics, pages 293-303.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Chinese segmentation and new word detection using conditional random fields", "authors": [ { "first": "Fuchun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Fangfang", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 20th international conference on Computational Linguistics", "volume": "562", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fuchun Peng, Fangfang Feng, and Andrew McCallum. 2004. Chinese segmentation and new word detec- tion using conditional random fields. In Proceed- ings of the 20th international conference on Compu- tational Linguistics, page 562. Association for Com- putational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Convolutional neural network language models", "authors": [ { "first": "Germn", "middle": [], "last": "Ngoc Quan Pham", "suffix": "" }, { "first": "Gemma", "middle": [], "last": "Kruszewski", "suffix": "" }, { "first": "", "middle": [], "last": "Boleda", "suffix": "" } ], "year": 2016, "venue": "Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1153--1162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ngoc Quan Pham, Germn Kruszewski, and Gemma Boleda. 2016. Convolutional neural network lan- guage models. In Conference on Empirical Methods in Natural Language Processing, pages 1153-1162.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Weight normalization: A simple reparameterization to accelerate training of deep neural networks", "authors": [ { "first": "Tim", "middle": [], "last": "Salimans", "suffix": "" }, { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "", "middle": [], "last": "Kingma", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tim Salimans and Diederik P. Kingma. 2016. 
Weight normalization: A simple reparameterization to ac- celerate training of deep neural networks.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Very deep convolutional networks for large-scale image recognition", "authors": [ { "first": "Karen", "middle": [], "last": "Simonyan", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Zisserman", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1409.1556" ] }, "num": null, "urls": [], "raw_text": "Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Dropout: a simple way to prevent neural networks from overfitting", "authors": [ { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2014, "venue": "Journal of Machine Learning Research", "volume": "15", "issue": "1", "pages": "1929--1958", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Re- search, 15(1):1929-1958.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Fast and accurate sequence labeling with iterated dilated convolutions", "authors": [ { "first": "Emma", "middle": [], "last": "Strubell", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Verga", "suffix": "" }, { "first": "David", "middle": [], "last": "Belanger", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emma Strubell, Patrick Verga, David Belanger, and Andrew Mccallum. 2017. Fast and accurate se- quence labeling with iterated dilated convolutions.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Word-based and character-based word segmentation models: Comparison and combination", "authors": [ { "first": "Weiwei", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters", "volume": "", "issue": "", "pages": "1211--1219", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weiwei Sun. 2010. Word-based and character-based word segmentation models: Comparison and com- bination. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 1211-1219. Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Fast online training with frequency-adaptive learning rates for chinese word segmentation and new word detection", "authors": [ { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Houfeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wenjie", "middle": [], "last": "Li", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu Sun, Houfeng Wang, and Wenjie Li. 2012. 
Fast on- line training with frequency-adaptive learning rates for chinese word segmentation and new word detec- tion.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "A discriminative latent variable chinese segmenter with hybrid word/character information", "authors": [ { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Yaozhong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Takuya", "middle": [], "last": "Matsuzaki", "suffix": "" }, { "first": "Yoshimasa", "middle": [], "last": "Tsuruoka", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2009, "venue": "Human Language Technologies: the 2009 Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "56--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu Sun, Yaozhong Zhang, Takuya Matsuzaki, Yoshi- masa Tsuruoka, and Jun'Ichi Tsujii. 2009. A dis- criminative latent variable chinese segmenter with hybrid word/character information. In Human Lan- guage Technologies: the 2009 Conference of the North American Chapter of the Association for Computational Linguistics, pages 56-64.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "A conditional random field word segmenter", "authors": [ { "first": "Huihsin", "middle": [], "last": "Tseng", "suffix": "" } ], "year": 2005, "venue": "Fourth SIGHAN Workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huihsin Tseng. 2005. A conditional random field word segmenter. In In Fourth SIGHAN Workshop on Chi- nese Language Processing.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Phoneme recognition using time-delay neural networks", "authors": [ { "first": "Alex", "middle": [], "last": "Waibel", "suffix": "" }, { "first": "Toshiyuki", "middle": [], "last": "Hanazawa", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Kiyohiro", "middle": [], "last": "Shikano", "suffix": "" }, { "first": "Kevin", "middle": [ "J" ], "last": "Lang", "suffix": "" } ], "year": 1989, "venue": "IEEE transactions on acoustics, speech, and signal processing", "volume": "37", "issue": "3", "pages": "328--339", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Waibel, Toshiyuki Hanazawa, Geoffrey Hin- ton, Kiyohiro Shikano, and Kevin J Lang. 1989. Phoneme recognition using time-delay neural net- works. IEEE transactions on acoustics, speech, and signal processing, 37(3):328-339.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Two knives cut better than one: Chinese word segmentation with dual decomposition", "authors": [ { "first": "Mengqiu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Voigt", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "193--198", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mengqiu Wang, Rob Voigt, and Christopher D Man- ning. 2014. Two knives cut better than one: Chinese word segmentation with dual decomposition. 
In Meeting of the Association for Computational Lin- guistics, pages 193-198.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Improving chinese word segmentation and pos tagging with semi-supervised methods using large auto-analyzed data", "authors": [ { "first": "Yiou", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jun ' Ichi", "middle": [], "last": "Kazama", "suffix": "" }, { "first": "Yoshimasa", "middle": [], "last": "Tsuruoka", "suffix": "" }, { "first": "Wenliang", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2011, "venue": "International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yiou Wang, Jun ' Ichi Kazama, Yoshimasa Tsuruoka, Wenliang Chen, Yujie Zhang, and Kentaro Tori- sawa. 2011. Improving chinese word segmentation and pos tagging with semi-supervised methods us- ing large auto-analyzed data. In International Joint Conference on Natural Language Processing.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Dependency-based gated recursive neural network for chinese word segmentation", "authors": [ { "first": "Jingjing", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2016, "venue": "The 54th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingjing Xu and Xu Sun. 2016. Dependency-based gated recursive neural network for chinese word seg- mentation. In The 54th Annual Meeting of the Asso- ciation for Computational Linguistics, page 567.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Chinese word segmentation as character tagging", "authors": [ { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics and Chinese Language Processing", "volume": "8", "issue": "", "pages": "29--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nianwen Xue et al. 2003. Chinese word segmentation as character tagging. Computational Linguistics and Chinese Language Processing, 8(1):29-48.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Exploring representations from unlabeled data with co-training for chinese word segmentation", "authors": [ { "first": "Longkai", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Houfeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Mairgup", "middle": [], "last": "Mansur", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Longkai Zhang, Houfeng Wang, Xu Sun, and Mairgup Mansur. 2013. Exploring representations from un- labeled data with co-training for chinese word seg- mentation.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Transition-based neural word segmentation", "authors": [ { "first": "Meishan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Guohong", "middle": [], "last": "Fu", "suffix": "" } ], "year": 2016, "venue": "Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "421--431", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meishan Zhang, Yue Zhang, and Guohong Fu. 2016. Transition-based neural word segmentation. 
In Meeting of the Association for Computational Lin- guistics, pages 421-431.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Character-level convolutional networks for text classification", "authors": [ { "first": "Xiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Junbo", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Lecun", "suffix": "" } ], "year": 2015, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "649--657", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification. In Advances in neural information pro- cessing systems, pages 649-657.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Chinese segmentation with a word-based perceptron algorithm", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2007, "venue": "ACL 2007, Proceedings of the Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yue Zhang and Stephen Clark. 2007. Chinese segmen- tation with a word-based perceptron algorithm. In ACL 2007, Proceedings of the Meeting of the Asso- ciation for Computational Linguistics, June 23-30, 2007, Prague, Czech Republic.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Integrating unsupervised and supervised word segmentation: The role of goodness measures", "authors": [ { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Chunyu", "middle": [], "last": "Kit", "suffix": "" } ], "year": 2011, "venue": "Information Sciences", "volume": "181", "issue": "1", "pages": "163--183", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hai Zhao and Chunyu Kit. 2011. Integrating unsu- pervised and supervised word segmentation: The role of goodness measures. Information Sciences, 181(1):163-183.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Deep learning for chinese word segmentation and pos tagging", "authors": [ { "first": "Xiaoqing", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Hanyang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Tianyu", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2013, "venue": "Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoqing Zheng, Hanyang Chen, and Tianyu Xu. 2013. Deep learning for chinese word segmentation and pos tagging. In Conference on Empirical Methods in Natural Language Processing.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Chinese word segmentation as a sequence labeling task. This figure presents the common BMES (Begining, Middle, End, Singleton) tagging scheme.", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "Structure of a convolutional layer with GLU. There are five input channels and four output channels in this figure.", "type_str": "figure", "num": null, "uris": null }, "FIGREF2": { "text": "Stacked convolutional layers. There is one input layer on the bottom and three convolutional layers on the top. Dashed white circles denote paddings. 
Black circles and lines mark the pyramid from the perspective of c4.", "type_str": "figure", "num": null, "uris": null }, "FIGREF3": { "text": "Learning curves (dev scores) of our models on PKU (left) and MSR (right).", "type_str": "figure", "num": null, "uris": null }, "FIGREF4": { "text": "1", "type_str": "figure", "num": null, "uris": null }, "FIGREF5": { "text": "Scores on the dev set and test set with respect to the number of convolutional layers. The vertical dashed line marks the depth we choose.", "type_str": "figure", "num": null, "uris": null }, "FIGREF6": { "text": "applied CRF to the problem for sequence-level prediction. Recently, under the sequence labeling framework, various neural models have been explored for CWS. Zheng et al. (2013) first applied a feed-forward neural network to CWS. Pei et al. (2014) improved upon Zheng et al. (2013) by explicitly modeling the interactions between the local context and the previous tag. Chen et al. (2015a) proposed a gated recursive neural network (GRNN) to model the combinations of context characters. Chen et al. (2015b) utilized long short-term memory (LSTM) to capture long-distance dependencies. Xu and Sun (2016) combined LSTM and GRNN to efficiently integrate local and long-distance features.", "type_str": "figure", "num": null, "uris": null }, "TABREF0": { "num": null, "text": "Word features at position i given a sentence S =", "content": "", "type_str": "table", "html": null }, "TABREF1": { "num": null, "text": "V_word is generated from D_un, where low-frequency words are discarded. We replace M_word[*] with M_word[UNK] if * ∉ V_word (UNK denotes the unknown words).", "content": "
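To make the unknown-word handling in the note above concrete, here is a minimal sketch (not the authors' released code; the cutoff min_count, the helper names, and the use of Python/NumPy are assumptions for illustration): V_word is built from the auto-segmented corpus D_un by dropping low-frequency words, and any word outside V_word falls back to the UNK row of the word-embedding matrix M_word.

```python
from collections import Counter

import numpy as np

def build_word_vocab(segmented_corpus, min_count=5):
    """Build V_word from an auto-segmented corpus D_un, discarding low-frequency words."""
    counts = Counter(w for sent in segmented_corpus for w in sent)
    vocab = {"UNK": 0}  # index 0 is reserved for unknown words
    for word, c in counts.items():
        if c >= min_count:  # low-frequency words are discarded, so they map to UNK below
            vocab[word] = len(vocab)
    return vocab

def lookup_word_embedding(word, vocab, M_word):
    """Return M_word[word]; fall back to M_word[UNK] if word is not in V_word."""
    return M_word[vocab.get(word, vocab["UNK"])]

# Toy usage with a hypothetical 50-dimensional embedding table.
corpus = [["我们", "喜欢", "机器", "学习"], ["我们", "喜欢", "学习"]]
vocab = build_word_vocab(corpus, min_count=2)
M_word = np.random.randn(len(vocab), 50)
vec = lookup_word_embedding("机器", vocab, M_word)  # "机器" is rare here, so this returns the UNK vector
```

In practice M_word would be pretrained (e.g., with word2vec) on the auto-segmented data rather than randomly initialized as in this toy example.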
", "type_str": "table", "html": null }, "TABREF3": { "num": null, "text": "Hyper-parameters we choose for our model.", "content": "
4 Experiments
4.1 Settings
Datasets We evaluate our model on two benchmark datasets, PKU and MSR, from the second International Chinese Word Segmentation Bakeoff.
", "type_str": "table", "html": null }, "TABREF6": { "num": null, "text": "8 \u2020 95.6 \u2020 96.0 * 96.6", "content": "
(Xu and Sun, 2016)  -  -  96.1*  96.3*
CONV-SEG  95.7  97.3  -  -
(Pei et al., 2014)  95.2 (+1.2)  97.2 (+2.3)  -  -
(Chen et al., 2015a)  -  -  96.4* (+0.3)  97.6* (+1.4)
(Chen et al., 2015b)  -  -  96.5* (+0.5)  97.3* (+0.7)
AVEBE-CONV-SEG  95.4 (-0.3)  97.1 (-0.2)  -  -
W2VBE-CONV-SEG  95.9 (+0.2)  97.5 (+0.2)  -  -
", "type_str": "table", "html": null } } } }