paper_id: string, length 9–13
venue: string, 171 classes
year: string, 7 classes
paper_title: string, length 0–188
paper_authors: string, length 4–1.01k
paper_abstract: string, length 0–5k
paper_keywords: string, length 2–679
paper_content: string, length 0–100k
review_id: string, length 9–12
review_title: string, length 0–500
review_rating: string, 92 classes
review_text: string, length 0–28.3k
review_confidence: string, 21 classes
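For reference, a minimal sketch of iterating over rows with this schema. The `reviews.jsonl` path and the JSON Lines storage format are assumptions for illustration only, not the dataset's documented loading interface.

```python
import json
from collections import Counter

# Fields expected per row, matching the schema listed above.
FIELDS = [
    "paper_id", "venue", "year", "paper_title", "paper_authors",
    "paper_abstract", "paper_keywords", "paper_content",
    "review_id", "review_title", "review_rating", "review_text",
    "review_confidence",
]

def iter_rows(path="reviews.jsonl"):
    """Yield one dict per (paper, review) row from a JSON Lines file."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            row = json.loads(line)
            yield {field: row.get(field, "") for field in FIELDS}

# Example: count reviews per venue.
venue_counts = Counter(row["venue"] for row in iter_rows())
print(venue_counts.most_common(5))
```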
SJU4ayYgl
ICLR.cc/2017/conference
2017
Semi-Supervised Classification with Graph Convolutional Networks
["Thomas N. Kipf", "Max Welling"]
We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.
["Deep learning", "Semi-Supervised Learning"]
ABSTRACT

We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.

1 INTRODUCTION

We consider the problem of classifying nodes (such as documents) in a graph (such as a citation network), where labels are only available for a small subset of nodes. This problem can be framed as graph-based semi-supervised learning, where label information is smoothed over the graph via some form of explicit graph-based regularization (Zhu et al., 2003; Zhou et al., 2004; Belkin et al., 2006; Weston et al., 2012), e.g. by using a graph Laplacian regularization term in the loss function:

$$\mathcal{L} = \mathcal{L}_0 + \lambda \mathcal{L}_{\mathrm{reg}}\,, \quad \text{with} \quad \mathcal{L}_{\mathrm{reg}} = \sum_{i,j} A_{ij} \left\| f(X_i) - f(X_j) \right\|^2 = f(X)^\top \Delta f(X). \tag{1}$$

Here, $\mathcal{L}_0$ denotes the supervised loss w.r.t. the labeled part of the graph, $f(\cdot)$ can be a neural network-like differentiable function, $\lambda$ is a weighing factor and $X$ is a matrix of node feature vectors $X_i$. $\Delta = D - A$ denotes the unnormalized graph Laplacian of an undirected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ with $N$ nodes $v_i \in \mathcal{V}$, edges $(v_i, v_j) \in \mathcal{E}$, an adjacency matrix $A \in \mathbb{R}^{N \times N}$ (binary or weighted) and a degree matrix $D_{ii} = \sum_j A_{ij}$. The formulation of Eq. 1 relies on the assumption that connected nodes in the graph are likely to share the same label. This assumption, however, might restrict modeling capacity, as graph edges need not necessarily encode node similarity, but could contain additional information.

In this work, we encode the graph structure directly using a neural network model $f(X, A)$ and train on a supervised target $\mathcal{L}_0$ for all nodes with labels, thereby avoiding explicit graph-based regularization in the loss function. Conditioning $f(\cdot)$ on the adjacency matrix of the graph will allow the model to distribute gradient information from the supervised loss $\mathcal{L}_0$ and will enable it to learn representations of nodes both with and without labels.

Our contributions are two-fold. Firstly, we introduce a simple and well-behaved layer-wise propagation rule for neural network models which operate directly on graphs and show how it can be motivated from a first-order approximation of spectral graph convolutions (Hammond et al., 2011). Secondly, we demonstrate how this form of a graph-based neural network model can be used for fast and scalable semi-supervised classification of nodes in a graph. Experiments on a number of datasets demonstrate that our model compares favorably both in classification accuracy and efficiency (measured in wall-clock time) against state-of-the-art methods for semi-supervised learning.

2 FAST APPROXIMATE CONVOLUTIONS ON GRAPHS

In this section, we provide theoretical motivation for a specific graph-based neural network model $f(X, A)$ that we will use in the rest of this paper. We consider a multi-layer Graph Convolutional Network (GCN) with the following layer-wise propagation rule:

$$H^{(l+1)} = \sigma\!\left( \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)} \right). \tag{2}$$

Here, $\tilde{A} = A + I_N$ is the adjacency matrix of the undirected graph $\mathcal{G}$ with added self-connections. $I_N$ is the identity matrix, $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$ and $W^{(l)}$ is a layer-specific trainable weight matrix (a minimal sparse-matrix sketch of this propagation rule follows below).
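The following is a minimal NumPy/SciPy sketch of this propagation rule, with ReLU as the activation. It is an illustrative reading of Eq. 2, not the authors' reference TensorFlow implementation (which is available at https://github.com/tkipf/gcn); the toy adjacency matrix, feature values and weight shapes are placeholders.

```python
import numpy as np
import scipy.sparse as sp

def normalize_adjacency(adj):
    """Compute the renormalized adjacency D̃^{-1/2} (A + I) D̃^{-1/2} used in Eq. 2."""
    adj_tilde = adj + sp.eye(adj.shape[0])           # add self-connections
    deg = np.asarray(adj_tilde.sum(axis=1)).flatten()
    d_inv_sqrt = sp.diags(np.power(deg, -0.5))       # D̃^{-1/2}; degrees > 0 due to self-loops
    return (d_inv_sqrt @ adj_tilde @ d_inv_sqrt).tocsr()

def gcn_layer(adj_norm, h, weight, activation=lambda z: np.maximum(z, 0.0)):
    """One GCN layer: H^{(l+1)} = sigma(Â H^{(l)} W^{(l)})."""
    return activation(adj_norm @ h @ weight)

# Toy example: 4 nodes, 3 input features, 2 hidden units (all values are placeholders).
adj = sp.csr_matrix(np.array([[0, 1, 0, 0],
                              [1, 0, 1, 1],
                              [0, 1, 0, 0],
                              [0, 1, 0, 0]], dtype=float))
features = np.random.randn(4, 3)
weights = np.random.randn(3, 2)

adj_norm = normalize_adjacency(adj)
hidden = gcn_layer(adj_norm, features, weights)
print(hidden.shape)  # (4, 2)
```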
$\sigma(\cdot)$ denotes an activation function, such as the $\mathrm{ReLU}(\cdot) = \max(0, \cdot)$. $H^{(l)} \in \mathbb{R}^{N \times D}$ is the matrix of activations in the $l$-th layer; $H^{(0)} = X$. In the following, we show that the form of this propagation rule can be motivated[1] via a first-order approximation of localized spectral filters on graphs (Hammond et al., 2011; Defferrard et al., 2016).

2.1 SPECTRAL GRAPH CONVOLUTIONS

We consider spectral convolutions on graphs defined as the multiplication of a signal $x \in \mathbb{R}^N$ (a scalar for every node) with a filter $g_\theta = \mathrm{diag}(\theta)$ parameterized by $\theta \in \mathbb{R}^N$ in the Fourier domain, i.e.:

$$g_\theta \star x = U g_\theta U^\top x\,, \tag{3}$$

where $U$ is the matrix of eigenvectors of the normalized graph Laplacian $L = I_N - D^{-\frac{1}{2}} A D^{-\frac{1}{2}} = U \Lambda U^\top$, with $\Lambda$ a diagonal matrix of its eigenvalues and $U^\top x$ being the graph Fourier transform of $x$. We can understand $g_\theta$ as a function of the eigenvalues of $L$, i.e. $g_\theta(\Lambda)$. Evaluating Eq. 3 is computationally expensive, as multiplication with the eigenvector matrix $U$ is $\mathcal{O}(N^2)$. Furthermore, computing the eigendecomposition of $L$ in the first place might be prohibitively expensive for large graphs. To circumvent this problem, it was suggested in Hammond et al. (2011) that $g_\theta(\Lambda)$ can be well-approximated by a truncated expansion in terms of Chebyshev polynomials $T_k(x)$ up to $K$-th order:

$$g_{\theta'}(\Lambda) \approx \sum_{k=0}^{K} \theta'_k\, T_k(\tilde{\Lambda})\,, \tag{4}$$

with a rescaled $\tilde{\Lambda} = \frac{2}{\lambda_{\max}} \Lambda - I_N$. $\lambda_{\max}$ denotes the largest eigenvalue of $L$. $\theta' \in \mathbb{R}^K$ is now a vector of Chebyshev coefficients. The Chebyshev polynomials are recursively defined as $T_k(x) = 2 x T_{k-1}(x) - T_{k-2}(x)$, with $T_0(x) = 1$ and $T_1(x) = x$. The reader is referred to Hammond et al. (2011) for an in-depth discussion of this approximation.

Going back to our definition of a convolution of a signal $x$ with a filter $g_{\theta'}$, we now have:

$$g_{\theta'} \star x \approx \sum_{k=0}^{K} \theta'_k\, T_k(\tilde{L})\, x\,, \tag{5}$$

with $\tilde{L} = \frac{2}{\lambda_{\max}} L - I_N$, as can easily be verified by noticing that $(U \Lambda U^\top)^k = U \Lambda^k U^\top$. Note that this expression is now $K$-localized since it is a $K$-th-order polynomial in the Laplacian, i.e. it depends only on nodes that are at maximum $K$ steps away from the central node ($K$-th-order neighborhood). The complexity of evaluating Eq. 5 is $\mathcal{O}(|\mathcal{E}|)$, i.e. linear in the number of edges. Defferrard et al. (2016) use this $K$-localized convolution to define a convolutional neural network on graphs.

2.2 LAYER-WISE LINEAR MODEL

A neural network model based on graph convolutions can therefore be built by stacking multiple convolutional layers of the form of Eq. 5, each layer followed by a point-wise non-linearity. Now, imagine we limited the layer-wise convolution operation to $K = 1$ (see Eq. 5), i.e. a function that is linear w.r.t. $L$ and therefore a linear function on the graph Laplacian spectrum.

[1] We provide an alternative interpretation of this propagation rule based on the Weisfeiler-Lehman algorithm (Weisfeiler & Lehmann, 1968) in Appendix A.

In this way, we can still recover a rich class of convolutional filter functions by stacking multiple such layers, but we are not limited to the explicit parameterization given by, e.g., the Chebyshev polynomials. We intuitively expect that such a model can alleviate the problem of overfitting on local neighborhood structures for graphs with very wide node degree distributions, such as social networks, citation networks, knowledge graphs and many other real-world graph datasets. Additionally, for a fixed computational budget, this layer-wise linear formulation allows us to build deeper models, a practice that is known to improve modeling capacity on a number of domains (He et al., 2016).

In this linear formulation of a GCN we further approximate $\lambda_{\max} \approx 2$, as we can expect that neural network parameters will adapt to this change in scale during training. (For reference, a sketch of the full $K$-localized Chebyshev filtering of Eq. 5 is given below, before we carry out this simplification.)
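As a reference point for the simplification that follows, here is a hedged sketch of the $K$-localized Chebyshev filtering of Eq. 5. The dense-style eigenvalue computation of $\lambda_{\max}$ is only for illustration on small graphs, and the sketch assumes a symmetric adjacency matrix with no isolated nodes; see Defferrard et al. (2016) for the formulation used in practice.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def chebyshev_filter(adj, x, theta):
    """Apply g_{theta'} * x ≈ sum_k theta'_k T_k(L_tilde) x  (Eq. 5).

    adj:   N x N sparse adjacency matrix (symmetric, non-negative, no isolated nodes)
    x:     length-N signal (one scalar per node)
    theta: sequence of Chebyshev coefficients theta'_0, ..., theta'_K
    """
    n = adj.shape[0]
    deg = np.asarray(adj.sum(axis=1)).flatten()
    d_inv_sqrt = sp.diags(np.power(deg, -0.5))
    laplacian = sp.eye(n) - d_inv_sqrt @ adj @ d_inv_sqrt      # L = I - D^{-1/2} A D^{-1/2}
    lam_max = eigsh(laplacian, k=1, which='LM', return_eigenvectors=False)[0]
    l_tilde = (2.0 / lam_max) * laplacian - sp.eye(n)          # rescaled Laplacian

    # Chebyshev recurrence: T_0(x) = 1, T_1(x) = x, T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x)
    t_prev, t_curr = x, l_tilde @ x
    out = theta[0] * t_prev
    if len(theta) > 1:
        out = out + theta[1] * t_curr
    for k in range(2, len(theta)):
        t_next = 2.0 * (l_tilde @ t_curr) - t_prev
        out = out + theta[k] * t_next
        t_prev, t_curr = t_curr, t_next
    return out
```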
Under these approximations, Eq. 5 simplifies to:

$$g_{\theta'} \star x \approx \theta'_0 x + \theta'_1 \left( L - I_N \right) x = \theta'_0 x - \theta'_1 D^{-\frac{1}{2}} A D^{-\frac{1}{2}} x\,, \tag{6}$$

with two free parameters $\theta'_0$ and $\theta'_1$. The filter parameters can be shared over the whole graph. Successive application of filters of this form then effectively convolve the $k$-th-order neighborhood of a node, where $k$ is the number of successive filtering operations or convolutional layers in the neural network model.

In practice, it can be beneficial to constrain the number of parameters further to address overfitting and to minimize the number of operations (such as matrix multiplications) per layer. This leaves us with the following expression:

$$g_\theta \star x \approx \theta \left( I_N + D^{-\frac{1}{2}} A D^{-\frac{1}{2}} \right) x\,, \tag{7}$$

with a single parameter $\theta = \theta'_0 = -\theta'_1$. Note that $I_N + D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$ now has eigenvalues in the range $[0, 2]$. Repeated application of this operator can therefore lead to numerical instabilities and exploding/vanishing gradients when used in a deep neural network model. To alleviate this problem, we introduce the following renormalization trick: $I_N + D^{-\frac{1}{2}} A D^{-\frac{1}{2}} \rightarrow \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}$, with $\tilde{A} = A + I_N$ and $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$.

We can generalize this definition to a signal $X \in \mathbb{R}^{N \times C}$ with $C$ input channels (i.e. a $C$-dimensional feature vector for every node) and $F$ filters or feature maps as follows:

$$Z = \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} X \Theta\,, \tag{8}$$

where $\Theta \in \mathbb{R}^{C \times F}$ is now a matrix of filter parameters and $Z \in \mathbb{R}^{N \times F}$ is the convolved signal matrix. This filtering operation has complexity $\mathcal{O}(|\mathcal{E}| F C)$, as $\tilde{A} X$ can be efficiently implemented as a product of a sparse matrix with a dense matrix.

3 SEMI-SUPERVISED NODE CLASSIFICATION

Having introduced a simple, yet flexible model $f(X, A)$ for efficient information propagation on graphs, we can return to the problem of semi-supervised node classification. As outlined in the introduction, we can relax certain assumptions typically made in graph-based semi-supervised learning by conditioning our model $f(X, A)$ both on the data $X$ and on the adjacency matrix $A$ of the underlying graph structure. We expect this setting to be especially powerful in scenarios where the adjacency matrix contains information not present in the data $X$, such as citation links between documents in a citation network or relations in a knowledge graph. The overall model, a multi-layer GCN for semi-supervised learning, is schematically depicted in Figure 1.

3.1 EXAMPLE

In the following, we consider a two-layer GCN for semi-supervised node classification on a graph with a symmetric adjacency matrix $A$ (binary or weighted). We first calculate $\hat{A} = \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}$ in a pre-processing step. Our forward model then takes the simple form:

$$Z = f(X, A) = \mathrm{softmax}\!\left( \hat{A}\, \mathrm{ReLU}\!\left( \hat{A} X W^{(0)} \right) W^{(1)} \right). \tag{9}$$

Figure 1: Left (a): Schematic depiction of multi-layer Graph Convolutional Network (GCN) for semi-supervised learning with $C$ input channels and $F$ feature maps in the output layer. The graph structure (edges shown as black lines) is shared over layers, labels are denoted by $Y_i$. Right (b): t-SNE (Maaten & Hinton, 2008) visualization of hidden layer activations of a two-layer GCN trained on the Cora dataset (Sen et al., 2008) using 5% of labels. Colors denote document class.

Here, $W^{(0)} \in \mathbb{R}^{C \times H}$ is an input-to-hidden weight matrix for a hidden layer with $H$ feature maps. $W^{(1)} \in \mathbb{R}^{H \times F}$ is a hidden-to-output weight matrix. The softmax activation function, defined as $\mathrm{softmax}(x_i) = \frac{1}{\mathcal{Z}} \exp(x_i)$ with $\mathcal{Z} = \sum_i \exp(x_i)$, is applied row-wise. (A minimal sketch of this forward model, together with the masked loss introduced next, follows below.)
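The sketch below walks through the two-layer forward model of Eq. 9 and the masked cross-entropy loss defined in the next paragraph (Eq. 10), again in NumPy. It is illustrative only, not the authors' TensorFlow implementation; `adj_norm` stands for the precomputed $\hat{A}$ (for instance from the `normalize_adjacency` sketch above), and all shapes are placeholders.

```python
import numpy as np

def row_softmax(z):
    """Row-wise softmax, applied to the output layer of the GCN."""
    z = z - z.max(axis=1, keepdims=True)          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def two_layer_gcn(adj_norm, x, w0, w1):
    """Z = softmax(Â ReLU(Â X W0) W1)  (Eq. 9), with Â the renormalized adjacency."""
    hidden = np.maximum(adj_norm @ x @ w0, 0.0)   # first GCN layer + ReLU
    return row_softmax(adj_norm @ hidden @ w1)    # second GCN layer + row-wise softmax

def masked_cross_entropy(probs, labels_onehot, labeled_idx):
    """L = -sum_{l in Y_L} sum_f Y_lf ln Z_lf  (Eq. 10), evaluated on labeled nodes only."""
    logp = np.log(probs[labeled_idx] + 1e-12)     # small constant avoids log(0)
    return -np.sum(labels_onehot[labeled_idx] * logp)
```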
For semi-supervised multi-class classification, we then evaluate the cross-entropy error over all labeled examples:

$$\mathcal{L} = -\sum_{l \in \mathcal{Y}_L} \sum_{f=1}^{F} Y_{lf} \ln Z_{lf}\,, \tag{10}$$

where $\mathcal{Y}_L$ is the set of node indices that have labels.

The neural network weights $W^{(0)}$ and $W^{(1)}$ are trained using gradient descent. In this work, we perform batch gradient descent using the full dataset for every training iteration, which is a viable option as long as datasets fit in memory. Using a sparse representation for $A$, memory requirement is $\mathcal{O}(|\mathcal{E}|)$, i.e. linear in the number of edges. Stochasticity in the training process is introduced via dropout (Srivastava et al., 2014). We leave memory-efficient extensions with mini-batch stochastic gradient descent for future work.

3.2 IMPLEMENTATION

In practice, we make use of TensorFlow (Abadi et al., 2015) for an efficient GPU-based implementation[2] of Eq. 9 using sparse-dense matrix multiplications. The computational complexity of evaluating Eq. 9 is then $\mathcal{O}(|\mathcal{E}| C H F)$, i.e. linear in the number of graph edges.

[2] Code to reproduce our experiments is available at https://github.com/tkipf/gcn.

4 RELATED WORK

Our model draws inspiration both from the field of graph-based semi-supervised learning and from recent work on neural networks that operate on graphs. In what follows, we provide a brief overview on related work in both fields.

4.1 GRAPH-BASED SEMI-SUPERVISED LEARNING

A large number of approaches for semi-supervised learning using graph representations have been proposed in recent years, most of which fall into two broad categories: methods that use some form of explicit graph Laplacian regularization and graph embedding-based approaches. Prominent examples for graph Laplacian regularization include label propagation (Zhu et al., 2003), manifold regularization (Belkin et al., 2006) and deep semi-supervised embedding (Weston et al., 2012).

Recently, attention has shifted to models that learn graph embeddings with methods inspired by the skip-gram model (Mikolov et al., 2013). DeepWalk (Perozzi et al., 2014) learns embeddings via the prediction of the local neighborhood of nodes, sampled from random walks on the graph. LINE (Tang et al., 2015) and node2vec (Grover & Leskovec, 2016) extend DeepWalk with more sophisticated random walk or breadth-first search schemes. For all these methods, however, a multi-step pipeline including random walk generation and semi-supervised training is required, where each step has to be optimized separately. Planetoid (Yang et al., 2016) alleviates this by injecting label information in the process of learning embeddings.

4.2 NEURAL NETWORKS ON GRAPHS

Neural networks that operate on graphs have previously been introduced in Gori et al. (2005); Scarselli et al. (2009) as a form of recurrent neural network. Their framework requires the repeated application of contraction maps as propagation functions until node representations reach a stable fixed point. This restriction was later alleviated in Li et al. (2016) by introducing modern practices for recurrent neural network training to the original graph neural network framework. Duvenaud et al. (2015) introduced a convolution-like propagation rule on graphs and methods for graph-level classification. Their approach requires learning node-degree-specific weight matrices, which does not scale to large graphs with wide node degree distributions.
Our model instead uses a single weight matrix per layer and deals with varying node degrees through an appropriate normalization of the adjacency matrix (see Section 3.1).

A related approach to node classification with a graph-based neural network was recently introduced in Atwood & Towsley (2016). They report $\mathcal{O}(N^2)$ complexity, limiting the range of possible applications. In a different yet related model, Niepert et al. (2016) convert graphs locally into sequences that are fed into a conventional 1D convolutional neural network, which requires the definition of a node ordering in a pre-processing step.

Our method is based on spectral graph convolutional neural networks, introduced in Bruna et al. (2014) and later extended by Defferrard et al. (2016) with fast localized convolutions. In contrast to these works, we consider here the task of transductive node classification within networks of significantly larger scale. We show that in this setting, a number of simplifications (see Section 2.2) can be introduced to the original frameworks of Bruna et al. (2014) and Defferrard et al. (2016) that improve scalability and classification performance in large-scale networks.

5 EXPERIMENTS

We test our model in a number of experiments: semi-supervised document classification in citation networks, semi-supervised entity classification in a bipartite graph extracted from a knowledge graph, an evaluation of various graph propagation models and a run-time analysis on random graphs.

5.1 DATASETS

We closely follow the experimental setup in Yang et al. (2016). Dataset statistics are summarized in Table 1. In the citation network datasets—Citeseer, Cora and Pubmed (Sen et al., 2008)—nodes are documents and edges are citation links. Label rate denotes the number of labeled nodes that are used for training divided by the total number of nodes in each dataset. NELL (Carlson et al., 2010; Yang et al., 2016) is a bipartite graph dataset extracted from a knowledge graph with 55,864 relation nodes and 9,891 entity nodes.

Table 1: Dataset statistics, as reported in Yang et al. (2016).

Dataset  | Type             | Nodes  | Edges   | Classes | Features | Label rate
Citeseer | Citation network | 3,327  | 4,732   | 6       | 3,703    | 0.036
Cora     | Citation network | 2,708  | 5,429   | 7       | 1,433    | 0.052
Pubmed   | Citation network | 19,717 | 44,338  | 3       | 500      | 0.003
NELL     | Knowledge graph  | 65,755 | 266,144 | 210     | 5,414    | 0.001

Citation networks. We consider three citation network datasets: Citeseer, Cora and Pubmed (Sen et al., 2008). The datasets contain sparse bag-of-words feature vectors for each document and a list of citation links between documents. We treat the citation links as (undirected) edges and construct a binary, symmetric adjacency matrix $A$. Each document has a class label. For training, we only use 20 labels per class, but all feature vectors.

NELL. NELL is a dataset extracted from the knowledge graph introduced in Carlson et al. (2010). A knowledge graph is a set of entities connected with directed, labeled edges (relations). We follow the pre-processing scheme described in Yang et al. (2016). We assign separate relation nodes $r_1$ and $r_2$ for each entity pair $(e_1, r, e_2)$ as $(e_1, r_1)$ and $(e_2, r_2)$. Entity nodes are described by sparse feature vectors. We extend the number of features in NELL by assigning a unique one-hot representation for every relation node, effectively resulting in a 61,278-dim sparse feature vector per node. The semi-supervised task here considers the extreme case of only a single labeled example per class in the training set. (A hypothetical sketch of this relation-splitting step is given below.)
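The following is a hypothetical sketch of one possible reading of this relation-splitting step; the naming scheme for the relation nodes and the triple format are illustrative assumptions, and the actual pipeline follows Yang et al. (2016).

```python
def split_relations(triples):
    """One possible reading of the relation-splitting rule described above.

    Each (e1, r, e2) entity pair gets its own pair of relation nodes r1 and r2,
    connected to e1 and e2 respectively; identifiers are illustrative only.
    """
    edges = []
    for idx, (e1, r, e2) in enumerate(triples):
        r1 = f"{r}#{idx}_1"
        r2 = f"{r}#{idx}_2"
        edges.append((e1, r1))
        edges.append((e2, r2))
    return edges

# Toy example with a made-up triple.
print(split_relations([("london", "locatedIn", "uk")]))
# [('london', 'locatedIn#0_1'), ('uk', 'locatedIn#0_2')]
```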
We construct a binary, symmetric adjacency matrix from this graph by setting entries $A_{ij} = 1$ if one or more edges are present between nodes $i$ and $j$.

Random graphs. We simulate random graph datasets of various sizes for experiments where we measure training time per epoch. For a dataset with $N$ nodes we create a random graph assigning $2N$ edges uniformly at random. We take the identity matrix $I_N$ as input feature matrix $X$, thereby implicitly taking a featureless approach where the model is only informed about the identity of each node, specified by a unique one-hot vector. We add dummy labels $Y_i = 1$ for every node.

5.2 EXPERIMENTAL SET-UP

Unless otherwise noted, we train a two-layer GCN as described in Section 3.1 and evaluate prediction accuracy on a test set of 1,000 labeled examples. We provide additional experiments using deeper models with up to 10 layers in Appendix B. We choose the same dataset splits as in Yang et al. (2016) with an additional validation set of 500 labeled examples for hyperparameter optimization (dropout rate for all layers, L2 regularization factor for the first GCN layer and number of hidden units). We do not use the validation set labels for training.

For the citation network datasets, we optimize hyperparameters on Cora only and use the same set of parameters for Citeseer and Pubmed. We train all models for a maximum of 200 epochs (training iterations) using Adam (Kingma & Ba, 2015) with a learning rate of 0.01 and early stopping with a window size of 10, i.e. we stop training if the validation loss does not decrease for 10 consecutive epochs. We initialize weights using the initialization described in Glorot & Bengio (2010) and accordingly (row-)normalize input feature vectors. On the random graph datasets, we use a hidden layer size of 32 units and omit regularization (i.e. neither dropout nor L2 regularization).

5.3 BASELINES

We compare against the same baseline methods as in Yang et al. (2016), i.e. label propagation (LP) (Zhu et al., 2003), semi-supervised embedding (SemiEmb) (Weston et al., 2012), manifold regularization (ManiReg) (Belkin et al., 2006) and skip-gram based graph embeddings (DeepWalk) (Perozzi et al., 2014). We omit TSVM (Joachims, 1999), as it does not scale to the large number of classes in one of our datasets.

We further compare against the iterative classification algorithm (ICA) proposed in Lu & Getoor (2003) in conjunction with two logistic regression classifiers, one for local node features alone and one for relational classification using local features and an aggregation operator as described in Sen et al. (2008). We first train the local classifier using all labeled training set nodes and use it to bootstrap class labels of unlabeled nodes for relational classifier training. We run iterative classification (relational classifier) with a random node ordering for 10 iterations on all unlabeled nodes (bootstrapped using the local classifier). The L2 regularization parameter and aggregation operator (count vs. prop, see Sen et al. (2008)) are chosen based on validation set performance for each dataset separately.

Lastly, we compare against Planetoid (Yang et al., 2016), where we always choose their best-performing model variant (transductive vs. inductive) as a baseline.

6 RESULTS

6.1 SEMI-SUPERVISED NODE CLASSIFICATION

Results are summarized in Table 2. Reported numbers denote classification accuracy in percent. For ICA, we report the mean accuracy of 100 runs with random node orderings.
Results for all other baseline methods are taken from the Planetoid paper (Yang et al., 2016). Planetoid* denotes the best model for the respective dataset out of the variants presented in their paper.

Table 2: Summary of results in terms of classification accuracy (in percent).

Method             | Citeseer   | Cora       | Pubmed     | NELL
ManiReg [3]        | 60.1       | 59.5       | 70.7       | 21.8
SemiEmb [28]       | 59.6       | 59.0       | 71.1       | 26.7
LP [32]            | 45.3       | 68.0       | 63.0       | 26.5
DeepWalk [22]      | 43.2       | 67.2       | 65.3       | 58.1
ICA [18]           | 69.1       | 75.1       | 73.9       | 23.1
Planetoid* [29]    | 64.7 (26s) | 75.7 (13s) | 77.2 (25s) | 61.9 (185s)
GCN (this paper)   | 70.3 (7s)  | 81.5 (4s)  | 79.0 (38s) | 66.0 (48s)
GCN (rand. splits) | 67.9 ± 0.5 | 80.1 ± 0.5 | 78.9 ± 0.7 | 58.4 ± 1.7

We further report wall-clock training time in seconds until convergence (in brackets) for our method (incl. evaluation of validation error) and for Planetoid. For the latter, we used an implementation provided by the authors[3] and trained on the same hardware (with GPU) as our GCN model. We trained and tested our model on the same dataset splits as in Yang et al. (2016) and report mean accuracy of 100 runs with random weight initializations. We used the following sets of hyperparameters for Citeseer, Cora and Pubmed: 0.5 (dropout rate), $5 \cdot 10^{-4}$ (L2 regularization) and 16 (number of hidden units); and for NELL: 0.1 (dropout rate), $1 \cdot 10^{-5}$ (L2 regularization) and 64 (number of hidden units).

In addition, we report performance of our model on 10 randomly drawn dataset splits of the same size as in Yang et al. (2016), denoted by GCN (rand. splits). Here, we report mean and standard error of prediction accuracy on the test set split in percent.

[3] https://github.com/kimiyoung/planetoid

6.2 EVALUATION OF PROPAGATION MODEL

We compare different variants of our proposed per-layer propagation model on the citation network datasets. We follow the experimental set-up described in the previous section. Results are summarized in Table 3. The propagation model of our original GCN model is denoted by renormalization trick (in bold). In all other cases, the propagation model of both neural network layers is replaced with the model specified under propagation model. Reported numbers denote mean classification accuracy for 100 repeated runs with random weight matrix initializations. In case of multiple variables $\Theta_i$ per layer, we impose L2 regularization on all weight matrices of the first layer.

Table 3: Comparison of propagation models.

Description                     | Propagation model                                       | Citeseer | Cora | Pubmed
Chebyshev filter (Eq. 5), K = 3 | $\sum_{k=0}^{K} T_k(\tilde{L}) X \Theta_k$              | 69.8     | 79.5 | 74.4
Chebyshev filter (Eq. 5), K = 2 | (as above)                                              | 69.6     | 81.2 | 73.8
1st-order model (Eq. 6)         | $X \Theta_0 + D^{-1/2} A D^{-1/2} X \Theta_1$           | 68.3     | 80.0 | 77.5
Single parameter (Eq. 7)        | $(I_N + D^{-1/2} A D^{-1/2}) X \Theta$                  | 69.3     | 79.2 | 77.4
Renormalization trick (Eq. 8)   | $\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} X \Theta$  | 70.3     | 81.5 | 79.0
1st-order term only             | $D^{-1/2} A D^{-1/2} X \Theta$                          | 68.7     | 80.5 | 77.8
Multi-layer perceptron          | $X \Theta$                                              | 46.5     | 55.1 | 71.4

6.3 TRAINING TIME PER EPOCH

[Figure 2: Wall-clock time per epoch (seconds, log scale) for random graphs with 1k to 10M edges, GPU vs. CPU. (*) indicates an out-of-memory error.]

Here, we report results for the mean training time per epoch (forward pass, cross-entropy calculation, backward pass) for 100 epochs on simulated random graphs, measured in seconds wall-clock time. See Section 5.1 for a detailed description of the random graph dataset used in these experiments. We compare results on a GPU and on a CPU-only implementation[4] in TensorFlow (Abadi et al., 2015). Figure 2 summarizes the results.

7 DISCUSSION

7.1 SEMI-SUPERVISED MODEL

In the experiments demonstrated here, our method for semi-supervised node classification outperforms recent related methods by a significant margin.
Methods based on graph-Laplacian regularization (Zhu et al., 2003; Belkin et al., 2006; Weston et al., 2012) are most likely limited due to their assumption that edges encode mere similarity of nodes. Skip-gram based methods, on the other hand, are limited by the fact that they are based on a multi-step pipeline which is difficult to optimize. Our proposed model can overcome both limitations, while still comparing favorably in terms of efficiency (measured in wall-clock time) to related methods. Propagation of feature information from neighboring nodes in every layer improves classification performance in comparison to methods like ICA (Lu & Getoor, 2003), where only label information is aggregated.

We have further demonstrated that the proposed renormalized propagation model (Eq. 8) offers both improved efficiency (fewer parameters and operations, such as multiplication or addition) and better predictive performance on a number of datasets compared to a naïve 1st-order model (Eq. 6) or higher-order graph convolutional models using Chebyshev polynomials (Eq. 5).

7.2 LIMITATIONS AND FUTURE WORK

Here, we describe several limitations of our current model and outline how these might be overcome in future work.

Memory requirement. In the current setup with full-batch gradient descent, memory requirement grows linearly in the size of the dataset. We have shown that for large graphs that do not fit in GPU memory, training on CPU can still be a viable option. Mini-batch stochastic gradient descent can alleviate this issue. The procedure of generating mini-batches, however, should take into account the number of layers in the GCN model, as the $K$-th-order neighborhood for a GCN with $K$ layers has to be stored in memory for an exact procedure. For very large and densely connected graph datasets, further approximations might be necessary.

Directed edges and edge features. Our framework currently does not naturally support edge features and is limited to undirected graphs (weighted or unweighted). Results on NELL however show that it is possible to handle both directed edges and edge features by representing the original directed graph as an undirected bipartite graph with additional nodes that represent edges in the original graph (see Section 5.1 for details).

Limiting assumptions. Through the approximations introduced in Section 2, we implicitly assume locality (dependence on the $K$-th-order neighborhood for a GCN with $K$ layers) and equal importance of self-connections vs. edges to neighboring nodes. For some datasets, however, it might be beneficial to introduce a trade-off parameter $\lambda$ in the definition of $\tilde{A}$:

$$\tilde{A} = A + \lambda I_N. \tag{11}$$

This parameter now plays a similar role as the trade-off parameter between supervised and unsupervised loss in the typical semi-supervised setting (see Eq. 1). Here, however, it can be learned via gradient descent.

[4] Hardware used: 16-core Intel Xeon CPU E5-2640 v3 @ 2.60 GHz, GeForce GTX TITAN X.

8 CONCLUSION

We have introduced a novel approach for semi-supervised classification on graph-structured data. Our GCN model uses an efficient layer-wise propagation rule that is based on a first-order approximation of spectral convolutions on graphs. Experiments on a number of network datasets suggest that the proposed GCN model is capable of encoding both graph structure and node features in a way useful for semi-supervised classification.
In this setting, our model outperforms several recently proposed methods by a significant margin, while being computationally efficient.

ACKNOWLEDGMENTS

We would like to thank Christos Louizos, Taco Cohen, Joan Bruna, Zhilin Yang, Dave Herman, Pramod Sinha and Abdul-Saboor Sheikh for helpful discussions. This research was funded by SAP.
HJ3LKSSEg
7: Good paper, accept
The paper introduces a method for semi-supervised learning in graphs that exploits the spectral structure of the graph in a convolutional NN implementation. The proposed algorithm has a limited complexity and it is shown to scale well on a large dataset. The comparison with baselines on different datasets show a clear jump of performance with the proposed method. The paper is technically fine and clear, the algorithm seems to scale well, and the results on the different datasets compare very favorably with the different baselines. The algorithm is simple and training seems easy. Concerning the originality, the proposed algorithm is a simple adaptation of graph convolutional networks (ref Defferrard 2016 in the paper) to a semi-supervised transductive setting. This is clearly mentioned in the paper, but the authors could better highlight the differences and novelty wrt this reference paper. Also, there is no comparison with the family of iterative classifiers, which usually compare favorably, both in performance and training time, with regularization based approaches, although they are mostly used in inductive settings. Below are some references for this family of methods. The authors mention that more complex filters could be learned by stacking layers but they limit their architecture to one hidden layer. They should comment on the interest of using more layers for graph classification. Some references on iterative classification Qing Lu and Lise Getoor. 2003. Link-based classification. In ICML, Vol. 3. 496–503. Gideon S Mann and Andrew McCallum. 2010. Generalized expectation criteria for semi-supervised learning with weakly labeled data. The Journal of Machine Learning Research 11 (2010), 955–984. David Jensen, Jennifer Neville, and Brian Gallagher. 2004. Why collective inference improves relational classification. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 593–598. Joseph J Pfeiffer III, Jennifer Neville, and Paul N Bennett. 2015. Overcoming Relational Learning Biases to Accurately Predict Preferences in Large Scale Networks. In Proceedings of the 24th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee, 853– 863. Stephane Peters, Ludovic Denoyer, and Patrick Gallinari. 2010. Iterative annotation of multi-relational social networks. In Advances in Social Networks Analysis and Mining (ASONAM), 2010 International Conference on. IEEE, 96–103.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
SJU4ayYgl
ICLR.cc/2017/conference
2017
Semi-Supervised Classification with Graph Convolutional Networks
["Thomas N. Kipf", "Max Welling"]
We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.
["Deep learning", "Semi-Supervised Learning"]
ABSTRACTWe present a scalable approach for semi-supervised learning on graph-structureddata that is based on an efficient variant of convolutional neural networks whichoperate directly on graphs. We motivate the choice of our convolutional archi-tecture via a localized first-order approximation of spectral graph convolutions.Our model scales linearly in the number of graph edges and learns hidden layerrepresentations that encode both local graph structure and features of nodes. Ina number of experiments on citation networks and on a knowledge graph datasetwe demonstrate that our approach outperforms related methods by a significantmargin.1 I NTRODUCTIONWe consider the problem of classifying nodes (such as documents) in a graph (such as a citationnetwork), where labels are only available for a small subset of nodes. This problem can be framedas graph-based semi-supervised learning, where label information is smoothed over the graph viasome form of explicit graph-based regularization (Zhu et al., 2003; Zhou et al., 2004; Belkin et al.,2006; Weston et al., 2012), e.g. by using a graph Laplacian regularization term in the loss function:L=L0+Lreg;withLreg=Xi;jAijkf(Xi)f(Xj)k2=f(X)>f(X): (1)Here,L0denotes the supervised loss w.r.t. the labeled part of the graph, f()can be a neural network-like differentiable function, is a weighing factor and Xis a matrix of node feature vectors Xi. =DAdenotes the unnormalized graph Laplacian of an undirected graph G= (V;E)withNnodesvi2V, edges (vi;vj)2E, an adjacency matrix A2RNN(binary or weighted) anda degree matrix Dii=PjAij. The formulation of Eq. 1 relies on the assumption that connectednodes in the graph are likely to share the same label. This assumption, however, might restrictmodeling capacity, as graph edges need not necessarily encode node similarity, but could containadditional information.In this work, we encode the graph structure directly using a neural network model f(X;A)andtrain on a supervised target L0for all nodes with labels, thereby avoiding explicit graph-basedregularization in the loss function. Conditioning f()on the adjacency matrix of the graph willallow the model to distribute gradient information from the supervised loss L0and will enable it tolearn representations of nodes both with and without labels.Our contributions are two-fold. Firstly, we introduce a simple and well-behaved layer-wise prop-agation rule for neural network models which operate directly on graphs and show how it can bemotivated from a first-order approximation of spectral graph convolutions (Hammond et al., 2011).Secondly, we demonstrate how this form of a graph-based neural network model can be used forfast and scalable semi-supervised classification of nodes in a graph. Experiments on a number ofdatasets demonstrate that our model compares favorably both in classification accuracy and effi-ciency (measured in wall-clock time) against state-of-the-art methods for semi-supervised learning.1Published as a conference paper at ICLR 20172 F AST APPROXIMATE CONVOLUTIONS ON GRAPHSIn this section, we provide theoretical motivation for a specific graph-based neural network modelf(X;A)that we will use in the rest of this paper. We consider a multi-layer Graph ConvolutionalNetwork (GCN) with the following layer-wise propagation rule:H(l+1)=~D12~A~D12H(l)W(l): (2)Here, ~A=A+INis the adjacency matrix of the undirected graph Gwith added self-connections.INis the identity matrix, ~Dii=Pj~AijandW(l)is a layer-specific trainable weight matrix. 
()denotes an activation function, such as the ReLU() = max(0;).H(l)2RNDis the matrix of ac-tivations in the lthlayer;H(0)=X. In the following, we show that the form of this propagation rulecan be motivated1via a first-order approximation of localized spectral filters on graphs (Hammondet al., 2011; Defferrard et al., 2016).2.1 S PECTRAL GRAPH CONVOLUTIONSWe consider spectral convolutions on graphs defined as the multiplication of a signal x2RN(ascalar for every node) with a filter g=diag()parameterized by 2RNin the Fourier domain,i.e.:g?x=UgU>x; (3)whereUis the matrix of eigenvectors of the normalized graph Laplacian L=IND12AD12=UU>, with a diagonal matrix of its eigenvalues andU>xbeing the graph Fourier transformofx. We can understand gas a function of the eigenvalues of L, i.e.g(). Evaluating Eq. 3 iscomputationally expensive, as multiplication with the eigenvector matrix UisO(N2). Furthermore,computing the eigendecomposition of Lin the first place might be prohibitively expensive for largegraphs. To circumvent this problem, it was suggested in Hammond et al. (2011) that g()can bewell-approximated by a truncated expansion in terms of Chebyshev polynomials Tk(x)up toKthorder:g0()KXk=00kTk(~); (4)with a rescaled ~ =2maxIN.maxdenotes the largest eigenvalue of L.02RKis now avector of Chebyshev coefficients. The Chebyshev polynomials are recursively defined as Tk(x) =2xTk1(x)Tk2(x), withT0(x) = 1 andT1(x) =x. The reader is referred to Hammond et al.(2011) for an in-depth discussion of this approximation.Going back to our definition of a convolution of a signal xwith a filterg0, we now have:g0?xKXk=00kTk(~L)x; (5)with ~L=2maxLIN; as can easily be verified by noticing that (UU>)k=UkU>. Note thatthis expression is now K-localized since it is a Kth-order polynomial in the Laplacian, i.e. it dependsonly on nodes that are at maximum Ksteps away from the central node ( Kth-order neighborhood).The complexity of evaluating Eq. 5 is O(jEj), i.e. linear in the number of edges. Defferrard et al.(2016) use this K-localized convolution to define a convolutional neural network on graphs.2.2 L AYER -WISELINEAR MODELA neural network model based on graph convolutions can therefore be built by stacking multipleconvolutional layers of the form of Eq. 5, each layer followed by a point-wise non-linearity. Now,imagine we limited the layer-wise convolution operation to K= 1(see Eq. 5), i.e. a function that islinear w.r.t.Land therefore a linear function on the graph Laplacian spectrum.1We provide an alternative interpretation of this propagation rule based on the Weisfeiler-Lehman algorithm(Weisfeiler & Lehmann, 1968) in Appendix A.2Published as a conference paper at ICLR 2017In this way, we can still recover a rich class of convolutional filter functions by stacking multiplesuch layers, but we are not limited to the explicit parameterization given by, e.g., the Chebyshevpolynomials. We intuitively expect that such a model can alleviate the problem of overfitting onlocal neighborhood structures for graphs with very wide node degree distributions, such as socialnetworks, citation networks, knowledge graphs and many other real-world graph datasets. Addition-ally, for a fixed computational budget, this layer-wise linear formulation allows us to build deepermodels, a practice that is known to improve modeling capacity on a number of domains (He et al.,2016).In this linear formulation of a GCN we further approximate max2, as we can expect that neuralnetwork parameters will adapt to this change in scale during training. 
Under these approximationsEq. 5 simplifies to:g0?x00x+01(LIN)x=00x01D12AD12x; (6)with two free parameters 00and01. The filter parameters can be shared over the whole graph.Successive application of filters of this form then effectively convolve the kth-order neighborhood ofa node, where kis the number of successive filtering operations or convolutional layers in the neuralnetwork model.In practice, it can be beneficial to constrain the number of parameters further to address overfittingand to minimize the number of operations (such as matrix multiplications) per layer. This leaves uswith the following expression:g?xIN+D12AD12x; (7)with a single parameter =00=01. Note that IN+D12AD12now has eigenvalues inthe range [0;2]. Repeated application of this operator can therefore lead to numerical instabilitiesand exploding/vanishing gradients when used in a deep neural network model. To alleviate thisproblem, we introduce the following renormalization trick :IN+D12AD12!~D12~A~D12, with~A=A+INand~Dii=Pj~Aij.We can generalize this definition to a signal X2RNCwithCinput channels (i.e. a C-dimensionalfeature vector for every node) and Ffilters or feature maps as follows:Z=~D12~A~D12X; (8)where 2RCFis now a matrix of filter parameters and Z2RNFis the convolved signalmatrix. This filtering operation has complexity O(jEjFC), as~AX can be efficiently implementedas a product of a sparse matrix with a dense matrix.3 S EMI-SUPERVISED NODE CLASSIFICATIONHaving introduced a simple, yet flexible model f(X;A)for efficient information propagation ongraphs, we can return to the problem of semi-supervised node classification. As outlined in the in-troduction, we can relax certain assumptions typically made in graph-based semi-supervised learn-ing by conditioning our model f(X;A)both on the data Xand on the adjacency matrix Aof theunderlying graph structure. We expect this setting to be especially powerful in scenarios where theadjacency matrix contains information not present in the data X, such as citation links between doc-uments in a citation network or relations in a knowledge graph. The overall model, a multi-layerGCN for semi-supervised learning, is schematically depicted in Figure 1.3.1 E XAMPLEIn the following, we consider a two-layer GCN for semi-supervised node classification on a graphwith a symmetric adjacency matrix A(binary or weighted). We first calculate ^A=~D12~A~D12ina pre-processing step. Our forward model then takes the simple form:Z=f(X;A) = softmax^AReLU^AXW(0)W(1): (9)3Published as a conference paper at ICLR 2017Cinput layerX1X2X3X4Foutput layerZ1Z2Z3Z4hiddenlayersY1Y41(a) Graph Convolutional Network30 20 10 0 10 20 303020100102030 (b) Hidden layer activationsFigure 1: Left: Schematic depiction of multi-layer Graph Convolutional Network (GCN) for semi-supervised learning with Cinput channels and Ffeature maps in the output layer. The graph struc-ture (edges shown as black lines) is shared over layers, labels are denoted by Yi.Right : t-SNE(Maaten & Hinton, 2008) visualization of hidden layer activations of a two-layer GCN trained onthe Cora dataset (Sen et al., 2008) using 5%of labels. Colors denote document class.Here,W(0)2RCHis an input-to-hidden weight matrix for a hidden layer with Hfeature maps.W(1)2RHFis a hidden-to-output weight matrix. The softmax activation function, defined assoftmax(xi) =1Zexp(xi)withZ=Piexp(xi), is applied row-wise. 
For semi-supervised multi-class classification, we then evaluate the cross-entropy error over all labeled examples:L=Xl2YLFXf=1YlflnZlf; (10)whereYLis the set of node indices that have labels.The neural network weights W(0)andW(1)are trained using gradient descent. In this work, weperform batch gradient descent using the full dataset for every training iteration, which is a viableoption as long as datasets fit in memory. Using a sparse representation for A, memory requirementisO(jEj), i.e. linear in the number of edges. Stochasticity in the training process is introduced viadropout (Srivastava et al., 2014). We leave memory-efficient extensions with mini-batch stochasticgradient descent for future work.3.2 I MPLEMENTATIONIn practice, we make use of TensorFlow (Abadi et al., 2015) for an efficient GPU-based imple-mentation2of Eq. 9 using sparse-dense matrix multiplications. The computational complexity ofevaluating Eq. 9 is then O(jEjCHF ), i.e. linear in the number of graph edges.4 R ELATED WORKOur model draws inspiration both from the field of graph-based semi-supervised learning and fromrecent work on neural networks that operate on graphs. In what follows, we provide a brief overviewon related work in both fields.4.1 G RAPH -BASED SEMI-SUPERVISED LEARNINGA large number of approaches for semi-supervised learning using graph representations have beenproposed in recent years, most of which fall into two broad categories: methods that use someform of explicit graph Laplacian regularization and graph embedding-based approaches. Prominentexamples for graph Laplacian regularization include label propagation (Zhu et al., 2003), manifoldregularization (Belkin et al., 2006) and deep semi-supervised embedding (Weston et al., 2012).2Code to reproduce our experiments is available at https://github.com/tkipf/gcn .4Published as a conference paper at ICLR 2017Recently, attention has shifted to models that learn graph embeddings with methods inspired bythe skip-gram model (Mikolov et al., 2013). DeepWalk (Perozzi et al., 2014) learns embeddingsvia the prediction of the local neighborhood of nodes, sampled from random walks on the graph.LINE (Tang et al., 2015) and node2vec (Grover & Leskovec, 2016) extend DeepWalk with moresophisticated random walk or breadth-first search schemes. For all these methods, however, a multi-step pipeline including random walk generation and semi-supervised training is required where eachstep has to be optimized separately. Planetoid (Yang et al., 2016) alleviates this by injecting labelinformation in the process of learning embeddings.4.2 N EURAL NETWORKS ON GRAPHSNeural networks that operate on graphs have previously been introduced in Gori et al. (2005);Scarselli et al. (2009) as a form of recurrent neural network. Their framework requires the repeatedapplication of contraction maps as propagation functions until node representations reach a stablefixed point. This restriction was later alleviated in Li et al. (2016) by introducing modern practicesfor recurrent neural network training to the original graph neural network framework. Duvenaudet al. (2015) introduced a convolution-like propagation rule on graphs and methods for graph-levelclassification. Their approach requires to learn node degree-specific weight matrices which does notscale to large graphs with wide node degree distributions. 
Our model instead uses a single weightmatrix per layer and deals with varying node degrees through an appropriate normalization of theadjacency matrix (see Section 3.1).A related approach to node classification with a graph-based neural network was recently introducedin Atwood & Towsley (2016). They report O(N2)complexity, limiting the range of possible appli-cations. In a different yet related model, Niepert et al. (2016) convert graphs locally into sequencesthat are fed into a conventional 1D convolutional neural network, which requires the definition of anode ordering in a pre-processing step.Our method is based on spectral graph convolutional neural networks, introduced in Bruna et al.(2014) and later extended by Defferrard et al. (2016) with fast localized convolutions. In contrastto these works, we consider here the task of transductive node classification within networks ofsignificantly larger scale. We show that in this setting, a number of simplifications (see Section 2.2)can be introduced to the original frameworks of Bruna et al. (2014) and Defferrard et al. (2016) thatimprove scalability and classification performance in large-scale networks.5 E XPERIMENTSWe test our model in a number of experiments: semi-supervised document classification in cita-tion networks, semi-supervised entity classification in a bipartite graph extracted from a knowledgegraph, an evaluation of various graph propagation models and a run-time analysis on random graphs.5.1 D ATASETSWe closely follow the experimental setup in Yang et al. (2016). Dataset statistics are summarizedin Table 1. In the citation network datasets—Citeseer, Cora and Pubmed (Sen et al., 2008)—nodesare documents and edges are citation links. Label rate denotes the number of labeled nodes that areused for training divided by the total number of nodes in each dataset. NELL (Carlson et al., 2010;Yang et al., 2016) is a bipartite graph dataset extracted from a knowledge graph with 55,864 relationnodes and 9,891 entity nodes.Table 1: Dataset statistics, as reported in Yang et al. (2016).Dataset Type Nodes Edges Classes Features Label rateCiteseer Citation network 3,327 4,732 6 3,703 0:036Cora Citation network 2,708 5,429 7 1,433 0:052Pubmed Citation network 19,717 44,338 3 500 0:003NELL Knowledge graph 65,755 266,144 210 5,414 0:0015Published as a conference paper at ICLR 2017Citation networks We consider three citation network datasets: Citeseer, Cora and Pubmed (Senet al., 2008). The datasets contain sparse bag-of-words feature vectors for each document and a listof citation links between documents. We treat the citation links as (undirected) edges and constructa binary, symmetric adjacency matrix A. Each document has a class label. For training, we only use20 labels per class, but all feature vectors.NELL NELL is a dataset extracted from the knowledge graph introduced in (Carlson et al., 2010).A knowledge graph is a set of entities connected with directed, labeled edges (relations). We followthe pre-processing scheme as described in Yang et al. (2016). We assign separate relation nodesr1andr2for each entity pair (e1;r;e 2)as(e1;r1)and(e2;r2). Entity nodes are described bysparse feature vectors. We extend the number of features in NELL by assigning a unique one-hotrepresentation for every relation node, effectively resulting in a 61,278-dim sparse feature vector pernode. The semi-supervised task here considers the extreme case of only a single labeled exampleper class in the training set. 
We construct a binary, symmetric adjacency matrix from this graph bysetting entries Aij= 1, if one or more edges are present between nodes iandj.Random graphs We simulate random graph datasets of various sizes for experiments where wemeasure training time per epoch. For a dataset with Nnodes we create a random graph assigning2Nedges uniformly at random. We take the identity matrix INas input feature matrix X, therebyimplicitly taking a featureless approach where the model is only informed about the identity of eachnode, specified by a unique one-hot vector. We add dummy labels Yi= 1for every node.5.2 E XPERIMENTAL SET-UPUnless otherwise noted, we train a two-layer GCN as described in Section 3.1 and evaluate pre-diction accuracy on a test set of 1,000 labeled examples. We provide additional experiments usingdeeper models with up to 10 layers in Appendix B. We choose the same dataset splits as in Yanget al. (2016) with an additional validation set of 500 labeled examples for hyperparameter opti-mization (dropout rate for all layers, L2 regularization factor for the first GCN layer and number ofhidden units). We do not use the validation set labels for training.For the citation network datasets, we optimize hyperparameters on Cora only and use the same setof parameters for Citeseer and Pubmed. We train all models for a maximum of 200 epochs (trainingiterations) using Adam (Kingma & Ba, 2015) with a learning rate of 0:01and early stopping with awindow size of 10, i.e. we stop training if the validation loss does not decrease for 10 consecutiveepochs. We initialize weights using the initialization described in Glorot & Bengio (2010) andaccordingly (row-)normalize input feature vectors. On the random graph datasets, we use a hiddenlayer size of 32 units and omit regularization (i.e. neither dropout nor L2 regularization).5.3 B ASELINESWe compare against the same baseline methods as in Yang et al. (2016), i.e. label propagation(LP) (Zhu et al., 2003), semi-supervised embedding (SemiEmb) (Weston et al., 2012), manifoldregularization (ManiReg) (Belkin et al., 2006) and skip-gram based graph embeddings (DeepWalk)(Perozzi et al., 2014). We omit TSVM (Joachims, 1999), as it does not scale to the large number ofclasses in one of our datasets.We further compare against the iterative classification algorithm (ICA) proposed in Lu & Getoor(2003) in conjunction with two logistic regression classifiers, one for local node features alone andone for relational classification using local features and an aggregation operator as described inSen et al. (2008). We first train the local classifier using all labeled training set nodes and useit to bootstrap class labels of unlabeled nodes for relational classifier training. We run iterativeclassification (relational classifier) with a random node ordering for 10 iterations on all unlabelednodes (bootstrapped using the local classifier). L2 regularization parameter and aggregation operator(count vs.prop, see Sen et al. (2008)) are chosen based on validation set performance for each datasetseparately.Lastly, we compare against Planetoid (Yang et al., 2016), where we always choose their best-performing model variant (transductive vs. inductive) as a baseline.6Published as a conference paper at ICLR 20176 R ESULTS6.1 S EMI-SUPERVISED NODE CLASSIFICATIONResults are summarized in Table 2. Reported numbers denote classification accuracy in percent. ForICA, we report the mean accuracy of 100 runs with random node orderings. 
Results for all otherbaseline methods are taken from the Planetoid paper (Yang et al., 2016). Planetoid* denotes the bestmodel for the respective dataset out of the variants presented in their paper.Table 2: Summary of results in terms of classification accuracy (in percent).Method Citeseer Cora Pubmed NELLManiReg [3] 60:1 59 :5 70 :7 21 :8SemiEmb [28] 59:6 59 :0 71 :1 26 :7LP [32] 45:3 68 :0 63 :0 26 :5DeepWalk [22] 43:2 67 :2 65 :3 58 :1ICA [18] 69:1 75 :1 73 :9 23 :1Planetoid* [29] 64:7(26s) 75:7(13s) 77:2(25s) 61:9(185s)GCN (this paper) 70:3(7s) 81:5(4s) 79:0(38s) 66:0(48s)GCN (rand. splits) 67:90:5 80:10:5 78:90:7 58:41:7We further report wall-clock training time in seconds until convergence (in brackets) for our method(incl. evaluation of validation error) and for Planetoid. For the latter, we used an implementation pro-vided by the authors3and trained on the same hardware (with GPU) as our GCN model. We trainedand tested our model on the same dataset splits as in Yang et al. (2016) and report mean accuracyof 100 runs with random weight initializations. We used the following sets of hyperparameters forCiteseer, Cora and Pubmed: 0.5 (dropout rate), 5104(L2 regularization) and 16(number of hid-den units); and for NELL: 0.1 (dropout rate), 1105(L2 regularization) and 64(number of hiddenunits).In addition, we report performance of our model on 10 randomly drawn dataset splits of the samesize as in Yang et al. (2016), denoted by GCN (rand. splits). Here, we report mean and standarderror of prediction accuracy on the test set split in percent.6.2 E VALUATION OF PROPAGATION MODELWe compare different variants of our proposed per-layer propagation model on the citation networkdatasets. We follow the experimental set-up described in the previous section. Results are summa-rized in Table 3. The propagation model of our original GCN model is denoted by renormalizationtrick (in bold). In all other cases, the propagation model of both neural network layers is replacedwith the model specified under propagation model . Reported numbers denote mean classificationaccuracy for 100 repeated runs with random weight matrix initializations. In case of multiple vari-ables iper layer, we impose L2 regularization on all weight matrices of the first layer.Table 3: Comparison of propagation models.Description Propagation model Citeseer Cora PubmedChebyshev filter (Eq. 5)K= 3PKk=0Tk(~L)Xk69:8 79:5 74:4K= 2 69 :6 81:2 73:81st-order model (Eq. 6) X0+D12AD12X1 68:3 80:0 77:5Single parameter (Eq. 7) (IN+D12AD12)X 69 :3 79:2 77:4Renormalization trick (Eq. 8) ~D12~A~D12X 70:3 81:5 79:01st-order term only D12AD12X 68 :7 80:5 77:8Multi-layer perceptron X 46 :5 55:1 71:43https://github.com/kimiyoung/planetoid7Published as a conference paper at ICLR 20176.3 T RAINING TIME PER EPOCH1k 10k 100k 1M 10M# Edges10-310-210-1100101Sec./epoch*GPUCPUFigure 2: Wall-clock time per epoch for randomgraphs. (*) indicates out-of-memory error.Here, we report results for the mean trainingtime per epoch (forward pass, cross-entropycalculation, backward pass) for 100 epochs onsimulated random graphs, measured in secondswall-clock time. See Section 5.1 for a detaileddescription of the random graph dataset usedin these experiments. We compare results ona GPU and on a CPU-only implementation4inTensorFlow (Abadi et al., 2015). Figure 2 sum-marizes the results.7 D ISCUSSION7.1 S EMI-SUPERVISED MODELIn the experiments demonstrated here, our method for semi-supervised node classification outper-forms recent related methods by a significant margin. 
Methods based on graph-Laplacian regular-ization (Zhu et al., 2003; Belkin et al., 2006; Weston et al., 2012) are most likely limited due to theirassumption that edges encode mere similarity of nodes. Skip-gram based methods on the other handare limited by the fact that they are based on a multi-step pipeline which is difficult to optimize.Our proposed model can overcome both limitations, while still comparing favorably in terms of ef-ficiency (measured in wall-clock time) to related methods. Propagation of feature information fromneighboring nodes in every layer improves classification performance in comparison to methods likeICA (Lu & Getoor, 2003), where only label information is aggregated.We have further demonstrated that the proposed renormalized propagation model (Eq. 8) offers bothimproved efficiency (fewer parameters and operations, such as multiplication or addition) and betterpredictive performance on a number of datasets compared to a na ̈ıve1st-order model (Eq. 6) orhigher-order graph convolutional models using Chebyshev polynomials (Eq. 5).7.2 L IMITATIONS AND FUTURE WORKHere, we describe several limitations of our current model and outline how these might be overcomein future work.Memory requirement In the current setup with full-batch gradient descent, memory requirementgrows linearly in the size of the dataset. We have shown that for large graphs that do not fit in GPUmemory, training on CPU can still be a viable option. Mini-batch stochastic gradient descent canalleviate this issue. The procedure of generating mini-batches, however, should take into account thenumber of layers in the GCN model, as the Kth-order neighborhood for a GCN with Klayers has tobe stored in memory for an exact procedure. For very large and densely connected graph datasets,further approximations might be necessary.Directed edges and edge features Our framework currently does not naturally support edge fea-tures and is limited to undirected graphs (weighted or unweighted). Results on NELL howevershow that it is possible to handle both directed edges and edge features by representing the originaldirected graph as an undirected bipartite graph with additional nodes that represent edges in theoriginal graph (see Section 5.1 for details).Limiting assumptions Through the approximations introduced in Section 2, we implicitly assumelocality (dependence on the Kth-order neighborhood for a GCN with Klayers) and equal impor-tance of self-connections vs. edges to neighboring nodes. For some datasets, however, it might bebeneficial to introduce a trade-off parameter in the definition of ~A:~A=A+IN: (11)4Hardware used: 16-core Intel RXeon RCPU E5-2640 v3 @ 2.60GHz, GeForce RGTX TITAN X8Published as a conference paper at ICLR 2017This parameter now plays a similar role as the trade-off parameter between supervised and unsuper-vised loss in the typical semi-supervised setting (see Eq. 1). Here, however, it can be learned viagradient descent.8 C ONCLUSIONWe have introduced a novel approach for semi-supervised classification on graph-structured data.Our GCN model uses an efficient layer-wise propagation rule that is based on a first-order approx-imation of spectral convolutions on graphs. Experiments on a number of network datasets suggestthat the proposed GCN model is capable of encoding both graph structure and node features in away useful for semi-supervised classification. 
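As an aside on Eq. 11 above, one possible way to realize the trade-off parameter as something learnable is to parameterize it through a softplus so that it stays non-negative; this is purely our own sketch of the idea, not something specified in the paper.

```python
import numpy as np
import scipy.sparse as sp

def adj_with_tradeoff(adj, rho):
    """Eq. 11 with a trainable trade-off: A~ = A + lam * I_N, lam = softplus(rho).
    The softplus parameterization (our choice) keeps lam non-negative; rho is the
    scalar that would be updated by gradient descent alongside the weights."""
    lam = np.log1p(np.exp(rho))                  # softplus
    return adj + lam * sp.eye(adj.shape[0])
```

The resulting A~ would then be normalized exactly as in Eq. 8, with the underlying scalar learned together with the network weights, as the text suggests.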
In this setting, our model outperforms several recently proposed methods by a significant margin, while being computationally efficient.

ACKNOWLEDGMENTS

We would like to thank Christos Louizos, Taco Cohen, Joan Bruna, Zhilin Yang, Dave Herman, Pramod Sinha and Abdul-Saboor Sheikh for helpful discussions. This research was funded by SAP.
HJMB4vQ4e
Solid results.
7: Good paper, accept
This paper proposes graph convolutional networks, motivated by an approximation of graph convolutions. In one propagation step, what the model does can be summarized as follows: first linearly transform the representation of each node, then multiply the transformed representations by the normalized affinity matrix (with self-connections added), and finally pass the result through a nonlinearity. This model is used for semi-supervised learning on graphs, and in the experiments it demonstrates quite impressive results compared to other baselines, outperforming them by a significant margin. The evaluation of the propagation model is also interesting, where different variants of the model and design decisions are evaluated and compared.

It is surprising that such a simple model works so much better than all the baselines. Considering that the model used is just a two-layer model in most experiments, this is really surprising, as a two-layer model is very local: the output of a node can only be affected by nodes in its 2-hop neighborhood, and longer-range interactions cannot play any role. Since computation is quite efficient (Sec. 6.3), I wonder whether adding more layers helped or not.

Even though it is motivated from graph convolutions, when simplified as the paper suggests, the operations the model performs are quite simple. Compared to Duvenaud et al. 2015 and Li et al. 2016, the proposed method is simpler and does almost strictly less. So how would the proposed GCN compare against these methods?

Overall I think this model is simple, but the connection to graph convolutions is interesting, and the experimental results are quite good. There are a few questions that still remain, but I feel this paper can be accepted.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
SJU4ayYgl
ICLR.cc/2017/conference
2017
Semi-Supervised Classification with Graph Convolutional Networks
["Thomas N. Kipf", "Max Welling"]
We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.
["Deep learning", "Semi-Supervised Learning"]
ABSTRACTWe present a scalable approach for semi-supervised learning on graph-structureddata that is based on an efficient variant of convolutional neural networks whichoperate directly on graphs. We motivate the choice of our convolutional archi-tecture via a localized first-order approximation of spectral graph convolutions.Our model scales linearly in the number of graph edges and learns hidden layerrepresentations that encode both local graph structure and features of nodes. Ina number of experiments on citation networks and on a knowledge graph datasetwe demonstrate that our approach outperforms related methods by a significantmargin.1 I NTRODUCTIONWe consider the problem of classifying nodes (such as documents) in a graph (such as a citationnetwork), where labels are only available for a small subset of nodes. This problem can be framedas graph-based semi-supervised learning, where label information is smoothed over the graph viasome form of explicit graph-based regularization (Zhu et al., 2003; Zhou et al., 2004; Belkin et al.,2006; Weston et al., 2012), e.g. by using a graph Laplacian regularization term in the loss function:L=L0+Lreg;withLreg=Xi;jAijkf(Xi)f(Xj)k2=f(X)>f(X): (1)Here,L0denotes the supervised loss w.r.t. the labeled part of the graph, f()can be a neural network-like differentiable function, is a weighing factor and Xis a matrix of node feature vectors Xi. =DAdenotes the unnormalized graph Laplacian of an undirected graph G= (V;E)withNnodesvi2V, edges (vi;vj)2E, an adjacency matrix A2RNN(binary or weighted) anda degree matrix Dii=PjAij. The formulation of Eq. 1 relies on the assumption that connectednodes in the graph are likely to share the same label. This assumption, however, might restrictmodeling capacity, as graph edges need not necessarily encode node similarity, but could containadditional information.In this work, we encode the graph structure directly using a neural network model f(X;A)andtrain on a supervised target L0for all nodes with labels, thereby avoiding explicit graph-basedregularization in the loss function. Conditioning f()on the adjacency matrix of the graph willallow the model to distribute gradient information from the supervised loss L0and will enable it tolearn representations of nodes both with and without labels.Our contributions are two-fold. Firstly, we introduce a simple and well-behaved layer-wise prop-agation rule for neural network models which operate directly on graphs and show how it can bemotivated from a first-order approximation of spectral graph convolutions (Hammond et al., 2011).Secondly, we demonstrate how this form of a graph-based neural network model can be used forfast and scalable semi-supervised classification of nodes in a graph. Experiments on a number ofdatasets demonstrate that our model compares favorably both in classification accuracy and effi-ciency (measured in wall-clock time) against state-of-the-art methods for semi-supervised learning.1Published as a conference paper at ICLR 20172 F AST APPROXIMATE CONVOLUTIONS ON GRAPHSIn this section, we provide theoretical motivation for a specific graph-based neural network modelf(X;A)that we will use in the rest of this paper. We consider a multi-layer Graph ConvolutionalNetwork (GCN) with the following layer-wise propagation rule:H(l+1)=~D12~A~D12H(l)W(l): (2)Here, ~A=A+INis the adjacency matrix of the undirected graph Gwith added self-connections.INis the identity matrix, ~Dii=Pj~AijandW(l)is a layer-specific trainable weight matrix. 
()denotes an activation function, such as the ReLU() = max(0;).H(l)2RNDis the matrix of ac-tivations in the lthlayer;H(0)=X. In the following, we show that the form of this propagation rulecan be motivated1via a first-order approximation of localized spectral filters on graphs (Hammondet al., 2011; Defferrard et al., 2016).2.1 S PECTRAL GRAPH CONVOLUTIONSWe consider spectral convolutions on graphs defined as the multiplication of a signal x2RN(ascalar for every node) with a filter g=diag()parameterized by 2RNin the Fourier domain,i.e.:g?x=UgU>x; (3)whereUis the matrix of eigenvectors of the normalized graph Laplacian L=IND12AD12=UU>, with a diagonal matrix of its eigenvalues andU>xbeing the graph Fourier transformofx. We can understand gas a function of the eigenvalues of L, i.e.g(). Evaluating Eq. 3 iscomputationally expensive, as multiplication with the eigenvector matrix UisO(N2). Furthermore,computing the eigendecomposition of Lin the first place might be prohibitively expensive for largegraphs. To circumvent this problem, it was suggested in Hammond et al. (2011) that g()can bewell-approximated by a truncated expansion in terms of Chebyshev polynomials Tk(x)up toKthorder:g0()KXk=00kTk(~); (4)with a rescaled ~ =2maxIN.maxdenotes the largest eigenvalue of L.02RKis now avector of Chebyshev coefficients. The Chebyshev polynomials are recursively defined as Tk(x) =2xTk1(x)Tk2(x), withT0(x) = 1 andT1(x) =x. The reader is referred to Hammond et al.(2011) for an in-depth discussion of this approximation.Going back to our definition of a convolution of a signal xwith a filterg0, we now have:g0?xKXk=00kTk(~L)x; (5)with ~L=2maxLIN; as can easily be verified by noticing that (UU>)k=UkU>. Note thatthis expression is now K-localized since it is a Kth-order polynomial in the Laplacian, i.e. it dependsonly on nodes that are at maximum Ksteps away from the central node ( Kth-order neighborhood).The complexity of evaluating Eq. 5 is O(jEj), i.e. linear in the number of edges. Defferrard et al.(2016) use this K-localized convolution to define a convolutional neural network on graphs.2.2 L AYER -WISELINEAR MODELA neural network model based on graph convolutions can therefore be built by stacking multipleconvolutional layers of the form of Eq. 5, each layer followed by a point-wise non-linearity. Now,imagine we limited the layer-wise convolution operation to K= 1(see Eq. 5), i.e. a function that islinear w.r.t.Land therefore a linear function on the graph Laplacian spectrum.1We provide an alternative interpretation of this propagation rule based on the Weisfeiler-Lehman algorithm(Weisfeiler & Lehmann, 1968) in Appendix A.2Published as a conference paper at ICLR 2017In this way, we can still recover a rich class of convolutional filter functions by stacking multiplesuch layers, but we are not limited to the explicit parameterization given by, e.g., the Chebyshevpolynomials. We intuitively expect that such a model can alleviate the problem of overfitting onlocal neighborhood structures for graphs with very wide node degree distributions, such as socialnetworks, citation networks, knowledge graphs and many other real-world graph datasets. Addition-ally, for a fixed computational budget, this layer-wise linear formulation allows us to build deepermodels, a practice that is known to improve modeling capacity on a number of domains (He et al.,2016).In this linear formulation of a GCN we further approximate max2, as we can expect that neuralnetwork parameters will adapt to this change in scale during training. 
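Before these approximations are applied, it may help to see Eq. 5 spelled out as code. The sketch below is our own (naming is ours, and λmax is computed explicitly with a sparse eigensolver rather than approximated); it evaluates the K-localized Chebyshev filter for a single-channel signal x via the standard recurrence.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def chebyshev_filter(adj, x, theta):
    """Evaluate Eq. 5: sum_k theta[k] * T_k(L~) x, with L~ = (2/lambda_max) L - I."""
    n = adj.shape[0]
    deg = np.asarray(adj.sum(axis=1)).flatten()
    with np.errstate(divide="ignore"):
        d_inv_sqrt = np.power(deg, -0.5)
    d_inv_sqrt[np.isinf(d_inv_sqrt)] = 0.0
    D_inv_sqrt = sp.diags(d_inv_sqrt)
    L = sp.eye(n) - D_inv_sqrt.dot(adj).dot(D_inv_sqrt)          # normalized Laplacian
    lam_max = eigsh(L, k=1, which="LM", return_eigenvectors=False)[0]
    L_tilde = (2.0 / lam_max) * L - sp.eye(n)                    # rescaled Laplacian
    # Chebyshev recurrence: T_0 x = x, T_1 x = L~ x, T_k x = 2 L~ T_{k-1} x - T_{k-2} x
    Tx = [x, L_tilde.dot(x)]
    for _ in range(2, len(theta)):
        Tx.append(2.0 * L_tilde.dot(Tx[-1]) - Tx[-2])
    return sum(t * Tk for t, Tk in zip(theta, Tx))
```

Setting K = 1 and λmax ≈ 2 collapses this recurrence to a single sparse multiplication, which is exactly the simplification pursued next.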
Under these approximationsEq. 5 simplifies to:g0?x00x+01(LIN)x=00x01D12AD12x; (6)with two free parameters 00and01. The filter parameters can be shared over the whole graph.Successive application of filters of this form then effectively convolve the kth-order neighborhood ofa node, where kis the number of successive filtering operations or convolutional layers in the neuralnetwork model.In practice, it can be beneficial to constrain the number of parameters further to address overfittingand to minimize the number of operations (such as matrix multiplications) per layer. This leaves uswith the following expression:g?xIN+D12AD12x; (7)with a single parameter =00=01. Note that IN+D12AD12now has eigenvalues inthe range [0;2]. Repeated application of this operator can therefore lead to numerical instabilitiesand exploding/vanishing gradients when used in a deep neural network model. To alleviate thisproblem, we introduce the following renormalization trick :IN+D12AD12!~D12~A~D12, with~A=A+INand~Dii=Pj~Aij.We can generalize this definition to a signal X2RNCwithCinput channels (i.e. a C-dimensionalfeature vector for every node) and Ffilters or feature maps as follows:Z=~D12~A~D12X; (8)where 2RCFis now a matrix of filter parameters and Z2RNFis the convolved signalmatrix. This filtering operation has complexity O(jEjFC), as~AX can be efficiently implementedas a product of a sparse matrix with a dense matrix.3 S EMI-SUPERVISED NODE CLASSIFICATIONHaving introduced a simple, yet flexible model f(X;A)for efficient information propagation ongraphs, we can return to the problem of semi-supervised node classification. As outlined in the in-troduction, we can relax certain assumptions typically made in graph-based semi-supervised learn-ing by conditioning our model f(X;A)both on the data Xand on the adjacency matrix Aof theunderlying graph structure. We expect this setting to be especially powerful in scenarios where theadjacency matrix contains information not present in the data X, such as citation links between doc-uments in a citation network or relations in a knowledge graph. The overall model, a multi-layerGCN for semi-supervised learning, is schematically depicted in Figure 1.3.1 E XAMPLEIn the following, we consider a two-layer GCN for semi-supervised node classification on a graphwith a symmetric adjacency matrix A(binary or weighted). We first calculate ^A=~D12~A~D12ina pre-processing step. Our forward model then takes the simple form:Z=f(X;A) = softmax^AReLU^AXW(0)W(1): (9)3Published as a conference paper at ICLR 2017Cinput layerX1X2X3X4Foutput layerZ1Z2Z3Z4hiddenlayersY1Y41(a) Graph Convolutional Network30 20 10 0 10 20 303020100102030 (b) Hidden layer activationsFigure 1: Left: Schematic depiction of multi-layer Graph Convolutional Network (GCN) for semi-supervised learning with Cinput channels and Ffeature maps in the output layer. The graph struc-ture (edges shown as black lines) is shared over layers, labels are denoted by Yi.Right : t-SNE(Maaten & Hinton, 2008) visualization of hidden layer activations of a two-layer GCN trained onthe Cora dataset (Sen et al., 2008) using 5%of labels. Colors denote document class.Here,W(0)2RCHis an input-to-hidden weight matrix for a hidden layer with Hfeature maps.W(1)2RHFis a hidden-to-output weight matrix. The softmax activation function, defined assoftmax(xi) =1Zexp(xi)withZ=Piexp(xi), is applied row-wise. 
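A minimal NumPy sketch of this forward pass may look as follows; Â is assumed to be precomputed as described above, and the function and variable names are our own illustration rather than the authors' TensorFlow implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax_rows(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))   # shift for numerical stability
    return e / e.sum(axis=1, keepdims=True)

def gcn_forward(A_hat, X, W0, W1):
    """Two-layer GCN of Eq. 9: softmax(A_hat ReLU(A_hat X W0) W1).
    A_hat: (N, N) normalized adjacency (sparse or dense), X: (N, C) features,
    W0: (C, H) input-to-hidden weights, W1: (H, F) hidden-to-output weights."""
    H = relu(A_hat.dot(X.dot(W0)))   # hidden representations, shape (N, H)
    Z = A_hat.dot(H.dot(W1))         # output scores, shape (N, F)
    return softmax_rows(Z)
```

With a sparse Â, both propagation steps are sparse–dense products, which is what keeps the cost linear in the number of edges.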
For semi-supervised multi-class classification, we then evaluate the cross-entropy error over all labeled examples:L=Xl2YLFXf=1YlflnZlf; (10)whereYLis the set of node indices that have labels.The neural network weights W(0)andW(1)are trained using gradient descent. In this work, weperform batch gradient descent using the full dataset for every training iteration, which is a viableoption as long as datasets fit in memory. Using a sparse representation for A, memory requirementisO(jEj), i.e. linear in the number of edges. Stochasticity in the training process is introduced viadropout (Srivastava et al., 2014). We leave memory-efficient extensions with mini-batch stochasticgradient descent for future work.3.2 I MPLEMENTATIONIn practice, we make use of TensorFlow (Abadi et al., 2015) for an efficient GPU-based imple-mentation2of Eq. 9 using sparse-dense matrix multiplications. The computational complexity ofevaluating Eq. 9 is then O(jEjCHF ), i.e. linear in the number of graph edges.4 R ELATED WORKOur model draws inspiration both from the field of graph-based semi-supervised learning and fromrecent work on neural networks that operate on graphs. In what follows, we provide a brief overviewon related work in both fields.4.1 G RAPH -BASED SEMI-SUPERVISED LEARNINGA large number of approaches for semi-supervised learning using graph representations have beenproposed in recent years, most of which fall into two broad categories: methods that use someform of explicit graph Laplacian regularization and graph embedding-based approaches. Prominentexamples for graph Laplacian regularization include label propagation (Zhu et al., 2003), manifoldregularization (Belkin et al., 2006) and deep semi-supervised embedding (Weston et al., 2012).2Code to reproduce our experiments is available at https://github.com/tkipf/gcn .4Published as a conference paper at ICLR 2017Recently, attention has shifted to models that learn graph embeddings with methods inspired bythe skip-gram model (Mikolov et al., 2013). DeepWalk (Perozzi et al., 2014) learns embeddingsvia the prediction of the local neighborhood of nodes, sampled from random walks on the graph.LINE (Tang et al., 2015) and node2vec (Grover & Leskovec, 2016) extend DeepWalk with moresophisticated random walk or breadth-first search schemes. For all these methods, however, a multi-step pipeline including random walk generation and semi-supervised training is required where eachstep has to be optimized separately. Planetoid (Yang et al., 2016) alleviates this by injecting labelinformation in the process of learning embeddings.4.2 N EURAL NETWORKS ON GRAPHSNeural networks that operate on graphs have previously been introduced in Gori et al. (2005);Scarselli et al. (2009) as a form of recurrent neural network. Their framework requires the repeatedapplication of contraction maps as propagation functions until node representations reach a stablefixed point. This restriction was later alleviated in Li et al. (2016) by introducing modern practicesfor recurrent neural network training to the original graph neural network framework. Duvenaudet al. (2015) introduced a convolution-like propagation rule on graphs and methods for graph-levelclassification. Their approach requires to learn node degree-specific weight matrices which does notscale to large graphs with wide node degree distributions. 
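(Returning briefly to the objective in Eq. 10 above: in code it is simply a cross-entropy evaluated on the labeled rows of the output. The sketch below uses our own names and assumes a one-hot label matrix; it is not the authors' implementation.)

```python
import numpy as np

def masked_cross_entropy(Z, Y, labeled_idx):
    """Eq. 10: cross-entropy summed over labeled nodes only.
    Z: (N, F) row-wise softmax outputs, Y: (N, F) one-hot labels,
    labeled_idx: indices of the nodes in Y_L."""
    probs = Z[labeled_idx]
    targets = Y[labeled_idx]
    return -np.sum(targets * np.log(probs + 1e-12))   # small epsilon for stability
```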
Our model instead uses a single weightmatrix per layer and deals with varying node degrees through an appropriate normalization of theadjacency matrix (see Section 3.1).A related approach to node classification with a graph-based neural network was recently introducedin Atwood & Towsley (2016). They report O(N2)complexity, limiting the range of possible appli-cations. In a different yet related model, Niepert et al. (2016) convert graphs locally into sequencesthat are fed into a conventional 1D convolutional neural network, which requires the definition of anode ordering in a pre-processing step.Our method is based on spectral graph convolutional neural networks, introduced in Bruna et al.(2014) and later extended by Defferrard et al. (2016) with fast localized convolutions. In contrastto these works, we consider here the task of transductive node classification within networks ofsignificantly larger scale. We show that in this setting, a number of simplifications (see Section 2.2)can be introduced to the original frameworks of Bruna et al. (2014) and Defferrard et al. (2016) thatimprove scalability and classification performance in large-scale networks.5 E XPERIMENTSWe test our model in a number of experiments: semi-supervised document classification in cita-tion networks, semi-supervised entity classification in a bipartite graph extracted from a knowledgegraph, an evaluation of various graph propagation models and a run-time analysis on random graphs.5.1 D ATASETSWe closely follow the experimental setup in Yang et al. (2016). Dataset statistics are summarizedin Table 1. In the citation network datasets—Citeseer, Cora and Pubmed (Sen et al., 2008)—nodesare documents and edges are citation links. Label rate denotes the number of labeled nodes that areused for training divided by the total number of nodes in each dataset. NELL (Carlson et al., 2010;Yang et al., 2016) is a bipartite graph dataset extracted from a knowledge graph with 55,864 relationnodes and 9,891 entity nodes.Table 1: Dataset statistics, as reported in Yang et al. (2016).Dataset Type Nodes Edges Classes Features Label rateCiteseer Citation network 3,327 4,732 6 3,703 0:036Cora Citation network 2,708 5,429 7 1,433 0:052Pubmed Citation network 19,717 44,338 3 500 0:003NELL Knowledge graph 65,755 266,144 210 5,414 0:0015Published as a conference paper at ICLR 2017Citation networks We consider three citation network datasets: Citeseer, Cora and Pubmed (Senet al., 2008). The datasets contain sparse bag-of-words feature vectors for each document and a listof citation links between documents. We treat the citation links as (undirected) edges and constructa binary, symmetric adjacency matrix A. Each document has a class label. For training, we only use20 labels per class, but all feature vectors.NELL NELL is a dataset extracted from the knowledge graph introduced in (Carlson et al., 2010).A knowledge graph is a set of entities connected with directed, labeled edges (relations). We followthe pre-processing scheme as described in Yang et al. (2016). We assign separate relation nodesr1andr2for each entity pair (e1;r;e 2)as(e1;r1)and(e2;r2). Entity nodes are described bysparse feature vectors. We extend the number of features in NELL by assigning a unique one-hotrepresentation for every relation node, effectively resulting in a 61,278-dim sparse feature vector pernode. The semi-supervised task here considers the extreme case of only a single labeled exampleper class in the training set. 
We construct a binary, symmetric adjacency matrix from this graph bysetting entries Aij= 1, if one or more edges are present between nodes iandj.Random graphs We simulate random graph datasets of various sizes for experiments where wemeasure training time per epoch. For a dataset with Nnodes we create a random graph assigning2Nedges uniformly at random. We take the identity matrix INas input feature matrix X, therebyimplicitly taking a featureless approach where the model is only informed about the identity of eachnode, specified by a unique one-hot vector. We add dummy labels Yi= 1for every node.5.2 E XPERIMENTAL SET-UPUnless otherwise noted, we train a two-layer GCN as described in Section 3.1 and evaluate pre-diction accuracy on a test set of 1,000 labeled examples. We provide additional experiments usingdeeper models with up to 10 layers in Appendix B. We choose the same dataset splits as in Yanget al. (2016) with an additional validation set of 500 labeled examples for hyperparameter opti-mization (dropout rate for all layers, L2 regularization factor for the first GCN layer and number ofhidden units). We do not use the validation set labels for training.For the citation network datasets, we optimize hyperparameters on Cora only and use the same setof parameters for Citeseer and Pubmed. We train all models for a maximum of 200 epochs (trainingiterations) using Adam (Kingma & Ba, 2015) with a learning rate of 0:01and early stopping with awindow size of 10, i.e. we stop training if the validation loss does not decrease for 10 consecutiveepochs. We initialize weights using the initialization described in Glorot & Bengio (2010) andaccordingly (row-)normalize input feature vectors. On the random graph datasets, we use a hiddenlayer size of 32 units and omit regularization (i.e. neither dropout nor L2 regularization).5.3 B ASELINESWe compare against the same baseline methods as in Yang et al. (2016), i.e. label propagation(LP) (Zhu et al., 2003), semi-supervised embedding (SemiEmb) (Weston et al., 2012), manifoldregularization (ManiReg) (Belkin et al., 2006) and skip-gram based graph embeddings (DeepWalk)(Perozzi et al., 2014). We omit TSVM (Joachims, 1999), as it does not scale to the large number ofclasses in one of our datasets.We further compare against the iterative classification algorithm (ICA) proposed in Lu & Getoor(2003) in conjunction with two logistic regression classifiers, one for local node features alone andone for relational classification using local features and an aggregation operator as described inSen et al. (2008). We first train the local classifier using all labeled training set nodes and useit to bootstrap class labels of unlabeled nodes for relational classifier training. We run iterativeclassification (relational classifier) with a random node ordering for 10 iterations on all unlabelednodes (bootstrapped using the local classifier). L2 regularization parameter and aggregation operator(count vs.prop, see Sen et al. (2008)) are chosen based on validation set performance for each datasetseparately.Lastly, we compare against Planetoid (Yang et al., 2016), where we always choose their best-performing model variant (transductive vs. inductive) as a baseline.6Published as a conference paper at ICLR 20176 R ESULTS6.1 S EMI-SUPERVISED NODE CLASSIFICATIONResults are summarized in Table 2. Reported numbers denote classification accuracy in percent. ForICA, we report the mean accuracy of 100 runs with random node orderings. 
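(As an aside before the timing experiments of Section 6.3, the random-graph setup described in Section 5.1 can be reproduced in a few lines. This is our reading of that description, not the released script, and the seed handling is our own choice.)

```python
import numpy as np
import scipy.sparse as sp

def random_graph_dataset(n, seed=0):
    """N nodes, 2N undirected edges sampled uniformly at random,
    identity features and dummy labels, as described in Section 5.1."""
    rng = np.random.default_rng(seed)
    rows = rng.integers(0, n, size=2 * n)
    cols = rng.integers(0, n, size=2 * n)
    data = np.ones(2 * n)
    A = sp.coo_matrix((data, (rows, cols)), shape=(n, n)).tocsr()
    A = A + A.T                 # symmetrize
    A.data[:] = 1.0             # binary adjacency, duplicate edges collapsed
    X = sp.eye(n)               # featureless: one-hot identity features
    y = np.ones(n, dtype=int)   # dummy labels Y_i = 1
    return A, X, y
```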
Results for all otherbaseline methods are taken from the Planetoid paper (Yang et al., 2016). Planetoid* denotes the bestmodel for the respective dataset out of the variants presented in their paper.Table 2: Summary of results in terms of classification accuracy (in percent).Method Citeseer Cora Pubmed NELLManiReg [3] 60:1 59 :5 70 :7 21 :8SemiEmb [28] 59:6 59 :0 71 :1 26 :7LP [32] 45:3 68 :0 63 :0 26 :5DeepWalk [22] 43:2 67 :2 65 :3 58 :1ICA [18] 69:1 75 :1 73 :9 23 :1Planetoid* [29] 64:7(26s) 75:7(13s) 77:2(25s) 61:9(185s)GCN (this paper) 70:3(7s) 81:5(4s) 79:0(38s) 66:0(48s)GCN (rand. splits) 67:90:5 80:10:5 78:90:7 58:41:7We further report wall-clock training time in seconds until convergence (in brackets) for our method(incl. evaluation of validation error) and for Planetoid. For the latter, we used an implementation pro-vided by the authors3and trained on the same hardware (with GPU) as our GCN model. We trainedand tested our model on the same dataset splits as in Yang et al. (2016) and report mean accuracyof 100 runs with random weight initializations. We used the following sets of hyperparameters forCiteseer, Cora and Pubmed: 0.5 (dropout rate), 5104(L2 regularization) and 16(number of hid-den units); and for NELL: 0.1 (dropout rate), 1105(L2 regularization) and 64(number of hiddenunits).In addition, we report performance of our model on 10 randomly drawn dataset splits of the samesize as in Yang et al. (2016), denoted by GCN (rand. splits). Here, we report mean and standarderror of prediction accuracy on the test set split in percent.6.2 E VALUATION OF PROPAGATION MODELWe compare different variants of our proposed per-layer propagation model on the citation networkdatasets. We follow the experimental set-up described in the previous section. Results are summa-rized in Table 3. The propagation model of our original GCN model is denoted by renormalizationtrick (in bold). In all other cases, the propagation model of both neural network layers is replacedwith the model specified under propagation model . Reported numbers denote mean classificationaccuracy for 100 repeated runs with random weight matrix initializations. In case of multiple vari-ables iper layer, we impose L2 regularization on all weight matrices of the first layer.Table 3: Comparison of propagation models.Description Propagation model Citeseer Cora PubmedChebyshev filter (Eq. 5)K= 3PKk=0Tk(~L)Xk69:8 79:5 74:4K= 2 69 :6 81:2 73:81st-order model (Eq. 6) X0+D12AD12X1 68:3 80:0 77:5Single parameter (Eq. 7) (IN+D12AD12)X 69 :3 79:2 77:4Renormalization trick (Eq. 8) ~D12~A~D12X 70:3 81:5 79:01st-order term only D12AD12X 68 :7 80:5 77:8Multi-layer perceptron X 46 :5 55:1 71:43https://github.com/kimiyoung/planetoid7Published as a conference paper at ICLR 20176.3 T RAINING TIME PER EPOCH1k 10k 100k 1M 10M# Edges10-310-210-1100101Sec./epoch*GPUCPUFigure 2: Wall-clock time per epoch for randomgraphs. (*) indicates out-of-memory error.Here, we report results for the mean trainingtime per epoch (forward pass, cross-entropycalculation, backward pass) for 100 epochs onsimulated random graphs, measured in secondswall-clock time. See Section 5.1 for a detaileddescription of the random graph dataset usedin these experiments. We compare results ona GPU and on a CPU-only implementation4inTensorFlow (Abadi et al., 2015). Figure 2 sum-marizes the results.7 D ISCUSSION7.1 S EMI-SUPERVISED MODELIn the experiments demonstrated here, our method for semi-supervised node classification outper-forms recent related methods by a significant margin. 
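The renormalization-trick preprocessing referenced throughout (A_hat = D~^{-1/2} A~ D~^{-1/2}, Eq. 8) is a one-off sparse computation; a possible SciPy sketch, with our own naming:

```python
import numpy as np
import scipy.sparse as sp

def preprocess_adj(adj):
    """Renormalization trick of Eq. 8: A_hat = D~^{-1/2} (A + I) D~^{-1/2},
    computed once on the sparse adjacency matrix before training."""
    adj_tilde = adj + sp.eye(adj.shape[0])
    deg = np.asarray(adj_tilde.sum(axis=1)).flatten()
    d_inv_sqrt = 1.0 / np.sqrt(deg)              # deg >= 1 thanks to the added self-loops
    D_inv_sqrt = sp.diags(d_inv_sqrt)
    return D_inv_sqrt.dot(adj_tilde).dot(D_inv_sqrt).tocsr()
```

Because self-connections are added before normalization, every node has degree at least one, so no special handling of isolated nodes is needed here.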
Methods based on graph-Laplacian regular-ization (Zhu et al., 2003; Belkin et al., 2006; Weston et al., 2012) are most likely limited due to theirassumption that edges encode mere similarity of nodes. Skip-gram based methods on the other handare limited by the fact that they are based on a multi-step pipeline which is difficult to optimize.Our proposed model can overcome both limitations, while still comparing favorably in terms of ef-ficiency (measured in wall-clock time) to related methods. Propagation of feature information fromneighboring nodes in every layer improves classification performance in comparison to methods likeICA (Lu & Getoor, 2003), where only label information is aggregated.We have further demonstrated that the proposed renormalized propagation model (Eq. 8) offers bothimproved efficiency (fewer parameters and operations, such as multiplication or addition) and betterpredictive performance on a number of datasets compared to a na ̈ıve1st-order model (Eq. 6) orhigher-order graph convolutional models using Chebyshev polynomials (Eq. 5).7.2 L IMITATIONS AND FUTURE WORKHere, we describe several limitations of our current model and outline how these might be overcomein future work.Memory requirement In the current setup with full-batch gradient descent, memory requirementgrows linearly in the size of the dataset. We have shown that for large graphs that do not fit in GPUmemory, training on CPU can still be a viable option. Mini-batch stochastic gradient descent canalleviate this issue. The procedure of generating mini-batches, however, should take into account thenumber of layers in the GCN model, as the Kth-order neighborhood for a GCN with Klayers has tobe stored in memory for an exact procedure. For very large and densely connected graph datasets,further approximations might be necessary.Directed edges and edge features Our framework currently does not naturally support edge fea-tures and is limited to undirected graphs (weighted or unweighted). Results on NELL howevershow that it is possible to handle both directed edges and edge features by representing the originaldirected graph as an undirected bipartite graph with additional nodes that represent edges in theoriginal graph (see Section 5.1 for details).Limiting assumptions Through the approximations introduced in Section 2, we implicitly assumelocality (dependence on the Kth-order neighborhood for a GCN with Klayers) and equal impor-tance of self-connections vs. edges to neighboring nodes. For some datasets, however, it might bebeneficial to introduce a trade-off parameter in the definition of ~A:~A=A+IN: (11)4Hardware used: 16-core Intel RXeon RCPU E5-2640 v3 @ 2.60GHz, GeForce RGTX TITAN X8Published as a conference paper at ICLR 2017This parameter now plays a similar role as the trade-off parameter between supervised and unsuper-vised loss in the typical semi-supervised setting (see Eq. 1). Here, however, it can be learned viagradient descent.8 C ONCLUSIONWe have introduced a novel approach for semi-supervised classification on graph-structured data.Our GCN model uses an efficient layer-wise propagation rule that is based on a first-order approx-imation of spectral convolutions on graphs. Experiments on a number of network datasets suggestthat the proposed GCN model is capable of encoding both graph structure and node features in away useful for semi-supervised classification. 
In this setting, our model outperforms several recently proposed methods by a significant margin, while being computationally efficient.

ACKNOWLEDGMENTS

We would like to thank Christos Louizos, Taco Cohen, Joan Bruna, Zhilin Yang, Dave Herman, Pramod Sinha and Abdul-Saboor Sheikh for helpful discussions. This research was funded by SAP.
S1eLrWQBg
Simple and reasonable approach
7: Good paper, accept
The paper develops a simple and reasonable algorithm for graph node prediction/classification. The formulations are very intuitive, lead to simple CNN-based training, and can easily leverage existing GPU speedups. Experiments are thorough and compare with many reasonable baselines on large, real benchmark datasets. That said, I am not fully aware of the literature on other methods, and there may be similar alternatives, as link and node prediction is an old problem. I still think the approach is quite simple and reasonably supported by good evaluations.
3: The reviewer is fairly confident that the evaluation is correct
H1Gq5Q9el
ICLR.cc/2017/conference
2017
Unsupervised Pretraining for Sequence to Sequence Learning
["Prajit Ramachandran", "Peter J. Liu", "Quoc V. Le"]
This work presents a general unsupervised learning method to improve the accuracy of sequence to sequence (seq2seq) models. In our method, the weights of the encoder and decoder of a seq2seq model are initialized with the pretrained weights of two language models and then fine-tuned with labeled data. We apply this method to challenging benchmarks in machine translation and abstractive summarization and find that it significantly improves the subsequent supervised models. Our main result is that the pretraining accelerates training and improves generalization of seq2seq models, achieving state-of-the-art results on the WMT English->German task, surpassing a range of methods using both phrase-based machine translation and neural machine translation. Our method achieves an improvement of 1.3 BLEU from the previous best models on both WMT'14 and WMT'15 English->German. On summarization, our method beats the supervised learning baseline.
["Natural language processing", "Deep learning", "Semi-Supervised Learning", "Transfer Learning"]
ABSTRACTThis work presents a general unsupervised learning method to improve the accu-racy of sequence to sequence (seq2seq) models. In our method, the weights ofthe encoder and decoder of a seq2seq model are initialized with the pretrainedweights of two language models and then fine-tuned with labeled data. We ap-ply this method to challenging benchmarks in machine translation and abstractivesummarization and find that it significantly improves the subsequent supervisedmodels. Our main result is that the pretraining accelerates training and improvesgeneralization of seq2seq models, achieving state-of-the-art results on the WMTEnglish!German task, surpassing a range of methods using both phrase-basedmachine translation and neural machine translation. Our method achieves an im-provement of 1.3 BLEU from the previous best models on both WMT’14 andWMT’15 English!German. On summarization, our method beats the supervisedlearning baseline.1 I NTRODUCTIONSequence to sequence ( seq2seq ) models (Sutskever et al., 2014; Cho et al., 2014; Kalchbrenner& Blunsom, 2013; Allen, 1987; ̃Neco & Forcada, 1997) are extremely effective on a variety oftasks that require a mapping between a variable-length input sequence to a variable-length outputsequence. The main weakness of sequence to sequence models, and deep networks in general, liesin the fact that they can easily overfit when the amount of supervised training data is small.In this work, we propose a simple and effective technique for using unsupervised pretraining toimprove seq2seq models. Our proposal is to initialize both encoder and decoder networks withpretrained weights of two language models. These pretrained weights are then fine-tuned with thelabeled corpus.We benchmark this method on machine translation for English !German and abstractive summa-rization on CNN and Daily Mail articles. Our main result is that a seq2seq model, with pretraining,exceeds the strongest possible baseline in both neural machine translation and phrase-based machinetranslation. Our model obtains an improvement of 1.3 BLEU from the previous best models on bothWMT’14 and WMT’15 English !German. On abstractive summarization, our method achievescompetitive results to the strongest baselines.We also perform ablation study to understand the behaviors of the pretraining method. Our studyconfirms that among many other possible choices of using a language model in seq2seq with atten-tion, the above proposal works best. Our study also shows that, for translation, the main gains comefrom the improved generalization due to the pretrained features, whereas for summarization thegains come from the improved optimization due to pretraining the encoder which has been unrolledfor hundreds of timesteps. On both tasks, our proposed method always improves generalization onthe test sets.Work done as an intern on Google Brain.1Under review as a conference paper at ICLR 20172 U NSUPERVISED PRETRAINING FOR SEQUENCE TO SEQUENCE LEARNINGIn the following section, we will describe our basic unsupervised pretraining procedure for sequenceto sequence learning and how to modify sequence to sequence learning to effectively make use ofthe pretrained weights. 
We then show several extensions to improve the basic model.2.1 B ASIC PROCEDUREGiven an input sequence x1;x2;:::;x mand an output sequence yn;yn1;:::;y 1, the objective of se-quence to sequence learning is to maximize the likelihood p(yn;yn1;:::;y 1jx1;x2;:::;x m).Common sequence to sequence learning methods decompose this objective asp(yn;yn1;:::;y 1jx1;x2;:::;x m) =Qnt=1p(ytjyt1;:::;y 1;x1;x2;:::;x m).In sequence to sequence learning, an RNN encoder is used to represent x1;:::;x mas a hidden vector,which is given to an RNN decoder to produce the output sequence. Our method is based on theobservation that without the encoder, the decoder essentially acts like a language model on y’s.Similarly, the encoder with an additional output layer also acts like a language model. Thus it isnatural to use trained languages models to initialize the encoder and decoder.Therefore, the basic procedure of our approach is to pretrain both the seq2seq encoder and decodernetworks with language models, which can be trained on large amounts of unlabeled text data. Thiscan be seen in Figure 1, where the parameters in the shaded boxes are pretrained. In the followingwe will describe the method in detail using machine translation as an example application.A B C <EOS> W X Y ZW X Y Z <EOS>EmbeddingFirst RNN LayerSoftmaxSecond RNN LayerFigure 1: Pretrained sequence to sequence model. The red parameters are the encoder and the blueparameters are the decoder. All parameters in a shaded box are pretrained, either from the sourceside (light red) or target side (light blue) language model. Otherwise, they are randomly initialized.First, two monolingual datasets are collected, one for the source side language, and one for thetarget side language. A language model ( LM) is trained on each dataset independently, giving anLM trained on the source side corpus and an LM trained on the target side corpus.After two language models are trained, a multi-layer seq2seq model Mis constructed. The embed-ding and first LSTM layers of the encoder and decoder are initialized with the pretrained weights.To be even more efficient, the softmax of the decoder is initialized with the softmax of the pretrainedtarget side LM.2.2 I MPROVING THE MODELWe also employ three additional methods to further improve the model above. The three meth-ods are: a) Monolingual language modeling losses, b) Residual connections and c) Attention overmultiple layers (see Figure 2).Monolingual language modeling losses: After the seq2seq model Mis initialized with the twoLMs, it is fine-tuned with a labeled dataset. To ensure that the model does not overfit the labeleddata, we regularize the parameters that were pretrained by continuing to train with the monolinguallanguage modeling losses. The seq2seq and language modeling losses are weighted equally.2Under review as a conference paper at ICLR 2017WX+(a)A B C <EOS>WAttention(b)Figure 2: Two improvements to the baseline model: (a) residual connection, and (b) attention overmultiple layers.Residual connections: As described, the input vector to the decoder softmax layer is a randomvector because the high level (non-first) layers of the LSTM are randomly initialized. This slowsdown training and introduces random gradients to the pretrained parameters, reducing the effective-ness of pretraining. 
To circumvent this issue, we use a residual connection from the output of thefirst LSTM layer directly to the input of the softmax (see Figure 2-a).Attention over multiple layers: In all our models, we use an attention mechanism (Bahdanauet al., 2015), where the model attends over both top and first layer (see Figure 2-b). More concretely,given a query vector qtfrom the decoder, encoder states from the first layer h11;:::;h1T, and encoderstates from the last layer hL1;:::;hLT, we compute the attention context vector ctas follows:i=exp(qthNi)PTj=1exp(qthNj)c1t=TXi=1ih1icNt=TXi=1ihNict= [c1t;cNt]Note that attention weights iare only computed once using the top level encoder states.We also experiment with passing the attention vector ctas input into the next timestep (Luong et al.,2015b). Instead of passing cinto the first LSTM layer, we pass it as input to the second LSTM layerby concatenating it with the output of the first LSTM layer.We use all three improvements in our experiments. However, in general we notice that the benefitsof the attention modifications are minor in comparison with the benefits of the additional languagemodeling objectives and residual connections.3 E XPERIMENTSIn the following section, we apply our approach to two important tasks in seq2seq learning: machinetranslation and abstractive summarization. On each task, we compare against the previous bestsystems. We also perform ablation experiments to understand the behavior of each component ofour method.3.1 M ACHINE TRANSLATIONDataset and Evaluation: For machine translation, we evaluate our method on the WMTEnglish!German task (Bojar et al., 2015). We used the WMT 14 training dataset, which is slightlysmaller than the WMT 15 dataset. Because the dataset has some noisy examples, we used a lan-guage detection system to filter the training examples. Sentences pairs where either the source wasnot English or the target was not German were thrown away. This resulted in around 4 milliontraining examples. Following Sennrich et al. (2015b), we use subword units (Sennrich et al., 2015a)3Under review as a conference paper at ICLR 2017with 89500 merge operations, giving a vocabulary size around 90000. The validation set is theconcatenated newstest2012 and newstest2013, and our test sets are newstest2014 and newstest2015.Evaluation on the validation set was with case-sensitive BLEU (Papineni et al., 2002) on tokenizedtext using multi-bleu.perl . Evaluation on the test sets was with case-sensitive BLEU ondetokenized text using mteval-v13a.pl . The monolingual training datasets are the News CrawlEnglish and German corpora, each of which has more than a billion tokens.Experimental settings: The language models were trained in the same fashion as (Jozefowiczet al., 2016) We used a 1 layer 4096 dimensional LSTM with the hidden state projected downto 1024 units (Sak et al., 2014) and trained for one week on 32 Tesla K40 GPUs. Our seq2seqmodel was a 3 layer model, where the second and third layers each have 1000 hidden units. Themonolingual objectives, residual connection, and the modified attention were all used. We used theAdam optimizer (Kingma & Ba, 2015) and train with asynchronous SGD on 16 GPUs for speed.We used a learning rate of 5e-5 which is multiplied by 0.8 every 50K steps after an initial 400Ksteps, gradient clipping with norm 5.0 (Pascanu et al., 2013), and dropout of 0.2 on non-recurrentconnections (Zaremba et al., 2014). We used early stopping on validation set perplexity. A beamsize of 10 was used for decoding. 
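Returning briefly to the attention over multiple layers described above: a single set of weights is computed from the top-layer encoder states and applied to both the first-layer and the top-layer states. A minimal NumPy sketch (our own variable names, assuming a dot-product score; this is not the authors' TensorFlow code):

```python
import numpy as np

def attention_over_layers(q_t, H_first, H_top):
    """Attention weights from the top-layer encoder states, reused for both layers.
    q_t: (d,) decoder query; H_first, H_top: (T, d) encoder states of the first
    and last layer; returns the concatenated context [c1_t; cN_t]."""
    scores = H_top.dot(q_t)                         # (T,) unnormalized scores
    scores = scores - scores.max()                  # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()   # attention weights alpha_i
    c1_t = alpha.dot(H_first)                       # context over first-layer states
    cN_t = alpha.dot(H_top)                         # context over top-layer states
    return np.concatenate([c1_t, cN_t])
```

The concatenated context is then consumed by the decoder as described above.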
Our ensemble is constructed with the 5 best performing modelson the validation set, which are trained with different hyperparameters.Results: Table 1 shows the results of our method in comparison with other baselines. Ourmethod achieves a new state-of-the-art for single model performance on both newstest2014 andnewstest2015, significantly outperforming the competitive semi-supervised backtranslation tech-nique (Sennrich et al., 2015b). Equally impressive is the fact that our best single model outperformsthe previous state of the art ensemble of 4 models. Our ensemble of 5 models matches or exceedsthe previous best ensemble of 12 models.BLEUSystem ensemble? newstest2014 newstest2015Phrase Based MT (Williams et al., 2016) - 21.9 23.7Supervised NMT (Jean et al., 2015) single - 22.4Edit Distance Transducer NMT (Stahlberg et al., 2016) single 21.7 24.1Edit Distance Transducer NMT (Stahlberg et al., 2016) ensemble 8 22.9 25.7Backtranslation (Sennrich et al., 2015b) single 22.7 25.7Backtranslation (Sennrich et al., 2015b) ensemble 4 23.8 26.5Backtranslation (Sennrich et al., 2015b) ensemble 12 24.7 27.6No pretraining single 21.3 24.3Pretrained seq2seq single 24.0 27.0Pretrained seq2seq ensemble 5 24.7 28.1Table 1: English!German performance on WMT test sets. Our pretrained model outperforms allother models. Note that the model without pretraining uses the LM objective.Ablation study: In order to better understand the effects of pretraining, we conducted an ablationstudy by modifying the pretraining scheme. Figure 3 shows the drop in validation BLEU of variousablations compared with the full model. The full model uses LMs trained with monolingual data toinitialize the encoder and decoder, in addition to the language modeling objective. In the following,we interpret the findings of the study. Note that some findings are specific to the translation task.Given the results from the ablation study, we can make the following observations:Pretraining the decoder is better than pretraining the encoder: Only pretraining the encoderleads to a 1.6 BLEU point drop while only pretraining the decoder leads to a 1.0 BLEUpoint drop.Pretrain as much as possible because the benefits compound: given the drops of no pre-training at all (2:0) and only pretraining the encoder ( 1:6), the additive estimate of thedrop of only pretraining the decoder side is 2:0(1:6) =0:4; however the actualdrop is1:0which is a much larger drop than the additive estimate.Pretraining the softmax is important: Pretraining only the embeddings and first LSTM layergives a large drop of 1.6 BLEU points.4Under review as a conference paper at ICLR 20172.01.51.00.50.0Difference/uni00A0in/uni00A0BLEU/uni00AD2.1Pretrain/uni00A0on/uni00A0parallel/uni00A0corpus/uni00AD2.0No/uni00A0pretraining/uni00AD2.0Only/uni00A0pretrain/uni00A0embeddings/uni00AD2.0No/uni00A0LM/uni00A0objective/uni00AD1.6Only/uni00A0pretrain/uni00A0encoder/uni00AD1.6Only/uni00A0pretrain/uni00A0embeddings/uni00A0&/uni00A0LSTM/uni00AD1.0Only/uni00A0pretrain/uni00A0decoder/uni00AD0.3Pretrain/uni00A0on/uni00A0WikipediaFigure 3: English!German ablation study measuring the difference in validation BLEU betweenvarious ablations and the full model. More negative is worse. 
The full model uses LMs trained withmonolingual data to initialize the encoder and decoder, plus the language modeling objective.The language modeling objective is a strong regularizer: The drop in BLEU points ofpretraining the entire model and not using the LM objective is as bad as using the LMobjective without pretraining.Pretraining on a lot of unlabeled data is essential for learning to extract powerful features:If the model is initialized with LMs that are pretrained on the source part and target part oftheparallel corpus, the drop in performance is as large as not pretraining at all. However,performance remains strong when pretrained on the large, non-news Wikipedia corpus.To understand the contributions of unsupervised pretraining vs. supervised training, we track theperformance of pretraining as a function of dataset size. For this, we trained a a model with andwithout pretraining on random subsets of the English !German corpus. Both models use the ad-ditional LM objective. The results are summarized in Figure 4. When a 100% of the labeled datais used, the gap between the pretrained and no pretrain model is 2.0 BLEU points. However, thatgap grows when less data is available. When trained on 20% of the labeled data, the gap becomes3.8 BLEU points. This demonstrates that the pretrained models degrade less as the labeled datasetbecomes smaller.20 40 60 80 100Percent/uni00A0of/uni00A0entire/uni00A0labeled/uni00A0dataset/uni00A0used/uni00A0for/uni00A0training1516171819202122BLEUPretrainNo/uni00A0pretrainFigure 4: Validation performance of pretraining vs. no pretraining when trained on a subset of theentire labeled dataset for English !German translation.5Under review as a conference paper at ICLR 20173.2 A BSTRACTIVE SUMMARIZATIONDataset and Evaluation: For a low-resource abstractive summarization task, we use theCNN/Daily Mail corpus from (Hermann et al., 2015). Following Nallapati et al. (2016), we modifythe data collection scripts to restore the bullet point summaries. The task is to predict the bulletpoint summaries from a news article. The dataset has fewer than 300K document-summary pairs.To compare against Nallapati et al. (2016), we used the anonymized corpus. However, for our abla-tion study, we used the non-anonymized corpus.1We evaluate our system using full length ROUGE(Lin, 2004). For the anonymized corpus in particular, we considered each highlight as a separatesentence following Nallapati et al. (2016). In this setting, we used the English Gigaword corpus(Napoles et al., 2012) as our larger, unlabeled “monolingual” corpus, although all data used in thistask is in English.Experimental settings: We use subword units (Sennrich et al., 2015a) with 31500 merges, result-ing in a vocabulary size of about 32000. We use up to the first 600 tokens of the document andpredict the entire summary. Only one language model is trained and it is used to initialize both theencoder and decoder, since the source and target languages are the same. However, the encoderand decoder are not tied. The LM is a one-layer LSTM of size 1024 trained in a similar fashion toJozefowicz et al. (2016). For the seq2seq model, we use the same settings as the machine translationexperiments. The only differences are that we use a 2 layer model with the second layer having1024 hidden units, and that the learning rate is multiplied by 0.8 every 30K steps after an initial100K steps.Results: Table 2 summarizes our results on the anonymized version of the corpus. 
Our pretrainedmodel is only able to match the previous baseline seq2seq of Nallapati et al. (2016). However, ourmodel is a unidirectional LSTM while they use a bidirectional LSTM. They also use a longer contextof 800 tokens, whereas we used a context of 600 tokens due to GPU memory issues. Furthermore,they use pretrained word2vec (Mikolov et al., 2013) vectors to initialize their word embeddings. Aswe show in our ablation study, just pretraining the embeddings itself gives a large improvement.System ROUGE-1 ROUGE-2 ROUGE-LSeq2seq + pretrained embeddings (Nallapati et al., 2016) 32.49 11.84 29.47+ temporal attention (Nallapati et al., 2016) 35.46 13.30 32.65Pretrained seq2seq 32.56 11.89 29.44Table 2: Results on the anonymized CNN/Daily Mail dataset.Ablation study: We performed an ablation study similar to the one performed on the machinetranslation model. The results are reported in Figure 5. Here we report the drops on ROUGE-1,ROUGE-2, and ROUGE-L on the non-anonymized validation set.Given the results from our ablation study, we can make the following observations:Pretraining improves optimization: in contrast with the machine translation model, it ismore beneficial to only pretrain the encoder than only the decoder of the summarizationmodel. One interpretation is that pretraining enables the gradient to flow much furtherback in time than randomly initialized weights. This may also explain why pretraining onthe parallel corpus is no worse than pretraining on a larger monolingual corpus.The language modeling objective is a strong regularizer: A model without the LM objectivehas a significant drop in ROUGE scores.Human evaluation: As ROUGE may not be able to capture the quality of summarization, wealso performed a small qualitative study to understand the human impression of the summariesproduced by different models. We took 200 random documents and compared the performance of1We encourage future researchers to use the non-anonymized version because it is a more realistic summa-rization setting with a larger vocabulary. Our numbers on the non-anonymized test set are 35:56ROUGE-1,14:60ROUGE-2, and 25:08ROUGE-L. We did not consider highlights as separate sentences.6Under review as a conference paper at ICLR 2017/uni00AD5/uni00AD4/uni00AD3/uni00AD2/uni00AD10Difference/uni00A0in/uni00A0ROUGENo/uni00A0pretraining Only/uni00A0pretrain/uni00A0decoder No/uni00A0LM/uni00A0objective Only/uni00A0pretrain/uni00A0embeddings Only/uni00A0pretrain/uni00A0embeddings/uni00A0&/uni00A0LSTM Only/uni00A0pretrain/uni00A0encoder Pretrain/uni00A0on/uni00A0parallel/uni00A0corpusROUGE/uni00AD1ROUGE/uni00AD2ROUGE/uni00ADLFigure 5: Summarization ablation study measuring the difference in validation ROUGE betweenvarious ablations and the full model. More negative is worse. The full model uses LMs trained withunlabeled data to initialize the encoder and decoder, plus the language modeling objective.a pretrained and non-pretrained system. The document, gold summary, and the two system outputswere presented to a human evaluator who was asked to rate each system output on a scale of 1-5with 5 being the best score. The system outputs were presented in random order and the evaluatordid not know the identity of either output. The evaluator noted if there were repetitive phrases orsentences in either system outputs. Unwanted repetition was also noticed by Nallapati et al. (2016).Table 3 and 4 show the results of the study. 
In both cases, the pretrained system outperforms thesystem without pretraining in a statistically significant manner. The better optimization enabled bypretraining improves the generated summaries and decreases unwanted repetition in the output.NP>PNP = P NP<P29 88 83Table 3: The count of how often the no pretrain system ( NP) achieves a higher, equal, and lowerscore than the pretrained system ( P) in the side-by-side study where the human evaluator gave eachsystem a score from 1-5. The sign statistical test gives a p-value of <0:0001 for rejecting the nullhypothesis that there is no difference in the score obtained by either system.No pretrainNo repeats RepeatsPretrainNo repeats 67 65Repeats 24 44Table 4: The count of how often the pretrain and no pretrain systems contain repeated phrases orsentences in their outputs in the side-by-side study. McNemar’s test gives a p-value of <0:0001for rejecting the null hypothesis that the two systems repeat the same proportion of times. Thepretrained system clearly repeats less than the system without pretraining.4 R ELATED WORKUnsupervised pretraining has been intensively studied in the past years, most notably is the workby Dahl et al. (2012) who found that pretraining with deep belief networks improved feedforwardacoustic models. More recent acoustic models have found pretraining unnecessary (Xiong et al.,7Under review as a conference paper at ICLR 20172016; Zhang et al., 2016; Chan et al., 2015), probably because the reconstruction objective of deepbelief networks is too easy. In contrast, we find that pretraining language models by next stepprediction significantly improves seq2seq on challenging real world datasets.Despite its appeal, unsupervised learning is rarely shown to improve supervised training. Dai & Le(2015) was amongst the rare studies which showed the benefits of pretraining in a semi-supervisedlearning setting. Their method is similar to our method except that they did not have a decodernetwork and thus could not apply to seq2seq learning. Similarly, Zhang & Zong (2016) found ituseful to add an additional task of sentence reordering of source-side monolingual data for neuralmachine translation. Various forms of transfer or multitask learning with seq2seq framework alsohave the flavors of our algorithm (Zoph et al., 2016; Luong et al., 2015a; Firat et al., 2016).Perhaps most closely related to our method is the work by Gulcehre et al. (2015), who combined alanguage model with an already trained seq2seq model by fine-tuning additional deep output layers.Empirically, their method produces small improvements over the supervised baseline. We suspectthat their method does not produce significant gains because (i) the models are trained independentlyof each other and are not fine-tuned (ii) the LM is combined with the seq2seq model after the lastlayer, wasting the benefit of the low level LM features, and (iii) only using the LM on the decoderside. Venugopalan et al. (2016) addressed (i) but still experienced minor improvements. Usingpretrained GloVe embedding vectors (Pennington et al., 2014) had more impact.Related to our approach in principle is the work by Chen et al. (2016) who proposed a two-term,theoretically motivated unsupervised objective for unpaired input-output samples. Though they didnot apply their method to seq2seq learning, their framework can be modified to do so. In that case,the first term pushes the output to be highly probable under some scoring model, and the secondterm ensures that the output depends on the input. 
In the seq2seq setting, we interpret the first termas a pretrained language model scoring the output sequence. In our work, we fold the pretrainedlanguage model into the decoder. We believe that using the pretrained language model only forscoring is less efficient that using all the pretrained weights. Our use of labeled examples satisfiesthe second term. These connections provide a theoretical grounding for our work.In our experiments, we benchmark our method on machine translation, where other unsupervisedmethods are shown to give promising results (Sennrich et al., 2015b; Cheng et al., 2016). In back-translation (Sennrich et al., 2015b), the trained model is used to decode unlabeled data to yield extralabeled data. One can argue that this method may not have a natural analogue to other tasks such assummarization. We note that their technique is complementary to ours, and may lead to additionalgains in machine translation. The method of using autoencoders in Cheng et al. (2016) is promising,though it can be argued that autoencoding is an easy objective and language modeling may force theunsupervised models to learn better features.5 C ONCLUSIONWe presented a novel unsupervised pretraining method to improve sequence to sequence learning.The method can aid in both generalization and optimization. Our scheme involves pretraining twolanguage models in the source and target domain, and initializing the embeddings, first LSTM layers,and softmax of a sequence to sequence model with the weights of the language models. Using ourmethod, we achieved state-of-the-art machine translation results on both WMT’14 and WMT’15English to German.A key advantage of this technique is that it is flexible and can be applied to a large variety of tasks,such as summarization, where it surpasses the supervised learning baseline.ACKNOWLEDGMENTSWe thank George Dahl, Andrew Dai, Laurent Dinh, Stephan Gouws, Geoffrey Hinton, Rafal Joze-fowicz, Pooya Khorrami, Phillip Louis, Ramesh Nallapati, Arvind Neelakantan, Xin Pan, Abi See,Rico Sennrich, Luke Vilnis, Yuan Yu and the Google Brain team for their help with the project.8Under review as a conference paper at ICLR 2017
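The conclusion above recaps the scheme: pretrain a language model on each side's monolingual corpus and transplant its embeddings, first LSTM layer, and softmax into the seq2seq model. As a concrete illustration of the next-step-prediction pretraining objective, here is a minimal PyTorch sketch; the sizes, optimizer settings, and class names are illustrative assumptions rather than the paper's actual setup (the paper's LMs are much larger 4096-unit projected LSTMs trained on billions of tokens).

```python
import torch
import torch.nn as nn

class WordLM(nn.Module):
    """Minimal next-step-prediction language model: embedding -> LSTM -> softmax.
    Dimensions here are placeholders, far smaller than the paper's LMs."""
    def __init__(self, vocab_size, emb_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, num_layers=1, batch_first=True)
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):               # tokens: (batch, time)
        h, _ = self.lstm(self.embed(tokens))
        return self.proj(h)                  # logits: (batch, time, vocab)

def lm_step(model, batch, optimizer, loss_fn):
    """One pretraining step: predict token t+1 from tokens up to t."""
    inputs, targets = batch[:, :-1], batch[:, 1:]
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), 5.0)  # clip norm 5.0, as in the paper
    optimizer.step()
    return loss.item()

vocab_size = 32000                                    # e.g. a subword vocabulary
model = WordLM(vocab_size)
opt = torch.optim.Adam(model.parameters(), lr=5e-5)   # illustrative learning rate
loss_fn = nn.CrossEntropyLoss()
fake_batch = torch.randint(0, vocab_size, (8, 50))    # stand-in for a monolingual minibatch
print(lm_step(model, fake_batch, opt, loss_fn))
```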
HyfU5MFSg
good paper with strong experiments
7: Good paper, accept
In this paper, the authors propose to pretrain the encoder/decoder of seq2seq models on a large amount of unlabeled data using an LM objective. They obtain improvements using this technique on machine translation and abstractive summarization. While the effectiveness of pretraining seq2seq models has been known among researchers and explored in a few papers (e.g. Zoph et al. 2016, Dai and Le 2015), I believe this is the first paper to pretrain using an LM for both the encoder and the decoder. The technique is simple, but the gains are large (e.g. +2.7 BLEU on NMT). In addition, the authors perform extensive ablation studies to analyze where the performance is coming from. Hence, I think this paper should be accepted.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
H1Gq5Q9el
ICLR.cc/2017/conference
2017
Unsupervised Pretraining for Sequence to Sequence Learning
["Prajit Ramachandran", "Peter J. Liu", "Quoc V. Le"]
This work presents a general unsupervised learning method to improve the accuracy of sequence to sequence (seq2seq) models. In our method, the weights of the encoder and decoder of a seq2seq model are initialized with the pretrained weights of two language models and then fine-tuned with labeled data. We apply this method to challenging benchmarks in machine translation and abstractive summarization and find that it significantly improves the subsequent supervised models. Our main result is that the pretraining accelerates training and improves generalization of seq2seq models, achieving state-of-the-art results on the WMT English->German task, surpassing a range of methods using both phrase-based machine translation and neural machine translation. Our method achieves an improvement of 1.3 BLEU from the previous best models on both WMT'14 and WMT'15 English->German. On summarization, our method beats the supervised learning baseline.
["Natural language processing", "Deep learning", "Semi-Supervised Learning", "Transfer Learning"]
ABSTRACTThis work presents a general unsupervised learning method to improve the accu-racy of sequence to sequence (seq2seq) models. In our method, the weights ofthe encoder and decoder of a seq2seq model are initialized with the pretrainedweights of two language models and then fine-tuned with labeled data. We ap-ply this method to challenging benchmarks in machine translation and abstractivesummarization and find that it significantly improves the subsequent supervisedmodels. Our main result is that the pretraining accelerates training and improvesgeneralization of seq2seq models, achieving state-of-the-art results on the WMTEnglish!German task, surpassing a range of methods using both phrase-basedmachine translation and neural machine translation. Our method achieves an im-provement of 1.3 BLEU from the previous best models on both WMT’14 andWMT’15 English!German. On summarization, our method beats the supervisedlearning baseline.1 I NTRODUCTIONSequence to sequence ( seq2seq ) models (Sutskever et al., 2014; Cho et al., 2014; Kalchbrenner& Blunsom, 2013; Allen, 1987; ̃Neco & Forcada, 1997) are extremely effective on a variety oftasks that require a mapping between a variable-length input sequence to a variable-length outputsequence. The main weakness of sequence to sequence models, and deep networks in general, liesin the fact that they can easily overfit when the amount of supervised training data is small.In this work, we propose a simple and effective technique for using unsupervised pretraining toimprove seq2seq models. Our proposal is to initialize both encoder and decoder networks withpretrained weights of two language models. These pretrained weights are then fine-tuned with thelabeled corpus.We benchmark this method on machine translation for English !German and abstractive summa-rization on CNN and Daily Mail articles. Our main result is that a seq2seq model, with pretraining,exceeds the strongest possible baseline in both neural machine translation and phrase-based machinetranslation. Our model obtains an improvement of 1.3 BLEU from the previous best models on bothWMT’14 and WMT’15 English !German. On abstractive summarization, our method achievescompetitive results to the strongest baselines.We also perform ablation study to understand the behaviors of the pretraining method. Our studyconfirms that among many other possible choices of using a language model in seq2seq with atten-tion, the above proposal works best. Our study also shows that, for translation, the main gains comefrom the improved generalization due to the pretrained features, whereas for summarization thegains come from the improved optimization due to pretraining the encoder which has been unrolledfor hundreds of timesteps. On both tasks, our proposed method always improves generalization onthe test sets.Work done as an intern on Google Brain.1Under review as a conference paper at ICLR 20172 U NSUPERVISED PRETRAINING FOR SEQUENCE TO SEQUENCE LEARNINGIn the following section, we will describe our basic unsupervised pretraining procedure for sequenceto sequence learning and how to modify sequence to sequence learning to effectively make use ofthe pretrained weights. 
We then show several extensions to improve the basic model.2.1 B ASIC PROCEDUREGiven an input sequence x1;x2;:::;x mand an output sequence yn;yn1;:::;y 1, the objective of se-quence to sequence learning is to maximize the likelihood p(yn;yn1;:::;y 1jx1;x2;:::;x m).Common sequence to sequence learning methods decompose this objective asp(yn;yn1;:::;y 1jx1;x2;:::;x m) =Qnt=1p(ytjyt1;:::;y 1;x1;x2;:::;x m).In sequence to sequence learning, an RNN encoder is used to represent x1;:::;x mas a hidden vector,which is given to an RNN decoder to produce the output sequence. Our method is based on theobservation that without the encoder, the decoder essentially acts like a language model on y’s.Similarly, the encoder with an additional output layer also acts like a language model. Thus it isnatural to use trained languages models to initialize the encoder and decoder.Therefore, the basic procedure of our approach is to pretrain both the seq2seq encoder and decodernetworks with language models, which can be trained on large amounts of unlabeled text data. Thiscan be seen in Figure 1, where the parameters in the shaded boxes are pretrained. In the followingwe will describe the method in detail using machine translation as an example application.A B C <EOS> W X Y ZW X Y Z <EOS>EmbeddingFirst RNN LayerSoftmaxSecond RNN LayerFigure 1: Pretrained sequence to sequence model. The red parameters are the encoder and the blueparameters are the decoder. All parameters in a shaded box are pretrained, either from the sourceside (light red) or target side (light blue) language model. Otherwise, they are randomly initialized.First, two monolingual datasets are collected, one for the source side language, and one for thetarget side language. A language model ( LM) is trained on each dataset independently, giving anLM trained on the source side corpus and an LM trained on the target side corpus.After two language models are trained, a multi-layer seq2seq model Mis constructed. The embed-ding and first LSTM layers of the encoder and decoder are initialized with the pretrained weights.To be even more efficient, the softmax of the decoder is initialized with the softmax of the pretrainedtarget side LM.2.2 I MPROVING THE MODELWe also employ three additional methods to further improve the model above. The three meth-ods are: a) Monolingual language modeling losses, b) Residual connections and c) Attention overmultiple layers (see Figure 2).Monolingual language modeling losses: After the seq2seq model Mis initialized with the twoLMs, it is fine-tuned with a labeled dataset. To ensure that the model does not overfit the labeleddata, we regularize the parameters that were pretrained by continuing to train with the monolinguallanguage modeling losses. The seq2seq and language modeling losses are weighted equally.2Under review as a conference paper at ICLR 2017WX+(a)A B C <EOS>WAttention(b)Figure 2: Two improvements to the baseline model: (a) residual connection, and (b) attention overmultiple layers.Residual connections: As described, the input vector to the decoder softmax layer is a randomvector because the high level (non-first) layers of the LSTM are randomly initialized. This slowsdown training and introduces random gradients to the pretrained parameters, reducing the effective-ness of pretraining. 
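Before the residual-connection fix for the issue just mentioned (described immediately after this sketch), here is a minimal sketch of the basic procedure from Section 2.1: copy the pretrained embedding and first LSTM layer of each language model into the encoder and decoder, and the target-side LM's softmax into the decoder output layer, leaving the upper layers randomly initialized. The module and attribute names (embed, lstm, proj) are assumptions matching the earlier WordLM sketch, not the authors' code.

```python
import torch.nn as nn

def copy_params(dst: nn.Module, src: nn.Module):
    """Overwrite dst's parameters with src's (shapes are assumed to match)."""
    dst.load_state_dict(src.state_dict())

class Seq2Seq(nn.Module):
    """Skeleton seq2seq model whose lowest layers mirror the two LMs."""
    def __init__(self, src_vocab, tgt_vocab, emb_dim=256, hidden_dim=512, extra_layers=2):
        super().__init__()
        # Encoder: embedding + first LSTM layer come from the source-side LM.
        self.enc_embed = nn.Embedding(src_vocab, emb_dim)
        self.enc_lstm1 = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.enc_upper = nn.LSTM(hidden_dim, hidden_dim, num_layers=extra_layers, batch_first=True)
        # Decoder: embedding + first LSTM layer + softmax come from the target-side LM.
        self.dec_embed = nn.Embedding(tgt_vocab, emb_dim)
        self.dec_lstm1 = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.dec_upper = nn.LSTM(hidden_dim, hidden_dim, num_layers=extra_layers, batch_first=True)
        self.dec_softmax = nn.Linear(hidden_dim, tgt_vocab)

def init_from_lms(model: Seq2Seq, src_lm, tgt_lm):
    """Pretrained initialization (the shaded boxes in Figure 1). src_lm and
    tgt_lm are assumed to expose .embed, .lstm and .proj submodules; all
    other seq2seq parameters keep their random initialization."""
    copy_params(model.enc_embed, src_lm.embed)
    copy_params(model.enc_lstm1, src_lm.lstm)
    copy_params(model.dec_embed, tgt_lm.embed)
    copy_params(model.dec_lstm1, tgt_lm.lstm)
    copy_params(model.dec_softmax, tgt_lm.proj)   # softmax weights from the target LM
    return model

# Usage sketch (sizes are placeholders):
#   src_lm, tgt_lm = WordLM(90000), WordLM(90000)
#   model = init_from_lms(Seq2Seq(90000, 90000), src_lm, tgt_lm)
# During fine-tuning, the paper keeps both monolingual LM objectives as
# regularizers, weighted equally with the seq2seq loss, i.e. conceptually:
#   total_loss = seq2seq_loss + src_lm_loss + tgt_lm_loss
```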
To circumvent this issue, we use a residual connection from the output of thefirst LSTM layer directly to the input of the softmax (see Figure 2-a).Attention over multiple layers: In all our models, we use an attention mechanism (Bahdanauet al., 2015), where the model attends over both top and first layer (see Figure 2-b). More concretely,given a query vector qtfrom the decoder, encoder states from the first layer h11;:::;h1T, and encoderstates from the last layer hL1;:::;hLT, we compute the attention context vector ctas follows:i=exp(qthNi)PTj=1exp(qthNj)c1t=TXi=1ih1icNt=TXi=1ihNict= [c1t;cNt]Note that attention weights iare only computed once using the top level encoder states.We also experiment with passing the attention vector ctas input into the next timestep (Luong et al.,2015b). Instead of passing cinto the first LSTM layer, we pass it as input to the second LSTM layerby concatenating it with the output of the first LSTM layer.We use all three improvements in our experiments. However, in general we notice that the benefitsof the attention modifications are minor in comparison with the benefits of the additional languagemodeling objectives and residual connections.3 E XPERIMENTSIn the following section, we apply our approach to two important tasks in seq2seq learning: machinetranslation and abstractive summarization. On each task, we compare against the previous bestsystems. We also perform ablation experiments to understand the behavior of each component ofour method.3.1 M ACHINE TRANSLATIONDataset and Evaluation: For machine translation, we evaluate our method on the WMTEnglish!German task (Bojar et al., 2015). We used the WMT 14 training dataset, which is slightlysmaller than the WMT 15 dataset. Because the dataset has some noisy examples, we used a lan-guage detection system to filter the training examples. Sentences pairs where either the source wasnot English or the target was not German were thrown away. This resulted in around 4 milliontraining examples. Following Sennrich et al. (2015b), we use subword units (Sennrich et al., 2015a)3Under review as a conference paper at ICLR 2017with 89500 merge operations, giving a vocabulary size around 90000. The validation set is theconcatenated newstest2012 and newstest2013, and our test sets are newstest2014 and newstest2015.Evaluation on the validation set was with case-sensitive BLEU (Papineni et al., 2002) on tokenizedtext using multi-bleu.perl . Evaluation on the test sets was with case-sensitive BLEU ondetokenized text using mteval-v13a.pl . The monolingual training datasets are the News CrawlEnglish and German corpora, each of which has more than a billion tokens.Experimental settings: The language models were trained in the same fashion as (Jozefowiczet al., 2016) We used a 1 layer 4096 dimensional LSTM with the hidden state projected downto 1024 units (Sak et al., 2014) and trained for one week on 32 Tesla K40 GPUs. Our seq2seqmodel was a 3 layer model, where the second and third layers each have 1000 hidden units. Themonolingual objectives, residual connection, and the modified attention were all used. We used theAdam optimizer (Kingma & Ba, 2015) and train with asynchronous SGD on 16 GPUs for speed.We used a learning rate of 5e-5 which is multiplied by 0.8 every 50K steps after an initial 400Ksteps, gradient clipping with norm 5.0 (Pascanu et al., 2013), and dropout of 0.2 on non-recurrentconnections (Zaremba et al., 2014). We used early stopping on validation set perplexity. A beamsize of 10 was used for decoding. 
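The attention equations in the extracted text above are garbled; written out, the weights are alpha_i = exp(q_t . h^N_i) / sum_j exp(q_t . h^N_j), computed once from the top-layer encoder states, and the context is c_t = [c^1_t ; c^N_t] with c^1_t = sum_i alpha_i h^1_i and c^N_t = sum_i alpha_i h^N_i. Below is a minimal PyTorch sketch of this dot-product form of attention over multiple layers; the function name and tensor shapes are our assumptions.

```python
import torch

def multi_layer_attention(query, enc_first, enc_top):
    """Attention over multiple layers, as described in Section 2.2.

    query:     (batch, hidden)      decoder query vector q_t
    enc_first: (batch, T, hidden)   first-layer encoder states h^1_{1..T}
    enc_top:   (batch, T, hidden)   top-layer encoder states  h^N_{1..T}

    The weights alpha are computed once from the top-layer states and reused
    to form contexts over both layers, which are then concatenated.
    """
    scores = torch.bmm(enc_top, query.unsqueeze(-1)).squeeze(-1)    # (batch, T)
    alpha = torch.softmax(scores, dim=-1)                           # (batch, T)
    c_first = torch.bmm(alpha.unsqueeze(1), enc_first).squeeze(1)   # (batch, hidden)
    c_top = torch.bmm(alpha.unsqueeze(1), enc_top).squeeze(1)       # (batch, hidden)
    return torch.cat([c_first, c_top], dim=-1)                      # (batch, 2*hidden)

# Shape check with dummy tensors:
q = torch.randn(4, 1000)
h1 = torch.randn(4, 20, 1000)
hN = torch.randn(4, 20, 1000)
print(multi_layer_attention(q, h1, hN).shape)   # torch.Size([4, 2000])
```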
Our ensemble is constructed with the 5 best performing modelson the validation set, which are trained with different hyperparameters.Results: Table 1 shows the results of our method in comparison with other baselines. Ourmethod achieves a new state-of-the-art for single model performance on both newstest2014 andnewstest2015, significantly outperforming the competitive semi-supervised backtranslation tech-nique (Sennrich et al., 2015b). Equally impressive is the fact that our best single model outperformsthe previous state of the art ensemble of 4 models. Our ensemble of 5 models matches or exceedsthe previous best ensemble of 12 models.BLEUSystem ensemble? newstest2014 newstest2015Phrase Based MT (Williams et al., 2016) - 21.9 23.7Supervised NMT (Jean et al., 2015) single - 22.4Edit Distance Transducer NMT (Stahlberg et al., 2016) single 21.7 24.1Edit Distance Transducer NMT (Stahlberg et al., 2016) ensemble 8 22.9 25.7Backtranslation (Sennrich et al., 2015b) single 22.7 25.7Backtranslation (Sennrich et al., 2015b) ensemble 4 23.8 26.5Backtranslation (Sennrich et al., 2015b) ensemble 12 24.7 27.6No pretraining single 21.3 24.3Pretrained seq2seq single 24.0 27.0Pretrained seq2seq ensemble 5 24.7 28.1Table 1: English!German performance on WMT test sets. Our pretrained model outperforms allother models. Note that the model without pretraining uses the LM objective.Ablation study: In order to better understand the effects of pretraining, we conducted an ablationstudy by modifying the pretraining scheme. Figure 3 shows the drop in validation BLEU of variousablations compared with the full model. The full model uses LMs trained with monolingual data toinitialize the encoder and decoder, in addition to the language modeling objective. In the following,we interpret the findings of the study. Note that some findings are specific to the translation task.Given the results from the ablation study, we can make the following observations:Pretraining the decoder is better than pretraining the encoder: Only pretraining the encoderleads to a 1.6 BLEU point drop while only pretraining the decoder leads to a 1.0 BLEUpoint drop.Pretrain as much as possible because the benefits compound: given the drops of no pre-training at all (2:0) and only pretraining the encoder ( 1:6), the additive estimate of thedrop of only pretraining the decoder side is 2:0(1:6) =0:4; however the actualdrop is1:0which is a much larger drop than the additive estimate.Pretraining the softmax is important: Pretraining only the embeddings and first LSTM layergives a large drop of 1.6 BLEU points.4Under review as a conference paper at ICLR 20172.01.51.00.50.0Difference/uni00A0in/uni00A0BLEU/uni00AD2.1Pretrain/uni00A0on/uni00A0parallel/uni00A0corpus/uni00AD2.0No/uni00A0pretraining/uni00AD2.0Only/uni00A0pretrain/uni00A0embeddings/uni00AD2.0No/uni00A0LM/uni00A0objective/uni00AD1.6Only/uni00A0pretrain/uni00A0encoder/uni00AD1.6Only/uni00A0pretrain/uni00A0embeddings/uni00A0&/uni00A0LSTM/uni00AD1.0Only/uni00A0pretrain/uni00A0decoder/uni00AD0.3Pretrain/uni00A0on/uni00A0WikipediaFigure 3: English!German ablation study measuring the difference in validation BLEU betweenvarious ablations and the full model. More negative is worse. 
The full model uses LMs trained withmonolingual data to initialize the encoder and decoder, plus the language modeling objective.The language modeling objective is a strong regularizer: The drop in BLEU points ofpretraining the entire model and not using the LM objective is as bad as using the LMobjective without pretraining.Pretraining on a lot of unlabeled data is essential for learning to extract powerful features:If the model is initialized with LMs that are pretrained on the source part and target part oftheparallel corpus, the drop in performance is as large as not pretraining at all. However,performance remains strong when pretrained on the large, non-news Wikipedia corpus.To understand the contributions of unsupervised pretraining vs. supervised training, we track theperformance of pretraining as a function of dataset size. For this, we trained a a model with andwithout pretraining on random subsets of the English !German corpus. Both models use the ad-ditional LM objective. The results are summarized in Figure 4. When a 100% of the labeled datais used, the gap between the pretrained and no pretrain model is 2.0 BLEU points. However, thatgap grows when less data is available. When trained on 20% of the labeled data, the gap becomes3.8 BLEU points. This demonstrates that the pretrained models degrade less as the labeled datasetbecomes smaller.20 40 60 80 100Percent/uni00A0of/uni00A0entire/uni00A0labeled/uni00A0dataset/uni00A0used/uni00A0for/uni00A0training1516171819202122BLEUPretrainNo/uni00A0pretrainFigure 4: Validation performance of pretraining vs. no pretraining when trained on a subset of theentire labeled dataset for English !German translation.5Under review as a conference paper at ICLR 20173.2 A BSTRACTIVE SUMMARIZATIONDataset and Evaluation: For a low-resource abstractive summarization task, we use theCNN/Daily Mail corpus from (Hermann et al., 2015). Following Nallapati et al. (2016), we modifythe data collection scripts to restore the bullet point summaries. The task is to predict the bulletpoint summaries from a news article. The dataset has fewer than 300K document-summary pairs.To compare against Nallapati et al. (2016), we used the anonymized corpus. However, for our abla-tion study, we used the non-anonymized corpus.1We evaluate our system using full length ROUGE(Lin, 2004). For the anonymized corpus in particular, we considered each highlight as a separatesentence following Nallapati et al. (2016). In this setting, we used the English Gigaword corpus(Napoles et al., 2012) as our larger, unlabeled “monolingual” corpus, although all data used in thistask is in English.Experimental settings: We use subword units (Sennrich et al., 2015a) with 31500 merges, result-ing in a vocabulary size of about 32000. We use up to the first 600 tokens of the document andpredict the entire summary. Only one language model is trained and it is used to initialize both theencoder and decoder, since the source and target languages are the same. However, the encoderand decoder are not tied. The LM is a one-layer LSTM of size 1024 trained in a similar fashion toJozefowicz et al. (2016). For the seq2seq model, we use the same settings as the machine translationexperiments. The only differences are that we use a 2 layer model with the second layer having1024 hidden units, and that the learning rate is multiplied by 0.8 every 30K steps after an initial100K steps.Results: Table 2 summarizes our results on the anonymized version of the corpus. 
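The summarization results above are reported in full-length ROUGE. As a reference point for what ROUGE-1 and ROUGE-2 measure, here is a from-scratch sketch of ROUGE-N F1 as n-gram overlap between a candidate and a single reference; the official scorer (Lin, 2004) adds stemming, ROUGE-L, and multi-reference handling, so this is only illustrative, and the whitespace tokenization is an assumption.

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_f1(candidate, reference, n=1):
    """Illustrative ROUGE-N F1: clipped n-gram overlap between one candidate
    summary and one reference summary."""
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

reference = "police arrest three suspects after downtown robbery".split()
candidate = "three suspects arrested after a downtown robbery".split()
print(rouge_n_f1(candidate, reference, n=1))   # unigram overlap F1
print(rouge_n_f1(candidate, reference, n=2))   # bigram overlap F1
```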
Our pretrainedmodel is only able to match the previous baseline seq2seq of Nallapati et al. (2016). However, ourmodel is a unidirectional LSTM while they use a bidirectional LSTM. They also use a longer contextof 800 tokens, whereas we used a context of 600 tokens due to GPU memory issues. Furthermore,they use pretrained word2vec (Mikolov et al., 2013) vectors to initialize their word embeddings. Aswe show in our ablation study, just pretraining the embeddings itself gives a large improvement.System ROUGE-1 ROUGE-2 ROUGE-LSeq2seq + pretrained embeddings (Nallapati et al., 2016) 32.49 11.84 29.47+ temporal attention (Nallapati et al., 2016) 35.46 13.30 32.65Pretrained seq2seq 32.56 11.89 29.44Table 2: Results on the anonymized CNN/Daily Mail dataset.Ablation study: We performed an ablation study similar to the one performed on the machinetranslation model. The results are reported in Figure 5. Here we report the drops on ROUGE-1,ROUGE-2, and ROUGE-L on the non-anonymized validation set.Given the results from our ablation study, we can make the following observations:Pretraining improves optimization: in contrast with the machine translation model, it ismore beneficial to only pretrain the encoder than only the decoder of the summarizationmodel. One interpretation is that pretraining enables the gradient to flow much furtherback in time than randomly initialized weights. This may also explain why pretraining onthe parallel corpus is no worse than pretraining on a larger monolingual corpus.The language modeling objective is a strong regularizer: A model without the LM objectivehas a significant drop in ROUGE scores.Human evaluation: As ROUGE may not be able to capture the quality of summarization, wealso performed a small qualitative study to understand the human impression of the summariesproduced by different models. We took 200 random documents and compared the performance of1We encourage future researchers to use the non-anonymized version because it is a more realistic summa-rization setting with a larger vocabulary. Our numbers on the non-anonymized test set are 35:56ROUGE-1,14:60ROUGE-2, and 25:08ROUGE-L. We did not consider highlights as separate sentences.6Under review as a conference paper at ICLR 2017/uni00AD5/uni00AD4/uni00AD3/uni00AD2/uni00AD10Difference/uni00A0in/uni00A0ROUGENo/uni00A0pretraining Only/uni00A0pretrain/uni00A0decoder No/uni00A0LM/uni00A0objective Only/uni00A0pretrain/uni00A0embeddings Only/uni00A0pretrain/uni00A0embeddings/uni00A0&/uni00A0LSTM Only/uni00A0pretrain/uni00A0encoder Pretrain/uni00A0on/uni00A0parallel/uni00A0corpusROUGE/uni00AD1ROUGE/uni00AD2ROUGE/uni00ADLFigure 5: Summarization ablation study measuring the difference in validation ROUGE betweenvarious ablations and the full model. More negative is worse. The full model uses LMs trained withunlabeled data to initialize the encoder and decoder, plus the language modeling objective.a pretrained and non-pretrained system. The document, gold summary, and the two system outputswere presented to a human evaluator who was asked to rate each system output on a scale of 1-5with 5 being the best score. The system outputs were presented in random order and the evaluatordid not know the identity of either output. The evaluator noted if there were repetitive phrases orsentences in either system outputs. Unwanted repetition was also noticed by Nallapati et al. (2016).Table 3 and 4 show the results of the study. 
In both cases, the pretrained system outperforms thesystem without pretraining in a statistically significant manner. The better optimization enabled bypretraining improves the generated summaries and decreases unwanted repetition in the output.NP>PNP = P NP<P29 88 83Table 3: The count of how often the no pretrain system ( NP) achieves a higher, equal, and lowerscore than the pretrained system ( P) in the side-by-side study where the human evaluator gave eachsystem a score from 1-5. The sign statistical test gives a p-value of <0:0001 for rejecting the nullhypothesis that there is no difference in the score obtained by either system.No pretrainNo repeats RepeatsPretrainNo repeats 67 65Repeats 24 44Table 4: The count of how often the pretrain and no pretrain systems contain repeated phrases orsentences in their outputs in the side-by-side study. McNemar’s test gives a p-value of <0:0001for rejecting the null hypothesis that the two systems repeat the same proportion of times. Thepretrained system clearly repeats less than the system without pretraining.4 R ELATED WORKUnsupervised pretraining has been intensively studied in the past years, most notably is the workby Dahl et al. (2012) who found that pretraining with deep belief networks improved feedforwardacoustic models. More recent acoustic models have found pretraining unnecessary (Xiong et al.,7Under review as a conference paper at ICLR 20172016; Zhang et al., 2016; Chan et al., 2015), probably because the reconstruction objective of deepbelief networks is too easy. In contrast, we find that pretraining language models by next stepprediction significantly improves seq2seq on challenging real world datasets.Despite its appeal, unsupervised learning is rarely shown to improve supervised training. Dai & Le(2015) was amongst the rare studies which showed the benefits of pretraining in a semi-supervisedlearning setting. Their method is similar to our method except that they did not have a decodernetwork and thus could not apply to seq2seq learning. Similarly, Zhang & Zong (2016) found ituseful to add an additional task of sentence reordering of source-side monolingual data for neuralmachine translation. Various forms of transfer or multitask learning with seq2seq framework alsohave the flavors of our algorithm (Zoph et al., 2016; Luong et al., 2015a; Firat et al., 2016).Perhaps most closely related to our method is the work by Gulcehre et al. (2015), who combined alanguage model with an already trained seq2seq model by fine-tuning additional deep output layers.Empirically, their method produces small improvements over the supervised baseline. We suspectthat their method does not produce significant gains because (i) the models are trained independentlyof each other and are not fine-tuned (ii) the LM is combined with the seq2seq model after the lastlayer, wasting the benefit of the low level LM features, and (iii) only using the LM on the decoderside. Venugopalan et al. (2016) addressed (i) but still experienced minor improvements. Usingpretrained GloVe embedding vectors (Pennington et al., 2014) had more impact.Related to our approach in principle is the work by Chen et al. (2016) who proposed a two-term,theoretically motivated unsupervised objective for unpaired input-output samples. Though they didnot apply their method to seq2seq learning, their framework can be modified to do so. In that case,the first term pushes the output to be highly probable under some scoring model, and the secondterm ensures that the output depends on the input. 
In the seq2seq setting, we interpret the first termas a pretrained language model scoring the output sequence. In our work, we fold the pretrainedlanguage model into the decoder. We believe that using the pretrained language model only forscoring is less efficient that using all the pretrained weights. Our use of labeled examples satisfiesthe second term. These connections provide a theoretical grounding for our work.In our experiments, we benchmark our method on machine translation, where other unsupervisedmethods are shown to give promising results (Sennrich et al., 2015b; Cheng et al., 2016). In back-translation (Sennrich et al., 2015b), the trained model is used to decode unlabeled data to yield extralabeled data. One can argue that this method may not have a natural analogue to other tasks such assummarization. We note that their technique is complementary to ours, and may lead to additionalgains in machine translation. The method of using autoencoders in Cheng et al. (2016) is promising,though it can be argued that autoencoding is an easy objective and language modeling may force theunsupervised models to learn better features.5 C ONCLUSIONWe presented a novel unsupervised pretraining method to improve sequence to sequence learning.The method can aid in both generalization and optimization. Our scheme involves pretraining twolanguage models in the source and target domain, and initializing the embeddings, first LSTM layers,and softmax of a sequence to sequence model with the weights of the language models. Using ourmethod, we achieved state-of-the-art machine translation results on both WMT’14 and WMT’15English to German.A key advantage of this technique is that it is flexible and can be applied to a large variety of tasks,such as summarization, where it surpasses the supervised learning baseline.ACKNOWLEDGMENTSWe thank George Dahl, Andrew Dai, Laurent Dinh, Stephan Gouws, Geoffrey Hinton, Rafal Joze-fowicz, Pooya Khorrami, Phillip Louis, Ramesh Nallapati, Arvind Neelakantan, Xin Pan, Abi See,Rico Sennrich, Luke Vilnis, Yuan Yu and the Google Brain team for their help with the project.8Under review as a conference paper at ICLR 2017
r1L2IyIVe
review
6: Marginally above acceptance threshold
The authors propose the use of layer-wise, language-model-like pretraining for encoder-decoder models. This makes it possible to leverage separate source and target corpora (in an unsupervised manner) without needing large amounts of parallel training data. The idea is in principle fairly simple and relies on initially optimising both the encoder and the decoder as LSTMs tasked with language modelling. The ideas are not new, and the paper is more of a successful compilation of several approaches that have been around for some time. The experimental validation, though, offers some interesting insights into the importance of initialisation and into the effectiveness of different initialisation approaches in the encoder-decoder setting. The regulariser you propose to use on page 3 looks like a typical multi-task objective function, especially since it is used in an alternating manner; it would be interesting to see whether similar performance might have been obtained by starting from this objective with random initialisation. You should probably give credit to the encoder-decoder-like RNN models published in the 1990s. Minor: Pg. 2, Sec 2.1, 2nd paragraph: "can be different sizes" -> "can be of different sizes".
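The reviewer's reading of the regularizer as a multi-task objective applied in an alternating manner could be sketched as below. Note that the paper itself describes an equally weighted sum of the seq2seq and monolingual LM losses, so the alternating schedule here is a hypothetical variant, and every function and argument name is ours.

```python
import itertools

def alternating_finetune(model, parallel_batches, src_mono_batches, tgt_mono_batches,
                         seq2seq_loss, src_lm_loss, tgt_lm_loss, optimizer, steps=1000):
    """Hypothetical multi-task schedule: alternate minibatch updates between the
    supervised seq2seq objective and the two monolingual LM objectives.
    Each *_batches argument is an iterator of minibatches, and each loss
    function maps (model, batch) to a scalar loss tensor."""
    tasks = itertools.cycle([
        (parallel_batches, seq2seq_loss),
        (src_mono_batches, src_lm_loss),
        (tgt_mono_batches, tgt_lm_loss),
    ])
    for _ in range(steps):
        batches, loss_fn = next(tasks)
        loss = loss_fn(model, next(batches))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```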
4: The reviewer is confident but not absolutely certain that the evaluation is correct
H1Gq5Q9el
ICLR.cc/2017/conference
2017
Unsupervised Pretraining for Sequence to Sequence Learning
["Prajit Ramachandran", "Peter J. Liu", "Quoc V. Le"]
This work presents a general unsupervised learning method to improve the accuracy of sequence to sequence (seq2seq) models. In our method, the weights of the encoder and decoder of a seq2seq model are initialized with the pretrained weights of two language models and then fine-tuned with labeled data. We apply this method to challenging benchmarks in machine translation and abstractive summarization and find that it significantly improves the subsequent supervised models. Our main result is that the pretraining accelerates training and improves generalization of seq2seq models, achieving state-of-the-art results on the WMT English->German task, surpassing a range of methods using both phrase-based machine translation and neural machine translation. Our method achieves an improvement of 1.3 BLEU from the previous best models on both WMT'14 and WMT'15 English->German. On summarization, our method beats the supervised learning baseline.
["Natural language processing", "Deep learning", "Semi-Supervised Learning", "Transfer Learning"]
ABSTRACTThis work presents a general unsupervised learning method to improve the accu-racy of sequence to sequence (seq2seq) models. In our method, the weights ofthe encoder and decoder of a seq2seq model are initialized with the pretrainedweights of two language models and then fine-tuned with labeled data. We ap-ply this method to challenging benchmarks in machine translation and abstractivesummarization and find that it significantly improves the subsequent supervisedmodels. Our main result is that the pretraining accelerates training and improvesgeneralization of seq2seq models, achieving state-of-the-art results on the WMTEnglish!German task, surpassing a range of methods using both phrase-basedmachine translation and neural machine translation. Our method achieves an im-provement of 1.3 BLEU from the previous best models on both WMT’14 andWMT’15 English!German. On summarization, our method beats the supervisedlearning baseline.1 I NTRODUCTIONSequence to sequence ( seq2seq ) models (Sutskever et al., 2014; Cho et al., 2014; Kalchbrenner& Blunsom, 2013; Allen, 1987; ̃Neco & Forcada, 1997) are extremely effective on a variety oftasks that require a mapping between a variable-length input sequence to a variable-length outputsequence. The main weakness of sequence to sequence models, and deep networks in general, liesin the fact that they can easily overfit when the amount of supervised training data is small.In this work, we propose a simple and effective technique for using unsupervised pretraining toimprove seq2seq models. Our proposal is to initialize both encoder and decoder networks withpretrained weights of two language models. These pretrained weights are then fine-tuned with thelabeled corpus.We benchmark this method on machine translation for English !German and abstractive summa-rization on CNN and Daily Mail articles. Our main result is that a seq2seq model, with pretraining,exceeds the strongest possible baseline in both neural machine translation and phrase-based machinetranslation. Our model obtains an improvement of 1.3 BLEU from the previous best models on bothWMT’14 and WMT’15 English !German. On abstractive summarization, our method achievescompetitive results to the strongest baselines.We also perform ablation study to understand the behaviors of the pretraining method. Our studyconfirms that among many other possible choices of using a language model in seq2seq with atten-tion, the above proposal works best. Our study also shows that, for translation, the main gains comefrom the improved generalization due to the pretrained features, whereas for summarization thegains come from the improved optimization due to pretraining the encoder which has been unrolledfor hundreds of timesteps. On both tasks, our proposed method always improves generalization onthe test sets.Work done as an intern on Google Brain.1Under review as a conference paper at ICLR 20172 U NSUPERVISED PRETRAINING FOR SEQUENCE TO SEQUENCE LEARNINGIn the following section, we will describe our basic unsupervised pretraining procedure for sequenceto sequence learning and how to modify sequence to sequence learning to effectively make use ofthe pretrained weights. 
We then show several extensions to improve the basic model.2.1 B ASIC PROCEDUREGiven an input sequence x1;x2;:::;x mand an output sequence yn;yn1;:::;y 1, the objective of se-quence to sequence learning is to maximize the likelihood p(yn;yn1;:::;y 1jx1;x2;:::;x m).Common sequence to sequence learning methods decompose this objective asp(yn;yn1;:::;y 1jx1;x2;:::;x m) =Qnt=1p(ytjyt1;:::;y 1;x1;x2;:::;x m).In sequence to sequence learning, an RNN encoder is used to represent x1;:::;x mas a hidden vector,which is given to an RNN decoder to produce the output sequence. Our method is based on theobservation that without the encoder, the decoder essentially acts like a language model on y’s.Similarly, the encoder with an additional output layer also acts like a language model. Thus it isnatural to use trained languages models to initialize the encoder and decoder.Therefore, the basic procedure of our approach is to pretrain both the seq2seq encoder and decodernetworks with language models, which can be trained on large amounts of unlabeled text data. Thiscan be seen in Figure 1, where the parameters in the shaded boxes are pretrained. In the followingwe will describe the method in detail using machine translation as an example application.A B C <EOS> W X Y ZW X Y Z <EOS>EmbeddingFirst RNN LayerSoftmaxSecond RNN LayerFigure 1: Pretrained sequence to sequence model. The red parameters are the encoder and the blueparameters are the decoder. All parameters in a shaded box are pretrained, either from the sourceside (light red) or target side (light blue) language model. Otherwise, they are randomly initialized.First, two monolingual datasets are collected, one for the source side language, and one for thetarget side language. A language model ( LM) is trained on each dataset independently, giving anLM trained on the source side corpus and an LM trained on the target side corpus.After two language models are trained, a multi-layer seq2seq model Mis constructed. The embed-ding and first LSTM layers of the encoder and decoder are initialized with the pretrained weights.To be even more efficient, the softmax of the decoder is initialized with the softmax of the pretrainedtarget side LM.2.2 I MPROVING THE MODELWe also employ three additional methods to further improve the model above. The three meth-ods are: a) Monolingual language modeling losses, b) Residual connections and c) Attention overmultiple layers (see Figure 2).Monolingual language modeling losses: After the seq2seq model Mis initialized with the twoLMs, it is fine-tuned with a labeled dataset. To ensure that the model does not overfit the labeleddata, we regularize the parameters that were pretrained by continuing to train with the monolinguallanguage modeling losses. The seq2seq and language modeling losses are weighted equally.2Under review as a conference paper at ICLR 2017WX+(a)A B C <EOS>WAttention(b)Figure 2: Two improvements to the baseline model: (a) residual connection, and (b) attention overmultiple layers.Residual connections: As described, the input vector to the decoder softmax layer is a randomvector because the high level (non-first) layers of the LSTM are randomly initialized. This slowsdown training and introduces random gradients to the pretrained parameters, reducing the effective-ness of pretraining. 
To circumvent this issue, we use a residual connection from the output of thefirst LSTM layer directly to the input of the softmax (see Figure 2-a).Attention over multiple layers: In all our models, we use an attention mechanism (Bahdanauet al., 2015), where the model attends over both top and first layer (see Figure 2-b). More concretely,given a query vector qtfrom the decoder, encoder states from the first layer h11;:::;h1T, and encoderstates from the last layer hL1;:::;hLT, we compute the attention context vector ctas follows:i=exp(qthNi)PTj=1exp(qthNj)c1t=TXi=1ih1icNt=TXi=1ihNict= [c1t;cNt]Note that attention weights iare only computed once using the top level encoder states.We also experiment with passing the attention vector ctas input into the next timestep (Luong et al.,2015b). Instead of passing cinto the first LSTM layer, we pass it as input to the second LSTM layerby concatenating it with the output of the first LSTM layer.We use all three improvements in our experiments. However, in general we notice that the benefitsof the attention modifications are minor in comparison with the benefits of the additional languagemodeling objectives and residual connections.3 E XPERIMENTSIn the following section, we apply our approach to two important tasks in seq2seq learning: machinetranslation and abstractive summarization. On each task, we compare against the previous bestsystems. We also perform ablation experiments to understand the behavior of each component ofour method.3.1 M ACHINE TRANSLATIONDataset and Evaluation: For machine translation, we evaluate our method on the WMTEnglish!German task (Bojar et al., 2015). We used the WMT 14 training dataset, which is slightlysmaller than the WMT 15 dataset. Because the dataset has some noisy examples, we used a lan-guage detection system to filter the training examples. Sentences pairs where either the source wasnot English or the target was not German were thrown away. This resulted in around 4 milliontraining examples. Following Sennrich et al. (2015b), we use subword units (Sennrich et al., 2015a)3Under review as a conference paper at ICLR 2017with 89500 merge operations, giving a vocabulary size around 90000. The validation set is theconcatenated newstest2012 and newstest2013, and our test sets are newstest2014 and newstest2015.Evaluation on the validation set was with case-sensitive BLEU (Papineni et al., 2002) on tokenizedtext using multi-bleu.perl . Evaluation on the test sets was with case-sensitive BLEU ondetokenized text using mteval-v13a.pl . The monolingual training datasets are the News CrawlEnglish and German corpora, each of which has more than a billion tokens.Experimental settings: The language models were trained in the same fashion as (Jozefowiczet al., 2016) We used a 1 layer 4096 dimensional LSTM with the hidden state projected downto 1024 units (Sak et al., 2014) and trained for one week on 32 Tesla K40 GPUs. Our seq2seqmodel was a 3 layer model, where the second and third layers each have 1000 hidden units. Themonolingual objectives, residual connection, and the modified attention were all used. We used theAdam optimizer (Kingma & Ba, 2015) and train with asynchronous SGD on 16 GPUs for speed.We used a learning rate of 5e-5 which is multiplied by 0.8 every 50K steps after an initial 400Ksteps, gradient clipping with norm 5.0 (Pascanu et al., 2013), and dropout of 0.2 on non-recurrentconnections (Zaremba et al., 2014). We used early stopping on validation set perplexity. A beamsize of 10 was used for decoding. 
Our ensemble is constructed with the 5 best performing modelson the validation set, which are trained with different hyperparameters.Results: Table 1 shows the results of our method in comparison with other baselines. Ourmethod achieves a new state-of-the-art for single model performance on both newstest2014 andnewstest2015, significantly outperforming the competitive semi-supervised backtranslation tech-nique (Sennrich et al., 2015b). Equally impressive is the fact that our best single model outperformsthe previous state of the art ensemble of 4 models. Our ensemble of 5 models matches or exceedsthe previous best ensemble of 12 models.BLEUSystem ensemble? newstest2014 newstest2015Phrase Based MT (Williams et al., 2016) - 21.9 23.7Supervised NMT (Jean et al., 2015) single - 22.4Edit Distance Transducer NMT (Stahlberg et al., 2016) single 21.7 24.1Edit Distance Transducer NMT (Stahlberg et al., 2016) ensemble 8 22.9 25.7Backtranslation (Sennrich et al., 2015b) single 22.7 25.7Backtranslation (Sennrich et al., 2015b) ensemble 4 23.8 26.5Backtranslation (Sennrich et al., 2015b) ensemble 12 24.7 27.6No pretraining single 21.3 24.3Pretrained seq2seq single 24.0 27.0Pretrained seq2seq ensemble 5 24.7 28.1Table 1: English!German performance on WMT test sets. Our pretrained model outperforms allother models. Note that the model without pretraining uses the LM objective.Ablation study: In order to better understand the effects of pretraining, we conducted an ablationstudy by modifying the pretraining scheme. Figure 3 shows the drop in validation BLEU of variousablations compared with the full model. The full model uses LMs trained with monolingual data toinitialize the encoder and decoder, in addition to the language modeling objective. In the following,we interpret the findings of the study. Note that some findings are specific to the translation task.Given the results from the ablation study, we can make the following observations:Pretraining the decoder is better than pretraining the encoder: Only pretraining the encoderleads to a 1.6 BLEU point drop while only pretraining the decoder leads to a 1.0 BLEUpoint drop.Pretrain as much as possible because the benefits compound: given the drops of no pre-training at all (2:0) and only pretraining the encoder ( 1:6), the additive estimate of thedrop of only pretraining the decoder side is 2:0(1:6) =0:4; however the actualdrop is1:0which is a much larger drop than the additive estimate.Pretraining the softmax is important: Pretraining only the embeddings and first LSTM layergives a large drop of 1.6 BLEU points.4Under review as a conference paper at ICLR 20172.01.51.00.50.0Difference/uni00A0in/uni00A0BLEU/uni00AD2.1Pretrain/uni00A0on/uni00A0parallel/uni00A0corpus/uni00AD2.0No/uni00A0pretraining/uni00AD2.0Only/uni00A0pretrain/uni00A0embeddings/uni00AD2.0No/uni00A0LM/uni00A0objective/uni00AD1.6Only/uni00A0pretrain/uni00A0encoder/uni00AD1.6Only/uni00A0pretrain/uni00A0embeddings/uni00A0&/uni00A0LSTM/uni00AD1.0Only/uni00A0pretrain/uni00A0decoder/uni00AD0.3Pretrain/uni00A0on/uni00A0WikipediaFigure 3: English!German ablation study measuring the difference in validation BLEU betweenvarious ablations and the full model. More negative is worse. 
The full model uses LMs trained withmonolingual data to initialize the encoder and decoder, plus the language modeling objective.The language modeling objective is a strong regularizer: The drop in BLEU points ofpretraining the entire model and not using the LM objective is as bad as using the LMobjective without pretraining.Pretraining on a lot of unlabeled data is essential for learning to extract powerful features:If the model is initialized with LMs that are pretrained on the source part and target part oftheparallel corpus, the drop in performance is as large as not pretraining at all. However,performance remains strong when pretrained on the large, non-news Wikipedia corpus.To understand the contributions of unsupervised pretraining vs. supervised training, we track theperformance of pretraining as a function of dataset size. For this, we trained a a model with andwithout pretraining on random subsets of the English !German corpus. Both models use the ad-ditional LM objective. The results are summarized in Figure 4. When a 100% of the labeled datais used, the gap between the pretrained and no pretrain model is 2.0 BLEU points. However, thatgap grows when less data is available. When trained on 20% of the labeled data, the gap becomes3.8 BLEU points. This demonstrates that the pretrained models degrade less as the labeled datasetbecomes smaller.20 40 60 80 100Percent/uni00A0of/uni00A0entire/uni00A0labeled/uni00A0dataset/uni00A0used/uni00A0for/uni00A0training1516171819202122BLEUPretrainNo/uni00A0pretrainFigure 4: Validation performance of pretraining vs. no pretraining when trained on a subset of theentire labeled dataset for English !German translation.5Under review as a conference paper at ICLR 20173.2 A BSTRACTIVE SUMMARIZATIONDataset and Evaluation: For a low-resource abstractive summarization task, we use theCNN/Daily Mail corpus from (Hermann et al., 2015). Following Nallapati et al. (2016), we modifythe data collection scripts to restore the bullet point summaries. The task is to predict the bulletpoint summaries from a news article. The dataset has fewer than 300K document-summary pairs.To compare against Nallapati et al. (2016), we used the anonymized corpus. However, for our abla-tion study, we used the non-anonymized corpus.1We evaluate our system using full length ROUGE(Lin, 2004). For the anonymized corpus in particular, we considered each highlight as a separatesentence following Nallapati et al. (2016). In this setting, we used the English Gigaword corpus(Napoles et al., 2012) as our larger, unlabeled “monolingual” corpus, although all data used in thistask is in English.Experimental settings: We use subword units (Sennrich et al., 2015a) with 31500 merges, result-ing in a vocabulary size of about 32000. We use up to the first 600 tokens of the document andpredict the entire summary. Only one language model is trained and it is used to initialize both theencoder and decoder, since the source and target languages are the same. However, the encoderand decoder are not tied. The LM is a one-layer LSTM of size 1024 trained in a similar fashion toJozefowicz et al. (2016). For the seq2seq model, we use the same settings as the machine translationexperiments. The only differences are that we use a 2 layer model with the second layer having1024 hidden units, and that the learning rate is multiplied by 0.8 every 30K steps after an initial100K steps.Results: Table 2 summarizes our results on the anonymized version of the corpus. 
Our pretrainedmodel is only able to match the previous baseline seq2seq of Nallapati et al. (2016). However, ourmodel is a unidirectional LSTM while they use a bidirectional LSTM. They also use a longer contextof 800 tokens, whereas we used a context of 600 tokens due to GPU memory issues. Furthermore,they use pretrained word2vec (Mikolov et al., 2013) vectors to initialize their word embeddings. Aswe show in our ablation study, just pretraining the embeddings itself gives a large improvement.System ROUGE-1 ROUGE-2 ROUGE-LSeq2seq + pretrained embeddings (Nallapati et al., 2016) 32.49 11.84 29.47+ temporal attention (Nallapati et al., 2016) 35.46 13.30 32.65Pretrained seq2seq 32.56 11.89 29.44Table 2: Results on the anonymized CNN/Daily Mail dataset.Ablation study: We performed an ablation study similar to the one performed on the machinetranslation model. The results are reported in Figure 5. Here we report the drops on ROUGE-1,ROUGE-2, and ROUGE-L on the non-anonymized validation set.Given the results from our ablation study, we can make the following observations:Pretraining improves optimization: in contrast with the machine translation model, it ismore beneficial to only pretrain the encoder than only the decoder of the summarizationmodel. One interpretation is that pretraining enables the gradient to flow much furtherback in time than randomly initialized weights. This may also explain why pretraining onthe parallel corpus is no worse than pretraining on a larger monolingual corpus.The language modeling objective is a strong regularizer: A model without the LM objectivehas a significant drop in ROUGE scores.Human evaluation: As ROUGE may not be able to capture the quality of summarization, wealso performed a small qualitative study to understand the human impression of the summariesproduced by different models. We took 200 random documents and compared the performance of1We encourage future researchers to use the non-anonymized version because it is a more realistic summa-rization setting with a larger vocabulary. Our numbers on the non-anonymized test set are 35:56ROUGE-1,14:60ROUGE-2, and 25:08ROUGE-L. We did not consider highlights as separate sentences.6Under review as a conference paper at ICLR 2017/uni00AD5/uni00AD4/uni00AD3/uni00AD2/uni00AD10Difference/uni00A0in/uni00A0ROUGENo/uni00A0pretraining Only/uni00A0pretrain/uni00A0decoder No/uni00A0LM/uni00A0objective Only/uni00A0pretrain/uni00A0embeddings Only/uni00A0pretrain/uni00A0embeddings/uni00A0&/uni00A0LSTM Only/uni00A0pretrain/uni00A0encoder Pretrain/uni00A0on/uni00A0parallel/uni00A0corpusROUGE/uni00AD1ROUGE/uni00AD2ROUGE/uni00ADLFigure 5: Summarization ablation study measuring the difference in validation ROUGE betweenvarious ablations and the full model. More negative is worse. The full model uses LMs trained withunlabeled data to initialize the encoder and decoder, plus the language modeling objective.a pretrained and non-pretrained system. The document, gold summary, and the two system outputswere presented to a human evaluator who was asked to rate each system output on a scale of 1-5with 5 being the best score. The system outputs were presented in random order and the evaluatordid not know the identity of either output. The evaluator noted if there were repetitive phrases orsentences in either system outputs. Unwanted repetition was also noticed by Nallapati et al. (2016).Table 3 and 4 show the results of the study. 
In both cases, the pretrained system outperforms thesystem without pretraining in a statistically significant manner. The better optimization enabled bypretraining improves the generated summaries and decreases unwanted repetition in the output.NP>PNP = P NP<P29 88 83Table 3: The count of how often the no pretrain system ( NP) achieves a higher, equal, and lowerscore than the pretrained system ( P) in the side-by-side study where the human evaluator gave eachsystem a score from 1-5. The sign statistical test gives a p-value of <0:0001 for rejecting the nullhypothesis that there is no difference in the score obtained by either system.No pretrainNo repeats RepeatsPretrainNo repeats 67 65Repeats 24 44Table 4: The count of how often the pretrain and no pretrain systems contain repeated phrases orsentences in their outputs in the side-by-side study. McNemar’s test gives a p-value of <0:0001for rejecting the null hypothesis that the two systems repeat the same proportion of times. Thepretrained system clearly repeats less than the system without pretraining.4 R ELATED WORKUnsupervised pretraining has been intensively studied in the past years, most notably is the workby Dahl et al. (2012) who found that pretraining with deep belief networks improved feedforwardacoustic models. More recent acoustic models have found pretraining unnecessary (Xiong et al.,7Under review as a conference paper at ICLR 20172016; Zhang et al., 2016; Chan et al., 2015), probably because the reconstruction objective of deepbelief networks is too easy. In contrast, we find that pretraining language models by next stepprediction significantly improves seq2seq on challenging real world datasets.Despite its appeal, unsupervised learning is rarely shown to improve supervised training. Dai & Le(2015) was amongst the rare studies which showed the benefits of pretraining in a semi-supervisedlearning setting. Their method is similar to our method except that they did not have a decodernetwork and thus could not apply to seq2seq learning. Similarly, Zhang & Zong (2016) found ituseful to add an additional task of sentence reordering of source-side monolingual data for neuralmachine translation. Various forms of transfer or multitask learning with seq2seq framework alsohave the flavors of our algorithm (Zoph et al., 2016; Luong et al., 2015a; Firat et al., 2016).Perhaps most closely related to our method is the work by Gulcehre et al. (2015), who combined alanguage model with an already trained seq2seq model by fine-tuning additional deep output layers.Empirically, their method produces small improvements over the supervised baseline. We suspectthat their method does not produce significant gains because (i) the models are trained independentlyof each other and are not fine-tuned (ii) the LM is combined with the seq2seq model after the lastlayer, wasting the benefit of the low level LM features, and (iii) only using the LM on the decoderside. Venugopalan et al. (2016) addressed (i) but still experienced minor improvements. Usingpretrained GloVe embedding vectors (Pennington et al., 2014) had more impact.Related to our approach in principle is the work by Chen et al. (2016) who proposed a two-term,theoretically motivated unsupervised objective for unpaired input-output samples. Though they didnot apply their method to seq2seq learning, their framework can be modified to do so. In that case,the first term pushes the output to be highly probable under some scoring model, and the secondterm ensures that the output depends on the input. 
In the seq2seq setting, we interpret the first termas a pretrained language model scoring the output sequence. In our work, we fold the pretrainedlanguage model into the decoder. We believe that using the pretrained language model only forscoring is less efficient that using all the pretrained weights. Our use of labeled examples satisfiesthe second term. These connections provide a theoretical grounding for our work.In our experiments, we benchmark our method on machine translation, where other unsupervisedmethods are shown to give promising results (Sennrich et al., 2015b; Cheng et al., 2016). In back-translation (Sennrich et al., 2015b), the trained model is used to decode unlabeled data to yield extralabeled data. One can argue that this method may not have a natural analogue to other tasks such assummarization. We note that their technique is complementary to ours, and may lead to additionalgains in machine translation. The method of using autoencoders in Cheng et al. (2016) is promising,though it can be argued that autoencoding is an easy objective and language modeling may force theunsupervised models to learn better features.5 C ONCLUSIONWe presented a novel unsupervised pretraining method to improve sequence to sequence learning.The method can aid in both generalization and optimization. Our scheme involves pretraining twolanguage models in the source and target domain, and initializing the embeddings, first LSTM layers,and softmax of a sequence to sequence model with the weights of the language models. Using ourmethod, we achieved state-of-the-art machine translation results on both WMT’14 and WMT’15English to German.A key advantage of this technique is that it is flexible and can be applied to a large variety of tasks,such as summarization, where it surpasses the supervised learning baseline.ACKNOWLEDGMENTSWe thank George Dahl, Andrew Dai, Laurent Dinh, Stephan Gouws, Geoffrey Hinton, Rafal Joze-fowicz, Pooya Khorrami, Phillip Louis, Ramesh Nallapati, Arvind Neelakantan, Xin Pan, Abi See,Rico Sennrich, Luke Vilnis, Yuan Yu and the Google Brain team for their help with the project.8Under review as a conference paper at ICLR 2017
S1iDqXoVl
the paper addresses a very important issue of exploiting non-parallel training data, but it should add a detailed discussion comparing with two pieces of prior art detailed in the review below
5: Marginally below acceptance threshold
Strengths: A method is proposed in this paper to initialize the encoder and decoder of the seq2seq model using the trained weights of language models with no parallel data. After such pretraining, all weights are jointly fine-tuned with parallel labeled data with an additional language modeling loss. It is shown that pretraining accelerates training and improves generalization of seq2seq models. The main value of the proposed method is to leverage separate source and target corpora, in contrast to the common approach of using large amounts of parallel training corpora.

Weaknesses: The objective function shown in the middle of page 3 is highly empirical and not directly linked to how non-parallel data helps to improve the final prediction results. The paper should compare with and discuss the objective function based on the expectation of cross entropy, which is directly linked to improving prediction results, as proposed in arXiv:1606.04646, Chen et al.: Unsupervised Learning of Predictors from Unpaired Input-Output Samples, 2016. The pre-training procedure proposed in this paper is also closely connected with the DNN pretraining method presented in Dahl et al. 2011, 2012. Comparisons should be made in the paper, highlighting why the proposed one is conceptually superior if the authors believe so.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
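The initialization scheme summarized in the paper's conclusion above — pretraining source- and target-side language models, then copying their weights into the embeddings, first LSTM layers, and softmax of a seq2seq model — can be illustrated with a short sketch. This is an editor's reconstruction under assumptions, not the authors' code: the module layout, names, and the choice of PyTorch are hypothetical, and attention plus the higher encoder/decoder layers are omitted.

```python
import torch.nn as nn


class LM(nn.Module):
    """A simple next-step-prediction language model (hypothetical layout)."""
    def __init__(self, vocab, dim):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.softmax_proj = nn.Linear(dim, vocab)

    def forward(self, tokens):
        h, _ = self.lstm(self.embed(tokens))
        return self.softmax_proj(h)


class Seq2Seq(nn.Module):
    """Encoder-decoder whose lowest layers mirror the two language models."""
    def __init__(self, src_vocab, tgt_vocab, dim):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, dim)
        self.enc_lstm1 = nn.LSTM(dim, dim, batch_first=True)  # first encoder layer
        self.tgt_embed = nn.Embedding(tgt_vocab, dim)
        self.dec_lstm1 = nn.LSTM(dim, dim, batch_first=True)  # first decoder layer
        self.softmax_proj = nn.Linear(dim, tgt_vocab)
        # ... higher layers and attention omitted in this sketch ...


def init_from_lms(model: Seq2Seq, src_lm: LM, tgt_lm: LM) -> None:
    """Copy pretrained LM weights into the embeddings, first LSTM layers, and
    output softmax; assumes the LMs share the seq2seq vocabularies and size."""
    model.src_embed.load_state_dict(src_lm.embed.state_dict())
    model.enc_lstm1.load_state_dict(src_lm.lstm.state_dict())
    model.tgt_embed.load_state_dict(tgt_lm.embed.state_dict())
    model.dec_lstm1.load_state_dict(tgt_lm.lstm.state_dict())
    model.softmax_proj.load_state_dict(tgt_lm.softmax_proj.state_dict())
```

After this copy step, all weights (including the copied ones) would be fine-tuned jointly on the parallel data with the additional language-modeling loss, as described in the review above.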
Sys6GJqxl
ICLR.cc/2017/conference
2017
Delving into Transferable Adversarial Examples and Black-box Attacks
["Yanpei Liu", "Xinyun Chen", "Chang Liu", "Dawn Song"]
"An intriguing property of deep neural networks is the existence of adversarial examples, which can (...TRUNCATED)
["Computer vision", "Deep learning", "Applications"]
"ABSTRACTAn intriguing property of deep neural networks is the existence of adversarial ex-amples, w(...TRUNCATED)
Syhdnc0Qx
interesting and insightful work on adversarial examples for deep CNNs for image classification
7: Good paper, accept
"This paper present an experimental study of the robustness of state-of-the-art CNNs to different ty(...TRUNCATED)
3: The reviewer is fairly confident that the evaluation is correct
Sys6GJqxl
ICLR.cc/2017/conference
2017
Delving into Transferable Adversarial Examples and Black-box Attacks
["Yanpei Liu", "Xinyun Chen", "Chang Liu", "Dawn Song"]
"An intriguing property of deep neural networks is the existence of adversarial examples, which can (...TRUNCATED)
["Computer vision", "Deep learning", "Applications"]
"ABSTRACTAn intriguing property of deep neural networks is the existence of adversarial ex-amples, w(...TRUNCATED)
HJeU-eaQx
Review for Liu et al
5: Marginally below acceptance threshold
"I reviewed the manuscript as of December 7th.\n\nSummary:\nThe authors investigate the transferabil(...TRUNCATED)
3: The reviewer is fairly confident that the evaluation is correct
Sys6GJqxl
ICLR.cc/2017/conference
2017
Delving into Transferable Adversarial Examples and Black-box Attacks
["Yanpei Liu", "Xinyun Chen", "Chang Liu", "Dawn Song"]
"An intriguing property of deep neural networks is the existence of adversarial examples, which can (...TRUNCATED)
["Computer vision", "Deep learning", "Applications"]
"ABSTRACTAn intriguing property of deep neural networks is the existence of adversarial ex-amples, w(...TRUNCATED)
ryLKyXLVg
good in-depth exploration but strongly recommend a rewrite
6: Marginally above acceptance threshold
"The paper presents an interesting and very detailed study of targeted and non-targeted adversarial (...TRUNCATED)
3: The reviewer is fairly confident that the evaluation is correct
BkSmc8qll
ICLR.cc/2017/conference
2017
Dynamic Neural Turing Machine with Continuous and Discrete Addressing Schemes
["Caglar Gulcehre", "Sarath Chandar", "Kyunghyun Cho", "Yoshua Bengio"]
"In this paper, we extend neural Turing machine (NTM) into a dynamic neural Turing machine (D-NTM) b(...TRUNCATED)
["Deep learning", "Natural language processing", "Reinforcement Learning"]
"ABSTRACTIn this paper, we extend neural Turing machine (NTM) into a dynamic neural Turingmachine (D(...TRUNCATED)
Hkg1A2IVx
Review
6: Marginally above acceptance threshold
"The authors proposed a dynamic neural Turing machine (D-NTM) model that overcomes the rigid locatio(...TRUNCATED)
4: The reviewer is confident but not absolutely certain that the evaluation is correct