Column          Type           Values (length range / distinct values)
note_id         stringlengths  9–12
forum_id        stringlengths  9–13
invitation      stringlengths  40–95
content         stringlengths  44–35k
type            stringclasses  1 value
year            stringclasses  7 values
venue           stringclasses  171 values
paper_title     stringlengths  0–188
paper_authors   stringlengths  4–1.01k
paper_abstract  stringlengths  0–5k
paper_keywords  stringlengths  2–679
forum_url       stringlengths  41–45
pdf_url         stringlengths  39–43
review_url      stringlengths  58–64
raw_ocr_text    stringlengths  4–631k
HJ3LKSSEg
SJU4ayYgl
ICLR.cc/2017/conference/-/paper72/official/review
{"title": "", "rating": "7: Good paper, accept", "review": "The paper introduces a method for semi-supervised learning in graphs that exploits the spectral structure of the graph in a convolutional NN implementation. The proposed algorithm has a limited complexity and it is shown to scale well on a large dataset. The comparison with baselines on different datasets show a clear jump of performance with the proposed method.\n\nThe paper is technically fine and clear, the algorithm seems to scale well, and the results on the different datasets compare very favorably with the different baselines. The algorithm is simple and training seems easy. Concerning the originality, the proposed algorithm is a simple adaptation of graph convolutional networks (ref Defferrard 2016 in the paper) to a semi-supervised transductive setting. This is clearly mentioned in the paper, but the authors could better highlight the differences and novelty wrt this reference paper. Also, there is no comparison with the family of iterative classifiers, which usually compare favorably, both in performance and training time, with regularization based approaches, although they are mostly used in inductive settings. Below are some references for this family of methods.\n\nThe authors mention that more complex filters could be learned by stacking layers but they limit their architecture to one hidden layer. They should comment on the interest of using more layers for graph classification.\n\n\nSome references on iterative classification Qing Lu and Lise Getoor. 2003. Link-based classification. In ICML, Vol. 3. 496\u2013503.\n\nGideon S Mann and Andrew McCallum. 2010. Generalized expectation criteria for semi-supervised learning with weakly labeled data. The Journal of Machine Learning Research 11 (2010), 955\u2013984.\nDavid Jensen, Jennifer Neville, and Brian Gallagher. 2004. Why collective inference improves relational classification. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 593\u2013598.\nJoseph J Pfeiffer III, Jennifer Neville, and Paul N Bennett. 2015. Overcoming Relational Learning Biases\nto Accurately Predict Preferences in Large Scale Networks. In Proceedings of the 24th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee, 853\u2013\n863.\nStephane Peters, Ludovic Denoyer, and Patrick Gallinari. 2010. Iterative annotation of multi-relational social networks. In Advances in Social Networks Analysis and Mining (ASONAM), 2010 International Conference on. IEEE, 96\u2013103.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Semi-Supervised Classification with Graph Convolutional Networks
["Thomas N. Kipf", "Max Welling"]
We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.
["Deep learning", "Semi-Supervised Learning"]
https://openreview.net/forum?id=SJU4ayYgl
https://openreview.net/pdf?id=SJU4ayYgl
https://openreview.net/forum?id=SJU4ayYgl&noteId=HJ3LKSSEg
Published as a conference paper at ICLR 2017SEMI-SUPERVISED CLASSIFICATION WITHGRAPH CONVOLUTIONAL NETWORKSThomas N. KipfUniversity of AmsterdamT.N.Kipf@uva.nlMax WellingUniversity of AmsterdamCanadian Institute for Advanced Research (CIFAR)M.Welling@uva.nlABSTRACTWe present a scalable approach for semi-supervised learning on graph-structureddata that is based on an efficient variant of convolutional neural networks whichoperate directly on graphs. We motivate the choice of our convolutional archi-tecture via a localized first-order approximation of spectral graph convolutions.Our model scales linearly in the number of graph edges and learns hidden layerrepresentations that encode both local graph structure and features of nodes. Ina number of experiments on citation networks and on a knowledge graph datasetwe demonstrate that our approach outperforms related methods by a significantmargin.1 I NTRODUCTIONWe consider the problem of classifying nodes (such as documents) in a graph (such as a citationnetwork), where labels are only available for a small subset of nodes. This problem can be framedas graph-based semi-supervised learning, where label information is smoothed over the graph viasome form of explicit graph-based regularization (Zhu et al., 2003; Zhou et al., 2004; Belkin et al.,2006; Weston et al., 2012), e.g. by using a graph Laplacian regularization term in the loss function:L=L0+Lreg;withLreg=Xi;jAijkf(Xi)f(Xj)k2=f(X)>f(X): (1)Here,L0denotes the supervised loss w.r.t. the labeled part of the graph, f()can be a neural network-like differentiable function, is a weighing factor and Xis a matrix of node feature vectors Xi. =DAdenotes the unnormalized graph Laplacian of an undirected graph G= (V;E)withNnodesvi2V, edges (vi;vj)2E, an adjacency matrix A2RNN(binary or weighted) anda degree matrix Dii=PjAij. The formulation of Eq. 1 relies on the assumption that connectednodes in the graph are likely to share the same label. This assumption, however, might restrictmodeling capacity, as graph edges need not necessarily encode node similarity, but could containadditional information.In this work, we encode the graph structure directly using a neural network model f(X;A)andtrain on a supervised target L0for all nodes with labels, thereby avoiding explicit graph-basedregularization in the loss function. Conditioning f()on the adjacency matrix of the graph willallow the model to distribute gradient information from the supervised loss L0and will enable it tolearn representations of nodes both with and without labels.Our contributions are two-fold. Firstly, we introduce a simple and well-behaved layer-wise prop-agation rule for neural network models which operate directly on graphs and show how it can bemotivated from a first-order approximation of spectral graph convolutions (Hammond et al., 2011).Secondly, we demonstrate how this form of a graph-based neural network model can be used forfast and scalable semi-supervised classification of nodes in a graph. Experiments on a number ofdatasets demonstrate that our model compares favorably both in classification accuracy and effi-ciency (measured in wall-clock time) against state-of-the-art methods for semi-supervised learning.1Published as a conference paper at ICLR 20172 F AST APPROXIMATE CONVOLUTIONS ON GRAPHSIn this section, we provide theoretical motivation for a specific graph-based neural network modelf(X;A)that we will use in the rest of this paper. 
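For concreteness, the graph Laplacian regularizer L_reg of Eq. 1, which the model introduced below avoids, can be written in a few lines of NumPy. This is a dense illustration on a toy graph, not code from the paper; note that the trace form counts each undirected edge once, i.e. it equals half of the ordered-pair sum in Eq. 1.

```python
import numpy as np

def laplacian_reg(A, F):
    """Smoothness penalty trace(F^T (D - A) F) for embeddings F = f(X).
    Equals 0.5 * sum_{i,j} A_ij ||F_i - F_j||^2 over ordered pairs (i, j)."""
    delta = np.diag(A.sum(axis=1)) - A   # unnormalized graph Laplacian D - A
    return np.trace(F.T @ delta @ F)

# Toy 3-node path graph: constant embeddings incur no penalty, a spike does.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(laplacian_reg(A, np.ones((3, 2))))               # 0.0
print(laplacian_reg(A, np.array([[0.], [1.], [0.]])))  # 2.0
```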
We consider a multi-layer Graph Convolutional Network (GCN) with the following layer-wise propagation rule:

H^{(l+1)} = \sigma\left( \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)} \right).   (2)

Here, \tilde{A} = A + I_N is the adjacency matrix of the undirected graph G with added self-connections. I_N is the identity matrix, \tilde{D}_{ii} = \sum_j \tilde{A}_{ij} and W^{(l)} is a layer-specific trainable weight matrix. \sigma(\cdot) denotes an activation function, such as \mathrm{ReLU}(\cdot) = \max(0, \cdot). H^{(l)} \in \mathbb{R}^{N \times D} is the matrix of activations in the l-th layer; H^{(0)} = X. In the following, we show that the form of this propagation rule can be motivated¹ via a first-order approximation of localized spectral filters on graphs (Hammond et al., 2011; Defferrard et al., 2016).

2.1 SPECTRAL GRAPH CONVOLUTIONS

We consider spectral convolutions on graphs defined as the multiplication of a signal x \in \mathbb{R}^N (a scalar for every node) with a filter g_\theta = \mathrm{diag}(\theta) parameterized by \theta \in \mathbb{R}^N in the Fourier domain, i.e.:

g_\theta \star x = U g_\theta U^\top x,   (3)

where U is the matrix of eigenvectors of the normalized graph Laplacian L = I_N - D^{-\frac{1}{2}} A D^{-\frac{1}{2}} = U \Lambda U^\top, with \Lambda a diagonal matrix of its eigenvalues and U^\top x being the graph Fourier transform of x. We can understand g_\theta as a function of the eigenvalues of L, i.e. g_\theta(\Lambda). Evaluating Eq. 3 is computationally expensive, as multiplication with the eigenvector matrix U is O(N^2). Furthermore, computing the eigendecomposition of L in the first place might be prohibitively expensive for large graphs. To circumvent this problem, it was suggested in Hammond et al. (2011) that g_\theta(\Lambda) can be well-approximated by a truncated expansion in terms of Chebyshev polynomials T_k(x) up to K-th order:

g_{\theta'}(\Lambda) \approx \sum_{k=0}^{K} \theta'_k T_k(\tilde{\Lambda}),   (4)

with a rescaled \tilde{\Lambda} = \frac{2}{\lambda_{\max}} \Lambda - I_N. \lambda_{\max} denotes the largest eigenvalue of L. \theta' \in \mathbb{R}^K is now a vector of Chebyshev coefficients. The Chebyshev polynomials are recursively defined as T_k(x) = 2 x T_{k-1}(x) - T_{k-2}(x), with T_0(x) = 1 and T_1(x) = x. The reader is referred to Hammond et al. (2011) for an in-depth discussion of this approximation.

Going back to our definition of a convolution of a signal x with a filter g_{\theta'}, we now have:

g_{\theta'} \star x \approx \sum_{k=0}^{K} \theta'_k T_k(\tilde{L}) x,   (5)

with \tilde{L} = \frac{2}{\lambda_{\max}} L - I_N, as can easily be verified by noticing that (U \Lambda U^\top)^k = U \Lambda^k U^\top. Note that this expression is now K-localized since it is a K-th-order polynomial in the Laplacian, i.e. it depends only on nodes that are at maximum K steps away from the central node (K-th-order neighborhood). The complexity of evaluating Eq. 5 is O(|E|), i.e. linear in the number of edges. Defferrard et al. (2016) use this K-localized convolution to define a convolutional neural network on graphs.

2.2 LAYER-WISE LINEAR MODEL

A neural network model based on graph convolutions can therefore be built by stacking multiple convolutional layers of the form of Eq. 5, each layer followed by a point-wise non-linearity. Now, imagine we limited the layer-wise convolution operation to K = 1 (see Eq. 5), i.e. a function that is linear w.r.t. L and therefore a linear function on the graph Laplacian spectrum.

¹ We provide an alternative interpretation of this propagation rule based on the Weisfeiler-Lehman algorithm (Weisfeiler & Lehmann, 1968) in Appendix A.

In this way, we can still recover a rich class of convolutional filter functions by stacking multiple such layers, but we are not limited to the explicit parameterization given by, e.g., the Chebyshev polynomials. We intuitively expect that such a model can alleviate the problem of overfitting on local neighborhood structures for graphs with very wide node degree distributions, such as social networks, citation networks, knowledge graphs and many other real-world graph datasets.
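Concretely, one propagation step of Eq. 2 is a symmetrically normalized neighborhood aggregation followed by a linear map and a nonlinearity. A minimal dense NumPy sketch of such a layer (an illustration only; the paper's implementation uses sparse matrix products):

```python
import numpy as np

def gcn_layer(A, H, W, activation=lambda z: np.maximum(z, 0.0)):
    """One GCN layer (Eq. 2): sigma(D~^{-1/2} A~ D~^{-1/2} H W), with A~ = A + I."""
    A_tilde = A + np.eye(A.shape[0])                    # add self-connections
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return activation(A_hat @ H @ W)

# Example: 4-node cycle graph, 3 input features, 2 output feature maps.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
W0 = rng.normal(size=(3, 2))
print(gcn_layer(A, X, W0).shape)   # (4, 2)
```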
Additionally, for a fixed computational budget, this layer-wise linear formulation allows us to build deeper models, a practice that is known to improve modeling capacity on a number of domains (He et al., 2016).

In this linear formulation of a GCN we further approximate \lambda_{\max} \approx 2, as we can expect that neural network parameters will adapt to this change in scale during training. Under these approximations Eq. 5 simplifies to:

g_{\theta'} \star x \approx \theta'_0 x + \theta'_1 (L - I_N) x = \theta'_0 x - \theta'_1 D^{-\frac{1}{2}} A D^{-\frac{1}{2}} x,   (6)

with two free parameters \theta'_0 and \theta'_1. The filter parameters can be shared over the whole graph. Successive application of filters of this form then effectively convolves the k-th-order neighborhood of a node, where k is the number of successive filtering operations or convolutional layers in the neural network model.

In practice, it can be beneficial to constrain the number of parameters further to address overfitting and to minimize the number of operations (such as matrix multiplications) per layer. This leaves us with the following expression:

g_\theta \star x \approx \theta \left( I_N + D^{-\frac{1}{2}} A D^{-\frac{1}{2}} \right) x,   (7)

with a single parameter \theta = \theta'_0 = -\theta'_1. Note that I_N + D^{-\frac{1}{2}} A D^{-\frac{1}{2}} now has eigenvalues in the range [0, 2]. Repeated application of this operator can therefore lead to numerical instabilities and exploding/vanishing gradients when used in a deep neural network model. To alleviate this problem, we introduce the following renormalization trick: I_N + D^{-\frac{1}{2}} A D^{-\frac{1}{2}} \rightarrow \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}, with \tilde{A} = A + I_N and \tilde{D}_{ii} = \sum_j \tilde{A}_{ij}.

We can generalize this definition to a signal X \in \mathbb{R}^{N \times C} with C input channels (i.e. a C-dimensional feature vector for every node) and F filters or feature maps as follows:

Z = \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} X \Theta,   (8)

where \Theta \in \mathbb{R}^{C \times F} is now a matrix of filter parameters and Z \in \mathbb{R}^{N \times F} is the convolved signal matrix. This filtering operation has complexity O(|E| F C), as \tilde{A} X can be efficiently implemented as a product of a sparse matrix with a dense matrix.

3 SEMI-SUPERVISED NODE CLASSIFICATION

Having introduced a simple, yet flexible model f(X, A) for efficient information propagation on graphs, we can return to the problem of semi-supervised node classification. As outlined in the introduction, we can relax certain assumptions typically made in graph-based semi-supervised learning by conditioning our model f(X, A) both on the data X and on the adjacency matrix A of the underlying graph structure. We expect this setting to be especially powerful in scenarios where the adjacency matrix contains information not present in the data X, such as citation links between documents in a citation network or relations in a knowledge graph. The overall model, a multi-layer GCN for semi-supervised learning, is schematically depicted in Figure 1.

3.1 EXAMPLE

In the following, we consider a two-layer GCN for semi-supervised node classification on a graph with a symmetric adjacency matrix A (binary or weighted). We first calculate \hat{A} = \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} in a pre-processing step. Our forward model then takes the simple form:

Z = f(X, A) = \mathrm{softmax}\left( \hat{A} \, \mathrm{ReLU}\left( \hat{A} X W^{(0)} \right) W^{(1)} \right).   (9)

Figure 1: Left (a): Schematic depiction of a multi-layer Graph Convolutional Network (GCN) for semi-supervised learning with C input channels and F feature maps in the output layer. The graph structure (edges shown as black lines) is shared over layers, labels are denoted by Y_i. Right (b): t-SNE (Maaten & Hinton, 2008) visualization of hidden layer activations of a two-layer GCN trained on the Cora dataset (Sen et al., 2008) using 5% of labels.
Colors denote document class.Here,W(0)2RCHis an input-to-hidden weight matrix for a hidden layer with Hfeature maps.W(1)2RHFis a hidden-to-output weight matrix. The softmax activation function, defined assoftmax(xi) =1Zexp(xi)withZ=Piexp(xi), is applied row-wise. For semi-supervised multi-class classification, we then evaluate the cross-entropy error over all labeled examples:L=Xl2YLFXf=1YlflnZlf; (10)whereYLis the set of node indices that have labels.The neural network weights W(0)andW(1)are trained using gradient descent. In this work, weperform batch gradient descent using the full dataset for every training iteration, which is a viableoption as long as datasets fit in memory. Using a sparse representation for A, memory requirementisO(jEj), i.e. linear in the number of edges. Stochasticity in the training process is introduced viadropout (Srivastava et al., 2014). We leave memory-efficient extensions with mini-batch stochasticgradient descent for future work.3.2 I MPLEMENTATIONIn practice, we make use of TensorFlow (Abadi et al., 2015) for an efficient GPU-based imple-mentation2of Eq. 9 using sparse-dense matrix multiplications. The computational complexity ofevaluating Eq. 9 is then O(jEjCHF ), i.e. linear in the number of graph edges.4 R ELATED WORKOur model draws inspiration both from the field of graph-based semi-supervised learning and fromrecent work on neural networks that operate on graphs. In what follows, we provide a brief overviewon related work in both fields.4.1 G RAPH -BASED SEMI-SUPERVISED LEARNINGA large number of approaches for semi-supervised learning using graph representations have beenproposed in recent years, most of which fall into two broad categories: methods that use someform of explicit graph Laplacian regularization and graph embedding-based approaches. Prominentexamples for graph Laplacian regularization include label propagation (Zhu et al., 2003), manifoldregularization (Belkin et al., 2006) and deep semi-supervised embedding (Weston et al., 2012).2Code to reproduce our experiments is available at https://github.com/tkipf/gcn .4Published as a conference paper at ICLR 2017Recently, attention has shifted to models that learn graph embeddings with methods inspired bythe skip-gram model (Mikolov et al., 2013). DeepWalk (Perozzi et al., 2014) learns embeddingsvia the prediction of the local neighborhood of nodes, sampled from random walks on the graph.LINE (Tang et al., 2015) and node2vec (Grover & Leskovec, 2016) extend DeepWalk with moresophisticated random walk or breadth-first search schemes. For all these methods, however, a multi-step pipeline including random walk generation and semi-supervised training is required where eachstep has to be optimized separately. Planetoid (Yang et al., 2016) alleviates this by injecting labelinformation in the process of learning embeddings.4.2 N EURAL NETWORKS ON GRAPHSNeural networks that operate on graphs have previously been introduced in Gori et al. (2005);Scarselli et al. (2009) as a form of recurrent neural network. Their framework requires the repeatedapplication of contraction maps as propagation functions until node representations reach a stablefixed point. This restriction was later alleviated in Li et al. (2016) by introducing modern practicesfor recurrent neural network training to the original graph neural network framework. Duvenaudet al. (2015) introduced a convolution-like propagation rule on graphs and methods for graph-levelclassification. 
Their approach requires to learn node degree-specific weight matrices which does notscale to large graphs with wide node degree distributions. Our model instead uses a single weightmatrix per layer and deals with varying node degrees through an appropriate normalization of theadjacency matrix (see Section 3.1).A related approach to node classification with a graph-based neural network was recently introducedin Atwood & Towsley (2016). They report O(N2)complexity, limiting the range of possible appli-cations. In a different yet related model, Niepert et al. (2016) convert graphs locally into sequencesthat are fed into a conventional 1D convolutional neural network, which requires the definition of anode ordering in a pre-processing step.Our method is based on spectral graph convolutional neural networks, introduced in Bruna et al.(2014) and later extended by Defferrard et al. (2016) with fast localized convolutions. In contrastto these works, we consider here the task of transductive node classification within networks ofsignificantly larger scale. We show that in this setting, a number of simplifications (see Section 2.2)can be introduced to the original frameworks of Bruna et al. (2014) and Defferrard et al. (2016) thatimprove scalability and classification performance in large-scale networks.5 E XPERIMENTSWe test our model in a number of experiments: semi-supervised document classification in cita-tion networks, semi-supervised entity classification in a bipartite graph extracted from a knowledgegraph, an evaluation of various graph propagation models and a run-time analysis on random graphs.5.1 D ATASETSWe closely follow the experimental setup in Yang et al. (2016). Dataset statistics are summarizedin Table 1. In the citation network datasets—Citeseer, Cora and Pubmed (Sen et al., 2008)—nodesare documents and edges are citation links. Label rate denotes the number of labeled nodes that areused for training divided by the total number of nodes in each dataset. NELL (Carlson et al., 2010;Yang et al., 2016) is a bipartite graph dataset extracted from a knowledge graph with 55,864 relationnodes and 9,891 entity nodes.Table 1: Dataset statistics, as reported in Yang et al. (2016).Dataset Type Nodes Edges Classes Features Label rateCiteseer Citation network 3,327 4,732 6 3,703 0:036Cora Citation network 2,708 5,429 7 1,433 0:052Pubmed Citation network 19,717 44,338 3 500 0:003NELL Knowledge graph 65,755 266,144 210 5,414 0:0015Published as a conference paper at ICLR 2017Citation networks We consider three citation network datasets: Citeseer, Cora and Pubmed (Senet al., 2008). The datasets contain sparse bag-of-words feature vectors for each document and a listof citation links between documents. We treat the citation links as (undirected) edges and constructa binary, symmetric adjacency matrix A. Each document has a class label. For training, we only use20 labels per class, but all feature vectors.NELL NELL is a dataset extracted from the knowledge graph introduced in (Carlson et al., 2010).A knowledge graph is a set of entities connected with directed, labeled edges (relations). We followthe pre-processing scheme as described in Yang et al. (2016). We assign separate relation nodesr1andr2for each entity pair (e1;r;e 2)as(e1;r1)and(e2;r2). Entity nodes are described bysparse feature vectors. We extend the number of features in NELL by assigning a unique one-hotrepresentation for every relation node, effectively resulting in a 61,278-dim sparse feature vector pernode. 
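In all of these datasets, the graph is ultimately represented as a binary, symmetric adjacency matrix. A small scipy.sparse sketch of that construction from a toy edge list (the helper name and the edge list are illustrative, not taken from the paper's preprocessing code):

```python
import numpy as np
import scipy.sparse as sp

def build_adjacency(edges, num_nodes):
    """Binary, symmetric adjacency: A_ij = 1 if at least one (directed) edge
    connects nodes i and j; edge direction and multiplicity are discarded."""
    rows, cols = zip(*edges)
    A = sp.coo_matrix((np.ones(len(edges)), (rows, cols)),
                      shape=(num_nodes, num_nodes))
    A = ((A + A.T) > 0).astype(np.float32)   # symmetrize and binarize
    return A.tocsr()

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]     # toy citation links (citing, cited)
print(build_adjacency(edges, num_nodes=4).toarray())
```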
The semi-supervised task here considers the extreme case of only a single labeled exampleper class in the training set. We construct a binary, symmetric adjacency matrix from this graph bysetting entries Aij= 1, if one or more edges are present between nodes iandj.Random graphs We simulate random graph datasets of various sizes for experiments where wemeasure training time per epoch. For a dataset with Nnodes we create a random graph assigning2Nedges uniformly at random. We take the identity matrix INas input feature matrix X, therebyimplicitly taking a featureless approach where the model is only informed about the identity of eachnode, specified by a unique one-hot vector. We add dummy labels Yi= 1for every node.5.2 E XPERIMENTAL SET-UPUnless otherwise noted, we train a two-layer GCN as described in Section 3.1 and evaluate pre-diction accuracy on a test set of 1,000 labeled examples. We provide additional experiments usingdeeper models with up to 10 layers in Appendix B. We choose the same dataset splits as in Yanget al. (2016) with an additional validation set of 500 labeled examples for hyperparameter opti-mization (dropout rate for all layers, L2 regularization factor for the first GCN layer and number ofhidden units). We do not use the validation set labels for training.For the citation network datasets, we optimize hyperparameters on Cora only and use the same setof parameters for Citeseer and Pubmed. We train all models for a maximum of 200 epochs (trainingiterations) using Adam (Kingma & Ba, 2015) with a learning rate of 0:01and early stopping with awindow size of 10, i.e. we stop training if the validation loss does not decrease for 10 consecutiveepochs. We initialize weights using the initialization described in Glorot & Bengio (2010) andaccordingly (row-)normalize input feature vectors. On the random graph datasets, we use a hiddenlayer size of 32 units and omit regularization (i.e. neither dropout nor L2 regularization).5.3 B ASELINESWe compare against the same baseline methods as in Yang et al. (2016), i.e. label propagation(LP) (Zhu et al., 2003), semi-supervised embedding (SemiEmb) (Weston et al., 2012), manifoldregularization (ManiReg) (Belkin et al., 2006) and skip-gram based graph embeddings (DeepWalk)(Perozzi et al., 2014). We omit TSVM (Joachims, 1999), as it does not scale to the large number ofclasses in one of our datasets.We further compare against the iterative classification algorithm (ICA) proposed in Lu & Getoor(2003) in conjunction with two logistic regression classifiers, one for local node features alone andone for relational classification using local features and an aggregation operator as described inSen et al. (2008). We first train the local classifier using all labeled training set nodes and useit to bootstrap class labels of unlabeled nodes for relational classifier training. We run iterativeclassification (relational classifier) with a random node ordering for 10 iterations on all unlabelednodes (bootstrapped using the local classifier). L2 regularization parameter and aggregation operator(count vs.prop, see Sen et al. (2008)) are chosen based on validation set performance for each datasetseparately.Lastly, we compare against Planetoid (Yang et al., 2016), where we always choose their best-performing model variant (transductive vs. inductive) as a baseline.6Published as a conference paper at ICLR 20176 R ESULTS6.1 S EMI-SUPERVISED NODE CLASSIFICATIONResults are summarized in Table 2. Reported numbers denote classification accuracy in percent. 
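Tying this set-up back to Section 3.1, the evaluated model is a pre-normalized adjacency, a two-layer forward pass (Eq. 9), and a cross-entropy loss restricted to the labeled nodes (Eq. 10). A rough dense NumPy sketch, omitting the dropout, weight decay and Adam updates that the paper uses:

```python
import numpy as np

def normalize_adj(A):
    """A_hat = D~^{-1/2} (A + I) D~^{-1/2}, computed once as a pre-processing step."""
    A_tilde = A + np.eye(A.shape[0])
    d = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return A_tilde * d[:, None] * d[None, :]

def two_layer_gcn(A_hat, X, W0, W1):
    """Eq. 9: softmax(A_hat ReLU(A_hat X W0) W1), softmax applied row-wise."""
    H = np.maximum(A_hat @ X @ W0, 0.0)
    logits = A_hat @ H @ W1
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def masked_cross_entropy(Z, Y_onehot, labeled_idx):
    """Eq. 10: cross-entropy summed over the labeled node indices only."""
    return -np.sum(Y_onehot[labeled_idx] * np.log(Z[labeled_idx] + 1e-12))
```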
For ICA, we report the mean accuracy of 100 runs with random node orderings. Results for all other baseline methods are taken from the Planetoid paper (Yang et al., 2016). Planetoid* denotes the best model for the respective dataset out of the variants presented in their paper.

Table 2: Summary of results in terms of classification accuracy (in percent).

Method              Citeseer     Cora         Pubmed       NELL
ManiReg [3]         60.1         59.5         70.7         21.8
SemiEmb [28]        59.6         59.0         71.1         26.7
LP [32]             45.3         68.0         63.0         26.5
DeepWalk [22]       43.2         67.2         65.3         58.1
ICA [18]            69.1         75.1         73.9         23.1
Planetoid* [29]     64.7 (26s)   75.7 (13s)   77.2 (25s)   61.9 (185s)
GCN (this paper)    70.3 (7s)    81.5 (4s)    79.0 (38s)   66.0 (48s)
GCN (rand. splits)  67.9 ± 0.5   80.1 ± 0.5   78.9 ± 0.7   58.4 ± 1.7

We further report wall-clock training time in seconds until convergence (in brackets) for our method (incl. evaluation of validation error) and for Planetoid. For the latter, we used an implementation provided by the authors (https://github.com/kimiyoung/planetoid) and trained on the same hardware (with GPU) as our GCN model. We trained and tested our model on the same dataset splits as in Yang et al. (2016) and report mean accuracy of 100 runs with random weight initializations. We used the following sets of hyperparameters for Citeseer, Cora and Pubmed: 0.5 (dropout rate), 5·10^-4 (L2 regularization) and 16 (number of hidden units); and for NELL: 0.1 (dropout rate), 1·10^-5 (L2 regularization) and 64 (number of hidden units).

In addition, we report performance of our model on 10 randomly drawn dataset splits of the same size as in Yang et al. (2016), denoted by GCN (rand. splits). Here, we report mean and standard error of prediction accuracy on the test set split in percent.

6.2 EVALUATION OF PROPAGATION MODEL

We compare different variants of our proposed per-layer propagation model on the citation network datasets. We follow the experimental set-up described in the previous section. Results are summarized in Table 3. The propagation model of our original GCN model is denoted by renormalization trick (in bold). In all other cases, the propagation model of both neural network layers is replaced with the model specified under propagation model. Reported numbers denote mean classification accuracy for 100 repeated runs with random weight matrix initializations. In case of multiple variables \Theta_i per layer, we impose L2 regularization on all weight matrices of the first layer.

Table 3: Comparison of propagation models.

Description                      Propagation model                                       Citeseer  Cora  Pubmed
Chebyshev filter (Eq. 5), K = 3  \sum_{k=0}^{K} T_k(\tilde{L}) X \Theta_k                69.8      79.5  74.4
Chebyshev filter (Eq. 5), K = 2                                                          69.6      81.2  73.8
1st-order model (Eq. 6)          X \Theta_0 + D^{-1/2} A D^{-1/2} X \Theta_1             68.3      80.0  77.5
Single parameter (Eq. 7)         (I_N + D^{-1/2} A D^{-1/2}) X \Theta                    69.3      79.2  77.4
Renormalization trick (Eq. 8)    \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} X \Theta    70.3      81.5  79.0
1st-order term only              D^{-1/2} A D^{-1/2} X \Theta                            68.7      80.5  77.8
Multi-layer perceptron           X \Theta                                                46.5      55.1  71.4

6.3 TRAINING TIME PER EPOCH

Figure 2: Wall-clock time per epoch for random graphs (x-axis: number of edges, 1k to 10M; y-axis: seconds per epoch, 10^-3 to 10^1; GPU and CPU curves). (*) indicates an out-of-memory error.

Here, we report results for the mean training time per epoch (forward pass, cross-entropy calculation, backward pass) for 100 epochs on simulated random graphs, measured in seconds wall-clock time. See Section 5.1 for a detailed description of the random graph dataset used in these experiments. We compare results on a GPU and on a CPU-only implementation in TensorFlow (Abadi et al., 2015).
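The timing benchmark just described can be reproduced in outline with scipy.sparse. The sketch below builds a random graph with N nodes and roughly 2N edges (duplicates and self-loops are not removed in this rough version), uses a one-hot identity input as in Section 5.1, and times a single sparse propagation step; it is an approximation for illustration, not the paper's TensorFlow benchmark.

```python
import time
import numpy as np
import scipy.sparse as sp

def random_benchmark_graph(N, rng):
    """N nodes, ~2N undirected edges placed uniformly at random, featureless X = I_N."""
    rows = rng.integers(0, N, size=2 * N)
    cols = rng.integers(0, N, size=2 * N)
    A = sp.coo_matrix((np.ones(2 * N), (rows, cols)), shape=(N, N))
    A = ((A + A.T) > 0).astype(np.float64)
    return A.tocsr(), sp.identity(N, format="csr")

rng = np.random.default_rng(0)
A, X = random_benchmark_graph(100_000, rng)
A_tilde = A + sp.identity(A.shape[0])
d = np.asarray(A_tilde.sum(axis=1)).ravel() ** -0.5
A_hat = sp.diags(d) @ A_tilde @ sp.diags(d)        # sparse D~^{-1/2} A~ D~^{-1/2}
W = 0.01 * rng.standard_normal((A.shape[0], 32))   # 32 hidden units, as in Sec. 5.2
t0 = time.time()
H = np.maximum(A_hat @ (X @ W), 0.0)               # one sparse propagation step
print(f"{time.time() - t0:.3f} s for one layer on a {A.shape[0]:,}-node graph")
```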
Figure 2 sum-marizes the results.7 D ISCUSSION7.1 S EMI-SUPERVISED MODELIn the experiments demonstrated here, our method for semi-supervised node classification outper-forms recent related methods by a significant margin. Methods based on graph-Laplacian regular-ization (Zhu et al., 2003; Belkin et al., 2006; Weston et al., 2012) are most likely limited due to theirassumption that edges encode mere similarity of nodes. Skip-gram based methods on the other handare limited by the fact that they are based on a multi-step pipeline which is difficult to optimize.Our proposed model can overcome both limitations, while still comparing favorably in terms of ef-ficiency (measured in wall-clock time) to related methods. Propagation of feature information fromneighboring nodes in every layer improves classification performance in comparison to methods likeICA (Lu & Getoor, 2003), where only label information is aggregated.We have further demonstrated that the proposed renormalized propagation model (Eq. 8) offers bothimproved efficiency (fewer parameters and operations, such as multiplication or addition) and betterpredictive performance on a number of datasets compared to a na ̈ıve1st-order model (Eq. 6) orhigher-order graph convolutional models using Chebyshev polynomials (Eq. 5).7.2 L IMITATIONS AND FUTURE WORKHere, we describe several limitations of our current model and outline how these might be overcomein future work.Memory requirement In the current setup with full-batch gradient descent, memory requirementgrows linearly in the size of the dataset. We have shown that for large graphs that do not fit in GPUmemory, training on CPU can still be a viable option. Mini-batch stochastic gradient descent canalleviate this issue. The procedure of generating mini-batches, however, should take into account thenumber of layers in the GCN model, as the Kth-order neighborhood for a GCN with Klayers has tobe stored in memory for an exact procedure. For very large and densely connected graph datasets,further approximations might be necessary.Directed edges and edge features Our framework currently does not naturally support edge fea-tures and is limited to undirected graphs (weighted or unweighted). Results on NELL howevershow that it is possible to handle both directed edges and edge features by representing the originaldirected graph as an undirected bipartite graph with additional nodes that represent edges in theoriginal graph (see Section 5.1 for details).Limiting assumptions Through the approximations introduced in Section 2, we implicitly assumelocality (dependence on the Kth-order neighborhood for a GCN with Klayers) and equal impor-tance of self-connections vs. edges to neighboring nodes. For some datasets, however, it might bebeneficial to introduce a trade-off parameter in the definition of ~A:~A=A+IN: (11)4Hardware used: 16-core Intel RXeon RCPU E5-2640 v3 @ 2.60GHz, GeForce RGTX TITAN X8Published as a conference paper at ICLR 2017This parameter now plays a similar role as the trade-off parameter between supervised and unsuper-vised loss in the typical semi-supervised setting (see Eq. 1). Here, however, it can be learned viagradient descent.8 C ONCLUSIONWe have introduced a novel approach for semi-supervised classification on graph-structured data.Our GCN model uses an efficient layer-wise propagation rule that is based on a first-order approx-imation of spectral convolutions on graphs. 
Experiments on a number of network datasets suggestthat the proposed GCN model is capable of encoding both graph structure and node features in away useful for semi-supervised classification. In this setting, our model outperforms several recentlyproposed methods by a significant margin, while being computationally efficient.ACKNOWLEDGMENTSWe would like to thank Christos Louizos, Taco Cohen, Joan Bruna, Zhilin Yang, Dave Herman,Pramod Sinha and Abdul-Saboor Sheikh for helpful discussions. This research was funded by SAP.REFERENCESMart ́ın Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015.James Atwood and Don Towsley. Diffusion-convolutional neural networks. In Advances in neuralinformation processing systems (NIPS) , 2016.Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric frame-work for learning from labeled and unlabeled examples. Journal of machine learning research(JMLR) , 7(Nov):2399–2434, 2006.Ulrik Brandes, Daniel Delling, Marco Gaertler, Robert Gorke, Martin Hoefer, Zoran Nikoloski,and Dorothea Wagner. On modularity clustering. IEEE Transactions on Knowledge and DataEngineering , 20(2):172–188, 2008.Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locallyconnected networks on graphs. In International Conference on Learning Representations (ICLR) ,2014.Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R. Hruschka Jr, and Tom M.Mitchell. Toward an architecture for never-ending language learning. In AAAI , volume 5, pp. 3,2010.Micha ̈el Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks ongraphs with fast localized spectral filtering. In Advances in neural information processing systems(NIPS) , 2016.Brendan L. Douglas. The Weisfeiler-Lehman method and graph isomorphism testing. arXiv preprintarXiv:1101.5211 , 2011.David K. Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Al ́anAspuru-Guzik, and Ryan P. Adams. Convolutional networks on graphs for learning molecularfingerprints. In Advances in neural information processing systems (NIPS) , pp. 2224–2232, 2015.Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neuralnetworks. In AISTATS , volume 9, pp. 249–256, 2010.Marco Gori, Gabriele Monfardini, and Franco Scarselli. A new model for learning in graph domains.InProceedings. 2005 IEEE International Joint Conference on Neural Networks. , volume 2, pp.729–734. IEEE, 2005.Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedingsof the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining .ACM, 2016.9Published as a conference paper at ICLR 2017David K. Hammond, Pierre Vandergheynst, and R ́emi Gribonval. Wavelets on graphs via spectralgraph theory. Applied and Computational Harmonic Analysis , 30(2):129–150, 2011.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog-nition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) , 2016.Thorsten Joachims. Transductive inference for text classification using support vector machines. InInternational Conference on Machine Learning (ICML) , volume 99, pp. 200–209, 1999.Diederik P. Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. In Interna-tional Conference on Learning Representations (ICLR) , 2015.Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. 
Gated graph sequence neuralnetworks. In International Conference on Learning Representations (ICLR) , 2016.Qing Lu and Lise Getoor. Link-based classification. In International Conference on Machine Learn-ing (ICML) , volume 3, pp. 496–503, 2003.Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of MachineLearning Research (JMLR) , 9(Nov):2579–2605, 2008.Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed repre-sentations of words and phrases and their compositionality. In Advances in neural informationprocessing systems (NIPS) , pp. 3111–3119, 2013.Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural net-works for graphs. In International Conference on Machine Learning (ICML) , 2016.Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social repre-sentations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledgediscovery and data mining , pp. 701–710. ACM, 2014.Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini.The graph neural network model. IEEE Transactions on Neural Networks , 20(1):61–80, 2009.Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad.Collective classification in network data. AI magazine , 29(3):93, 2008.Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov.Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine LearningResearch (JMLR) , 15(1):1929–1958, 2014.Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. Line: Large-scaleinformation network embedding. In Proceedings of the 24th International Conference on WorldWide Web , pp. 1067–1077. ACM, 2015.Boris Weisfeiler and A. A. Lehmann. A reduction of a graph to a canonical form and an algebraarising during this reduction. Nauchno-Technicheskaya Informatsia , 2(9):12–16, 1968.Jason Weston, Fr ́ed ́eric Ratle, Hossein Mobahi, and Ronan Collobert. Deep learning via semi-supervised embedding. In Neural Networks: Tricks of the Trade , pp. 639–655. Springer, 2012.Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. Revisiting semi-supervised learning withgraph embeddings. In International Conference on Machine Learning (ICML) , 2016.Wayne W. Zachary. An information flow model for conflict and fission in small groups. Journal ofanthropological research , pp. 452–473, 1977.Dengyong Zhou, Olivier Bousquet, Thomas Navin Lal, Jason Weston, and Bernhard Sch ̈olkopf.Learning with local and global consistency. In Advances in neural information processing systems(NIPS) , volume 16, pp. 321–328, 2004.Xiaojin Zhu, Zoubin Ghahramani, and John Lafferty. Semi-supervised learning using gaussian fieldsand harmonic functions. In International Conference on Machine Learning (ICML) , volume 3,pp. 912–919, 2003.10Published as a conference paper at ICLR 2017A R ELATION TO WEISFEILER -LEHMAN ALGORITHMA neural network model for graph-structured data should ideally be able to learn representations ofnodes in a graph, taking both the graph structure and feature description of nodes into account. 
Awell-studied framework for the unique assignment of node labels given a graph and (optionally) dis-crete initial node labels is provided by the 1-dim Weisfeiler-Lehman (WL-1) algorithm (Weisfeiler& Lehmann, 1968):Algorithm 1: WL-1 algorithm (Weisfeiler & Lehmann, 1968)Input: Initial node coloring (h(0)1;h(0)2;:::;h(0)N)Output: Final node coloring (h(T)1;h(T)2;:::;h(T)N)t 0;repeatforvi2Vdoh(t+1)i hashPj2Nih(t)j;t t+ 1;until stable node coloring is reached ;Here,h(t)idenotes the coloring (label assignment) of node vi(at iteration t) andNiis its set ofneighboring node indices (irrespective of whether the graph includes self-connections for every nodeor not). hash()is a hash function. For an in-depth mathematical discussion of the WL-1 algorithmsee, e.g., Douglas (2011).We can replace the hash function in Algorithm 1 with a neural network layer-like differentiablefunction with trainable parameters as follows:h(l+1)i =0@Xj2Ni1cijh(l)jW(l)1A; (12)wherecijis an appropriately chosen normalization constant for the edge (vi;vj). Further, we cantakeh(l)inow to be a vector of activations of nodeiin thelthneural network layer. W(l)is alayer-specific weight matrix and ()denotes a differentiable, non-linear activation function.By choosing cij=pdidj, wheredi=jNijdenotes the degree of node vi, we recover the propaga-tion rule of our Graph Convolutional Network (GCN) model in vector form (see Eq. 2)5.This—loosely speaking—allows us to interpret our GCN model as a differentiable and parameter-ized generalization of the 1-dim Weisfeiler-Lehman algorithm on graphs.A.1 N ODE EMBEDDINGS WITH RANDOM WEIGHTSFrom the analogy with the Weisfeiler-Lehman algorithm, we can understand that even an untrainedGCN model with random weights can serve as a powerful feature extractor for nodes in a graph. Asan example, consider the following 3-layer GCN model:Z= tanh^Atanh^Atanh^AXW(0)W(1)W(2); (13)with weight matrices W(l)initialized at random using the initialization described in Glorot & Bengio(2010). ^A,XandZare defined as in Section 3.1.We apply this model on Zachary’s karate club network (Zachary, 1977). This graph contains 34nodes, connected by 154 (undirected and unweighted) edges. Every node is labeled by one offour classes, obtained via modularity-based clustering (Brandes et al., 2008). See Figure 3a for anillustration.5Note that we here implicitly assume that self-connections have already been added to every node in thegraph (for a clutter-free notation).11Published as a conference paper at ICLR 2017(a) Karate club network (b) Random weight embeddingFigure 3: Left: Zachary’s karate club network (Zachary, 1977), colors denote communities obtainedvia modularity-based clustering (Brandes et al., 2008). Right : Embeddings obtained from an un-trained 3-layer GCN model (Eq. 13) with random weights applied to the karate club network. Bestviewed on a computer screen.We take a featureless approach by setting X=IN, whereINis theNbyNidentity matrix. Nisthe number of nodes in the graph. Note that nodes are randomly ordered (i.e. ordering contains noinformation). Furthermore, we choose a hidden layer dimensionality6of4and a two-dimensionaloutput (so that the output can immediately be visualized in a 2-dim plot).Figure 3b shows a representative example of node embeddings (outputs Z) obtained from an un-trained GCN model applied to the karate club network. 
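A sketch of this appendix experiment: an untrained three-layer tanh GCN (Eq. 13) with random weights applied to the karate club graph, producing a 2-D embedding per node. It assumes networkx for the graph and uses a Glorot-style scale for the random weights; the hidden sizes (4, 4, 2) follow the text, the rest is a simplification.

```python
import networkx as nx
import numpy as np

G = nx.karate_club_graph()                      # Zachary's karate club, 34 nodes
A = nx.to_numpy_array(G)
N = A.shape[0]
A_tilde = A + np.eye(N)
d = 1.0 / np.sqrt(A_tilde.sum(axis=1))
A_hat = A_tilde * d[:, None] * d[None, :]       # D~^{-1/2} A~ D~^{-1/2}

rng = np.random.default_rng(42)
H = np.eye(N)                                   # featureless input: X = I_N
for fan_in, fan_out in zip((N, 4, 4), (4, 4, 2)):
    W = rng.normal(scale=np.sqrt(2.0 / (fan_in + fan_out)), size=(fan_in, fan_out))
    H = np.tanh(A_hat @ H @ W)                  # Eq. 13, applied layer by layer

print(H.shape)                                  # (34, 2): untrained 2-D node embeddings
```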
These results are comparable to embeddingsobtained from DeepWalk (Perozzi et al., 2014), which uses a more expensive unsupervised trainingprocedure.A.2 S EMI-SUPERVISED NODE EMBEDDINGSOn this simple example of a GCN applied to the karate club network it is interesting to observe howembeddings react during training on a semi-supervised classification task. Such a visualization (seeFigure 4) provides insights into how the GCN model can make use of the graph structure (and offeatures extracted from the graph structure at later layers) to learn embeddings that are useful for aclassification task.We consider the following semi-supervised learning setup: we add a softmax layer on top of ourmodel (Eq. 13) and train using only a single labeled example per class (i.e. a total number of 4 labelednodes). We train for 300 training iterations using Adam (Kingma & Ba, 2015) with a learning rateof0:01on a cross-entropy loss.Figure 4 shows the evolution of node embeddings over a number of training iterations. The modelsucceeds in linearly separating the communities based on minimal supervision and the graph struc-ture alone. A video of the full training process can be found on our website7.6We originally experimented with a hidden layer dimensionality of 2(i.e. same as output layer), but observedthat a dimensionality of 4resulted in less frequent saturation of tanh()units and therefore visually morepleasing results.7http://tkipf.github.io/graph-convolutional-networks/12Published as a conference paper at ICLR 2017(a) Iteration 25 (b) Iteration 50(c) Iteration 75 (d) Iteration 100(e) Iteration 200 (f) Iteration 300Figure 4: Evolution of karate club network node embeddings obtained from a GCN model after anumber of semi-supervised training iterations. Colors denote class. Nodes of which labels wereprovided during training (one per class) are highlighted (grey outline). Grey links between nodesdenote graph edges. Best viewed on a computer screen.13Published as a conference paper at ICLR 2017B E XPERIMENTS ON MODEL DEPTHIn these experiments, we investigate the influence of model depth (number of layers) on classificationperformance. We report results on a 5-fold cross-validation experiment on the Cora, Citeseer andPubmed datasets (Sen et al., 2008) using all labels. In addition to the standard GCN model (Eq. 2),we report results on a model variant where we use residual connections (He et al., 2016) betweenhidden layers to facilitate training of deeper models by enabling the model to carry over informationfrom the previous layer’s input:H(l+1)=~D12~A~D12H(l)W(l)+H(l): (14)On each cross-validation split, we train for 400 epochs (without early stopping) using the Adamoptimizer (Kingma & Ba, 2015) with a learning rate of 0:01. Other hyperparameters are chosen asfollows: 0.5 (dropout rate, first and last layer), 5104(L2 regularization, first layer), 16 (numberof units for each hidden layer) and 0.01 (learning rate). Results are summarized in Figure 5.12345678910Number of layers0.500.550.600.650.700.750.800.850.90AccuracyCiteseerTrainTrain (Residual)TestTest (Residual)12345678910Number of layers0.550.600.650.700.750.800.850.900.95AccuracyCoraTrainTrain (Residual)TestTest (Residual)12345678910Number of layers0.760.780.800.820.840.860.88AccuracyPubmedTrainTrain (Residual)TestTest (Residual)Figure 5: Influence of model depth (number of layers) on classification performance. Markersdenote mean classification accuracy (training vs. testing) for 5-fold cross-validation. Shaded areasdenote standard error. 
We show results both for a standard GCN model (dashed lines) and a model with added residual connections (He et al., 2016) between hidden layers (solid lines). For the datasets considered here, best results are obtained with a 2- or 3-layer model. We observe that for models deeper than 7 layers, training without the use of residual connections can become difficult, as the effective context size for each node increases by the size of its K-th-order neighborhood (for a model with K layers) with each additional layer. Furthermore, overfitting can become an issue as the number of parameters increases with model depth.
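The residual variant used in these depth experiments only adds an identity skip connection to the standard propagation step. A minimal sketch of Eq. 14, assuming a pre-normalized A_hat as in the earlier snippets and a constant hidden dimensionality so that the addition is well-defined:

```python
import numpy as np

def residual_gcn_layer(A_hat, H, W, activation=lambda z: np.maximum(z, 0.0)):
    """Eq. 14: H^(l+1) = sigma(A_hat H^(l) W^(l)) + H^(l) (identity skip connection)."""
    return activation(A_hat @ H @ W) + H
```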
HJMB4vQ4e
SJU4ayYgl
ICLR.cc/2017/conference/-/paper72/official/review
{"title": "Solid results.", "rating": "7: Good paper, accept", "review": "This paper proposes the graph convolutional networks, motivated from approximating graph convolutions. In one propagation step, what the model does can be simplified as, first linearly transform the node representations for each node, and then multiply the transformed node representations with the normalized affinity matrix (with self-connections added), and then pass through nonlinearity.\n\nThis model is used for semi-supervised learning on graphs, and in the experiments it demonstrated quite impressive results compared to other baselines, outperforming them by a significant margin. The evaluation of propagation model is also interesting, where different variants of the model and design decisions are evaluated and compared.\n\nIt is surprising that such a simple model works so much better than all the baselines. Considering that the model used is just a two-layer model in most experiments, this is really surprising as a two-layer model is very local, and the output of a node can only be affected by nodes in a 2-hop neighborhood, and no longer range interactions can play any roles in this. Since computation is quite efficient (sec. 6.3), I wonder if adding more layers helped anything or not.\n\nEven though motivated from graph convolutions, when simplified as the paper suggests, the operations the model does are quite simple. Compared to Duvenaud et al. 2015 and Li et al. 2016, the proposed method is simpler and does almost strictly less things. So how would the proposed GCN compare against these methods?\n\nOverall I think this model is simple, but the connection to graph convolutions is interesting, and the experiment results are quite good. There are a few questions that still remain, but I feel this paper can be accepted.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Semi-Supervised Classification with Graph Convolutional Networks
["Thomas N. Kipf", "Max Welling"]
We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.
["Deep learning", "Semi-Supervised Learning"]
https://openreview.net/forum?id=SJU4ayYgl
https://openreview.net/pdf?id=SJU4ayYgl
https://openreview.net/forum?id=SJU4ayYgl&noteId=HJMB4vQ4e
Published as a conference paper at ICLR 2017SEMI-SUPERVISED CLASSIFICATION WITHGRAPH CONVOLUTIONAL NETWORKSThomas N. KipfUniversity of AmsterdamT.N.Kipf@uva.nlMax WellingUniversity of AmsterdamCanadian Institute for Advanced Research (CIFAR)M.Welling@uva.nlABSTRACTWe present a scalable approach for semi-supervised learning on graph-structureddata that is based on an efficient variant of convolutional neural networks whichoperate directly on graphs. We motivate the choice of our convolutional archi-tecture via a localized first-order approximation of spectral graph convolutions.Our model scales linearly in the number of graph edges and learns hidden layerrepresentations that encode both local graph structure and features of nodes. Ina number of experiments on citation networks and on a knowledge graph datasetwe demonstrate that our approach outperforms related methods by a significantmargin.1 I NTRODUCTIONWe consider the problem of classifying nodes (such as documents) in a graph (such as a citationnetwork), where labels are only available for a small subset of nodes. This problem can be framedas graph-based semi-supervised learning, where label information is smoothed over the graph viasome form of explicit graph-based regularization (Zhu et al., 2003; Zhou et al., 2004; Belkin et al.,2006; Weston et al., 2012), e.g. by using a graph Laplacian regularization term in the loss function:L=L0+Lreg;withLreg=Xi;jAijkf(Xi)f(Xj)k2=f(X)>f(X): (1)Here,L0denotes the supervised loss w.r.t. the labeled part of the graph, f()can be a neural network-like differentiable function, is a weighing factor and Xis a matrix of node feature vectors Xi. =DAdenotes the unnormalized graph Laplacian of an undirected graph G= (V;E)withNnodesvi2V, edges (vi;vj)2E, an adjacency matrix A2RNN(binary or weighted) anda degree matrix Dii=PjAij. The formulation of Eq. 1 relies on the assumption that connectednodes in the graph are likely to share the same label. This assumption, however, might restrictmodeling capacity, as graph edges need not necessarily encode node similarity, but could containadditional information.In this work, we encode the graph structure directly using a neural network model f(X;A)andtrain on a supervised target L0for all nodes with labels, thereby avoiding explicit graph-basedregularization in the loss function. Conditioning f()on the adjacency matrix of the graph willallow the model to distribute gradient information from the supervised loss L0and will enable it tolearn representations of nodes both with and without labels.Our contributions are two-fold. Firstly, we introduce a simple and well-behaved layer-wise prop-agation rule for neural network models which operate directly on graphs and show how it can bemotivated from a first-order approximation of spectral graph convolutions (Hammond et al., 2011).Secondly, we demonstrate how this form of a graph-based neural network model can be used forfast and scalable semi-supervised classification of nodes in a graph. Experiments on a number ofdatasets demonstrate that our model compares favorably both in classification accuracy and effi-ciency (measured in wall-clock time) against state-of-the-art methods for semi-supervised learning.1Published as a conference paper at ICLR 20172 F AST APPROXIMATE CONVOLUTIONS ON GRAPHSIn this section, we provide theoretical motivation for a specific graph-based neural network modelf(X;A)that we will use in the rest of this paper. 
We consider a multi-layer Graph ConvolutionalNetwork (GCN) with the following layer-wise propagation rule:H(l+1)=~D12~A~D12H(l)W(l): (2)Here, ~A=A+INis the adjacency matrix of the undirected graph Gwith added self-connections.INis the identity matrix, ~Dii=Pj~AijandW(l)is a layer-specific trainable weight matrix. ()denotes an activation function, such as the ReLU() = max(0;).H(l)2RNDis the matrix of ac-tivations in the lthlayer;H(0)=X. In the following, we show that the form of this propagation rulecan be motivated1via a first-order approximation of localized spectral filters on graphs (Hammondet al., 2011; Defferrard et al., 2016).2.1 S PECTRAL GRAPH CONVOLUTIONSWe consider spectral convolutions on graphs defined as the multiplication of a signal x2RN(ascalar for every node) with a filter g=diag()parameterized by 2RNin the Fourier domain,i.e.:g?x=UgU>x; (3)whereUis the matrix of eigenvectors of the normalized graph Laplacian L=IND12AD12=UU>, with a diagonal matrix of its eigenvalues andU>xbeing the graph Fourier transformofx. We can understand gas a function of the eigenvalues of L, i.e.g(). Evaluating Eq. 3 iscomputationally expensive, as multiplication with the eigenvector matrix UisO(N2). Furthermore,computing the eigendecomposition of Lin the first place might be prohibitively expensive for largegraphs. To circumvent this problem, it was suggested in Hammond et al. (2011) that g()can bewell-approximated by a truncated expansion in terms of Chebyshev polynomials Tk(x)up toKthorder:g0()KXk=00kTk(~); (4)with a rescaled ~ =2maxIN.maxdenotes the largest eigenvalue of L.02RKis now avector of Chebyshev coefficients. The Chebyshev polynomials are recursively defined as Tk(x) =2xTk1(x)Tk2(x), withT0(x) = 1 andT1(x) =x. The reader is referred to Hammond et al.(2011) for an in-depth discussion of this approximation.Going back to our definition of a convolution of a signal xwith a filterg0, we now have:g0?xKXk=00kTk(~L)x; (5)with ~L=2maxLIN; as can easily be verified by noticing that (UU>)k=UkU>. Note thatthis expression is now K-localized since it is a Kth-order polynomial in the Laplacian, i.e. it dependsonly on nodes that are at maximum Ksteps away from the central node ( Kth-order neighborhood).The complexity of evaluating Eq. 5 is O(jEj), i.e. linear in the number of edges. Defferrard et al.(2016) use this K-localized convolution to define a convolutional neural network on graphs.2.2 L AYER -WISELINEAR MODELA neural network model based on graph convolutions can therefore be built by stacking multipleconvolutional layers of the form of Eq. 5, each layer followed by a point-wise non-linearity. Now,imagine we limited the layer-wise convolution operation to K= 1(see Eq. 5), i.e. a function that islinear w.r.t.Land therefore a linear function on the graph Laplacian spectrum.1We provide an alternative interpretation of this propagation rule based on the Weisfeiler-Lehman algorithm(Weisfeiler & Lehmann, 1968) in Appendix A.2Published as a conference paper at ICLR 2017In this way, we can still recover a rich class of convolutional filter functions by stacking multiplesuch layers, but we are not limited to the explicit parameterization given by, e.g., the Chebyshevpolynomials. We intuitively expect that such a model can alleviate the problem of overfitting onlocal neighborhood structures for graphs with very wide node degree distributions, such as socialnetworks, citation networks, knowledge graphs and many other real-world graph datasets. 
Addition-ally, for a fixed computational budget, this layer-wise linear formulation allows us to build deepermodels, a practice that is known to improve modeling capacity on a number of domains (He et al.,2016).In this linear formulation of a GCN we further approximate max2, as we can expect that neuralnetwork parameters will adapt to this change in scale during training. Under these approximationsEq. 5 simplifies to:g0?x00x+01(LIN)x=00x01D12AD12x; (6)with two free parameters 00and01. The filter parameters can be shared over the whole graph.Successive application of filters of this form then effectively convolve the kth-order neighborhood ofa node, where kis the number of successive filtering operations or convolutional layers in the neuralnetwork model.In practice, it can be beneficial to constrain the number of parameters further to address overfittingand to minimize the number of operations (such as matrix multiplications) per layer. This leaves uswith the following expression:g?xIN+D12AD12x; (7)with a single parameter =00=01. Note that IN+D12AD12now has eigenvalues inthe range [0;2]. Repeated application of this operator can therefore lead to numerical instabilitiesand exploding/vanishing gradients when used in a deep neural network model. To alleviate thisproblem, we introduce the following renormalization trick :IN+D12AD12!~D12~A~D12, with~A=A+INand~Dii=Pj~Aij.We can generalize this definition to a signal X2RNCwithCinput channels (i.e. a C-dimensionalfeature vector for every node) and Ffilters or feature maps as follows:Z=~D12~A~D12X; (8)where 2RCFis now a matrix of filter parameters and Z2RNFis the convolved signalmatrix. This filtering operation has complexity O(jEjFC), as~AX can be efficiently implementedas a product of a sparse matrix with a dense matrix.3 S EMI-SUPERVISED NODE CLASSIFICATIONHaving introduced a simple, yet flexible model f(X;A)for efficient information propagation ongraphs, we can return to the problem of semi-supervised node classification. As outlined in the in-troduction, we can relax certain assumptions typically made in graph-based semi-supervised learn-ing by conditioning our model f(X;A)both on the data Xand on the adjacency matrix Aof theunderlying graph structure. We expect this setting to be especially powerful in scenarios where theadjacency matrix contains information not present in the data X, such as citation links between doc-uments in a citation network or relations in a knowledge graph. The overall model, a multi-layerGCN for semi-supervised learning, is schematically depicted in Figure 1.3.1 E XAMPLEIn the following, we consider a two-layer GCN for semi-supervised node classification on a graphwith a symmetric adjacency matrix A(binary or weighted). We first calculate ^A=~D12~A~D12ina pre-processing step. Our forward model then takes the simple form:Z=f(X;A) = softmax^AReLU^AXW(0)W(1): (9)3Published as a conference paper at ICLR 2017Cinput layerX1X2X3X4Foutput layerZ1Z2Z3Z4hiddenlayersY1Y41(a) Graph Convolutional Network30 20 10 0 10 20 303020100102030 (b) Hidden layer activationsFigure 1: Left: Schematic depiction of multi-layer Graph Convolutional Network (GCN) for semi-supervised learning with Cinput channels and Ffeature maps in the output layer. The graph struc-ture (edges shown as black lines) is shared over layers, labels are denoted by Yi.Right : t-SNE(Maaten & Hinton, 2008) visualization of hidden layer activations of a two-layer GCN trained onthe Cora dataset (Sen et al., 2008) using 5%of labels. 
[Figure 1 panels: (a) Graph Convolutional Network; (b) Hidden layer activations (t-SNE plot).]
Figure 1: Left: Schematic depiction of multi-layer Graph Convolutional Network (GCN) for semi-supervised learning with $C$ input channels and $F$ feature maps in the output layer. The graph structure (edges shown as black lines) is shared over layers, labels are denoted by $Y_i$. Right: t-SNE (Maaten & Hinton, 2008) visualization of hidden layer activations of a two-layer GCN trained on the Cora dataset (Sen et al., 2008) using 5% of labels. Colors denote document class.

Here, $W^{(0)} \in \mathbb{R}^{C \times H}$ is an input-to-hidden weight matrix for a hidden layer with $H$ feature maps. $W^{(1)} \in \mathbb{R}^{H \times F}$ is a hidden-to-output weight matrix. The softmax activation function, defined as $\mathrm{softmax}(x_i) = \frac{1}{\mathcal{Z}} \exp(x_i)$ with $\mathcal{Z} = \sum_i \exp(x_i)$, is applied row-wise. For semi-supervised multi-class classification, we then evaluate the cross-entropy error over all labeled examples:

$\mathcal{L} = - \sum_{l \in \mathcal{Y}_L} \sum_{f=1}^{F} Y_{lf} \ln Z_{lf}$,   (10)

where $\mathcal{Y}_L$ is the set of node indices that have labels.

The neural network weights $W^{(0)}$ and $W^{(1)}$ are trained using gradient descent. In this work, we perform batch gradient descent using the full dataset for every training iteration, which is a viable option as long as datasets fit in memory. Using a sparse representation for $A$, memory requirement is $\mathcal{O}(|\mathcal{E}|)$, i.e. linear in the number of edges. Stochasticity in the training process is introduced via dropout (Srivastava et al., 2014). We leave memory-efficient extensions with mini-batch stochastic gradient descent for future work.

3.2 IMPLEMENTATION

In practice, we make use of TensorFlow (Abadi et al., 2015) for an efficient GPU-based implementation² of Eq. 9 using sparse-dense matrix multiplications. The computational complexity of evaluating Eq. 9 is then $\mathcal{O}(|\mathcal{E}| C H F)$, i.e. linear in the number of graph edges.

²Code to reproduce our experiments is available at https://github.com/tkipf/gcn.
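For illustration, a minimal NumPy sketch of the forward model (Eq. 9) and of the loss over labeled nodes (Eq. 10); this is our reading of the equations and omits dropout, weight regularization and the Adam update used in the actual experiments.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def row_softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def gcn_two_layer(a_hat, features, w0, w1):
    """Two-layer forward model (Eq. 9): softmax(A_hat ReLU(A_hat X W0) W1)."""
    hidden = relu(a_hat @ (features @ w0))
    return row_softmax(a_hat @ (hidden @ w1))

def labeled_cross_entropy(probs, onehot_labels, labeled_idx):
    """Cross-entropy of Eq. 10, summed over the labeled node indices only."""
    clipped = np.clip(probs[labeled_idx], 1e-12, 1.0)   # guard against log(0)
    return -np.sum(onehot_labels[labeled_idx] * np.log(clipped))
```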
4 RELATED WORK

Our model draws inspiration both from the field of graph-based semi-supervised learning and from recent work on neural networks that operate on graphs. In what follows, we provide a brief overview on related work in both fields.

4.1 GRAPH-BASED SEMI-SUPERVISED LEARNING

A large number of approaches for semi-supervised learning using graph representations have been proposed in recent years, most of which fall into two broad categories: methods that use some form of explicit graph Laplacian regularization and graph embedding-based approaches. Prominent examples for graph Laplacian regularization include label propagation (Zhu et al., 2003), manifold regularization (Belkin et al., 2006) and deep semi-supervised embedding (Weston et al., 2012).

Recently, attention has shifted to models that learn graph embeddings with methods inspired by the skip-gram model (Mikolov et al., 2013). DeepWalk (Perozzi et al., 2014) learns embeddings via the prediction of the local neighborhood of nodes, sampled from random walks on the graph. LINE (Tang et al., 2015) and node2vec (Grover & Leskovec, 2016) extend DeepWalk with more sophisticated random walk or breadth-first search schemes. For all these methods, however, a multi-step pipeline including random walk generation and semi-supervised training is required where each step has to be optimized separately. Planetoid (Yang et al., 2016) alleviates this by injecting label information in the process of learning embeddings.

4.2 NEURAL NETWORKS ON GRAPHS

Neural networks that operate on graphs have previously been introduced in Gori et al. (2005); Scarselli et al. (2009) as a form of recurrent neural network. Their framework requires the repeated application of contraction maps as propagation functions until node representations reach a stable fixed point. This restriction was later alleviated in Li et al. (2016) by introducing modern practices for recurrent neural network training to the original graph neural network framework. Duvenaud et al. (2015) introduced a convolution-like propagation rule on graphs and methods for graph-level classification. Their approach requires learning node degree-specific weight matrices, which does not scale to large graphs with wide node degree distributions. Our model instead uses a single weight matrix per layer and deals with varying node degrees through an appropriate normalization of the adjacency matrix (see Section 3.1).

A related approach to node classification with a graph-based neural network was recently introduced in Atwood & Towsley (2016). They report $\mathcal{O}(N^2)$ complexity, limiting the range of possible applications. In a different yet related model, Niepert et al. (2016) convert graphs locally into sequences that are fed into a conventional 1D convolutional neural network, which requires the definition of a node ordering in a pre-processing step.

Our method is based on spectral graph convolutional neural networks, introduced in Bruna et al. (2014) and later extended by Defferrard et al. (2016) with fast localized convolutions. In contrast to these works, we consider here the task of transductive node classification within networks of significantly larger scale. We show that in this setting, a number of simplifications (see Section 2.2) can be introduced to the original frameworks of Bruna et al. (2014) and Defferrard et al. (2016) that improve scalability and classification performance in large-scale networks.

5 EXPERIMENTS

We test our model in a number of experiments: semi-supervised document classification in citation networks, semi-supervised entity classification in a bipartite graph extracted from a knowledge graph, an evaluation of various graph propagation models and a run-time analysis on random graphs.

5.1 DATASETS

We closely follow the experimental setup in Yang et al. (2016). Dataset statistics are summarized in Table 1. In the citation network datasets—Citeseer, Cora and Pubmed (Sen et al., 2008)—nodes are documents and edges are citation links. Label rate denotes the number of labeled nodes that are used for training divided by the total number of nodes in each dataset. NELL (Carlson et al., 2010; Yang et al., 2016) is a bipartite graph dataset extracted from a knowledge graph with 55,864 relation nodes and 9,891 entity nodes.

Table 1: Dataset statistics, as reported in Yang et al. (2016).

Dataset  | Type             | Nodes  | Edges   | Classes | Features | Label rate
Citeseer | Citation network | 3,327  | 4,732   | 6       | 3,703    | 0.036
Cora     | Citation network | 2,708  | 5,429   | 7       | 1,433    | 0.052
Pubmed   | Citation network | 19,717 | 44,338  | 3       | 500      | 0.003
NELL     | Knowledge graph  | 65,755 | 266,144 | 210     | 5,414    | 0.001

Citation networks: We consider three citation network datasets: Citeseer, Cora and Pubmed (Sen et al., 2008). The datasets contain sparse bag-of-words feature vectors for each document and a list of citation links between documents. We treat the citation links as (undirected) edges and construct a binary, symmetric adjacency matrix $A$. Each document has a class label. For training, we only use 20 labels per class, but all feature vectors.

NELL: NELL is a dataset extracted from the knowledge graph introduced in Carlson et al. (2010). A knowledge graph is a set of entities connected with directed, labeled edges (relations). We follow the pre-processing scheme as described in Yang et al. (2016). We assign separate relation nodes $r_1$ and $r_2$ for each entity pair $(e_1, r, e_2)$ as $(e_1, r_1)$ and $(e_2, r_2)$. Entity nodes are described by sparse feature vectors. We extend the number of features in NELL by assigning a unique one-hot representation for every relation node, effectively resulting in a 61,278-dim sparse feature vector per node. The semi-supervised task here considers the extreme case of only a single labeled example per class in the training set. We construct a binary, symmetric adjacency matrix from this graph by setting entries $A_{ij} = 1$ if one or more edges are present between nodes $i$ and $j$.
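A small SciPy helper (ours; the function name is hypothetical) that builds such a binary, symmetric adjacency matrix from a list of possibly directed or repeated edge pairs:

```python
import numpy as np
import scipy.sparse as sp

def edges_to_adj(edges, num_nodes):
    """Binary, symmetric adjacency matrix: A_ij = 1 if at least one link exists
    between nodes i and j (duplicate and reverse edges are collapsed)."""
    rows, cols = zip(*edges)
    data = np.ones(len(edges), dtype=np.float32)
    adj = sp.coo_matrix((data, (rows, cols)), shape=(num_nodes, num_nodes))
    adj = (adj + adj.T).tocsr()    # symmetrize; duplicate entries are summed
    adj.data[:] = 1.0              # collapse counts/weights to a binary matrix
    return adj
```

For example, edges_to_adj([(0, 1), (1, 0), (1, 2)], num_nodes=3) places ones at positions (0, 1), (1, 0), (1, 2) and (2, 1).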
Random graphs: We simulate random graph datasets of various sizes for experiments where we measure training time per epoch. For a dataset with $N$ nodes we create a random graph assigning $2N$ edges uniformly at random. We take the identity matrix $I_N$ as input feature matrix $X$, thereby implicitly taking a featureless approach where the model is only informed about the identity of each node, specified by a unique one-hot vector. We add dummy labels $Y_i = 1$ for every node.
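A sketch of how such a timing benchmark can be generated (our reading of the description above; the exact sampling of the $2N$ edges is an assumption — endpoint pairs are drawn independently here, so a few self-loops or duplicates may occur):

```python
import numpy as np
import scipy.sparse as sp

def random_graph_dataset(num_nodes, seed=0):
    """N nodes, 2N randomly placed edges, featureless input X = I_N and
    dummy labels Y_i = 1, as used for the per-epoch timing experiments."""
    rng = np.random.default_rng(seed)
    src = rng.integers(0, num_nodes, size=2 * num_nodes)
    dst = rng.integers(0, num_nodes, size=2 * num_nodes)
    adj = sp.coo_matrix((np.ones(2 * num_nodes), (src, dst)),
                        shape=(num_nodes, num_nodes))
    adj = (adj + adj.T).tocsr()
    adj.data[:] = 1.0                              # binary, symmetric adjacency
    features = sp.eye(num_nodes, format='csr')     # one-hot node identities
    labels = np.ones(num_nodes, dtype=np.int64)    # dummy labels
    return adj, features, labels
```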
5.2 EXPERIMENTAL SET-UP

Unless otherwise noted, we train a two-layer GCN as described in Section 3.1 and evaluate prediction accuracy on a test set of 1,000 labeled examples. We provide additional experiments using deeper models with up to 10 layers in Appendix B. We choose the same dataset splits as in Yang et al. (2016) with an additional validation set of 500 labeled examples for hyperparameter optimization (dropout rate for all layers, L2 regularization factor for the first GCN layer and number of hidden units). We do not use the validation set labels for training.

For the citation network datasets, we optimize hyperparameters on Cora only and use the same set of parameters for Citeseer and Pubmed. We train all models for a maximum of 200 epochs (training iterations) using Adam (Kingma & Ba, 2015) with a learning rate of 0.01 and early stopping with a window size of 10, i.e. we stop training if the validation loss does not decrease for 10 consecutive epochs. We initialize weights using the initialization described in Glorot & Bengio (2010) and accordingly (row-)normalize input feature vectors. On the random graph datasets, we use a hidden layer size of 32 units and omit regularization (i.e. neither dropout nor L2 regularization).

5.3 BASELINES

We compare against the same baseline methods as in Yang et al. (2016), i.e. label propagation (LP) (Zhu et al., 2003), semi-supervised embedding (SemiEmb) (Weston et al., 2012), manifold regularization (ManiReg) (Belkin et al., 2006) and skip-gram based graph embeddings (DeepWalk) (Perozzi et al., 2014). We omit TSVM (Joachims, 1999), as it does not scale to the large number of classes in one of our datasets.

We further compare against the iterative classification algorithm (ICA) proposed in Lu & Getoor (2003) in conjunction with two logistic regression classifiers, one for local node features alone and one for relational classification using local features and an aggregation operator as described in Sen et al. (2008). We first train the local classifier using all labeled training set nodes and use it to bootstrap class labels of unlabeled nodes for relational classifier training. We run iterative classification (relational classifier) with a random node ordering for 10 iterations on all unlabeled nodes (bootstrapped using the local classifier). The L2 regularization parameter and aggregation operator (count vs. prop, see Sen et al. (2008)) are chosen based on validation set performance for each dataset separately.

Lastly, we compare against Planetoid (Yang et al., 2016), where we always choose their best-performing model variant (transductive vs. inductive) as a baseline.

6 RESULTS

6.1 SEMI-SUPERVISED NODE CLASSIFICATION

Results are summarized in Table 2. Reported numbers denote classification accuracy in percent. For ICA, we report the mean accuracy of 100 runs with random node orderings. Results for all other baseline methods are taken from the Planetoid paper (Yang et al., 2016). Planetoid* denotes the best model for the respective dataset out of the variants presented in their paper.

Table 2: Summary of results in terms of classification accuracy (in percent).

Method             | Citeseer   | Cora       | Pubmed     | NELL
ManiReg [3]        | 60.1       | 59.5       | 70.7       | 21.8
SemiEmb [28]       | 59.6       | 59.0       | 71.1       | 26.7
LP [32]            | 45.3       | 68.0       | 63.0       | 26.5
DeepWalk [22]      | 43.2       | 67.2       | 65.3       | 58.1
ICA [18]           | 69.1       | 75.1       | 73.9       | 23.1
Planetoid* [29]    | 64.7 (26s) | 75.7 (13s) | 77.2 (25s) | 61.9 (185s)
GCN (this paper)   | 70.3 (7s)  | 81.5 (4s)  | 79.0 (38s) | 66.0 (48s)
GCN (rand. splits) | 67.9 ± 0.5 | 80.1 ± 0.5 | 78.9 ± 0.7 | 58.4 ± 1.7

We further report wall-clock training time in seconds until convergence (in brackets) for our method (incl. evaluation of validation error) and for Planetoid. For the latter, we used an implementation provided by the authors³ and trained on the same hardware (with GPU) as our GCN model. We trained and tested our model on the same dataset splits as in Yang et al. (2016) and report mean accuracy of 100 runs with random weight initializations. We used the following sets of hyperparameters for Citeseer, Cora and Pubmed: 0.5 (dropout rate), $5 \cdot 10^{-4}$ (L2 regularization) and 16 (number of hidden units); and for NELL: 0.1 (dropout rate), $1 \cdot 10^{-5}$ (L2 regularization) and 64 (number of hidden units).

³https://github.com/kimiyoung/planetoid

In addition, we report performance of our model on 10 randomly drawn dataset splits of the same size as in Yang et al. (2016), denoted by GCN (rand. splits). Here, we report mean and standard error of prediction accuracy on the test set split in percent.

6.2 EVALUATION OF PROPAGATION MODEL

We compare different variants of our proposed per-layer propagation model on the citation network datasets. We follow the experimental set-up described in the previous section. Results are summarized in Table 3. The propagation model of our original GCN model is denoted by renormalization trick (in bold). In all other cases, the propagation model of both neural network layers is replaced with the model specified under propagation model. Reported numbers denote mean classification accuracy for 100 repeated runs with random weight matrix initializations. In case of multiple variables $\Theta_i$ per layer, we impose L2 regularization on all weight matrices of the first layer.

Table 3: Comparison of propagation models.

Description                     | Propagation model                                                      | Citeseer | Cora | Pubmed
Chebyshev filter (Eq. 5), K = 3 | $\sum_{k=0}^{K} T_k(\tilde{L}) X \Theta_k$                             | 69.8     | 79.5 | 74.4
Chebyshev filter (Eq. 5), K = 2 | $\sum_{k=0}^{K} T_k(\tilde{L}) X \Theta_k$                             | 69.6     | 81.2 | 73.8
1st-order model (Eq. 6)         | $X \Theta_0 + D^{-\frac{1}{2}} A D^{-\frac{1}{2}} X \Theta_1$          | 68.3     | 80.0 | 77.5
Single parameter (Eq. 7)        | $(I_N + D^{-\frac{1}{2}} A D^{-\frac{1}{2}}) X \Theta$                 | 69.3     | 79.2 | 77.4
Renormalization trick (Eq. 8)   | $\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} X \Theta$ | 70.3     | 81.5 | 79.0
1st-order term only             | $D^{-\frac{1}{2}} A D^{-\frac{1}{2}} X \Theta$                         | 68.7     | 80.5 | 77.8
Multi-layer perceptron          | $X \Theta$                                                             | 46.5     | 55.1 | 71.4

6.3 TRAINING TIME PER EPOCH

[Figure 2 axes: # Edges (1k–10M, log scale) vs. Sec./epoch (log scale); curves: GPU, CPU.]
Figure 2: Wall-clock time per epoch for random graphs. (*) indicates out-of-memory error.

Here, we report results for the mean training time per epoch (forward pass, cross-entropy calculation, backward pass) for 100 epochs on simulated random graphs, measured in seconds wall-clock time. See Section 5.1 for a detailed description of the random graph dataset used in these experiments. We compare results on a GPU and on a CPU-only implementation⁴ in TensorFlow (Abadi et al., 2015). Figure 2 summarizes the results.
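For reference, the single-layer propagation variants compared in Table 3 can be written as small functions along the following lines (our sketch; sym_normalize computes the normalization without self-connections, and a_hat denotes the renormalized matrix from earlier):

```python
import numpy as np
import scipy.sparse as sp

def sym_normalize(adj):
    """D^{-1/2} A D^{-1/2} without self-connections (Eqs. 6-7 and the
    '1st-order term only' row of Table 3)."""
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(np.power(deg, -0.5))
    return (d_inv_sqrt @ adj @ d_inv_sqrt).tocsr()

def first_order(adj_norm, x, theta0, theta1):
    """1st-order model (Eq. 6): X Theta_0 + D^{-1/2} A D^{-1/2} X Theta_1."""
    return x @ theta0 + adj_norm @ (x @ theta1)

def single_parameter(adj_norm, x, theta):
    """Single-parameter model (Eq. 7): (I_N + D^{-1/2} A D^{-1/2}) X Theta."""
    return x @ theta + adj_norm @ (x @ theta)

def renormalized(a_hat, x, theta):
    """Renormalization trick (Eq. 8): A_hat X Theta."""
    return a_hat @ (x @ theta)
```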
7 DISCUSSION

7.1 SEMI-SUPERVISED MODEL

In the experiments demonstrated here, our method for semi-supervised node classification outperforms recent related methods by a significant margin. Methods based on graph-Laplacian regularization (Zhu et al., 2003; Belkin et al., 2006; Weston et al., 2012) are most likely limited due to their assumption that edges encode mere similarity of nodes. Skip-gram based methods on the other hand are limited by the fact that they are based on a multi-step pipeline which is difficult to optimize. Our proposed model can overcome both limitations, while still comparing favorably in terms of efficiency (measured in wall-clock time) to related methods. Propagation of feature information from neighboring nodes in every layer improves classification performance in comparison to methods like ICA (Lu & Getoor, 2003), where only label information is aggregated.

We have further demonstrated that the proposed renormalized propagation model (Eq. 8) offers both improved efficiency (fewer parameters and operations, such as multiplication or addition) and better predictive performance on a number of datasets compared to a naïve 1st-order model (Eq. 6) or higher-order graph convolutional models using Chebyshev polynomials (Eq. 5).

7.2 LIMITATIONS AND FUTURE WORK

Here, we describe several limitations of our current model and outline how these might be overcome in future work.

Memory requirement: In the current setup with full-batch gradient descent, memory requirement grows linearly in the size of the dataset. We have shown that for large graphs that do not fit in GPU memory, training on CPU can still be a viable option. Mini-batch stochastic gradient descent can alleviate this issue. The procedure of generating mini-batches, however, should take into account the number of layers in the GCN model, as the $K$th-order neighborhood for a GCN with $K$ layers has to be stored in memory for an exact procedure. For very large and densely connected graph datasets, further approximations might be necessary.

Directed edges and edge features: Our framework currently does not naturally support edge features and is limited to undirected graphs (weighted or unweighted). Results on NELL however show that it is possible to handle both directed edges and edge features by representing the original directed graph as an undirected bipartite graph with additional nodes that represent edges in the original graph (see Section 5.1 for details).

Limiting assumptions: Through the approximations introduced in Section 2, we implicitly assume locality (dependence on the $K$th-order neighborhood for a GCN with $K$ layers) and equal importance of self-connections vs. edges to neighboring nodes. For some datasets, however, it might be beneficial to introduce a trade-off parameter $\lambda$ in the definition of $\tilde{A}$:

$\tilde{A} = A + \lambda I_N$.   (11)

This parameter now plays a similar role as the trade-off parameter between supervised and unsupervised loss in the typical semi-supervised setting (see Eq. 1). Here, however, it can be learned via gradient descent.

⁴Hardware used: 16-core Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz, GeForce(R) GTX TITAN X.

8 CONCLUSION

We have introduced a novel approach for semi-supervised classification on graph-structured data. Our GCN model uses an efficient layer-wise propagation rule that is based on a first-order approximation of spectral convolutions on graphs.
Experiments on a number of network datasets suggest that the proposed GCN model is capable of encoding both graph structure and node features in a way useful for semi-supervised classification. In this setting, our model outperforms several recently proposed methods by a significant margin, while being computationally efficient.

ACKNOWLEDGMENTS

We would like to thank Christos Louizos, Taco Cohen, Joan Bruna, Zhilin Yang, Dave Herman, Pramod Sinha and Abdul-Saboor Sheikh for helpful discussions. This research was funded by SAP.

REFERENCES

Martín Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015.
James Atwood and Don Towsley. Diffusion-convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), 2016.
Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research (JMLR), 7(Nov):2399–2434, 2006.
Ulrik Brandes, Daniel Delling, Marco Gaertler, Robert Gorke, Martin Hoefer, Zoran Nikoloski, and Dorothea Wagner. On modularity clustering. IEEE Transactions on Knowledge and Data Engineering, 20(2):172–188, 2008.
Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. In International Conference on Learning Representations (ICLR), 2014.
Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R. Hruschka Jr, and Tom M. Mitchell. Toward an architecture for never-ending language learning. In AAAI, volume 5, pp. 3, 2010.
Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems (NIPS), 2016.
Brendan L. Douglas. The Weisfeiler-Lehman method and graph isomorphism testing. arXiv preprint arXiv:1101.5211, 2011.
David K. Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P. Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems (NIPS), pp. 2224–2232, 2015.
Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, volume 9, pp. 249–256, 2010.
Marco Gori, Gabriele Monfardini, and Franco Scarselli. A new model for learning in graph domains. In Proceedings of the 2005 IEEE International Joint Conference on Neural Networks, volume 2, pp. 729–734. IEEE, 2005.
Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2016.
David K. Hammond, Pierre Vandergheynst, and Rémi Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30(2):129–150, 2011.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
Thorsten Joachims. Transductive inference for text classification using support vector machines. In International Conference on Machine Learning (ICML), volume 99, pp. 200–209, 1999.
Diederik P. Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015.
Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel.
Gated graph sequence neural networks. In International Conference on Learning Representations (ICLR), 2016.
Qing Lu and Lise Getoor. Link-based classification. In International Conference on Machine Learning (ICML), volume 3, pp. 496–503, 2003.
Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research (JMLR), 9(Nov):2579–2605, 2008.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems (NIPS), pp. 3111–3119, 2013.
Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In International Conference on Machine Learning (ICML), 2016.
Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 701–710. ACM, 2014.
Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2009.
Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI Magazine, 29(3):93, 2008.
Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research (JMLR), 15(1):1929–1958, 2014.
Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. LINE: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, pp. 1067–1077. ACM, 2015.
Boris Weisfeiler and A. A. Lehmann. A reduction of a graph to a canonical form and an algebra arising during this reduction. Nauchno-Technicheskaya Informatsia, 2(9):12–16, 1968.
Jason Weston, Frédéric Ratle, Hossein Mobahi, and Ronan Collobert. Deep learning via semi-supervised embedding. In Neural Networks: Tricks of the Trade, pp. 639–655. Springer, 2012.
Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. In International Conference on Machine Learning (ICML), 2016.
Wayne W. Zachary. An information flow model for conflict and fission in small groups. Journal of Anthropological Research, pp. 452–473, 1977.
Dengyong Zhou, Olivier Bousquet, Thomas Navin Lal, Jason Weston, and Bernhard Schölkopf. Learning with local and global consistency. In Advances in Neural Information Processing Systems (NIPS), volume 16, pp. 321–328, 2004.
Xiaojin Zhu, Zoubin Ghahramani, and John Lafferty. Semi-supervised learning using gaussian fields and harmonic functions. In International Conference on Machine Learning (ICML), volume 3, pp. 912–919, 2003.

A RELATION TO WEISFEILER-LEHMAN ALGORITHM

A neural network model for graph-structured data should ideally be able to learn representations of nodes in a graph, taking both the graph structure and feature description of nodes into account.
A well-studied framework for the unique assignment of node labels given a graph and (optionally) discrete initial node labels is provided by the 1-dim Weisfeiler-Lehman (WL-1) algorithm (Weisfeiler & Lehmann, 1968):

Algorithm 1: WL-1 algorithm (Weisfeiler & Lehmann, 1968)
  Input: Initial node coloring $(h_1^{(0)}, h_2^{(0)}, \ldots, h_N^{(0)})$
  Output: Final node coloring $(h_1^{(T)}, h_2^{(T)}, \ldots, h_N^{(T)})$
  $t \leftarrow 0$;
  repeat
    for $v_i \in \mathcal{V}$ do
      $h_i^{(t+1)} \leftarrow \mathrm{hash}\big( \sum_{j \in \mathcal{N}_i} h_j^{(t)} \big)$;
    $t \leftarrow t + 1$;
  until stable node coloring is reached;

Here, $h_i^{(t)}$ denotes the coloring (label assignment) of node $v_i$ (at iteration $t$) and $\mathcal{N}_i$ is its set of neighboring node indices (irrespective of whether the graph includes self-connections for every node or not). $\mathrm{hash}(\cdot)$ is a hash function. For an in-depth mathematical discussion of the WL-1 algorithm see, e.g., Douglas (2011).

We can replace the hash function in Algorithm 1 with a neural network layer-like differentiable function with trainable parameters as follows:

$h_i^{(l+1)} = \sigma\Big( \sum_{j \in \mathcal{N}_i} \frac{1}{c_{ij}} h_j^{(l)} W^{(l)} \Big)$,   (12)

where $c_{ij}$ is an appropriately chosen normalization constant for the edge $(v_i, v_j)$. Further, we can take $h_i^{(l)}$ now to be a vector of activations of node $i$ in the $l$th neural network layer. $W^{(l)}$ is a layer-specific weight matrix and $\sigma(\cdot)$ denotes a differentiable, non-linear activation function.

By choosing $c_{ij} = \sqrt{d_i d_j}$, where $d_i = |\mathcal{N}_i|$ denotes the degree of node $v_i$, we recover the propagation rule of our Graph Convolutional Network (GCN) model in vector form (see Eq. 2)⁵.

This—loosely speaking—allows us to interpret our GCN model as a differentiable and parameterized generalization of the 1-dim Weisfeiler-Lehman algorithm on graphs.

⁵Note that we here implicitly assume that self-connections have already been added to every node in the graph (for a clutter-free notation).

A.1 NODE EMBEDDINGS WITH RANDOM WEIGHTS

From the analogy with the Weisfeiler-Lehman algorithm, we can understand that even an untrained GCN model with random weights can serve as a powerful feature extractor for nodes in a graph. As an example, consider the following 3-layer GCN model:

$Z = \tanh\big( \hat{A} \tanh\big( \hat{A} \tanh\big( \hat{A} X W^{(0)} \big) W^{(1)} \big) W^{(2)} \big)$,   (13)

with weight matrices $W^{(l)}$ initialized at random using the initialization described in Glorot & Bengio (2010). $\hat{A}$, $X$ and $Z$ are defined as in Section 3.1.

We apply this model on Zachary's karate club network (Zachary, 1977). This graph contains 34 nodes, connected by 154 (undirected and unweighted) edges. Every node is labeled by one of four classes, obtained via modularity-based clustering (Brandes et al., 2008). See Figure 3a for an illustration.

[Figure 3 panels: (a) Karate club network; (b) Random weight embedding.]
Figure 3: Left: Zachary's karate club network (Zachary, 1977), colors denote communities obtained via modularity-based clustering (Brandes et al., 2008). Right: Embeddings obtained from an untrained 3-layer GCN model (Eq. 13) with random weights applied to the karate club network. Best viewed on a computer screen.

We take a featureless approach by setting $X = I_N$, where $I_N$ is the $N$-by-$N$ identity matrix. $N$ is the number of nodes in the graph. Note that nodes are randomly ordered (i.e. ordering contains no information). Furthermore, we choose a hidden layer dimensionality⁶ of 4 and a two-dimensional output (so that the output can immediately be visualized in a 2-dim plot).

Figure 3b shows a representative example of node embeddings (outputs $Z$) obtained from an untrained GCN model applied to the karate club network. These results are comparable to embeddings obtained from DeepWalk (Perozzi et al., 2014), which uses a more expensive unsupervised training procedure.
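The random-weight embedding of Eq. 13 is straightforward to reproduce in a few lines of NumPy (our sketch; the Glorot-style uniform initialization and the default dimensionalities follow the description above, everything else is an illustrative choice). Given the renormalized $\hat{A}$ of the karate club graph, random_weight_embedding(a_hat) returns a 34 x 2 matrix of node embeddings.

```python
import numpy as np

def random_weight_embedding(a_hat, hidden_dim=4, out_dim=2, seed=0):
    """Untrained 3-layer GCN of Eq. 13 with featureless input X = I_N:
    Z = tanh(A_hat tanh(A_hat tanh(A_hat X W0) W1) W2)."""
    n = a_hat.shape[0]
    rng = np.random.default_rng(seed)

    def glorot(fan_in, fan_out):
        # uniform initialization in the style of Glorot & Bengio (2010)
        limit = np.sqrt(6.0 / (fan_in + fan_out))
        return rng.uniform(-limit, limit, size=(fan_in, fan_out))

    x = np.eye(n)                                              # one-hot node identities
    h = np.tanh(a_hat @ (x @ glorot(n, hidden_dim)))
    h = np.tanh(a_hat @ (h @ glorot(hidden_dim, hidden_dim)))
    return np.tanh(a_hat @ (h @ glorot(hidden_dim, out_dim)))  # N x 2 embeddings Z
```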
A.2 SEMI-SUPERVISED NODE EMBEDDINGS

On this simple example of a GCN applied to the karate club network it is interesting to observe how embeddings react during training on a semi-supervised classification task. Such a visualization (see Figure 4) provides insights into how the GCN model can make use of the graph structure (and of features extracted from the graph structure at later layers) to learn embeddings that are useful for a classification task.

We consider the following semi-supervised learning setup: we add a softmax layer on top of our model (Eq. 13) and train using only a single labeled example per class (i.e. a total number of 4 labeled nodes). We train for 300 training iterations using Adam (Kingma & Ba, 2015) with a learning rate of 0.01 on a cross-entropy loss.

Figure 4 shows the evolution of node embeddings over a number of training iterations. The model succeeds in linearly separating the communities based on minimal supervision and the graph structure alone. A video of the full training process can be found on our website⁷.

⁶We originally experimented with a hidden layer dimensionality of 2 (i.e. same as output layer), but observed that a dimensionality of 4 resulted in less frequent saturation of $\tanh(\cdot)$ units and therefore visually more pleasing results.
⁷http://tkipf.github.io/graph-convolutional-networks/

[Figure 4 panels: (a) Iteration 25; (b) Iteration 50; (c) Iteration 75; (d) Iteration 100; (e) Iteration 200; (f) Iteration 300.]
Figure 4: Evolution of karate club network node embeddings obtained from a GCN model after a number of semi-supervised training iterations. Colors denote class. Nodes of which labels were provided during training (one per class) are highlighted (grey outline). Grey links between nodes denote graph edges. Best viewed on a computer screen.

B EXPERIMENTS ON MODEL DEPTH

In these experiments, we investigate the influence of model depth (number of layers) on classification performance. We report results on a 5-fold cross-validation experiment on the Cora, Citeseer and Pubmed datasets (Sen et al., 2008) using all labels. In addition to the standard GCN model (Eq. 2), we report results on a model variant where we use residual connections (He et al., 2016) between hidden layers to facilitate training of deeper models by enabling the model to carry over information from the previous layer's input:

$H^{(l+1)} = \sigma\big( \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)} \big) + H^{(l)}$.   (14)

On each cross-validation split, we train for 400 epochs (without early stopping) using the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 0.01. Other hyperparameters are chosen as follows: 0.5 (dropout rate, first and last layer), $5 \cdot 10^{-4}$ (L2 regularization, first layer), 16 (number of units for each hidden layer) and 0.01 (learning rate). Results are summarized in Figure 5.

[Figure 5 panels: Citeseer, Cora, Pubmed; x-axis: number of layers (1–10); y-axis: accuracy; curves: Train, Train (Residual), Test, Test (Residual).]
Figure 5: Influence of model depth (number of layers) on classification performance. Markers denote mean classification accuracy (training vs. testing) for 5-fold cross-validation. Shaded areas denote standard error.
We show results both for a standard GCN model (dashed lines) and a model with added residual connections (He et al., 2016) between hidden layers (solid lines).

For the datasets considered here, best results are obtained with a 2- or 3-layer model. We observe that for models deeper than 7 layers, training without the use of residual connections can become difficult, as the effective context size for each node increases by the size of its $K$th-order neighborhood (for a model with $K$ layers) with each additional layer. Furthermore, overfitting can become an issue as the number of parameters increases with model depth.
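A short NumPy sketch of the residual variant in Eq. 14 (ours; the ReLU default and the square weight matrix are the assumptions needed for the two terms to have matching shapes):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def gcn_layer_residual(a_hat, h, w, activation=relu):
    """Hidden layer with a residual connection (Eq. 14):
    H^{(l+1)} = sigma(A_hat H^{(l)} W^{(l)}) + H^{(l)}.
    W must be square so that the residual term matches the output shape."""
    return activation(a_hat @ (h @ w)) + h
```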
S1eLrWQBg
SJU4ayYgl
ICLR.cc/2017/conference/-/paper72/official/review
{"title": "Simple and reasonable approach", "rating": "7: Good paper, accept", "review": "The paper develops a simple and reasonable algorithm for graph node prediction/classification. The formulations are very intuitive and lead to a simple CNN based training and can easily leverage existing GPU speedups. \n\nExperiments are thorough and compare with many reasonable baselines on large and real benchmark datasets. Although, I am not quite aware of the literature on other methods and there may be similar alternatives as link and node prediction is an old problem. I still think the approach is quite simple and reasonably supported by good evaluations. ", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Semi-Supervised Classification with Graph Convolutional Networks
["Thomas N. Kipf", "Max Welling"]
We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.
["Deep learning", "Semi-Supervised Learning"]
https://openreview.net/forum?id=SJU4ayYgl
https://openreview.net/pdf?id=SJU4ayYgl
https://openreview.net/forum?id=SJU4ayYgl&noteId=S1eLrWQBg
Published as a conference paper at ICLR 2017SEMI-SUPERVISED CLASSIFICATION WITHGRAPH CONVOLUTIONAL NETWORKSThomas N. KipfUniversity of AmsterdamT.N.Kipf@uva.nlMax WellingUniversity of AmsterdamCanadian Institute for Advanced Research (CIFAR)M.Welling@uva.nlABSTRACTWe present a scalable approach for semi-supervised learning on graph-structureddata that is based on an efficient variant of convolutional neural networks whichoperate directly on graphs. We motivate the choice of our convolutional archi-tecture via a localized first-order approximation of spectral graph convolutions.Our model scales linearly in the number of graph edges and learns hidden layerrepresentations that encode both local graph structure and features of nodes. Ina number of experiments on citation networks and on a knowledge graph datasetwe demonstrate that our approach outperforms related methods by a significantmargin.1 I NTRODUCTIONWe consider the problem of classifying nodes (such as documents) in a graph (such as a citationnetwork), where labels are only available for a small subset of nodes. This problem can be framedas graph-based semi-supervised learning, where label information is smoothed over the graph viasome form of explicit graph-based regularization (Zhu et al., 2003; Zhou et al., 2004; Belkin et al.,2006; Weston et al., 2012), e.g. by using a graph Laplacian regularization term in the loss function:L=L0+Lreg;withLreg=Xi;jAijkf(Xi)f(Xj)k2=f(X)>f(X): (1)Here,L0denotes the supervised loss w.r.t. the labeled part of the graph, f()can be a neural network-like differentiable function, is a weighing factor and Xis a matrix of node feature vectors Xi. =DAdenotes the unnormalized graph Laplacian of an undirected graph G= (V;E)withNnodesvi2V, edges (vi;vj)2E, an adjacency matrix A2RNN(binary or weighted) anda degree matrix Dii=PjAij. The formulation of Eq. 1 relies on the assumption that connectednodes in the graph are likely to share the same label. This assumption, however, might restrictmodeling capacity, as graph edges need not necessarily encode node similarity, but could containadditional information.In this work, we encode the graph structure directly using a neural network model f(X;A)andtrain on a supervised target L0for all nodes with labels, thereby avoiding explicit graph-basedregularization in the loss function. Conditioning f()on the adjacency matrix of the graph willallow the model to distribute gradient information from the supervised loss L0and will enable it tolearn representations of nodes both with and without labels.Our contributions are two-fold. Firstly, we introduce a simple and well-behaved layer-wise prop-agation rule for neural network models which operate directly on graphs and show how it can bemotivated from a first-order approximation of spectral graph convolutions (Hammond et al., 2011).Secondly, we demonstrate how this form of a graph-based neural network model can be used forfast and scalable semi-supervised classification of nodes in a graph. Experiments on a number ofdatasets demonstrate that our model compares favorably both in classification accuracy and effi-ciency (measured in wall-clock time) against state-of-the-art methods for semi-supervised learning.1Published as a conference paper at ICLR 20172 F AST APPROXIMATE CONVOLUTIONS ON GRAPHSIn this section, we provide theoretical motivation for a specific graph-based neural network modelf(X;A)that we will use in the rest of this paper. 
We consider a multi-layer Graph ConvolutionalNetwork (GCN) with the following layer-wise propagation rule:H(l+1)=~D12~A~D12H(l)W(l): (2)Here, ~A=A+INis the adjacency matrix of the undirected graph Gwith added self-connections.INis the identity matrix, ~Dii=Pj~AijandW(l)is a layer-specific trainable weight matrix. ()denotes an activation function, such as the ReLU() = max(0;).H(l)2RNDis the matrix of ac-tivations in the lthlayer;H(0)=X. In the following, we show that the form of this propagation rulecan be motivated1via a first-order approximation of localized spectral filters on graphs (Hammondet al., 2011; Defferrard et al., 2016).2.1 S PECTRAL GRAPH CONVOLUTIONSWe consider spectral convolutions on graphs defined as the multiplication of a signal x2RN(ascalar for every node) with a filter g=diag()parameterized by 2RNin the Fourier domain,i.e.:g?x=UgU>x; (3)whereUis the matrix of eigenvectors of the normalized graph Laplacian L=IND12AD12=UU>, with a diagonal matrix of its eigenvalues andU>xbeing the graph Fourier transformofx. We can understand gas a function of the eigenvalues of L, i.e.g(). Evaluating Eq. 3 iscomputationally expensive, as multiplication with the eigenvector matrix UisO(N2). Furthermore,computing the eigendecomposition of Lin the first place might be prohibitively expensive for largegraphs. To circumvent this problem, it was suggested in Hammond et al. (2011) that g()can bewell-approximated by a truncated expansion in terms of Chebyshev polynomials Tk(x)up toKthorder:g0()KXk=00kTk(~); (4)with a rescaled ~ =2maxIN.maxdenotes the largest eigenvalue of L.02RKis now avector of Chebyshev coefficients. The Chebyshev polynomials are recursively defined as Tk(x) =2xTk1(x)Tk2(x), withT0(x) = 1 andT1(x) =x. The reader is referred to Hammond et al.(2011) for an in-depth discussion of this approximation.Going back to our definition of a convolution of a signal xwith a filterg0, we now have:g0?xKXk=00kTk(~L)x; (5)with ~L=2maxLIN; as can easily be verified by noticing that (UU>)k=UkU>. Note thatthis expression is now K-localized since it is a Kth-order polynomial in the Laplacian, i.e. it dependsonly on nodes that are at maximum Ksteps away from the central node ( Kth-order neighborhood).The complexity of evaluating Eq. 5 is O(jEj), i.e. linear in the number of edges. Defferrard et al.(2016) use this K-localized convolution to define a convolutional neural network on graphs.2.2 L AYER -WISELINEAR MODELA neural network model based on graph convolutions can therefore be built by stacking multipleconvolutional layers of the form of Eq. 5, each layer followed by a point-wise non-linearity. Now,imagine we limited the layer-wise convolution operation to K= 1(see Eq. 5), i.e. a function that islinear w.r.t.Land therefore a linear function on the graph Laplacian spectrum.1We provide an alternative interpretation of this propagation rule based on the Weisfeiler-Lehman algorithm(Weisfeiler & Lehmann, 1968) in Appendix A.2Published as a conference paper at ICLR 2017In this way, we can still recover a rich class of convolutional filter functions by stacking multiplesuch layers, but we are not limited to the explicit parameterization given by, e.g., the Chebyshevpolynomials. We intuitively expect that such a model can alleviate the problem of overfitting onlocal neighborhood structures for graphs with very wide node degree distributions, such as socialnetworks, citation networks, knowledge graphs and many other real-world graph datasets. 
Addition-ally, for a fixed computational budget, this layer-wise linear formulation allows us to build deepermodels, a practice that is known to improve modeling capacity on a number of domains (He et al.,2016).In this linear formulation of a GCN we further approximate max2, as we can expect that neuralnetwork parameters will adapt to this change in scale during training. Under these approximationsEq. 5 simplifies to:g0?x00x+01(LIN)x=00x01D12AD12x; (6)with two free parameters 00and01. The filter parameters can be shared over the whole graph.Successive application of filters of this form then effectively convolve the kth-order neighborhood ofa node, where kis the number of successive filtering operations or convolutional layers in the neuralnetwork model.In practice, it can be beneficial to constrain the number of parameters further to address overfittingand to minimize the number of operations (such as matrix multiplications) per layer. This leaves uswith the following expression:g?xIN+D12AD12x; (7)with a single parameter =00=01. Note that IN+D12AD12now has eigenvalues inthe range [0;2]. Repeated application of this operator can therefore lead to numerical instabilitiesand exploding/vanishing gradients when used in a deep neural network model. To alleviate thisproblem, we introduce the following renormalization trick :IN+D12AD12!~D12~A~D12, with~A=A+INand~Dii=Pj~Aij.We can generalize this definition to a signal X2RNCwithCinput channels (i.e. a C-dimensionalfeature vector for every node) and Ffilters or feature maps as follows:Z=~D12~A~D12X; (8)where 2RCFis now a matrix of filter parameters and Z2RNFis the convolved signalmatrix. This filtering operation has complexity O(jEjFC), as~AX can be efficiently implementedas a product of a sparse matrix with a dense matrix.3 S EMI-SUPERVISED NODE CLASSIFICATIONHaving introduced a simple, yet flexible model f(X;A)for efficient information propagation ongraphs, we can return to the problem of semi-supervised node classification. As outlined in the in-troduction, we can relax certain assumptions typically made in graph-based semi-supervised learn-ing by conditioning our model f(X;A)both on the data Xand on the adjacency matrix Aof theunderlying graph structure. We expect this setting to be especially powerful in scenarios where theadjacency matrix contains information not present in the data X, such as citation links between doc-uments in a citation network or relations in a knowledge graph. The overall model, a multi-layerGCN for semi-supervised learning, is schematically depicted in Figure 1.3.1 E XAMPLEIn the following, we consider a two-layer GCN for semi-supervised node classification on a graphwith a symmetric adjacency matrix A(binary or weighted). We first calculate ^A=~D12~A~D12ina pre-processing step. Our forward model then takes the simple form:Z=f(X;A) = softmax^AReLU^AXW(0)W(1): (9)3Published as a conference paper at ICLR 2017Cinput layerX1X2X3X4Foutput layerZ1Z2Z3Z4hiddenlayersY1Y41(a) Graph Convolutional Network30 20 10 0 10 20 303020100102030 (b) Hidden layer activationsFigure 1: Left: Schematic depiction of multi-layer Graph Convolutional Network (GCN) for semi-supervised learning with Cinput channels and Ffeature maps in the output layer. The graph struc-ture (edges shown as black lines) is shared over layers, labels are denoted by Yi.Right : t-SNE(Maaten & Hinton, 2008) visualization of hidden layer activations of a two-layer GCN trained onthe Cora dataset (Sen et al., 2008) using 5%of labels. 
Colors denote document class.Here,W(0)2RCHis an input-to-hidden weight matrix for a hidden layer with Hfeature maps.W(1)2RHFis a hidden-to-output weight matrix. The softmax activation function, defined assoftmax(xi) =1Zexp(xi)withZ=Piexp(xi), is applied row-wise. For semi-supervised multi-class classification, we then evaluate the cross-entropy error over all labeled examples:L=Xl2YLFXf=1YlflnZlf; (10)whereYLis the set of node indices that have labels.The neural network weights W(0)andW(1)are trained using gradient descent. In this work, weperform batch gradient descent using the full dataset for every training iteration, which is a viableoption as long as datasets fit in memory. Using a sparse representation for A, memory requirementisO(jEj), i.e. linear in the number of edges. Stochasticity in the training process is introduced viadropout (Srivastava et al., 2014). We leave memory-efficient extensions with mini-batch stochasticgradient descent for future work.3.2 I MPLEMENTATIONIn practice, we make use of TensorFlow (Abadi et al., 2015) for an efficient GPU-based imple-mentation2of Eq. 9 using sparse-dense matrix multiplications. The computational complexity ofevaluating Eq. 9 is then O(jEjCHF ), i.e. linear in the number of graph edges.4 R ELATED WORKOur model draws inspiration both from the field of graph-based semi-supervised learning and fromrecent work on neural networks that operate on graphs. In what follows, we provide a brief overviewon related work in both fields.4.1 G RAPH -BASED SEMI-SUPERVISED LEARNINGA large number of approaches for semi-supervised learning using graph representations have beenproposed in recent years, most of which fall into two broad categories: methods that use someform of explicit graph Laplacian regularization and graph embedding-based approaches. Prominentexamples for graph Laplacian regularization include label propagation (Zhu et al., 2003), manifoldregularization (Belkin et al., 2006) and deep semi-supervised embedding (Weston et al., 2012).2Code to reproduce our experiments is available at https://github.com/tkipf/gcn .4Published as a conference paper at ICLR 2017Recently, attention has shifted to models that learn graph embeddings with methods inspired bythe skip-gram model (Mikolov et al., 2013). DeepWalk (Perozzi et al., 2014) learns embeddingsvia the prediction of the local neighborhood of nodes, sampled from random walks on the graph.LINE (Tang et al., 2015) and node2vec (Grover & Leskovec, 2016) extend DeepWalk with moresophisticated random walk or breadth-first search schemes. For all these methods, however, a multi-step pipeline including random walk generation and semi-supervised training is required where eachstep has to be optimized separately. Planetoid (Yang et al., 2016) alleviates this by injecting labelinformation in the process of learning embeddings.4.2 N EURAL NETWORKS ON GRAPHSNeural networks that operate on graphs have previously been introduced in Gori et al. (2005);Scarselli et al. (2009) as a form of recurrent neural network. Their framework requires the repeatedapplication of contraction maps as propagation functions until node representations reach a stablefixed point. This restriction was later alleviated in Li et al. (2016) by introducing modern practicesfor recurrent neural network training to the original graph neural network framework. Duvenaudet al. (2015) introduced a convolution-like propagation rule on graphs and methods for graph-levelclassification. 
Their approach requires to learn node degree-specific weight matrices which does notscale to large graphs with wide node degree distributions. Our model instead uses a single weightmatrix per layer and deals with varying node degrees through an appropriate normalization of theadjacency matrix (see Section 3.1).A related approach to node classification with a graph-based neural network was recently introducedin Atwood & Towsley (2016). They report O(N2)complexity, limiting the range of possible appli-cations. In a different yet related model, Niepert et al. (2016) convert graphs locally into sequencesthat are fed into a conventional 1D convolutional neural network, which requires the definition of anode ordering in a pre-processing step.Our method is based on spectral graph convolutional neural networks, introduced in Bruna et al.(2014) and later extended by Defferrard et al. (2016) with fast localized convolutions. In contrastto these works, we consider here the task of transductive node classification within networks ofsignificantly larger scale. We show that in this setting, a number of simplifications (see Section 2.2)can be introduced to the original frameworks of Bruna et al. (2014) and Defferrard et al. (2016) thatimprove scalability and classification performance in large-scale networks.5 E XPERIMENTSWe test our model in a number of experiments: semi-supervised document classification in cita-tion networks, semi-supervised entity classification in a bipartite graph extracted from a knowledgegraph, an evaluation of various graph propagation models and a run-time analysis on random graphs.5.1 D ATASETSWe closely follow the experimental setup in Yang et al. (2016). Dataset statistics are summarizedin Table 1. In the citation network datasets—Citeseer, Cora and Pubmed (Sen et al., 2008)—nodesare documents and edges are citation links. Label rate denotes the number of labeled nodes that areused for training divided by the total number of nodes in each dataset. NELL (Carlson et al., 2010;Yang et al., 2016) is a bipartite graph dataset extracted from a knowledge graph with 55,864 relationnodes and 9,891 entity nodes.Table 1: Dataset statistics, as reported in Yang et al. (2016).Dataset Type Nodes Edges Classes Features Label rateCiteseer Citation network 3,327 4,732 6 3,703 0:036Cora Citation network 2,708 5,429 7 1,433 0:052Pubmed Citation network 19,717 44,338 3 500 0:003NELL Knowledge graph 65,755 266,144 210 5,414 0:0015Published as a conference paper at ICLR 2017Citation networks We consider three citation network datasets: Citeseer, Cora and Pubmed (Senet al., 2008). The datasets contain sparse bag-of-words feature vectors for each document and a listof citation links between documents. We treat the citation links as (undirected) edges and constructa binary, symmetric adjacency matrix A. Each document has a class label. For training, we only use20 labels per class, but all feature vectors.NELL NELL is a dataset extracted from the knowledge graph introduced in (Carlson et al., 2010).A knowledge graph is a set of entities connected with directed, labeled edges (relations). We followthe pre-processing scheme as described in Yang et al. (2016). We assign separate relation nodesr1andr2for each entity pair (e1;r;e 2)as(e1;r1)and(e2;r2). Entity nodes are described bysparse feature vectors. We extend the number of features in NELL by assigning a unique one-hotrepresentation for every relation node, effectively resulting in a 61,278-dim sparse feature vector pernode. 
The semi-supervised task here considers the extreme case of only a single labeled exampleper class in the training set. We construct a binary, symmetric adjacency matrix from this graph bysetting entries Aij= 1, if one or more edges are present between nodes iandj.Random graphs We simulate random graph datasets of various sizes for experiments where wemeasure training time per epoch. For a dataset with Nnodes we create a random graph assigning2Nedges uniformly at random. We take the identity matrix INas input feature matrix X, therebyimplicitly taking a featureless approach where the model is only informed about the identity of eachnode, specified by a unique one-hot vector. We add dummy labels Yi= 1for every node.5.2 E XPERIMENTAL SET-UPUnless otherwise noted, we train a two-layer GCN as described in Section 3.1 and evaluate pre-diction accuracy on a test set of 1,000 labeled examples. We provide additional experiments usingdeeper models with up to 10 layers in Appendix B. We choose the same dataset splits as in Yanget al. (2016) with an additional validation set of 500 labeled examples for hyperparameter opti-mization (dropout rate for all layers, L2 regularization factor for the first GCN layer and number ofhidden units). We do not use the validation set labels for training.For the citation network datasets, we optimize hyperparameters on Cora only and use the same setof parameters for Citeseer and Pubmed. We train all models for a maximum of 200 epochs (trainingiterations) using Adam (Kingma & Ba, 2015) with a learning rate of 0:01and early stopping with awindow size of 10, i.e. we stop training if the validation loss does not decrease for 10 consecutiveepochs. We initialize weights using the initialization described in Glorot & Bengio (2010) andaccordingly (row-)normalize input feature vectors. On the random graph datasets, we use a hiddenlayer size of 32 units and omit regularization (i.e. neither dropout nor L2 regularization).5.3 B ASELINESWe compare against the same baseline methods as in Yang et al. (2016), i.e. label propagation(LP) (Zhu et al., 2003), semi-supervised embedding (SemiEmb) (Weston et al., 2012), manifoldregularization (ManiReg) (Belkin et al., 2006) and skip-gram based graph embeddings (DeepWalk)(Perozzi et al., 2014). We omit TSVM (Joachims, 1999), as it does not scale to the large number ofclasses in one of our datasets.We further compare against the iterative classification algorithm (ICA) proposed in Lu & Getoor(2003) in conjunction with two logistic regression classifiers, one for local node features alone andone for relational classification using local features and an aggregation operator as described inSen et al. (2008). We first train the local classifier using all labeled training set nodes and useit to bootstrap class labels of unlabeled nodes for relational classifier training. We run iterativeclassification (relational classifier) with a random node ordering for 10 iterations on all unlabelednodes (bootstrapped using the local classifier). L2 regularization parameter and aggregation operator(count vs.prop, see Sen et al. (2008)) are chosen based on validation set performance for each datasetseparately.Lastly, we compare against Planetoid (Yang et al., 2016), where we always choose their best-performing model variant (transductive vs. inductive) as a baseline.6Published as a conference paper at ICLR 20176 R ESULTS6.1 S EMI-SUPERVISED NODE CLASSIFICATIONResults are summarized in Table 2. Reported numbers denote classification accuracy in percent. 
ForICA, we report the mean accuracy of 100 runs with random node orderings. Results for all otherbaseline methods are taken from the Planetoid paper (Yang et al., 2016). Planetoid* denotes the bestmodel for the respective dataset out of the variants presented in their paper.Table 2: Summary of results in terms of classification accuracy (in percent).Method Citeseer Cora Pubmed NELLManiReg [3] 60:1 59 :5 70 :7 21 :8SemiEmb [28] 59:6 59 :0 71 :1 26 :7LP [32] 45:3 68 :0 63 :0 26 :5DeepWalk [22] 43:2 67 :2 65 :3 58 :1ICA [18] 69:1 75 :1 73 :9 23 :1Planetoid* [29] 64:7(26s) 75:7(13s) 77:2(25s) 61:9(185s)GCN (this paper) 70:3(7s) 81:5(4s) 79:0(38s) 66:0(48s)GCN (rand. splits) 67:90:5 80:10:5 78:90:7 58:41:7We further report wall-clock training time in seconds until convergence (in brackets) for our method(incl. evaluation of validation error) and for Planetoid. For the latter, we used an implementation pro-vided by the authors3and trained on the same hardware (with GPU) as our GCN model. We trainedand tested our model on the same dataset splits as in Yang et al. (2016) and report mean accuracyof 100 runs with random weight initializations. We used the following sets of hyperparameters forCiteseer, Cora and Pubmed: 0.5 (dropout rate), 5104(L2 regularization) and 16(number of hid-den units); and for NELL: 0.1 (dropout rate), 1105(L2 regularization) and 64(number of hiddenunits).In addition, we report performance of our model on 10 randomly drawn dataset splits of the samesize as in Yang et al. (2016), denoted by GCN (rand. splits). Here, we report mean and standarderror of prediction accuracy on the test set split in percent.6.2 E VALUATION OF PROPAGATION MODELWe compare different variants of our proposed per-layer propagation model on the citation networkdatasets. We follow the experimental set-up described in the previous section. Results are summa-rized in Table 3. The propagation model of our original GCN model is denoted by renormalizationtrick (in bold). In all other cases, the propagation model of both neural network layers is replacedwith the model specified under propagation model . Reported numbers denote mean classificationaccuracy for 100 repeated runs with random weight matrix initializations. In case of multiple vari-ables iper layer, we impose L2 regularization on all weight matrices of the first layer.Table 3: Comparison of propagation models.Description Propagation model Citeseer Cora PubmedChebyshev filter (Eq. 5)K= 3PKk=0Tk(~L)Xk69:8 79:5 74:4K= 2 69 :6 81:2 73:81st-order model (Eq. 6) X0+D12AD12X1 68:3 80:0 77:5Single parameter (Eq. 7) (IN+D12AD12)X 69 :3 79:2 77:4Renormalization trick (Eq. 8) ~D12~A~D12X 70:3 81:5 79:01st-order term only D12AD12X 68 :7 80:5 77:8Multi-layer perceptron X 46 :5 55:1 71:43https://github.com/kimiyoung/planetoid7Published as a conference paper at ICLR 20176.3 T RAINING TIME PER EPOCH1k 10k 100k 1M 10M# Edges10-310-210-1100101Sec./epoch*GPUCPUFigure 2: Wall-clock time per epoch for randomgraphs. (*) indicates out-of-memory error.Here, we report results for the mean trainingtime per epoch (forward pass, cross-entropycalculation, backward pass) for 100 epochs onsimulated random graphs, measured in secondswall-clock time. See Section 5.1 for a detaileddescription of the random graph dataset usedin these experiments. We compare results ona GPU and on a CPU-only implementation4inTensorFlow (Abadi et al., 2015). 
6.3 Training Time per Epoch

Here, we report results for the mean training time per epoch (forward pass, cross-entropy calculation, backward pass) for 100 epochs on simulated random graphs, measured in seconds wall-clock time. See Section 5.1 for a detailed description of the random graph dataset used in these experiments. We compare results on a GPU and on a CPU-only implementation^4 in TensorFlow (Abadi et al., 2015). Figure 2 summarizes the results.

[Figure 2: Wall-clock time per epoch for random graphs, plotted against the number of edges (1k to 10M) for the GPU and CPU-only implementations. (*) indicates an out-of-memory error.]

^4 Hardware used: 16-core Intel Xeon CPU E5-2640 v3 @ 2.60 GHz, GeForce GTX Titan X.

7 Discussion

7.1 Semi-supervised Model

In the experiments demonstrated here, our method for semi-supervised node classification outperforms recent related methods by a significant margin. Methods based on graph-Laplacian regularization (Zhu et al., 2003; Belkin et al., 2006; Weston et al., 2012) are most likely limited due to their assumption that edges encode mere similarity of nodes. Skip-gram based methods, on the other hand, are limited by the fact that they are based on a multi-step pipeline which is difficult to optimize. Our proposed model can overcome both limitations, while still comparing favorably in terms of efficiency (measured in wall-clock time) to related methods. Propagation of feature information from neighboring nodes in every layer improves classification performance in comparison to methods like ICA (Lu & Getoor, 2003), where only label information is aggregated.

We have further demonstrated that the proposed renormalized propagation model (Eq. 8) offers both improved efficiency (fewer parameters and operations, such as multiplications or additions) and better predictive performance on a number of datasets compared to a naïve 1st-order model (Eq. 6) or higher-order graph convolutional models using Chebyshev polynomials (Eq. 5).

7.2 Limitations and Future Work

Here, we describe several limitations of our current model and outline how these might be overcome in future work.

Memory requirement: In the current setup with full-batch gradient descent, memory requirement grows linearly in the size of the dataset. We have shown that for large graphs that do not fit in GPU memory, training on CPU can still be a viable option. Mini-batch stochastic gradient descent can alleviate this issue. The procedure of generating mini-batches, however, should take into account the number of layers in the GCN model, as the Kth-order neighborhood for a GCN with K layers has to be stored in memory for an exact procedure. For very large and densely connected graph datasets, further approximations might be necessary.

Directed edges and edge features: Our framework currently does not naturally support edge features and is limited to undirected graphs (weighted or unweighted). Results on NELL, however, show that it is possible to handle both directed edges and edge features by representing the original directed graph as an undirected bipartite graph with additional nodes that represent edges in the original graph (see Section 5.1 for details).

Limiting assumptions: Through the approximations introduced in Section 2, we implicitly assume locality (dependence on the Kth-order neighborhood for a GCN with K layers) and equal importance of self-connections vs. edges to neighboring nodes. For some datasets, however, it might be beneficial to introduce a trade-off parameter \lambda in the definition of \tilde{A}:

\tilde{A} = A + \lambda I_N .   (11)

This parameter now plays a similar role as the trade-off parameter between supervised and unsupervised loss in the typical semi-supervised setting (see Eq. 1). Here, however, it can be learned via gradient descent.
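To make the role of this trade-off parameter concrete, the following hedged sketch assembles a two-layer forward pass around Eq. 11; with \lambda = 1 it reduces to the standard renormalization trick. The ReLU/softmax layout is the one used for the two-layer model of Section 3.1, and all function and variable names are illustrative, not the authors' reference implementation.

```python
import numpy as np
import scipy.sparse as sp

def normalized_adj(A, lam=1.0):
    """Symmetrically normalized adjacency with weighted self-connections (Eq. 11)."""
    A_tilde = A + lam * sp.eye(A.shape[0])
    d = np.asarray(A_tilde.sum(axis=1)).flatten()
    D_inv_sqrt = sp.diags(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_forward(A, X, W0, W1, lam=1.0):
    """Two-layer forward pass: row-wise softmax(A_hat ReLU(A_hat X W0) W1)."""
    A_hat = normalized_adj(A, lam)
    H = np.maximum(A_hat @ X @ W0, 0.0)             # ReLU hidden layer
    logits = A_hat @ H @ W1
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)         # class probabilities per node
```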
8 Conclusion

We have introduced a novel approach for semi-supervised classification on graph-structured data. Our GCN model uses an efficient layer-wise propagation rule that is based on a first-order approximation of spectral convolutions on graphs. Experiments on a number of network datasets suggest that the proposed GCN model is capable of encoding both graph structure and node features in a way useful for semi-supervised classification. In this setting, our model outperforms several recently proposed methods by a significant margin, while being computationally efficient.

Acknowledgments

We would like to thank Christos Louizos, Taco Cohen, Joan Bruna, Zhilin Yang, Dave Herman, Pramod Sinha and Abdul-Saboor Sheikh for helpful discussions. This research was funded by SAP.

References

Martín Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015.

James Atwood and Don Towsley. Diffusion-convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), 2016.

Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research (JMLR), 7(Nov):2399–2434, 2006.

Ulrik Brandes, Daniel Delling, Marco Gaertler, Robert Görke, Martin Hoefer, Zoran Nikoloski, and Dorothea Wagner. On modularity clustering. IEEE Transactions on Knowledge and Data Engineering, 20(2):172–188, 2008.

Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. In International Conference on Learning Representations (ICLR), 2014.

Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R. Hruschka Jr., and Tom M. Mitchell. Toward an architecture for never-ending language learning. In AAAI, volume 5, pp. 3, 2010.

Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems (NIPS), 2016.

Brendan L. Douglas. The Weisfeiler-Lehman method and graph isomorphism testing. arXiv preprint arXiv:1101.5211, 2011.

David K. Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P. Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems (NIPS), pp. 2224–2232, 2015.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, volume 9, pp. 249–256, 2010.

Marco Gori, Gabriele Monfardini, and Franco Scarselli. A new model for learning in graph domains. In Proceedings of the 2005 IEEE International Joint Conference on Neural Networks, volume 2, pp. 729–734. IEEE, 2005.

Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2016.

David K. Hammond, Pierre Vandergheynst, and Rémi Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30(2):129–150, 2011.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

Thorsten Joachims. Transductive inference for text classification using support vector machines. In International Conference on Machine Learning (ICML), volume 99, pp. 200–209, 1999.

Diederik P. Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015.

Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. In International Conference on Learning Representations (ICLR), 2016.
Qing Lu and Lise Getoor. Link-based classification. In International Conference on Machine Learning (ICML), volume 3, pp. 496–503, 2003.

Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research (JMLR), 9(Nov):2579–2605, 2008.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems (NIPS), pp. 3111–3119, 2013.

Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In International Conference on Machine Learning (ICML), 2016.

Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 701–710. ACM, 2014.

Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2009.

Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI Magazine, 29(3):93, 2008.

Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research (JMLR), 15(1):1929–1958, 2014.

Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. LINE: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, pp. 1067–1077. ACM, 2015.

Boris Weisfeiler and A. A. Lehmann. A reduction of a graph to a canonical form and an algebra arising during this reduction. Nauchno-Technicheskaya Informatsia, 2(9):12–16, 1968.

Jason Weston, Frédéric Ratle, Hossein Mobahi, and Ronan Collobert. Deep learning via semi-supervised embedding. In Neural Networks: Tricks of the Trade, pp. 639–655. Springer, 2012.

Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. In International Conference on Machine Learning (ICML), 2016.

Wayne W. Zachary. An information flow model for conflict and fission in small groups. Journal of Anthropological Research, pp. 452–473, 1977.

Dengyong Zhou, Olivier Bousquet, Thomas Navin Lal, Jason Weston, and Bernhard Schölkopf. Learning with local and global consistency. In Advances in Neural Information Processing Systems (NIPS), volume 16, pp. 321–328, 2004.

Xiaojin Zhu, Zoubin Ghahramani, and John Lafferty. Semi-supervised learning using gaussian fields and harmonic functions. In International Conference on Machine Learning (ICML), volume 3, pp. 912–919, 2003.

A Relation to Weisfeiler-Lehman Algorithm

A neural network model for graph-structured data should ideally be able to learn representations of nodes in a graph, taking both the graph structure and feature description of nodes into account.
A well-studied framework for the unique assignment of node labels given a graph and (optionally) discrete initial node labels is provided by the 1-dim Weisfeiler-Lehman (WL-1) algorithm (Weisfeiler & Lehmann, 1968):

Algorithm 1: WL-1 algorithm (Weisfeiler & Lehmann, 1968)
  Input: Initial node coloring (h_1^{(0)}, h_2^{(0)}, ..., h_N^{(0)})
  Output: Final node coloring (h_1^{(T)}, h_2^{(T)}, ..., h_N^{(T)})
  t <- 0
  repeat
      for v_i in V do
          h_i^{(t+1)} <- hash( \sum_{j \in N_i} h_j^{(t)} )
      t <- t + 1
  until stable node coloring is reached

Here, h_i^{(t)} denotes the coloring (label assignment) of node v_i (at iteration t) and N_i is its set of neighboring node indices (irrespective of whether the graph includes self-connections for every node or not). hash(·) is a hash function. For an in-depth mathematical discussion of the WL-1 algorithm see, e.g., Douglas (2011).

We can replace the hash function in Algorithm 1 with a neural network layer-like differentiable function with trainable parameters as follows:

h_i^{(l+1)} = \sigma\left( \sum_{j \in N_i} \frac{1}{c_{ij}} h_j^{(l)} W^{(l)} \right) ,   (12)

where c_ij is an appropriately chosen normalization constant for the edge (v_i, v_j). Further, we can take h_i^{(l)} now to be a vector of activations of node i in the l-th neural network layer. W^{(l)} is a layer-specific weight matrix and \sigma(·) denotes a differentiable, non-linear activation function.

By choosing c_ij = \sqrt{d_i d_j}, where d_i = |N_i| denotes the degree of node v_i, we recover the propagation rule of our Graph Convolutional Network (GCN) model in vector form (see Eq. 2)^5. This—loosely speaking—allows us to interpret our GCN model as a differentiable and parameterized generalization of the 1-dim Weisfeiler-Lehman algorithm on graphs.

^5 Note that we here implicitly assume that self-connections have already been added to every node in the graph (for a clutter-free notation).

A.1 Node Embeddings with Random Weights

From the analogy with the Weisfeiler-Lehman algorithm, we can understand that even an untrained GCN model with random weights can serve as a powerful feature extractor for nodes in a graph. As an example, consider the following 3-layer GCN model:

Z = \tanh\left( \hat{A} \tanh\left( \hat{A} \tanh\left( \hat{A} X W^{(0)} \right) W^{(1)} \right) W^{(2)} \right) ,   (13)

with weight matrices W^{(l)} initialized at random using the initialization described in Glorot & Bengio (2010). \hat{A}, X and Z are defined as in Section 3.1.

We apply this model on Zachary's karate club network (Zachary, 1977). This graph contains 34 nodes, connected by 154 (undirected and unweighted) edges. Every node is labeled by one of four classes, obtained via modularity-based clustering (Brandes et al., 2008). See Figure 3a for an illustration.

[Figure 3: Left: Zachary's karate club network (Zachary, 1977); colors denote communities obtained via modularity-based clustering (Brandes et al., 2008). Right: Embeddings obtained from an untrained 3-layer GCN model (Eq. 13) with random weights applied to the karate club network. Best viewed on a computer screen.]

We take a featureless approach by setting X = I_N, where I_N is the N-by-N identity matrix and N is the number of nodes in the graph. Note that nodes are randomly ordered (i.e. the ordering contains no information). Furthermore, we choose a hidden layer dimensionality^6 of 4 and a two-dimensional output (so that the output can immediately be visualized in a 2-dim plot).

Figure 3b shows a representative example of node embeddings (outputs Z) obtained from an untrained GCN model applied to the karate club network. These results are comparable to embeddings obtained from DeepWalk (Perozzi et al., 2014), which uses a more expensive unsupervised training procedure.
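A minimal sketch of this untrained 3-layer model (Eq. 13), assuming a precomputed renormalized adjacency A_hat and Glorot-style initialization; the hidden size of 4 and 2-dimensional output mirror the karate-club example, and the helper names are ours.

```python
import numpy as np

def glorot(shape, rng):
    """Uniform Glorot-style initialization for a weight matrix of the given shape."""
    limit = np.sqrt(6.0 / (shape[0] + shape[1]))
    return rng.uniform(-limit, limit, size=shape)

def random_weight_embeddings(A_hat, hidden=4, out=2, seed=0):
    """Forward pass of Eq. 13 with random weights and featureless (one-hot) inputs."""
    rng = np.random.default_rng(seed)
    N = A_hat.shape[0]
    X = np.eye(N)                                   # X = I_N: one-hot node identities
    W0 = glorot((N, hidden), rng)
    W1 = glorot((hidden, hidden), rng)
    W2 = glorot((hidden, out), rng)
    Z = np.tanh(A_hat @ np.tanh(A_hat @ np.tanh(A_hat @ X @ W0) @ W1) @ W2)
    return Z                                        # N x 2 embeddings, ready to plot
```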
A.2 Semi-supervised Node Embeddings

On this simple example of a GCN applied to the karate club network it is interesting to observe how embeddings react during training on a semi-supervised classification task. Such a visualization (see Figure 4) provides insights into how the GCN model can make use of the graph structure (and of features extracted from the graph structure at later layers) to learn embeddings that are useful for a classification task.

We consider the following semi-supervised learning setup: we add a softmax layer on top of our model (Eq. 13) and train using only a single labeled example per class (i.e. a total number of 4 labeled nodes). We train for 300 training iterations using Adam (Kingma & Ba, 2015) with a learning rate of 0.01 on a cross-entropy loss.

Figure 4 shows the evolution of node embeddings over a number of training iterations. The model succeeds in linearly separating the communities based on minimal supervision and the graph structure alone. A video of the full training process can be found on our website^7.

^6 We originally experimented with a hidden layer dimensionality of 2 (i.e. the same as the output layer), but observed that a dimensionality of 4 resulted in less frequent saturation of tanh(·) units and therefore visually more pleasing results.
^7 http://tkipf.github.io/graph-convolutional-networks/

[Figure 4: Evolution of karate club network node embeddings obtained from a GCN model after a number of semi-supervised training iterations (panels show iterations 25, 50, 75, 100, 200 and 300). Colors denote class. Nodes whose labels were provided during training (one per class) are highlighted (grey outline). Grey links between nodes denote graph edges. Best viewed on a computer screen.]
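The loss in this setup is an ordinary cross-entropy evaluated only on the labeled nodes. A hedged sketch of such a masked loss is given below; the outputs Z_logits would come from the model of Eq. 13 with an additional linear output layer on top, and an optimizer such as Adam (learning rate 0.01) would minimize this quantity for 300 steps. Names are illustrative.

```python
import numpy as np

def masked_cross_entropy(Z_logits, labels, labeled_idx):
    """Cross-entropy over the labeled nodes only.

    Z_logits: N x K array of per-node class scores.
    labels:   length-N int array of class indices.
    labeled_idx: indices of the (here four) labeled nodes.
    """
    logits = Z_logits[labeled_idx]
    logits = logits - logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labeled_idx)), labels[labeled_idx]].mean()
```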
B Experiments on Model Depth

In these experiments, we investigate the influence of model depth (number of layers) on classification performance. We report results on a 5-fold cross-validation experiment on the Cora, Citeseer and Pubmed datasets (Sen et al., 2008) using all labels. In addition to the standard GCN model (Eq. 2), we report results on a model variant where we use residual connections (He et al., 2016) between hidden layers to facilitate training of deeper models by enabling the model to carry over information from the previous layer's input:

H^{(l+1)} = \sigma\left( \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)} \right) + H^{(l)} .   (14)

On each cross-validation split, we train for 400 epochs (without early stopping) using the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 0.01. Other hyperparameters are chosen as follows: 0.5 (dropout rate, first and last layer), 5 · 10^{-4} (L2 regularization, first layer), 16 (number of units for each hidden layer) and 0.01 (learning rate). Results are summarized in Figure 5.

[Figure 5: Influence of model depth (1–10 layers) on classification accuracy for Citeseer, Cora and Pubmed. Markers denote mean classification accuracy (training vs. testing) for 5-fold cross-validation; shaded areas denote standard error. Results are shown both for a standard GCN model (dashed lines) and a model with added residual connections (He et al., 2016) between hidden layers (solid lines).]

For the datasets considered here, best results are obtained with a 2- or 3-layer model. We observe that for models deeper than 7 layers, training without the use of residual connections can become difficult, as the effective context size for each node increases by the size of its Kth-order neighborhood (for a model with K layers) with each additional layer. Furthermore, overfitting can become an issue as the number of parameters increases with model depth.
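A minimal sketch of the residual layer of Eq. 14, assuming a precomputed renormalized adjacency A_hat and matching input/output dimensions so that the skip connection is well defined; the activation is left as a parameter, and the function name is illustrative.

```python
import numpy as np

def gcn_layer_residual(A_hat, H, W, activation=np.tanh):
    """One GCN layer with a residual (skip) connection: sigma(A_hat H W) + H.

    A_hat: renormalized adjacency (Eq. 8); H: N x d activations; W: d x d weights
    (square, so the skip connection's shapes match, as in the Appendix B setup).
    """
    return activation(A_hat @ H @ W) + H
```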
HyfU5MFSg
H1Gq5Q9el
ICLR.cc/2017/conference/-/paper197/official/review
"{\"title\": \"good paper with strong experiments\", \"rating\": \"7: Good paper, accept\", \"review(...TRUNCATED)
review
2017
ICLR.cc/2017/conference
Unsupervised Pretraining for Sequence to Sequence Learning
["Prajit Ramachandran", "Peter J. Liu", "Quoc V. Le"]
"This work presents a general unsupervised learning method to improve\nthe accuracy of sequence to s(...TRUNCATED)
"[\"Natural language processing\", \"Deep learning\", \"Semi-Supervised Learning\", \"Transfer Learn(...TRUNCATED)
https://openreview.net/forum?id=H1Gq5Q9el
https://openreview.net/pdf?id=H1Gq5Q9el
https://openreview.net/forum?id=H1Gq5Q9el&noteId=HyfU5MFSg
"Under review as a conference paper at ICLR 2017UNSUPERVISED PRETRAINING FORSEQUENCE TO SEQUENCE LEA(...TRUNCATED)
r1L2IyIVe
H1Gq5Q9el
ICLR.cc/2017/conference/-/paper197/official/review
"{\"title\": \"review\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Aut(...TRUNCATED)
review
2017
ICLR.cc/2017/conference
Unsupervised Pretraining for Sequence to Sequence Learning
["Prajit Ramachandran", "Peter J. Liu", "Quoc V. Le"]
"This work presents a general unsupervised learning method to improve\nthe accuracy of sequence to s(...TRUNCATED)
"[\"Natural language processing\", \"Deep learning\", \"Semi-Supervised Learning\", \"Transfer Learn(...TRUNCATED)
https://openreview.net/forum?id=H1Gq5Q9el
https://openreview.net/pdf?id=H1Gq5Q9el
https://openreview.net/forum?id=H1Gq5Q9el&noteId=r1L2IyIVe
"Under review as a conference paper at ICLR 2017UNSUPERVISED PRETRAINING FORSEQUENCE TO SEQUENCE LEA(...TRUNCATED)
S1iDqXoVl
H1Gq5Q9el
ICLR.cc/2017/conference/-/paper197/official/review
"{\"title\": \"the paper addresses a very important issue of exploiting non-parallel training data, (...TRUNCATED)
review
2017
ICLR.cc/2017/conference
Unsupervised Pretraining for Sequence to Sequence Learning
["Prajit Ramachandran", "Peter J. Liu", "Quoc V. Le"]
"This work presents a general unsupervised learning method to improve\nthe accuracy of sequence to s(...TRUNCATED)
"[\"Natural language processing\", \"Deep learning\", \"Semi-Supervised Learning\", \"Transfer Learn(...TRUNCATED)
https://openreview.net/forum?id=H1Gq5Q9el
https://openreview.net/pdf?id=H1Gq5Q9el
https://openreview.net/forum?id=H1Gq5Q9el&noteId=S1iDqXoVl
"Under review as a conference paper at ICLR 2017UNSUPERVISED PRETRAINING FORSEQUENCE TO SEQUENCE LEA(...TRUNCATED)
Syhdnc0Qx
Sys6GJqxl
ICLR.cc/2017/conference/-/paper160/official/review
"{\"title\": \"interesting and insightful work on adversarial examples for deep CNNs for image class(...TRUNCATED)
review
2017
ICLR.cc/2017/conference
Delving into Transferable Adversarial Examples and Black-box Attacks
["Yanpei Liu", "Xinyun Chen", "Chang Liu", "Dawn Song"]
"An intriguing property of deep neural networks is the existence of adversarial examples, which can (...TRUNCATED)
["Computer vision", "Deep learning", "Applications"]
https://openreview.net/forum?id=Sys6GJqxl
https://openreview.net/pdf?id=Sys6GJqxl
https://openreview.net/forum?id=Sys6GJqxl&noteId=Syhdnc0Qx
"Published as a conference paper at ICLR 2017DELVING INTO TRANSFERABLE ADVERSARIAL EX-AMPLES AND BLA(...TRUNCATED)
HJeU-eaQx
Sys6GJqxl
ICLR.cc/2017/conference/-/paper160/official/review
"{\"title\": \"Review for Liu et al\", \"rating\": \"5: Marginally below acceptance threshold\", \"r(...TRUNCATED)
review
2017
ICLR.cc/2017/conference
Delving into Transferable Adversarial Examples and Black-box Attacks
["Yanpei Liu", "Xinyun Chen", "Chang Liu", "Dawn Song"]
"An intriguing property of deep neural networks is the existence of adversarial examples, which can (...TRUNCATED)
["Computer vision", "Deep learning", "Applications"]
https://openreview.net/forum?id=Sys6GJqxl
https://openreview.net/pdf?id=Sys6GJqxl
https://openreview.net/forum?id=Sys6GJqxl&noteId=HJeU-eaQx
"Published as a conference paper at ICLR 2017DELVING INTO TRANSFERABLE ADVERSARIAL EX-AMPLES AND BLA(...TRUNCATED)
ryLKyXLVg
Sys6GJqxl
ICLR.cc/2017/conference/-/paper160/official/review
"{\"title\": \"good in-depth exploration but strongly recommend a rewrite\", \"rating\": \"6: Margin(...TRUNCATED)
review
2017
ICLR.cc/2017/conference
Delving into Transferable Adversarial Examples and Black-box Attacks
["Yanpei Liu", "Xinyun Chen", "Chang Liu", "Dawn Song"]
"An intriguing property of deep neural networks is the existence of adversarial examples, which can (...TRUNCATED)
["Computer vision", "Deep learning", "Applications"]
https://openreview.net/forum?id=Sys6GJqxl
https://openreview.net/pdf?id=Sys6GJqxl
https://openreview.net/forum?id=Sys6GJqxl&noteId=ryLKyXLVg
"Published as a conference paper at ICLR 2017DELVING INTO TRANSFERABLE ADVERSARIAL EX-AMPLES AND BLA(...TRUNCATED)
Hkg1A2IVx
BkSmc8qll
ICLR.cc/2017/conference/-/paper309/official/review
"{\"title\": \"Review\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The(...TRUNCATED)
review
2017
ICLR.cc/2017/conference
Dynamic Neural Turing Machine with Continuous and Discrete Addressing Schemes
["Caglar Gulcehre", "Sarath Chandar", "Kyunghyun Cho", "Yoshua Bengio"]
"In this paper, we extend neural Turing machine (NTM) into a dynamic neural Turing machine (D-NTM) b(...TRUNCATED)
["Deep learning", "Natural language processing", "Reinforcement Learning"]
https://openreview.net/forum?id=BkSmc8qll
https://openreview.net/pdf?id=BkSmc8qll
https://openreview.net/forum?id=BkSmc8qll&noteId=Hkg1A2IVx
"Under review as a conference paper at ICLR 2017DYNAMIC NEURAL TURING MACHINE WITH CONTIN -UOUS AND (...TRUNCATED)