Published as a conference paper at ICLR 2017SEMI-SUPERVISED CLASSIFICATION WITHGRAPH CONVOLUTIONAL NETWORKSThomas N. KipfUniversity of AmsterdamT.N.Kipf@uva.nlMax WellingUniversity of AmsterdamCanadian Institute for Advanced Research (CIFAR)M.Welling@uva.nlABSTRACTWe present a scalable approach for semi-supervised learning on graph-structureddata that is based on an efficient variant of convolutional neural networks whichoperate directly on graphs. We motivate the choice of our convolutional archi-tecture via a localized first-order approximation of spectral graph convolutions.Our model scales linearly in the number of graph edges and learns hidden layerrepresentations that encode both local graph structure and features of nodes. Ina number of experiments on citation networks and on a knowledge graph datasetwe demonstrate that our approach outperforms related methods by a significantmargin.1 I NTRODUCTIONWe consider the problem of classifying nodes (such as documents) in a graph (such as a citationnetwork), where labels are only available for a small subset of nodes. This problem can be framedas graph-based semi-supervised learning, where label information is smoothed over the graph viasome form of explicit graph-based regularization (Zhu et al., 2003; Zhou et al., 2004; Belkin et al.,2006; Weston et al., 2012), e.g. by using a graph Laplacian regularization term in the loss function:L=L0+Lreg;withLreg=Xi;jAijkf(Xi)f(Xj)k2=f(X)>f(X): (1)Here,L0denotes the supervised loss w.r.t. the labeled part of the graph, f()can be a neural network-like differentiable function, is a weighing factor and Xis a matrix of node feature vectors Xi. =DAdenotes the unnormalized graph Laplacian of an undirected graph G= (V;E)withNnodesvi2V, edges (vi;vj)2E, an adjacency matrix A2RNN(binary or weighted) anda degree matrix Dii=PjAij. The formulation of Eq. 1 relies on the assumption that connectednodes in the graph are likely to share the same label. This assumption, however, might restrictmodeling capacity, as graph edges need not necessarily encode node similarity, but could containadditional information.In this work, we encode the graph structure directly using a neural network model f(X;A)andtrain on a supervised target L0for all nodes with labels, thereby avoiding explicit graph-basedregularization in the loss function. Conditioning f()on the adjacency matrix of the graph willallow the model to distribute gradient information from the supervised loss L0and will enable it tolearn representations of nodes both with and without labels.Our contributions are two-fold. Firstly, we introduce a simple and well-behaved layer-wise prop-agation rule for neural network models which operate directly on graphs and show how it can bemotivated from a first-order approximation of spectral graph convolutions (Hammond et al., 2011).Secondly, we demonstrate how this form of a graph-based neural network model can be used forfast and scalable semi-supervised classification of nodes in a graph. Experiments on a number ofdatasets demonstrate that our model compares favorably both in classification accuracy and effi-ciency (measured in wall-clock time) against state-of-the-art methods for semi-supervised learning.1Published as a conference paper at ICLR 20172 F AST APPROXIMATE CONVOLUTIONS ON GRAPHSIn this section, we provide theoretical motivation for a specific graph-based neural network modelf(X;A)that we will use in the rest of this paper. 
We consider a multi-layer Graph ConvolutionalNetwork (GCN) with the following layer-wise propagation rule:H(l+1)=~D12~A~D12H(l)W(l): (2)Here, ~A=A+INis the adjacency matrix of the undirected graph Gwith added self-connections.INis the identity matrix, ~Dii=Pj~AijandW(l)is a layer-specific trainable weight matrix. ()denotes an activation function, such as the ReLU() = max(0;).H(l)2RNDis the matrix of ac-tivations in the lthlayer;H(0)=X. In the following, we show that the form of this propagation rulecan be motivated1via a first-order approximation of localized spectral filters on graphs (Hammondet al., 2011; Defferrard et al., 2016).2.1 S PECTRAL GRAPH CONVOLUTIONSWe consider spectral convolutions on graphs defined as the multiplication of a signal x2RN(ascalar for every node) with a filter g=diag()parameterized by 2RNin the Fourier domain,i.e.:g?x=UgU>x; (3)whereUis the matrix of eigenvectors of the normalized graph Laplacian L=IND12AD12=UU>, with a diagonal matrix of its eigenvalues andU>xbeing the graph Fourier transformofx. We can understand gas a function of the eigenvalues of L, i.e.g(). Evaluating Eq. 3 iscomputationally expensive, as multiplication with the eigenvector matrix UisO(N2). Furthermore,computing the eigendecomposition of Lin the first place might be prohibitively expensive for largegraphs. To circumvent this problem, it was suggested in Hammond et al. (2011) that g()can bewell-approximated by a truncated expansion in terms of Chebyshev polynomials Tk(x)up toKthorder:g0()KXk=00kTk(~); (4)with a rescaled ~ =2maxIN.maxdenotes the largest eigenvalue of L.02RKis now avector of Chebyshev coefficients. The Chebyshev polynomials are recursively defined as Tk(x) =2xTk1(x)Tk2(x), withT0(x) = 1 andT1(x) =x. The reader is referred to Hammond et al.(2011) for an in-depth discussion of this approximation.Going back to our definition of a convolution of a signal xwith a filterg0, we now have:g0?xKXk=00kTk(~L)x; (5)with ~L=2maxLIN; as can easily be verified by noticing that (UU>)k=UkU>. Note thatthis expression is now K-localized since it is a Kth-order polynomial in the Laplacian, i.e. it dependsonly on nodes that are at maximum Ksteps away from the central node ( Kth-order neighborhood).The complexity of evaluating Eq. 5 is O(jEj), i.e. linear in the number of edges. Defferrard et al.(2016) use this K-localized convolution to define a convolutional neural network on graphs.2.2 L AYER -WISELINEAR MODELA neural network model based on graph convolutions can therefore be built by stacking multipleconvolutional layers of the form of Eq. 5, each layer followed by a point-wise non-linearity. Now,imagine we limited the layer-wise convolution operation to K= 1(see Eq. 5), i.e. a function that islinear w.r.t.Land therefore a linear function on the graph Laplacian spectrum.1We provide an alternative interpretation of this propagation rule based on the Weisfeiler-Lehman algorithm(Weisfeiler & Lehmann, 1968) in Appendix A.2Published as a conference paper at ICLR 2017In this way, we can still recover a rich class of convolutional filter functions by stacking multiplesuch layers, but we are not limited to the explicit parameterization given by, e.g., the Chebyshevpolynomials. We intuitively expect that such a model can alleviate the problem of overfitting onlocal neighborhood structures for graphs with very wide node degree distributions, such as socialnetworks, citation networks, knowledge graphs and many other real-world graph datasets. 
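As a concrete reference for the K-localized filtering of Section 2.1 (Eq. 5), the sketch below evaluates the truncated Chebyshev expansion with plain NumPy/SciPy using the stated recursion T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x). This is an illustrative re-implementation under those definitions, not the authors' released code, and it assumes a symmetric graph with no isolated nodes.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def chebyshev_filter(adj, x, theta):
    """Approximate g_theta' * x ~= sum_k theta'_k T_k(L_tilde) x (Eq. 5).

    adj   : symmetric adjacency matrix (scipy.sparse, N x N), no isolated nodes
    x     : graph signal, one scalar per node (length-N array)
    theta : Chebyshev coefficients theta'_0 ... theta'_K
    """
    n = adj.shape[0]
    deg = np.asarray(adj.sum(axis=1), dtype=float).flatten()
    d_inv_sqrt = sp.diags(np.power(deg, -0.5))
    lap = sp.eye(n) - d_inv_sqrt @ adj @ d_inv_sqrt        # L = I_N - D^-1/2 A D^-1/2
    lam_max = eigsh(lap, k=1, return_eigenvectors=False)[0]
    lap_tilde = (2.0 / lam_max) * lap - sp.eye(n)          # rescaled Laplacian L~

    t_prev, t_curr = x, lap_tilde @ x                      # T_0(L~)x and T_1(L~)x
    out = theta[0] * t_prev
    if len(theta) > 1:
        out = out + theta[1] * t_curr
    for k in range(2, len(theta)):
        # Chebyshev recursion: T_k(L~)x = 2 L~ T_{k-1}(L~)x - T_{k-2}(L~)x
        t_prev, t_curr = t_curr, 2.0 * (lap_tilde @ t_curr) - t_prev
        out = out + theta[k] * t_curr
    return out
```

Each additional Chebyshev order costs only one more sparse matrix-vector product, which is where the O(|E|) complexity of Eq. 5 comes from.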
Addition-ally, for a fixed computational budget, this layer-wise linear formulation allows us to build deepermodels, a practice that is known to improve modeling capacity on a number of domains (He et al.,2016).In this linear formulation of a GCN we further approximate max2, as we can expect that neuralnetwork parameters will adapt to this change in scale during training. Under these approximationsEq. 5 simplifies to:g0?x00x+01(LIN)x=00x01D12AD12x; (6)with two free parameters 00and01. The filter parameters can be shared over the whole graph.Successive application of filters of this form then effectively convolve the kth-order neighborhood ofa node, where kis the number of successive filtering operations or convolutional layers in the neuralnetwork model.In practice, it can be beneficial to constrain the number of parameters further to address overfittingand to minimize the number of operations (such as matrix multiplications) per layer. This leaves uswith the following expression:g?xIN+D12AD12x; (7)with a single parameter =00=01. Note that IN+D12AD12now has eigenvalues inthe range [0;2]. Repeated application of this operator can therefore lead to numerical instabilitiesand exploding/vanishing gradients when used in a deep neural network model. To alleviate thisproblem, we introduce the following renormalization trick :IN+D12AD12!~D12~A~D12, with~A=A+INand~Dii=Pj~Aij.We can generalize this definition to a signal X2RNCwithCinput channels (i.e. a C-dimensionalfeature vector for every node) and Ffilters or feature maps as follows:Z=~D12~A~D12X; (8)where 2RCFis now a matrix of filter parameters and Z2RNFis the convolved signalmatrix. This filtering operation has complexity O(jEjFC), as~AX can be efficiently implementedas a product of a sparse matrix with a dense matrix.3 S EMI-SUPERVISED NODE CLASSIFICATIONHaving introduced a simple, yet flexible model f(X;A)for efficient information propagation ongraphs, we can return to the problem of semi-supervised node classification. As outlined in the in-troduction, we can relax certain assumptions typically made in graph-based semi-supervised learn-ing by conditioning our model f(X;A)both on the data Xand on the adjacency matrix Aof theunderlying graph structure. We expect this setting to be especially powerful in scenarios where theadjacency matrix contains information not present in the data X, such as citation links between doc-uments in a citation network or relations in a knowledge graph. The overall model, a multi-layerGCN for semi-supervised learning, is schematically depicted in Figure 1.3.1 E XAMPLEIn the following, we consider a two-layer GCN for semi-supervised node classification on a graphwith a symmetric adjacency matrix A(binary or weighted). We first calculate ^A=~D12~A~D12ina pre-processing step. Our forward model then takes the simple form:Z=f(X;A) = softmax^AReLU^AXW(0)W(1): (9)3Published as a conference paper at ICLR 2017Cinput layerX1X2X3X4Foutput layerZ1Z2Z3Z4hiddenlayersY1Y41(a) Graph Convolutional Network30 20 10 0 10 20 303020100102030 (b) Hidden layer activationsFigure 1: Left: Schematic depiction of multi-layer Graph Convolutional Network (GCN) for semi-supervised learning with Cinput channels and Ffeature maps in the output layer. The graph struc-ture (edges shown as black lines) is shared over layers, labels are denoted by Yi.Right : t-SNE(Maaten & Hinton, 2008) visualization of hidden layer activations of a two-layer GCN trained onthe Cora dataset (Sen et al., 2008) using 5%of labels. 
Colors denote document class.Here,W(0)2RCHis an input-to-hidden weight matrix for a hidden layer with Hfeature maps.W(1)2RHFis a hidden-to-output weight matrix. The softmax activation function, defined assoftmax(xi) =1Zexp(xi)withZ=Piexp(xi), is applied row-wise. For semi-supervised multi-class classification, we then evaluate the cross-entropy error over all labeled examples:L=Xl2YLFXf=1YlflnZlf; (10)whereYLis the set of node indices that have labels.The neural network weights W(0)andW(1)are trained using gradient descent. In this work, weperform batch gradient descent using the full dataset for every training iteration, which is a viableoption as long as datasets fit in memory. Using a sparse representation for A, memory requirementisO(jEj), i.e. linear in the number of edges. Stochasticity in the training process is introduced viadropout (Srivastava et al., 2014). We leave memory-efficient extensions with mini-batch stochasticgradient descent for future work.3.2 I MPLEMENTATIONIn practice, we make use of TensorFlow (Abadi et al., 2015) for an efficient GPU-based imple-mentation2of Eq. 9 using sparse-dense matrix multiplications. The computational complexity ofevaluating Eq. 9 is then O(jEjCHF ), i.e. linear in the number of graph edges.4 R ELATED WORKOur model draws inspiration both from the field of graph-based semi-supervised learning and fromrecent work on neural networks that operate on graphs. In what follows, we provide a brief overviewon related work in both fields.4.1 G RAPH -BASED SEMI-SUPERVISED LEARNINGA large number of approaches for semi-supervised learning using graph representations have beenproposed in recent years, most of which fall into two broad categories: methods that use someform of explicit graph Laplacian regularization and graph embedding-based approaches. Prominentexamples for graph Laplacian regularization include label propagation (Zhu et al., 2003), manifoldregularization (Belkin et al., 2006) and deep semi-supervised embedding (Weston et al., 2012).2Code to reproduce our experiments is available at https://github.com/tkipf/gcn .4Published as a conference paper at ICLR 2017Recently, attention has shifted to models that learn graph embeddings with methods inspired bythe skip-gram model (Mikolov et al., 2013). DeepWalk (Perozzi et al., 2014) learns embeddingsvia the prediction of the local neighborhood of nodes, sampled from random walks on the graph.LINE (Tang et al., 2015) and node2vec (Grover & Leskovec, 2016) extend DeepWalk with moresophisticated random walk or breadth-first search schemes. For all these methods, however, a multi-step pipeline including random walk generation and semi-supervised training is required where eachstep has to be optimized separately. Planetoid (Yang et al., 2016) alleviates this by injecting labelinformation in the process of learning embeddings.4.2 N EURAL NETWORKS ON GRAPHSNeural networks that operate on graphs have previously been introduced in Gori et al. (2005);Scarselli et al. (2009) as a form of recurrent neural network. Their framework requires the repeatedapplication of contraction maps as propagation functions until node representations reach a stablefixed point. This restriction was later alleviated in Li et al. (2016) by introducing modern practicesfor recurrent neural network training to the original graph neural network framework. Duvenaudet al. (2015) introduced a convolution-like propagation rule on graphs and methods for graph-levelclassification. 
Their approach requires to learn node degree-specific weight matrices which does notscale to large graphs with wide node degree distributions. Our model instead uses a single weightmatrix per layer and deals with varying node degrees through an appropriate normalization of theadjacency matrix (see Section 3.1).A related approach to node classification with a graph-based neural network was recently introducedin Atwood & Towsley (2016). They report O(N2)complexity, limiting the range of possible appli-cations. In a different yet related model, Niepert et al. (2016) convert graphs locally into sequencesthat are fed into a conventional 1D convolutional neural network, which requires the definition of anode ordering in a pre-processing step.Our method is based on spectral graph convolutional neural networks, introduced in Bruna et al.(2014) and later extended by Defferrard et al. (2016) with fast localized convolutions. In contrastto these works, we consider here the task of transductive node classification within networks ofsignificantly larger scale. We show that in this setting, a number of simplifications (see Section 2.2)can be introduced to the original frameworks of Bruna et al. (2014) and Defferrard et al. (2016) thatimprove scalability and classification performance in large-scale networks.5 E XPERIMENTSWe test our model in a number of experiments: semi-supervised document classification in cita-tion networks, semi-supervised entity classification in a bipartite graph extracted from a knowledgegraph, an evaluation of various graph propagation models and a run-time analysis on random graphs.5.1 D ATASETSWe closely follow the experimental setup in Yang et al. (2016). Dataset statistics are summarizedin Table 1. In the citation network datasets—Citeseer, Cora and Pubmed (Sen et al., 2008)—nodesare documents and edges are citation links. Label rate denotes the number of labeled nodes that areused for training divided by the total number of nodes in each dataset. NELL (Carlson et al., 2010;Yang et al., 2016) is a bipartite graph dataset extracted from a knowledge graph with 55,864 relationnodes and 9,891 entity nodes.Table 1: Dataset statistics, as reported in Yang et al. (2016).Dataset Type Nodes Edges Classes Features Label rateCiteseer Citation network 3,327 4,732 6 3,703 0:036Cora Citation network 2,708 5,429 7 1,433 0:052Pubmed Citation network 19,717 44,338 3 500 0:003NELL Knowledge graph 65,755 266,144 210 5,414 0:0015Published as a conference paper at ICLR 2017Citation networks We consider three citation network datasets: Citeseer, Cora and Pubmed (Senet al., 2008). The datasets contain sparse bag-of-words feature vectors for each document and a listof citation links between documents. We treat the citation links as (undirected) edges and constructa binary, symmetric adjacency matrix A. Each document has a class label. For training, we only use20 labels per class, but all feature vectors.NELL NELL is a dataset extracted from the knowledge graph introduced in (Carlson et al., 2010).A knowledge graph is a set of entities connected with directed, labeled edges (relations). We followthe pre-processing scheme as described in Yang et al. (2016). We assign separate relation nodesr1andr2for each entity pair (e1;r;e 2)as(e1;r1)and(e2;r2). Entity nodes are described bysparse feature vectors. We extend the number of features in NELL by assigning a unique one-hotrepresentation for every relation node, effectively resulting in a 61,278-dim sparse feature vector pernode. 
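For the citation networks just described, the preprocessing of Section 3.1 amounts to building a binary, symmetric adjacency matrix from the citation links and applying the renormalization trick of Eq. 8. The SciPy sketch below is illustrative only; the released tkipf/gcn code may differ in details such as data types and duplicate-edge handling.

```python
import numpy as np
import scipy.sparse as sp

def normalized_adjacency(edges, num_nodes):
    """Binary, symmetric A from an (undirected) edge list, followed by the
    renormalization trick of Eq. 8: A_hat = D~^-1/2 (A + I_N) D~^-1/2."""
    rows, cols = zip(*edges)
    a = sp.coo_matrix((np.ones(len(edges)), (rows, cols)),
                      shape=(num_nodes, num_nodes))
    a = ((a + a.T) > 0).astype(float)            # symmetrize and binarize
    a_tilde = a + sp.eye(num_nodes)              # add self-connections
    deg = np.asarray(a_tilde.sum(axis=1), dtype=float).flatten()
    d_inv_sqrt = sp.diags(np.power(deg, -0.5))
    return (d_inv_sqrt @ a_tilde @ d_inv_sqrt).tocsr()

# Toy usage with four documents and three citation links:
# a_hat = normalized_adjacency([(0, 1), (1, 2), (2, 3)], num_nodes=4)
```

Since the graph is fixed, Â is computed once in this pre-processing step and reused in every layer and every training iteration.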
The semi-supervised task here considers the extreme case of only a single labeled exampleper class in the training set. We construct a binary, symmetric adjacency matrix from this graph bysetting entries Aij= 1, if one or more edges are present between nodes iandj.Random graphs We simulate random graph datasets of various sizes for experiments where wemeasure training time per epoch. For a dataset with Nnodes we create a random graph assigning2Nedges uniformly at random. We take the identity matrix INas input feature matrix X, therebyimplicitly taking a featureless approach where the model is only informed about the identity of eachnode, specified by a unique one-hot vector. We add dummy labels Yi= 1for every node.5.2 E XPERIMENTAL SET-UPUnless otherwise noted, we train a two-layer GCN as described in Section 3.1 and evaluate pre-diction accuracy on a test set of 1,000 labeled examples. We provide additional experiments usingdeeper models with up to 10 layers in Appendix B. We choose the same dataset splits as in Yanget al. (2016) with an additional validation set of 500 labeled examples for hyperparameter opti-mization (dropout rate for all layers, L2 regularization factor for the first GCN layer and number ofhidden units). We do not use the validation set labels for training.For the citation network datasets, we optimize hyperparameters on Cora only and use the same setof parameters for Citeseer and Pubmed. We train all models for a maximum of 200 epochs (trainingiterations) using Adam (Kingma & Ba, 2015) with a learning rate of 0:01and early stopping with awindow size of 10, i.e. we stop training if the validation loss does not decrease for 10 consecutiveepochs. We initialize weights using the initialization described in Glorot & Bengio (2010) andaccordingly (row-)normalize input feature vectors. On the random graph datasets, we use a hiddenlayer size of 32 units and omit regularization (i.e. neither dropout nor L2 regularization).5.3 B ASELINESWe compare against the same baseline methods as in Yang et al. (2016), i.e. label propagation(LP) (Zhu et al., 2003), semi-supervised embedding (SemiEmb) (Weston et al., 2012), manifoldregularization (ManiReg) (Belkin et al., 2006) and skip-gram based graph embeddings (DeepWalk)(Perozzi et al., 2014). We omit TSVM (Joachims, 1999), as it does not scale to the large number ofclasses in one of our datasets.We further compare against the iterative classification algorithm (ICA) proposed in Lu & Getoor(2003) in conjunction with two logistic regression classifiers, one for local node features alone andone for relational classification using local features and an aggregation operator as described inSen et al. (2008). We first train the local classifier using all labeled training set nodes and useit to bootstrap class labels of unlabeled nodes for relational classifier training. We run iterativeclassification (relational classifier) with a random node ordering for 10 iterations on all unlabelednodes (bootstrapped using the local classifier). L2 regularization parameter and aggregation operator(count vs.prop, see Sen et al. (2008)) are chosen based on validation set performance for each datasetseparately.Lastly, we compare against Planetoid (Yang et al., 2016), where we always choose their best-performing model variant (transductive vs. inductive) as a baseline.6Published as a conference paper at ICLR 20176 R ESULTS6.1 S EMI-SUPERVISED NODE CLASSIFICATIONResults are summarized in Table 2. Reported numbers denote classification accuracy in percent. 
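Before walking through the numbers, the quantities that the above setup optimizes can be sketched in a few lines of NumPy: the two-layer forward model of Eq. 9 and the masked cross-entropy of Eq. 10. This is a simplified stand-in for the TensorFlow implementation; it assumes a pre-computed Â and dense inputs, and omits dropout and L2 regularization.

```python
import numpy as np

def gcn_forward(a_hat, x, w0, w1):
    """Two-layer forward model of Eq. 9: Z = softmax(A_hat ReLU(A_hat X W0) W1).

    a_hat : pre-computed D~^-1/2 A~ D~^-1/2 (dense or scipy.sparse, N x N)
    x     : dense node features (N x C); w0: C x H weights; w1: H x F weights
    """
    h = np.maximum(a_hat @ (x @ w0), 0.0)                  # hidden layer, ReLU
    logits = a_hat @ (h @ w1)
    logits = logits - logits.max(axis=1, keepdims=True)    # stabilized row-wise softmax
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

def masked_cross_entropy(z, y_onehot, labeled_idx):
    """Eq. 10: cross-entropy summed over the labeled nodes only."""
    return -np.sum(y_onehot[labeled_idx] * np.log(z[labeled_idx] + 1e-12))
```

In the actual experiments these quantities are optimized full-batch with Adam (learning rate 0.01) and early stopping with a window of 10 epochs, as described above.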
ForICA, we report the mean accuracy of 100 runs with random node orderings. Results for all otherbaseline methods are taken from the Planetoid paper (Yang et al., 2016). Planetoid* denotes the bestmodel for the respective dataset out of the variants presented in their paper.Table 2: Summary of results in terms of classification accuracy (in percent).Method Citeseer Cora Pubmed NELLManiReg [3] 60:1 59 :5 70 :7 21 :8SemiEmb [28] 59:6 59 :0 71 :1 26 :7LP [32] 45:3 68 :0 63 :0 26 :5DeepWalk [22] 43:2 67 :2 65 :3 58 :1ICA [18] 69:1 75 :1 73 :9 23 :1Planetoid* [29] 64:7(26s) 75:7(13s) 77:2(25s) 61:9(185s)GCN (this paper) 70:3(7s) 81:5(4s) 79:0(38s) 66:0(48s)GCN (rand. splits) 67:90:5 80:10:5 78:90:7 58:41:7We further report wall-clock training time in seconds until convergence (in brackets) for our method(incl. evaluation of validation error) and for Planetoid. For the latter, we used an implementation pro-vided by the authors3and trained on the same hardware (with GPU) as our GCN model. We trainedand tested our model on the same dataset splits as in Yang et al. (2016) and report mean accuracyof 100 runs with random weight initializations. We used the following sets of hyperparameters forCiteseer, Cora and Pubmed: 0.5 (dropout rate), 5104(L2 regularization) and 16(number of hid-den units); and for NELL: 0.1 (dropout rate), 1105(L2 regularization) and 64(number of hiddenunits).In addition, we report performance of our model on 10 randomly drawn dataset splits of the samesize as in Yang et al. (2016), denoted by GCN (rand. splits). Here, we report mean and standarderror of prediction accuracy on the test set split in percent.6.2 E VALUATION OF PROPAGATION MODELWe compare different variants of our proposed per-layer propagation model on the citation networkdatasets. We follow the experimental set-up described in the previous section. Results are summa-rized in Table 3. The propagation model of our original GCN model is denoted by renormalizationtrick (in bold). In all other cases, the propagation model of both neural network layers is replacedwith the model specified under propagation model . Reported numbers denote mean classificationaccuracy for 100 repeated runs with random weight matrix initializations. In case of multiple vari-ables iper layer, we impose L2 regularization on all weight matrices of the first layer.Table 3: Comparison of propagation models.Description Propagation model Citeseer Cora PubmedChebyshev filter (Eq. 5)K= 3PKk=0Tk(~L)Xk69:8 79:5 74:4K= 2 69 :6 81:2 73:81st-order model (Eq. 6) X0+D12AD12X1 68:3 80:0 77:5Single parameter (Eq. 7) (IN+D12AD12)X 69 :3 79:2 77:4Renormalization trick (Eq. 8) ~D12~A~D12X 70:3 81:5 79:01st-order term only D12AD12X 68 :7 80:5 77:8Multi-layer perceptron X 46 :5 55:1 71:43https://github.com/kimiyoung/planetoid7Published as a conference paper at ICLR 20176.3 T RAINING TIME PER EPOCH1k 10k 100k 1M 10M# Edges10-310-210-1100101Sec./epoch*GPUCPUFigure 2: Wall-clock time per epoch for randomgraphs. (*) indicates out-of-memory error.Here, we report results for the mean trainingtime per epoch (forward pass, cross-entropycalculation, backward pass) for 100 epochs onsimulated random graphs, measured in secondswall-clock time. See Section 5.1 for a detaileddescription of the random graph dataset usedin these experiments. We compare results ona GPU and on a CPU-only implementation4inTensorFlow (Abadi et al., 2015). 
Figure 2 sum-marizes the results.7 D ISCUSSION7.1 S EMI-SUPERVISED MODELIn the experiments demonstrated here, our method for semi-supervised node classification outper-forms recent related methods by a significant margin. Methods based on graph-Laplacian regular-ization (Zhu et al., 2003; Belkin et al., 2006; Weston et al., 2012) are most likely limited due to theirassumption that edges encode mere similarity of nodes. Skip-gram based methods on the other handare limited by the fact that they are based on a multi-step pipeline which is difficult to optimize.Our proposed model can overcome both limitations, while still comparing favorably in terms of ef-ficiency (measured in wall-clock time) to related methods. Propagation of feature information fromneighboring nodes in every layer improves classification performance in comparison to methods likeICA (Lu & Getoor, 2003), where only label information is aggregated.We have further demonstrated that the proposed renormalized propagation model (Eq. 8) offers bothimproved efficiency (fewer parameters and operations, such as multiplication or addition) and betterpredictive performance on a number of datasets compared to a na ̈ıve1st-order model (Eq. 6) orhigher-order graph convolutional models using Chebyshev polynomials (Eq. 5).7.2 L IMITATIONS AND FUTURE WORKHere, we describe several limitations of our current model and outline how these might be overcomein future work.Memory requirement In the current setup with full-batch gradient descent, memory requirementgrows linearly in the size of the dataset. We have shown that for large graphs that do not fit in GPUmemory, training on CPU can still be a viable option. Mini-batch stochastic gradient descent canalleviate this issue. The procedure of generating mini-batches, however, should take into account thenumber of layers in the GCN model, as the Kth-order neighborhood for a GCN with Klayers has tobe stored in memory for an exact procedure. For very large and densely connected graph datasets,further approximations might be necessary.Directed edges and edge features Our framework currently does not naturally support edge fea-tures and is limited to undirected graphs (weighted or unweighted). Results on NELL howevershow that it is possible to handle both directed edges and edge features by representing the originaldirected graph as an undirected bipartite graph with additional nodes that represent edges in theoriginal graph (see Section 5.1 for details).Limiting assumptions Through the approximations introduced in Section 2, we implicitly assumelocality (dependence on the Kth-order neighborhood for a GCN with Klayers) and equal impor-tance of self-connections vs. edges to neighboring nodes. For some datasets, however, it might bebeneficial to introduce a trade-off parameter in the definition of ~A:~A=A+IN: (11)4Hardware used: 16-core Intel RXeon RCPU E5-2640 v3 @ 2.60GHz, GeForce RGTX TITAN X8Published as a conference paper at ICLR 2017This parameter now plays a similar role as the trade-off parameter between supervised and unsuper-vised loss in the typical semi-supervised setting (see Eq. 1). Here, however, it can be learned viagradient descent.8 C ONCLUSIONWe have introduced a novel approach for semi-supervised classification on graph-structured data.Our GCN model uses an efficient layer-wise propagation rule that is based on a first-order approx-imation of spectral convolutions on graphs. 
Experiments on a number of network datasets suggestthat the proposed GCN model is capable of encoding both graph structure and node features in away useful for semi-supervised classification. In this setting, our model outperforms several recentlyproposed methods by a significant margin, while being computationally efficient.ACKNOWLEDGMENTSWe would like to thank Christos Louizos, Taco Cohen, Joan Bruna, Zhilin Yang, Dave Herman,Pramod Sinha and Abdul-Saboor Sheikh for helpful discussions. This research was funded by SAP.REFERENCESMart ́ın Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015.James Atwood and Don Towsley. Diffusion-convolutional neural networks. In Advances in neuralinformation processing systems (NIPS) , 2016.Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric frame-work for learning from labeled and unlabeled examples. Journal of machine learning research(JMLR) , 7(Nov):2399–2434, 2006.Ulrik Brandes, Daniel Delling, Marco Gaertler, Robert Gorke, Martin Hoefer, Zoran Nikoloski,and Dorothea Wagner. On modularity clustering. IEEE Transactions on Knowledge and DataEngineering , 20(2):172–188, 2008.Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locallyconnected networks on graphs. In International Conference on Learning Representations (ICLR) ,2014.Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R. Hruschka Jr, and Tom M.Mitchell. Toward an architecture for never-ending language learning. In AAAI , volume 5, pp. 3,2010.Micha ̈el Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks ongraphs with fast localized spectral filtering. In Advances in neural information processing systems(NIPS) , 2016.Brendan L. Douglas. The Weisfeiler-Lehman method and graph isomorphism testing. arXiv preprintarXiv:1101.5211 , 2011.David K. Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Al ́anAspuru-Guzik, and Ryan P. Adams. Convolutional networks on graphs for learning molecularfingerprints. In Advances in neural information processing systems (NIPS) , pp. 2224–2232, 2015.Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neuralnetworks. In AISTATS , volume 9, pp. 249–256, 2010.Marco Gori, Gabriele Monfardini, and Franco Scarselli. A new model for learning in graph domains.InProceedings. 2005 IEEE International Joint Conference on Neural Networks. , volume 2, pp.729–734. IEEE, 2005.Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedingsof the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining .ACM, 2016.9Published as a conference paper at ICLR 2017David K. Hammond, Pierre Vandergheynst, and R ́emi Gribonval. Wavelets on graphs via spectralgraph theory. Applied and Computational Harmonic Analysis , 30(2):129–150, 2011.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog-nition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) , 2016.Thorsten Joachims. Transductive inference for text classification using support vector machines. InInternational Conference on Machine Learning (ICML) , volume 99, pp. 200–209, 1999.Diederik P. Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. In Interna-tional Conference on Learning Representations (ICLR) , 2015.Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. 
Gated graph sequence neuralnetworks. In International Conference on Learning Representations (ICLR) , 2016.Qing Lu and Lise Getoor. Link-based classification. In International Conference on Machine Learn-ing (ICML) , volume 3, pp. 496–503, 2003.Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of MachineLearning Research (JMLR) , 9(Nov):2579–2605, 2008.Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed repre-sentations of words and phrases and their compositionality. In Advances in neural informationprocessing systems (NIPS) , pp. 3111–3119, 2013.Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural net-works for graphs. In International Conference on Machine Learning (ICML) , 2016.Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social repre-sentations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledgediscovery and data mining , pp. 701–710. ACM, 2014.Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini.The graph neural network model. IEEE Transactions on Neural Networks , 20(1):61–80, 2009.Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad.Collective classification in network data. AI magazine , 29(3):93, 2008.Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov.Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine LearningResearch (JMLR) , 15(1):1929–1958, 2014.Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. Line: Large-scaleinformation network embedding. In Proceedings of the 24th International Conference on WorldWide Web , pp. 1067–1077. ACM, 2015.Boris Weisfeiler and A. A. Lehmann. A reduction of a graph to a canonical form and an algebraarising during this reduction. Nauchno-Technicheskaya Informatsia , 2(9):12–16, 1968.Jason Weston, Fr ́ed ́eric Ratle, Hossein Mobahi, and Ronan Collobert. Deep learning via semi-supervised embedding. In Neural Networks: Tricks of the Trade , pp. 639–655. Springer, 2012.Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. Revisiting semi-supervised learning withgraph embeddings. In International Conference on Machine Learning (ICML) , 2016.Wayne W. Zachary. An information flow model for conflict and fission in small groups. Journal ofanthropological research , pp. 452–473, 1977.Dengyong Zhou, Olivier Bousquet, Thomas Navin Lal, Jason Weston, and Bernhard Sch ̈olkopf.Learning with local and global consistency. In Advances in neural information processing systems(NIPS) , volume 16, pp. 321–328, 2004.Xiaojin Zhu, Zoubin Ghahramani, and John Lafferty. Semi-supervised learning using gaussian fieldsand harmonic functions. In International Conference on Machine Learning (ICML) , volume 3,pp. 912–919, 2003.10Published as a conference paper at ICLR 2017A R ELATION TO WEISFEILER -LEHMAN ALGORITHMA neural network model for graph-structured data should ideally be able to learn representations ofnodes in a graph, taking both the graph structure and feature description of nodes into account. 
Awell-studied framework for the unique assignment of node labels given a graph and (optionally) dis-crete initial node labels is provided by the 1-dim Weisfeiler-Lehman (WL-1) algorithm (Weisfeiler& Lehmann, 1968):Algorithm 1: WL-1 algorithm (Weisfeiler & Lehmann, 1968)Input: Initial node coloring (h(0)1;h(0)2;:::;h(0)N)Output: Final node coloring (h(T)1;h(T)2;:::;h(T)N)t 0;repeatforvi2Vdoh(t+1)i hashPj2Nih(t)j;t t+ 1;until stable node coloring is reached ;Here,h(t)idenotes the coloring (label assignment) of node vi(at iteration t) andNiis its set ofneighboring node indices (irrespective of whether the graph includes self-connections for every nodeor not). hash()is a hash function. For an in-depth mathematical discussion of the WL-1 algorithmsee, e.g., Douglas (2011).We can replace the hash function in Algorithm 1 with a neural network layer-like differentiablefunction with trainable parameters as follows:h(l+1)i =0@Xj2Ni1cijh(l)jW(l)1A; (12)wherecijis an appropriately chosen normalization constant for the edge (vi;vj). Further, we cantakeh(l)inow to be a vector of activations of nodeiin thelthneural network layer. W(l)is alayer-specific weight matrix and ()denotes a differentiable, non-linear activation function.By choosing cij=pdidj, wheredi=jNijdenotes the degree of node vi, we recover the propaga-tion rule of our Graph Convolutional Network (GCN) model in vector form (see Eq. 2)5.This—loosely speaking—allows us to interpret our GCN model as a differentiable and parameter-ized generalization of the 1-dim Weisfeiler-Lehman algorithm on graphs.A.1 N ODE EMBEDDINGS WITH RANDOM WEIGHTSFrom the analogy with the Weisfeiler-Lehman algorithm, we can understand that even an untrainedGCN model with random weights can serve as a powerful feature extractor for nodes in a graph. Asan example, consider the following 3-layer GCN model:Z= tanh^Atanh^Atanh^AXW(0)W(1)W(2); (13)with weight matrices W(l)initialized at random using the initialization described in Glorot & Bengio(2010). ^A,XandZare defined as in Section 3.1.We apply this model on Zachary’s karate club network (Zachary, 1977). This graph contains 34nodes, connected by 154 (undirected and unweighted) edges. Every node is labeled by one offour classes, obtained via modularity-based clustering (Brandes et al., 2008). See Figure 3a for anillustration.5Note that we here implicitly assume that self-connections have already been added to every node in thegraph (for a clutter-free notation).11Published as a conference paper at ICLR 2017(a) Karate club network (b) Random weight embeddingFigure 3: Left: Zachary’s karate club network (Zachary, 1977), colors denote communities obtainedvia modularity-based clustering (Brandes et al., 2008). Right : Embeddings obtained from an un-trained 3-layer GCN model (Eq. 13) with random weights applied to the karate club network. Bestviewed on a computer screen.We take a featureless approach by setting X=IN, whereINis theNbyNidentity matrix. Nisthe number of nodes in the graph. Note that nodes are randomly ordered (i.e. ordering contains noinformation). Furthermore, we choose a hidden layer dimensionality6of4and a two-dimensionaloutput (so that the output can immediately be visualized in a 2-dim plot).Figure 3b shows a representative example of node embeddings (outputs Z) obtained from an un-trained GCN model applied to the karate club network. 
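The random-weight embedding experiment of Eq. 13 is easy to reproduce in spirit. The sketch below uses networkx only to obtain the karate club graph and replaces the Glorot initialization with a simple Gaussian one for brevity, so the resulting picture will differ in detail from Figure 3b.

```python
import numpy as np
import networkx as nx

g = nx.karate_club_graph()                        # 34 nodes (Zachary, 1977)
a = nx.to_numpy_array(g)                          # dense is fine at this scale
n = a.shape[0]

a_tilde = a + np.eye(n)                           # add self-connections
d_inv_sqrt = np.diag(1.0 / np.sqrt(a_tilde.sum(axis=1)))
a_hat = d_inv_sqrt @ a_tilde @ d_inv_sqrt         # D~^-1/2 A~ D~^-1/2

x = np.eye(n)                                     # featureless input: X = I_N
rng = np.random.default_rng(0)
w0 = rng.normal(scale=0.5, size=(n, 4))           # hidden layer width 4
w1 = rng.normal(scale=0.5, size=(4, 4))           # hidden layer width 4
w2 = rng.normal(scale=0.5, size=(4, 2))           # 2-D output for plotting

# Eq. 13: three tanh GCN layers with random, untrained weights.
z = np.tanh(a_hat @ np.tanh(a_hat @ np.tanh(a_hat @ x @ w0) @ w1) @ w2)
# z is a 34 x 2 array of node embeddings that can be scattered directly in 2-D.
```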
These results are comparable to embeddingsobtained from DeepWalk (Perozzi et al., 2014), which uses a more expensive unsupervised trainingprocedure.A.2 S EMI-SUPERVISED NODE EMBEDDINGSOn this simple example of a GCN applied to the karate club network it is interesting to observe howembeddings react during training on a semi-supervised classification task. Such a visualization (seeFigure 4) provides insights into how the GCN model can make use of the graph structure (and offeatures extracted from the graph structure at later layers) to learn embeddings that are useful for aclassification task.We consider the following semi-supervised learning setup: we add a softmax layer on top of ourmodel (Eq. 13) and train using only a single labeled example per class (i.e. a total number of 4 labelednodes). We train for 300 training iterations using Adam (Kingma & Ba, 2015) with a learning rateof0:01on a cross-entropy loss.Figure 4 shows the evolution of node embeddings over a number of training iterations. The modelsucceeds in linearly separating the communities based on minimal supervision and the graph struc-ture alone. A video of the full training process can be found on our website7.6We originally experimented with a hidden layer dimensionality of 2(i.e. same as output layer), but observedthat a dimensionality of 4resulted in less frequent saturation of tanh()units and therefore visually morepleasing results.7http://tkipf.github.io/graph-convolutional-networks/12Published as a conference paper at ICLR 2017(a) Iteration 25 (b) Iteration 50(c) Iteration 75 (d) Iteration 100(e) Iteration 200 (f) Iteration 300Figure 4: Evolution of karate club network node embeddings obtained from a GCN model after anumber of semi-supervised training iterations. Colors denote class. Nodes of which labels wereprovided during training (one per class) are highlighted (grey outline). Grey links between nodesdenote graph edges. Best viewed on a computer screen.13Published as a conference paper at ICLR 2017B E XPERIMENTS ON MODEL DEPTHIn these experiments, we investigate the influence of model depth (number of layers) on classificationperformance. We report results on a 5-fold cross-validation experiment on the Cora, Citeseer andPubmed datasets (Sen et al., 2008) using all labels. In addition to the standard GCN model (Eq. 2),we report results on a model variant where we use residual connections (He et al., 2016) betweenhidden layers to facilitate training of deeper models by enabling the model to carry over informationfrom the previous layer’s input:H(l+1)=~D12~A~D12H(l)W(l)+H(l): (14)On each cross-validation split, we train for 400 epochs (without early stopping) using the Adamoptimizer (Kingma & Ba, 2015) with a learning rate of 0:01. Other hyperparameters are chosen asfollows: 0.5 (dropout rate, first and last layer), 5104(L2 regularization, first layer), 16 (numberof units for each hidden layer) and 0.01 (learning rate). Results are summarized in Figure 5.12345678910Number of layers0.500.550.600.650.700.750.800.850.90AccuracyCiteseerTrainTrain (Residual)TestTest (Residual)12345678910Number of layers0.550.600.650.700.750.800.850.900.95AccuracyCoraTrainTrain (Residual)TestTest (Residual)12345678910Number of layers0.760.780.800.820.840.860.88AccuracyPubmedTrainTrain (Residual)TestTest (Residual)Figure 5: Influence of model depth (number of layers) on classification performance. Markersdenote mean classification accuracy (training vs. testing) for 5-fold cross-validation. Shaded areasdenote standard error. 
We show results both for a standard GCN model (dashed lines) and a modelwith added residual connections (He et al., 2016) between hidden layers (solid lines).For the datasets considered here, best results are obtained with a 2- or 3-layer model. We observethat for models deeper than 7 layers, training without the use of residual connections can becomedifficult, as the effective context size for each node increases by the size of its Kth-order neighbor-hood (for a model with Klayers) with each additional layer. Furthermore, overfitting can becomean issue as the number of parameters increases with model depth.14
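For reference, the residual variant of Eq. 14 used in these depth experiments differs from the standard layer of Eq. 2 only by the added skip term. A minimal sketch, assuming equal input and output widths in the residual case:

```python
import numpy as np

def gcn_layer(a_hat, h, w, residual=False):
    """One propagation step.

    Standard GCN layer (Eq. 2):      H^(l+1) = ReLU(A_hat H^(l) W^(l))
    Residual variant (Eq. 14):       H^(l+1) = ReLU(A_hat H^(l) W^(l)) + H^(l)
    The residual term requires h and the layer output to have the same width.
    """
    out = np.maximum(a_hat @ (h @ w), 0.0)
    return out + h if residual else out
```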
Under review as a conference paper at ICLR 2017UNSUPERVISED PRETRAINING FORSEQUENCE TO SEQUENCE LEARNINGPrajit RamachandranUniversity of Illinois at Urbana-Champaignprajitram@gmail.comPeter J. Liu, Quoc V . LeGoogle Brainfpeterjliu,qvl g@google.comABSTRACTThis work presents a general unsupervised learning method to improve the accu-racy of sequence to sequence (seq2seq) models. In our method, the weights ofthe encoder and decoder of a seq2seq model are initialized with the pretrainedweights of two language models and then fine-tuned with labeled data. We ap-ply this method to challenging benchmarks in machine translation and abstractivesummarization and find that it significantly improves the subsequent supervisedmodels. Our main result is that the pretraining accelerates training and improvesgeneralization of seq2seq models, achieving state-of-the-art results on the WMTEnglish!German task, surpassing a range of methods using both phrase-basedmachine translation and neural machine translation. Our method achieves an im-provement of 1.3 BLEU from the previous best models on both WMT’14 andWMT’15 English!German. On summarization, our method beats the supervisedlearning baseline.1 I NTRODUCTIONSequence to sequence ( seq2seq ) models (Sutskever et al., 2014; Cho et al., 2014; Kalchbrenner& Blunsom, 2013; Allen, 1987; ̃Neco & Forcada, 1997) are extremely effective on a variety oftasks that require a mapping between a variable-length input sequence to a variable-length outputsequence. The main weakness of sequence to sequence models, and deep networks in general, liesin the fact that they can easily overfit when the amount of supervised training data is small.In this work, we propose a simple and effective technique for using unsupervised pretraining toimprove seq2seq models. Our proposal is to initialize both encoder and decoder networks withpretrained weights of two language models. These pretrained weights are then fine-tuned with thelabeled corpus.We benchmark this method on machine translation for English !German and abstractive summa-rization on CNN and Daily Mail articles. Our main result is that a seq2seq model, with pretraining,exceeds the strongest possible baseline in both neural machine translation and phrase-based machinetranslation. Our model obtains an improvement of 1.3 BLEU from the previous best models on bothWMT’14 and WMT’15 English !German. On abstractive summarization, our method achievescompetitive results to the strongest baselines.We also perform ablation study to understand the behaviors of the pretraining method. Our studyconfirms that among many other possible choices of using a language model in seq2seq with atten-tion, the above proposal works best. Our study also shows that, for translation, the main gains comefrom the improved generalization due to the pretrained features, whereas for summarization thegains come from the improved optimization due to pretraining the encoder which has been unrolledfor hundreds of timesteps. On both tasks, our proposed method always improves generalization onthe test sets.Work done as an intern on Google Brain.1Under review as a conference paper at ICLR 20172 U NSUPERVISED PRETRAINING FOR SEQUENCE TO SEQUENCE LEARNINGIn the following section, we will describe our basic unsupervised pretraining procedure for sequenceto sequence learning and how to modify sequence to sequence learning to effectively make use ofthe pretrained weights. 
We then show several extensions to improve the basic model.2.1 B ASIC PROCEDUREGiven an input sequence x1;x2;:::;x mand an output sequence yn;yn1;:::;y 1, the objective of se-quence to sequence learning is to maximize the likelihood p(yn;yn1;:::;y 1jx1;x2;:::;x m).Common sequence to sequence learning methods decompose this objective asp(yn;yn1;:::;y 1jx1;x2;:::;x m) =Qnt=1p(ytjyt1;:::;y 1;x1;x2;:::;x m).In sequence to sequence learning, an RNN encoder is used to represent x1;:::;x mas a hidden vector,which is given to an RNN decoder to produce the output sequence. Our method is based on theobservation that without the encoder, the decoder essentially acts like a language model on y’s.Similarly, the encoder with an additional output layer also acts like a language model. Thus it isnatural to use trained languages models to initialize the encoder and decoder.Therefore, the basic procedure of our approach is to pretrain both the seq2seq encoder and decodernetworks with language models, which can be trained on large amounts of unlabeled text data. Thiscan be seen in Figure 1, where the parameters in the shaded boxes are pretrained. In the followingwe will describe the method in detail using machine translation as an example application.A B C <EOS> W X Y ZW X Y Z <EOS>EmbeddingFirst RNN LayerSoftmaxSecond RNN LayerFigure 1: Pretrained sequence to sequence model. The red parameters are the encoder and the blueparameters are the decoder. All parameters in a shaded box are pretrained, either from the sourceside (light red) or target side (light blue) language model. Otherwise, they are randomly initialized.First, two monolingual datasets are collected, one for the source side language, and one for thetarget side language. A language model ( LM) is trained on each dataset independently, giving anLM trained on the source side corpus and an LM trained on the target side corpus.After two language models are trained, a multi-layer seq2seq model Mis constructed. The embed-ding and first LSTM layers of the encoder and decoder are initialized with the pretrained weights.To be even more efficient, the softmax of the decoder is initialized with the softmax of the pretrainedtarget side LM.2.2 I MPROVING THE MODELWe also employ three additional methods to further improve the model above. The three meth-ods are: a) Monolingual language modeling losses, b) Residual connections and c) Attention overmultiple layers (see Figure 2).Monolingual language modeling losses: After the seq2seq model Mis initialized with the twoLMs, it is fine-tuned with a labeled dataset. To ensure that the model does not overfit the labeleddata, we regularize the parameters that were pretrained by continuing to train with the monolinguallanguage modeling losses. The seq2seq and language modeling losses are weighted equally.2Under review as a conference paper at ICLR 2017WX+(a)A B C <EOS>WAttention(b)Figure 2: Two improvements to the baseline model: (a) residual connection, and (b) attention overmultiple layers.Residual connections: As described, the input vector to the decoder softmax layer is a randomvector because the high level (non-first) layers of the LSTM are randomly initialized. This slowsdown training and introduces random gradients to the pretrained parameters, reducing the effective-ness of pretraining. 
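Before turning to that fix, the basic weight transfer of Section 2.1 can be summarized in a short sketch. The parameter names below (enc_embedding, dec_lstm_0, and so on) are hypothetical dictionary keys standing in for the actual TensorFlow variables; only the shaded parameters of Figure 1 are copied, and everything else keeps its random initialization.

```python
def init_seq2seq_from_lms(seq2seq, src_lm, tgt_lm):
    """Initialize a seq2seq model from two pretrained language models (Section 2.1).

    All three arguments are assumed to expose their parameters as plain dicts
    of arrays; the key names are illustrative, not the authors' variable names.
    """
    # Encoder side: embedding and first LSTM layer from the source-side LM.
    seq2seq["enc_embedding"] = src_lm["embedding"].copy()
    seq2seq["enc_lstm_0"] = src_lm["lstm_0"].copy()

    # Decoder side: embedding, first LSTM layer and softmax from the target-side LM.
    seq2seq["dec_embedding"] = tgt_lm["embedding"].copy()
    seq2seq["dec_lstm_0"] = tgt_lm["lstm_0"].copy()
    seq2seq["dec_softmax"] = tgt_lm["softmax"].copy()
    return seq2seq
```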
To circumvent this issue, we use a residual connection from the output of thefirst LSTM layer directly to the input of the softmax (see Figure 2-a).Attention over multiple layers: In all our models, we use an attention mechanism (Bahdanauet al., 2015), where the model attends over both top and first layer (see Figure 2-b). More concretely,given a query vector qtfrom the decoder, encoder states from the first layer h11;:::;h1T, and encoderstates from the last layer hL1;:::;hLT, we compute the attention context vector ctas follows:i=exp(qthNi)PTj=1exp(qthNj)c1t=TXi=1ih1icNt=TXi=1ihNict= [c1t;cNt]Note that attention weights iare only computed once using the top level encoder states.We also experiment with passing the attention vector ctas input into the next timestep (Luong et al.,2015b). Instead of passing cinto the first LSTM layer, we pass it as input to the second LSTM layerby concatenating it with the output of the first LSTM layer.We use all three improvements in our experiments. However, in general we notice that the benefitsof the attention modifications are minor in comparison with the benefits of the additional languagemodeling objectives and residual connections.3 E XPERIMENTSIn the following section, we apply our approach to two important tasks in seq2seq learning: machinetranslation and abstractive summarization. On each task, we compare against the previous bestsystems. We also perform ablation experiments to understand the behavior of each component ofour method.3.1 M ACHINE TRANSLATIONDataset and Evaluation: For machine translation, we evaluate our method on the WMTEnglish!German task (Bojar et al., 2015). We used the WMT 14 training dataset, which is slightlysmaller than the WMT 15 dataset. Because the dataset has some noisy examples, we used a lan-guage detection system to filter the training examples. Sentences pairs where either the source wasnot English or the target was not German were thrown away. This resulted in around 4 milliontraining examples. Following Sennrich et al. (2015b), we use subword units (Sennrich et al., 2015a)3Under review as a conference paper at ICLR 2017with 89500 merge operations, giving a vocabulary size around 90000. The validation set is theconcatenated newstest2012 and newstest2013, and our test sets are newstest2014 and newstest2015.Evaluation on the validation set was with case-sensitive BLEU (Papineni et al., 2002) on tokenizedtext using multi-bleu.perl . Evaluation on the test sets was with case-sensitive BLEU ondetokenized text using mteval-v13a.pl . The monolingual training datasets are the News CrawlEnglish and German corpora, each of which has more than a billion tokens.Experimental settings: The language models were trained in the same fashion as (Jozefowiczet al., 2016) We used a 1 layer 4096 dimensional LSTM with the hidden state projected downto 1024 units (Sak et al., 2014) and trained for one week on 32 Tesla K40 GPUs. Our seq2seqmodel was a 3 layer model, where the second and third layers each have 1000 hidden units. Themonolingual objectives, residual connection, and the modified attention were all used. We used theAdam optimizer (Kingma & Ba, 2015) and train with asynchronous SGD on 16 GPUs for speed.We used a learning rate of 5e-5 which is multiplied by 0.8 every 50K steps after an initial 400Ksteps, gradient clipping with norm 5.0 (Pascanu et al., 2013), and dropout of 0.2 on non-recurrentconnections (Zaremba et al., 2014). We used early stopping on validation set perplexity. A beamsize of 10 was used for decoding. 
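The attention over multiple layers described in Section 2.2 can be written compactly. The NumPy sketch below assumes simple dot-product scoring between the query and the top-layer encoder states, which is one plausible instantiation of the formulas above rather than the exact production implementation.

```python
import numpy as np

def two_layer_attention(q_t, h_first, h_last):
    """Attention context of Section 2.2: the weights alpha_i are computed once
    from the top-layer encoder states, then used to pool both the first-layer
    and top-layer states, and the two context vectors are concatenated.

    q_t     : decoder query vector, shape (d,)
    h_first : first-layer encoder states, shape (T, d1)
    h_last  : top-layer encoder states,   shape (T, d)
    """
    scores = h_last @ q_t                          # scores against the top layer only
    scores = scores - scores.max()                 # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # softmax over source positions
    c_first = alpha @ h_first                      # context over first-layer states
    c_last = alpha @ h_last                        # context over top-layer states
    return np.concatenate([c_first, c_last])       # c_t = [c_t^1 ; c_t^N]
```

Because the softmax weights are computed once from the top layer and merely reused to pool the first-layer states, the extra context comes at essentially no additional cost.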
Our ensemble is constructed with the 5 best performing modelson the validation set, which are trained with different hyperparameters.Results: Table 1 shows the results of our method in comparison with other baselines. Ourmethod achieves a new state-of-the-art for single model performance on both newstest2014 andnewstest2015, significantly outperforming the competitive semi-supervised backtranslation tech-nique (Sennrich et al., 2015b). Equally impressive is the fact that our best single model outperformsthe previous state of the art ensemble of 4 models. Our ensemble of 5 models matches or exceedsthe previous best ensemble of 12 models.BLEUSystem ensemble? newstest2014 newstest2015Phrase Based MT (Williams et al., 2016) - 21.9 23.7Supervised NMT (Jean et al., 2015) single - 22.4Edit Distance Transducer NMT (Stahlberg et al., 2016) single 21.7 24.1Edit Distance Transducer NMT (Stahlberg et al., 2016) ensemble 8 22.9 25.7Backtranslation (Sennrich et al., 2015b) single 22.7 25.7Backtranslation (Sennrich et al., 2015b) ensemble 4 23.8 26.5Backtranslation (Sennrich et al., 2015b) ensemble 12 24.7 27.6No pretraining single 21.3 24.3Pretrained seq2seq single 24.0 27.0Pretrained seq2seq ensemble 5 24.7 28.1Table 1: English!German performance on WMT test sets. Our pretrained model outperforms allother models. Note that the model without pretraining uses the LM objective.Ablation study: In order to better understand the effects of pretraining, we conducted an ablationstudy by modifying the pretraining scheme. Figure 3 shows the drop in validation BLEU of variousablations compared with the full model. The full model uses LMs trained with monolingual data toinitialize the encoder and decoder, in addition to the language modeling objective. In the following,we interpret the findings of the study. Note that some findings are specific to the translation task.Given the results from the ablation study, we can make the following observations:Pretraining the decoder is better than pretraining the encoder: Only pretraining the encoderleads to a 1.6 BLEU point drop while only pretraining the decoder leads to a 1.0 BLEUpoint drop.Pretrain as much as possible because the benefits compound: given the drops of no pre-training at all (2:0) and only pretraining the encoder ( 1:6), the additive estimate of thedrop of only pretraining the decoder side is 2:0(1:6) =0:4; however the actualdrop is1:0which is a much larger drop than the additive estimate.Pretraining the softmax is important: Pretraining only the embeddings and first LSTM layergives a large drop of 1.6 BLEU points.4Under review as a conference paper at ICLR 20172.01.51.00.50.0Difference/uni00A0in/uni00A0BLEU/uni00AD2.1Pretrain/uni00A0on/uni00A0parallel/uni00A0corpus/uni00AD2.0No/uni00A0pretraining/uni00AD2.0Only/uni00A0pretrain/uni00A0embeddings/uni00AD2.0No/uni00A0LM/uni00A0objective/uni00AD1.6Only/uni00A0pretrain/uni00A0encoder/uni00AD1.6Only/uni00A0pretrain/uni00A0embeddings/uni00A0&/uni00A0LSTM/uni00AD1.0Only/uni00A0pretrain/uni00A0decoder/uni00AD0.3Pretrain/uni00A0on/uni00A0WikipediaFigure 3: English!German ablation study measuring the difference in validation BLEU betweenvarious ablations and the full model. More negative is worse. 
The full model uses LMs trained withmonolingual data to initialize the encoder and decoder, plus the language modeling objective.The language modeling objective is a strong regularizer: The drop in BLEU points ofpretraining the entire model and not using the LM objective is as bad as using the LMobjective without pretraining.Pretraining on a lot of unlabeled data is essential for learning to extract powerful features:If the model is initialized with LMs that are pretrained on the source part and target part oftheparallel corpus, the drop in performance is as large as not pretraining at all. However,performance remains strong when pretrained on the large, non-news Wikipedia corpus.To understand the contributions of unsupervised pretraining vs. supervised training, we track theperformance of pretraining as a function of dataset size. For this, we trained a a model with andwithout pretraining on random subsets of the English !German corpus. Both models use the ad-ditional LM objective. The results are summarized in Figure 4. When a 100% of the labeled datais used, the gap between the pretrained and no pretrain model is 2.0 BLEU points. However, thatgap grows when less data is available. When trained on 20% of the labeled data, the gap becomes3.8 BLEU points. This demonstrates that the pretrained models degrade less as the labeled datasetbecomes smaller.20 40 60 80 100Percent/uni00A0of/uni00A0entire/uni00A0labeled/uni00A0dataset/uni00A0used/uni00A0for/uni00A0training1516171819202122BLEUPretrainNo/uni00A0pretrainFigure 4: Validation performance of pretraining vs. no pretraining when trained on a subset of theentire labeled dataset for English !German translation.5Under review as a conference paper at ICLR 20173.2 A BSTRACTIVE SUMMARIZATIONDataset and Evaluation: For a low-resource abstractive summarization task, we use theCNN/Daily Mail corpus from (Hermann et al., 2015). Following Nallapati et al. (2016), we modifythe data collection scripts to restore the bullet point summaries. The task is to predict the bulletpoint summaries from a news article. The dataset has fewer than 300K document-summary pairs.To compare against Nallapati et al. (2016), we used the anonymized corpus. However, for our abla-tion study, we used the non-anonymized corpus.1We evaluate our system using full length ROUGE(Lin, 2004). For the anonymized corpus in particular, we considered each highlight as a separatesentence following Nallapati et al. (2016). In this setting, we used the English Gigaword corpus(Napoles et al., 2012) as our larger, unlabeled “monolingual” corpus, although all data used in thistask is in English.Experimental settings: We use subword units (Sennrich et al., 2015a) with 31500 merges, result-ing in a vocabulary size of about 32000. We use up to the first 600 tokens of the document andpredict the entire summary. Only one language model is trained and it is used to initialize both theencoder and decoder, since the source and target languages are the same. However, the encoderand decoder are not tied. The LM is a one-layer LSTM of size 1024 trained in a similar fashion toJozefowicz et al. (2016). For the seq2seq model, we use the same settings as the machine translationexperiments. The only differences are that we use a 2 layer model with the second layer having1024 hidden units, and that the learning rate is multiplied by 0.8 every 30K steps after an initial100K steps.Results: Table 2 summarizes our results on the anonymized version of the corpus. 
Our pretrainedmodel is only able to match the previous baseline seq2seq of Nallapati et al. (2016). However, ourmodel is a unidirectional LSTM while they use a bidirectional LSTM. They also use a longer contextof 800 tokens, whereas we used a context of 600 tokens due to GPU memory issues. Furthermore,they use pretrained word2vec (Mikolov et al., 2013) vectors to initialize their word embeddings. Aswe show in our ablation study, just pretraining the embeddings itself gives a large improvement.System ROUGE-1 ROUGE-2 ROUGE-LSeq2seq + pretrained embeddings (Nallapati et al., 2016) 32.49 11.84 29.47+ temporal attention (Nallapati et al., 2016) 35.46 13.30 32.65Pretrained seq2seq 32.56 11.89 29.44Table 2: Results on the anonymized CNN/Daily Mail dataset.Ablation study: We performed an ablation study similar to the one performed on the machinetranslation model. The results are reported in Figure 5. Here we report the drops on ROUGE-1,ROUGE-2, and ROUGE-L on the non-anonymized validation set.Given the results from our ablation study, we can make the following observations:Pretraining improves optimization: in contrast with the machine translation model, it ismore beneficial to only pretrain the encoder than only the decoder of the summarizationmodel. One interpretation is that pretraining enables the gradient to flow much furtherback in time than randomly initialized weights. This may also explain why pretraining onthe parallel corpus is no worse than pretraining on a larger monolingual corpus.The language modeling objective is a strong regularizer: A model without the LM objectivehas a significant drop in ROUGE scores.Human evaluation: As ROUGE may not be able to capture the quality of summarization, wealso performed a small qualitative study to understand the human impression of the summariesproduced by different models. We took 200 random documents and compared the performance of1We encourage future researchers to use the non-anonymized version because it is a more realistic summa-rization setting with a larger vocabulary. Our numbers on the non-anonymized test set are 35:56ROUGE-1,14:60ROUGE-2, and 25:08ROUGE-L. We did not consider highlights as separate sentences.6Under review as a conference paper at ICLR 2017/uni00AD5/uni00AD4/uni00AD3/uni00AD2/uni00AD10Difference/uni00A0in/uni00A0ROUGENo/uni00A0pretraining Only/uni00A0pretrain/uni00A0decoder No/uni00A0LM/uni00A0objective Only/uni00A0pretrain/uni00A0embeddings Only/uni00A0pretrain/uni00A0embeddings/uni00A0&/uni00A0LSTM Only/uni00A0pretrain/uni00A0encoder Pretrain/uni00A0on/uni00A0parallel/uni00A0corpusROUGE/uni00AD1ROUGE/uni00AD2ROUGE/uni00ADLFigure 5: Summarization ablation study measuring the difference in validation ROUGE betweenvarious ablations and the full model. More negative is worse. The full model uses LMs trained withunlabeled data to initialize the encoder and decoder, plus the language modeling objective.a pretrained and non-pretrained system. The document, gold summary, and the two system outputswere presented to a human evaluator who was asked to rate each system output on a scale of 1-5with 5 being the best score. The system outputs were presented in random order and the evaluatordid not know the identity of either output. The evaluator noted if there were repetitive phrases orsentences in either system outputs. Unwanted repetition was also noticed by Nallapati et al. (2016).Table 3 and 4 show the results of the study. 
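As a rough sanity check on the significance claims that follow, the sketch below applies a two-sided sign test to the win/tie/loss counts of Table 3 and McNemar's test (with continuity correction) to the repetition counts of Table 4. The counts are taken from those tables, but the variable names and the use of SciPy are our own and are not part of the paper's evaluation pipeline.

```python
from scipy.stats import binom, chi2

# Sign test on the 1-5 ratings (Table 3): ties are dropped, and we test whether
# the "no pretrain wins" (29) vs. "pretrain wins" (83) split is consistent with a fair coin.
wins_no_pretrain, ties, wins_pretrain = 29, 88, 83
n = wins_no_pretrain + wins_pretrain
p_sign = 2 * binom.cdf(min(wins_no_pretrain, wins_pretrain), n, 0.5)
print(f"sign test p-value: {p_sign:.2e}")          # well below 1e-4

# McNemar's test on repetition (Table 4): only the discordant documents matter,
# i.e. those where exactly one of the two systems repeats itself.
pretrain_only_repeats, no_pretrain_only_repeats = 24, 65
stat = (abs(pretrain_only_repeats - no_pretrain_only_repeats) - 1) ** 2 / (
    pretrain_only_repeats + no_pretrain_only_repeats
)  # chi-squared statistic with continuity correction
p_mcnemar = chi2.sf(stat, df=1)
print(f"McNemar's test p-value: {p_mcnemar:.2e}")  # also well below 1e-4
```

Both p-values come out far below 0.0001, consistent with the conclusions reported with the tables.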
In both cases, the pretrained system outperforms thesystem without pretraining in a statistically significant manner. The better optimization enabled bypretraining improves the generated summaries and decreases unwanted repetition in the output.NP>PNP = P NP<P29 88 83Table 3: The count of how often the no pretrain system ( NP) achieves a higher, equal, and lowerscore than the pretrained system ( P) in the side-by-side study where the human evaluator gave eachsystem a score from 1-5. The sign statistical test gives a p-value of <0:0001 for rejecting the nullhypothesis that there is no difference in the score obtained by either system.No pretrainNo repeats RepeatsPretrainNo repeats 67 65Repeats 24 44Table 4: The count of how often the pretrain and no pretrain systems contain repeated phrases orsentences in their outputs in the side-by-side study. McNemar’s test gives a p-value of <0:0001for rejecting the null hypothesis that the two systems repeat the same proportion of times. Thepretrained system clearly repeats less than the system without pretraining.4 R ELATED WORKUnsupervised pretraining has been intensively studied in the past years, most notably is the workby Dahl et al. (2012) who found that pretraining with deep belief networks improved feedforwardacoustic models. More recent acoustic models have found pretraining unnecessary (Xiong et al.,7Under review as a conference paper at ICLR 20172016; Zhang et al., 2016; Chan et al., 2015), probably because the reconstruction objective of deepbelief networks is too easy. In contrast, we find that pretraining language models by next stepprediction significantly improves seq2seq on challenging real world datasets.Despite its appeal, unsupervised learning is rarely shown to improve supervised training. Dai & Le(2015) was amongst the rare studies which showed the benefits of pretraining in a semi-supervisedlearning setting. Their method is similar to our method except that they did not have a decodernetwork and thus could not apply to seq2seq learning. Similarly, Zhang & Zong (2016) found ituseful to add an additional task of sentence reordering of source-side monolingual data for neuralmachine translation. Various forms of transfer or multitask learning with seq2seq framework alsohave the flavors of our algorithm (Zoph et al., 2016; Luong et al., 2015a; Firat et al., 2016).Perhaps most closely related to our method is the work by Gulcehre et al. (2015), who combined alanguage model with an already trained seq2seq model by fine-tuning additional deep output layers.Empirically, their method produces small improvements over the supervised baseline. We suspectthat their method does not produce significant gains because (i) the models are trained independentlyof each other and are not fine-tuned (ii) the LM is combined with the seq2seq model after the lastlayer, wasting the benefit of the low level LM features, and (iii) only using the LM on the decoderside. Venugopalan et al. (2016) addressed (i) but still experienced minor improvements. Usingpretrained GloVe embedding vectors (Pennington et al., 2014) had more impact.Related to our approach in principle is the work by Chen et al. (2016) who proposed a two-term,theoretically motivated unsupervised objective for unpaired input-output samples. Though they didnot apply their method to seq2seq learning, their framework can be modified to do so. In that case,the first term pushes the output to be highly probable under some scoring model, and the secondterm ensures that the output depends on the input. 
In the seq2seq setting, we interpret the first termas a pretrained language model scoring the output sequence. In our work, we fold the pretrainedlanguage model into the decoder. We believe that using the pretrained language model only forscoring is less efficient that using all the pretrained weights. Our use of labeled examples satisfiesthe second term. These connections provide a theoretical grounding for our work.In our experiments, we benchmark our method on machine translation, where other unsupervisedmethods are shown to give promising results (Sennrich et al., 2015b; Cheng et al., 2016). In back-translation (Sennrich et al., 2015b), the trained model is used to decode unlabeled data to yield extralabeled data. One can argue that this method may not have a natural analogue to other tasks such assummarization. We note that their technique is complementary to ours, and may lead to additionalgains in machine translation. The method of using autoencoders in Cheng et al. (2016) is promising,though it can be argued that autoencoding is an easy objective and language modeling may force theunsupervised models to learn better features.5 C ONCLUSIONWe presented a novel unsupervised pretraining method to improve sequence to sequence learning.The method can aid in both generalization and optimization. Our scheme involves pretraining twolanguage models in the source and target domain, and initializing the embeddings, first LSTM layers,and softmax of a sequence to sequence model with the weights of the language models. Using ourmethod, we achieved state-of-the-art machine translation results on both WMT’14 and WMT’15English to German.A key advantage of this technique is that it is flexible and can be applied to a large variety of tasks,such as summarization, where it surpasses the supervised learning baseline.ACKNOWLEDGMENTSWe thank George Dahl, Andrew Dai, Laurent Dinh, Stephan Gouws, Geoffrey Hinton, Rafal Joze-fowicz, Pooya Khorrami, Phillip Louis, Ramesh Nallapati, Arvind Neelakantan, Xin Pan, Abi See,Rico Sennrich, Luke Vilnis, Yuan Yu and the Google Brain team for their help with the project.8Under review as a conference paper at ICLR 2017REFERENCESRobert B. Allen. Several studies on natural language and back-propagation. IEEE First International Confer-ence on Neural Networks , 1987.Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning toalign and translate. In ICLR , 2015.Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, PhilippKoehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, andMarco Turchi. Findings of the 2015 workshop on statistical machine translation. In Proceedings of the TenthWorkshop on Statistical Machine Translation , 2015.William Chan, Navdeep Jaitly, Quoc V . Le, and Oriol Vinyals. Listen, attend and spell. arXiv preprintarXiv:1508.01211 , 2015.Jianshu Chen, Po-Sen Huang, Xiaodong He, Jianfeng Gao, and Li Deng. Unsupervised learning of predictorsfrom unpaired input-output samples. abs/1606.04646, 2016.Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. Semi-supervised learningfor neural machine translation. arXiv preprint arXiv:1606.04596 , 2016.Kyunghyun Cho, Bart Van Merri ̈enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, HolgerSchwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statisti-cal machine translation. In EMNLP , 2014.G. E. Dahl, D. Yu, L. Deng, and A. 
Acero. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Transactions on Audio, Speech, and Language Processing , 20(1):30–42, 2012. ISSN 1558-7916.Andrew M. Dai and Quoc V . Le. Semi-supervised sequence learning. In NIPS . 2015.Orhan Firat, Baskaran Sankaran, Yaser Al-Onaizan, Fatos T. Yarman-Vural, and Kyunghyun Cho. Zero-resource translation with multi-lingual neural machine translation. arXiv preprint arXiv:1606.04164 , 2016.Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares,Holger Schwenk, and Yoshua Bengio. On using monolingual corpora in neural machine translation. arXivpreprint arXiv:1503.03535 , 2015.Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman,and Phil Blunsom. Teaching machines to read and comprehend. In NIPS . 2015.S ́ebastien Jean, Orhan Firat, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. Montreal neural ma-chine translation systems for WMT’15. In Proceedings of the Tenth Workshop on Statistical Machine Trans-lation , 2015.Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits oflanguage modeling. arXiv preprint arXiv:1602.02410 , 2016.Nal Kalchbrenner and Phil Blunsom. Recurrent continuous translation models. In EMNLP , 2013.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR , 2015.Chin-Yew Lin. ROUGE: a package for automatic evaluation of summaries. In Proceedings of the Workshop onText Summarization Branches Out (WAS 2004) , 2004.Minh-Thang Luong, Quoc V . Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. Multi-task sequence tosequence learning. In ICLR , 2015a.Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neuralmachine translation. In EMNLP , 2015b.Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations ofwords and phrases and their compositionality. In NIPS . 2013.Ramesh Nallapati, Bing Xiang, and Bowen Zhou. Sequence-to-sequence RNNs for text summarization. arXivpreprint arXiv:1602.06023 , 2016.Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. Annotated gigaword. In Proceedings of theJoint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction . ACL,2012.9Under review as a conference paper at ICLR 2017Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: A method for automatic evaluation ofmachine translation. In ACL, 2002.Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks.ICML , 2013.Jeffrey Pennington, Richard Socher, and Christopher D. Manning. Glove: Global vectors for word representa-tion. In EMNLP , 2014.Hasim Sak, Andrew W. Senior, and Franc ̧oise Beaufays. Long short-term memory recurrent neural networkarchitectures for large scale acoustic modeling. In INTERSPEECH , 2014.Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subwordunits. arXiv preprint arXiv:1508.07909 , 2015a.Rico Sennrich, Barry Haddow, and Alexandra Birch. Improving neural machine translation models with mono-lingual data. arXiv preprint arXiv:1511.06709 , 2015b.Felix Stahlberg, Eva Hasler, and Bill Byrne. The edit distance transducer in action: The university of cambridgeenglish-german system at wmt16. In Proceedings of the First Conference on Machine Translation , pp. 377–384, Berlin, Germany, August 2016. 
Association for Computational Linguistics. URL http://www.aclweb.org/anthology/W/W16/W16-2324 .Ilya Sutskever, Oriol Vinyals, and Quoc V . Le. Sequence to sequence learning with neural networks. In NIPS .2014.Subhashini Venugopalan, Lisa Anne Hendricks, Raymond Mooney, and Kate Saenko. Improving LSTM-basedvideo description with linguistic knowledge mined from text. arXiv preprint arXiv:1604.01729 , 2016.Philip Williams, Rico Sennrich, Maria Nadejde, Matthias Huck, Barry Haddow, and Ond ˇrej Bojar. Edinburgh’sstatistical machine translation systems for wmt16. In Proceedings of the First Conference on MachineTranslation , pp. 399–410, Berlin, Germany, August 2016. Association for Computational Linguistics. URLhttp://www.aclweb.org/anthology/W/W16/W16-2327 .W. Xiong, Jasha Droppo, Xuedong Huang, Frank Seide, Mike Seltzer, Andreas Stolcke, Dong Yu, and GeoffreyZweig. Achieving human parity in conversational speech recognition. abs/1610.05256, 2016.Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprintarXiv:1409.2329 , 2014.Jiajun Zhang and Chengqing Zong. Exploiting source-side monolingual data in neural machine translation. InEMNLP , 2016.Yu Zhang, William Chan, and Navdeep Jaitly. Very deep convolutional networks for end-to-end speech recog-nition. abs/1610.03022, 2016. URL http://arxiv.org/abs/1610.03022 .Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. Transfer learning for low-resource neural machinetranslation. In EMNLP , 2016.Ramon P. ̃Neco and Mikel L. Forcada. Asynchronous translations with recurrent neural nets. Neural Networks ,1997.10Under review as a conference paper at ICLR 2017APPENDIXSELECTED SUMMARIZATION OUTPUTSSource Document( cnn ) like phone booths and typewriters , record stores are a vanishing breed – another victimof the digital age . camelot music . virgin megastores . wherehouse music . tower records. all of them gone . corporate america has largely abandoned brick - and - mortar musicretailing to a scattering of independent stores , many of them in scruffy urban neighborhoods. and that s not necessarily a bad thing . yes , it s harder in the spotify era to find a place to gobuy physical music . but many of the remaining record stores are succeeding – even thriving– by catering to a passionate core of customers and collectors . on saturday , hundreds ofmusic retailers will hold events to commemorate record store day , an annual celebration of, well , your neighborhood record store . many stores will host live performances , drawings, book signings , special sales of rare or autographed vinyl and other happenings . some willeven serve beer . to their diehard customers , these places are more than mere stores : theyare cultural institutions that celebrate music history ( the entire duran duran oeuvre , all inone place ! ) , display artifacts ( aretha franklin on vinyl ! ) , and nurture the local musicscene ( hey , here s a cd by your brother s metal band ! ) . they also employ knowledgeableclerks who will be happy to debate the relative merits of blood on the tracks and blonde onblonde . or maybe , like jack black in high fidelity , just mock your lousy taste in music . soif you re a music geek , drop by . but you might think twice before asking if they stock i justcalled to say i love you .Ground Truth summarysaturday is record store day , celebrated at music stores around the world . 
many stores willhost live performances , drawings and special sales of rare vinyl .No pretraincorporate america has largely abandoned brick - brick - mortar music . many of the remainingrecord stores are succeeding – even thriving – by catering to a passionate core of customers .Pretrainedhundreds of music retailers will hold events to commemorate record store day . many storeswill host live performances , drawings , book signings , special sales of rare or autographedvinyl .Table 5: The pretrained model outputs a highly informative summary, while the no pretrain modeloutputs irrelevant details.11Under review as a conference paper at ICLR 2017Source Document( cnn ) hey , look what i did . that small boast on social media can trigger a whirlwind thatspins into real - life grief , as a texas veterinarian found out after shooting a cat . dr. kristenlindsey allegedly shot an arrow into the back of an orange tabby s head and posted a proudphoto this week on facebook of herself smiling , as she dangled its limp body by the arrow sshaft . lindsey added a comment , cnn affiliate kbtx reported . my first bow kill , lol . the onlygood feral tomcat is one with an arrow through it s head ! vet of the year award ... gladlyaccepted . callers rang the phones hot at washington county s animal clinic , where lindseyworked , to vent their outrage . web traffic crashed its website . high price of public shamingon the internet then an animal rescuer said that lindsey s prey was probably not a feral catbut the pet of an elderly couple , who called him tiger . he had gone missing on wednesday ,the same day that lindsey posted the photo of the slain cat . cnn has not been able to confirmthe claim . as the firestorm grew , lindsey wrote in the comments underneath her post : no idid not lose my job . lol . psshh . like someone would get rid of me . i m awesome ! thatprediction was wrong . the clinic fired lindsey , covered her name on its marquee with ducttape , and publicly distanced itself from her actions . our goal now is to go on and try to fixour black eye and hope that people are reasonable and understand that those actions do ntanyway portray what we re for here at washington animal clinic , said dr. bruce buenger . weput our heart and soul into this place . the clinic told wbtx that lindsey was not available forcomment . cnn is reaching out to her . she removed her controversial post then eventuallyshut down her facebook page . callers also complained to the brenham police department andwashington county animal control , as her facebook post went viral . the sheriff s office inaustin county , where the cat was apparently shot , is investigating , and lindsey could facecharges . its dispatchers were overloaded with calls , the sheriff posted on facebook . we areasking you to please take it easy on our dispatchers . as soon as the investigation is complete ,we will post the relevant information here on this page , the post read . animal rights activistsare pushing for charges . animal cruelty must be taken seriously , and the guilty parties shouldbe punished to the fullest extent of the law , said cat advocacy activist becky robinson . herorganization , alley cat allies , is offering a $ 7,500 reward for evidence leading to the arrestand conviction of the person who shot the cat . but others stood up for lindsey . she s amazing. she s caring , said customer shannon stoddard . she s a good vet , so maybe her bad choiceof posting something on facebook was not good . 
but i do nt think she should be judged forit . she dropped off balloons at the animal clinic for lindsey with a thank you note . cnn sjeremy grisham contributed to this report .Ground Truth summarydr. kristen lindsey has since removed the post of her holding the dead cat by an arrow . heremployer fired her ; the sheriff s office is investigating . activist offers $ 7,500 reward .No pretraindr. kristen lindsey allegedly shot an arrow into the back of an orange orange tabby s head . its the only good good tomcat is one with an arrow through it s head ! vet vet of the year award.Pretrainedlindsey lindsey , a texas veterinarian , shot an arrow into the back of an orange tabby s head. she posted a photo of herself smiling , as she dangled its limp body by the arrow s shaft .lindsey could face charges , the sheriff s department says .Table 6: The pretrained model outputs a highly relevant summary but makes a mistake on the felineexecutioner’s name. The no pretrain model degenerates into irrelevant details and repeats itself.12Under review as a conference paper at ICLR 2017Source Documenteugenie bouchard s run of poor form continued as the top seed was beaten 6 - 3 , 6 - 1 byamerican lauren davis in the second round at the family circle cup in charleston on wednesday. davis , 21 , had lost her only career meeting with bouchard , but was in control this timeagainst the world no 7 . davis won nine of the final 11 games of the match and broke bouchards serve twice in the final set to pull off the upset . eugenie bouchard fires down a serve duringher second - round match at the family circle cup bouchard shows her frustrations duringher straight - sets defeat by lauren davis on wednesday i ve never beaten here before , so icame out knowing i had nothing to lose , said davis , ranked 66th in the world . bouchardwas a semi-finalist last year but had struggled in the lead - up to the charlston green - clayevent , losing three of her last five matches to lower - ranked opponents . davis used precisegroundstrokes to keep bouchard on her heels throughout the second set . davis broke bouchards serve to love to take a 3 - 1 lead as the 21 - year - old canadian had a double fault , thensailed a forehand long to fall behind . two games later , davis hit a backhand to send backbouchard s 102mph serve for a winner down the left sideline and a 5 - 1 lead . davis hitsa forehand on her way to an impressive win against the canadian top seed bouchard hasstruggled recently , this time slumping to her fourth defeat in six matches the match endedon bouchard s forehand into the net , davis waving to the cheering crowd . bouchard said: as soon as it was over , definitely a bit of anger , but also kind of this confusion , slash ,like quest to find what s wrong , like this kind of searching feeling that , ok , like i knowsomething s off . i know something s not right . so i want to find it . i was definitely a littlebit slow today , overpowered . usually , i m the one dominating . so it was definitely , just , ido nt know , just not good .Ground Truth summaryeugenie bouchard suffered her fourth defeat in six matches . the canadian top seed lost tolauren davis at the family circle cup . world no 66 davis won 6 - 3 , 6 - 1 in the second roundin charleston . davis won nine of the final 11 games of the match to seal victory . click herefor all the latest news from charleston .No pretrainbouchard beat american lauren davis 6 - 3 , 6 - 1 in the second round of the family circle cupin charleston on wednesday . 
bouchard had lost her only career meeting with bouchard butwas in control this time against the world no 7 . bouchard won nine of the final 11 games ofthe match and broke bouchard s serve twice in the final set to pull off the upset .Pretrainedeugenie bouchard was beaten 6 - 3 , 6 - 1 by american lauren davis in the second round .davis had lost her only career meeting with bouchard , but was in control this time against theworld no 7 . davis hit a backhand to send back bouchard s 102mph serve for a winner downthe left sideline .Table 7: Both models output a relevant summary, but the no pretrain model uses the same name torefer to both players.13Under review as a conference paper at ICLR 2017Source Document( cnn ) mike rowe is coming to a river near you . sometimes , you hear about a person whomakes you feel good about humanity , but bad about yourself , rowe says . on thursday sepisode of somebody s got ta do it , rowe meets up with chad pregracke , the founder ofliving lands & waters , who does just that . pregracke wants to clean up the nation s riversone piece of detritus at a time . his quota ? always more . read mike rowe s facebook poston how to break our litter habit . since he founded the nonprofit in 1998 at the ripe age of 23, pregracke and more than 87,000 volunteers have collected 8.4 million pounds of trash fromu.s. waterways . those efforts helped him earn the 2013 cnn hero of the year award , alongwith numerous other honors . wherever you are , no matter if there s a stream , a creek , alake , whatever , that needs to be cleaned up , you can do it . just organize it and do it , he toldcnn s anderson cooper after his win . pregracke also gives rowe a tour of the 150 - foot , solar- powered barge that the living lands & waters staff calls home during lengthy cleanups . thepart - home , part - office , part - dumpster has seven bedrooms , two bathrooms , a classroomand a kitchen – and just happens to be made from a recycled strip club . according to theorganization s latest annual report , pregracke has made it his mission in 2015 to remove500,000 more pounds of trash . if you d like to help achieve this goal , visit his website tolearn how to help : livinglandsandwaters.org / get - involved / .Ground Truth summarychad pregracke was the 2013 cnn hero of the year . mike rowe visited pregracke for an episodeof somebody s got ta do it .No pretrainrowe meets up with chad pregracke , founder of living lands & waters . pregracke and morethan 87,000 volunteers collected 8.4 million pounds of trash from u.s. waterways .Pretrainedrowe is the founder of living lands & waters , who does just that . pregracke also gives rowea tour of the 150 - foot barge that the living lands & waters gets .Table 8: A failure case. The pretrained model outputs irrelevant details while the no pretrain modelsuccessfully summarizes the document.14Under review as a conference paper at ICLR 2017SELECTED ENGLISH!GERMAN OUTPUTSSourceMayor Bloomberg told reporters that, because of that court order, the city had suspended thereopening of the public space and protesters were informed, however, that local laws do notallow them to re-install with camping shops and sleeping bags.Ground TruthB ̈urgermeister Bloomberg stellt vor der Presse klar , das aufgrund dieser richterlichen Anord-nung die erneute ̈Offnung des Platzes f ̈ur den Publikumsverkehr und die Demonstrantenaufgehoben worden sei . 
Die Demonstranten wies er darauf hin , dass die Stadtgesetze ihnennicht erlaubten , sich erneut mit Zelten und Schlafs ̈acken an diesem Ort einzurichten .No pretrainDer B ̈urgermeister Bloomberg sagte den Reportern , dass die Stadt aufgrund dieser Gericht-sentscheidung die Wiederer ̈offnung des ̈offentlichen Raumes und die Information derDemonstranten ausgesetzt habe , dass die lokalen Gesetze ihnen nicht erlauben , mit denCampingpl ̈atzen und Schlafs ̈acken neu zu installieren .PretrainedB ̈urgermeister Bloomberg erkl ̈arte gegen ̈uber Journalisten , dass die Stadt aufgrund dieserGerichtsentscheidung die Wiederer ̈offnung des ̈offentlichen Raums ausgesetzt habe und dassdie Demonstranten dar ̈uber informiert wurden , dass die ̈ortlichen Gesetze es ihnen nichterlauben w ̈urden , sich mit Campingpl ̈atzen und Schlafs ̈alen neu zu installieren .Table 9: The no pretrain model makes a complete mistranslation when outputting ”und die Infor-mation der Demonstranten ausgesetzt habe”. That translates to ”the reopening of the public spaceand the information [noun] of the protesters were suspended”, instead of informing the protesters.Furthermore, it wrongly separated the two sentences, so the first sentence has extra words and thesecond sentence is left without a subject. The pretrained model does not make any of these mistakes.However, both models make a vocabulary mistake of ”zu installieren”, which is typically only usedto refer to installing software. A human evaluator fluent in both German and English said that thepretrained version was better.15Under review as a conference paper at ICLR 2017SourceThe low February temperatures, not only did they cause losses of millions for the agriculturalsector, but they limited the possibilities of the state economy to grow, causing a contraction ofthe economic activity in general of 3.6 percent in the first half of the year, mainly supportedby the historic fall of 31.16 per cent in agriculture, which affected the dynamics of othereconomic sectors.Ground TruthDie niedrigen Temperaturen im Februar verursachten nicht nur Verluste in Millionenh ̈ohe inder Landwirtschaft , sondern steckten dar ̈uber hinaus dem Wachstum der Staatswirtschaftenge Grenzen und verursachten im ersten Vierteljahr einen allgemeinen R ̈uckgang derWirtschaftst ̈atigkeit um 3,6 Prozent Dieser geht haupts ̈achlich auf den historischen Abbauder landwirtschaftlichen Entwicklung um 31,16 Prozent zur ̈uck , der sich bremsend auf weit-ere Wirtschaftssektoren auswirkte .No pretrainDie niedrigen Temperaturen im Februar f ̈uhrten nicht nur zu Verlusten f ̈ur die Landwirtschaft, sondern sie beschr ̈ankten die M ̈oglichkeiten der staatlichen Wirtschaft , wachsen zu wach-sen , wodurch die Wirtschaftst ̈atigkeit insgesamt von 3,6 Prozent in der ersten H ̈alfte desJahres , haupts ̈achlich durch den historischen R ̈uckgang von 31.16 % in der Landwirtschaft ,beeinflusst wurde , was die Dynamik anderer Wirtschaftssektoren betraf .PretrainedDie niedrigen Temperaturen im Februar f ̈uhrten nicht nur zu Verlusten von Millionen f ̈ur denAgrarsektor , sondern beschr ̈ankten die M ̈oglichkeiten der Staatswirtschaft , zu wachsen , waszu einer Schrumpfung der Wirtschaftst ̈atigkeit im Allgemeinen von 3,6 Prozent in der erstenH ̈alfte des Jahres f ̈uhrte , haupts ̈achlich durch den historischen Einbruch von 316 Prozent inder Landwirtschaft , der die Dynamik anderer Wirtschaftsbereiche beeinflusst hatte .Table 10: The human evaluator noted that the pretrained version is better, as it correctly captures themeaning and 
sentence structure of the middle. The no pretrain model does not misses translatingthe word ”million”, repeats itself in ”wachsen zu wachsen”, and puts the verb ”beeinflusst wurde”is an unnatural position. However, the pretrained model makes a mistake in the percentage (316%instead of 31.16%).16Under review as a conference paper at ICLR 2017SourceTo facilitate the inception of the Second World War, they allowed bankers and politiciansto create a latent conflict situation by saddling Germany with huge war reparations, therebymaking a radicalist example of the impoverished masses, it remained only to introduce asufficiently convincing culprit and a leader with a simple solution, while also creating a multi-racial Czechoslovakia with a strong German minority to play, and indeed did, the role of afifth colony, once the war had been ignited.Ground TruthUm den Zweiten Weltkrieg einfacher entfachen zu k ̈onnen , ließen die Banker durch diePolitik eine latente Konfliktsituation schaffen , indem sie Deutschland mit gigantischenKriegsreparationen belegten ; dadurch schufen sie die V oraussetzung verarmter Massen , sodass sie den Deutschen nur noch einen ausreichend starken F ̈uhrer unterjubeln mussten , derdie Schuldigen benannte und einfache L ̈osungen anbot ; ein weiterer Faktor war die Schaf-fung des Vielv ̈olkerstaates Tschechoslowakei mit einer starken deutschen Minderheit , die dieFunktion einer f ̈unften Kolonne einnehmen sollte und auch einnahm , um den Kriegsbrandzu entfachen .No pretrainUm die Gr ̈undung des Zweiten Weltkriegs zu erleichtern , erm ̈oglichte es den Bankern undPolitikern , eine latente Konfliktlage zu schaffen , indem sie Deutschland mit enormen Repa-rationsforderungen konfrontierte , wodurch ein radikalislamistisches Beispiel der verarmtenMassen entstand , es blieb nur , einen ausreichend aussagekr ̈aftigen Schuldigen und einenF ̈uhrer mit einer einfachen L ̈osung zu etablieren , w ̈ahrend gleichzeitig eine multi-ethnischeTschechoslowakei mit einer starken deutschen Minderheit zu spielen war und tats ̈achlich dieRolle einer f ̈unften Kolonie war .PretrainedUm die Einf ̈uhrung des Zweiten Weltkrieges zu erleichtern , ließen sie Banker und Politikereine latente Konfliktlage schaffen , indem sie Deutschland mit riesigen Reparationszahlungenbelieferten , wodurch ein radikalislamistisches Beispiel der verarmten Massen entstand , esblieb nur , einen ausreichend ̈uberzeugenden Schuldigen und einen F ̈uhrer mit einer einfachenL ̈osung zu pr ̈asentieren , w ̈ahrend gleichzeitig eine multiethnische Tschechoslowakei miteiner starken deutschen Minderheit geschaffen wurde , um zu spielen , und tats ̈achlich , dieRolle einer f ̈unften Kolonie , sobald der Krieg entfacht worden war .Table 11: An example where the English source is poorly worded. 
Both models output poor trans-lations, but the evaluator noted that the pretrained version is still better than the no pretrain version.Interestingly, both models mistranslate ”radical” as ”radikalislamistisches”, which means ”radicalIslam”, which is probably a bias in the training data.17Under review as a conference paper at ICLR 2017SourceThe total vote count will also be done if at the end of the ordinary calculation is establishedthat the difference between the winner and the candidate placed on second position is equalto or less than one percentage point, as long as there is a request of the representative ofthe political party whose candidate came on the second position, case in which there will beexcluded the electoral boxes that have been considered during the partial recount.Ground TruthDie Stimmenausz ̈ahlung kann auch in ihrer Gesamtheit erfolgen , wenn nach Abschlussder ordentlichen Berechnung festgestellt wird , dass der Unterschied zwischen dem mut-maßlichen Gewinner und dem Kandidaten auf dem zweiten Platz gleich oder geringer als einProzent ist , vorausgesetzt es liegt ein ausdr ̈ucklicher Antrag von einem Vertreter der Partei ,deren Kandidat Zweiter geworden ist , vor . In diesem Fall w ̈urden die Wahlpakete , die einerteilweisen Ausz ̈ahlung ausgesetzt wurden , ausgeschlossen .No pretrainDie gesamte Stimmenanzahl wird auch dann erreicht , wenn am Ende der ordentlichenBerechnung festgestellt wird , dass der Unterschied zwischen dem Sieger und dem Kandi-daten , der auf der zweiten Position liegt , gleich oder weniger als einen Prozentpunkt betr ̈agt, vorausgesetzt , dass der Vertreter der Partei , deren Kandidat auf der zweiten Position ist , derFall ist , in dem die Wahlunterlagen , die w ̈ahrend der teilweisen R ̈uckz ̈ahlung ber ̈ucksichtigtwurden , ausgeschlossen werden .PretrainedDie Gesamtzahl der Stimmzettel wird auch dann durchgef ̈uhrt , wenn am Ende der or-dentlichen Berechnung festgestellt wird , dass der Unterschied zwischen dem Gewinner unddem auf den zweiten Platz platzierten Kandidaten gleich oder weniger als einen Prozent-punkt betr ̈agt , solange es einen Antrag des Vertreters der politischen Partei gibt , dessenKandidat auf die zweite Position kam , in dem es die Wahlzettel ausklammert , die w ̈ahrendder Teilz ̈ahlung ber ̈ucksichtigt wurden .Table 12: Another example where the English source is poorly worded. Both models get the struc-ture right, but have a variety of problematic translations. Both models miss the meaning of ”totalvote count”. They both also translate ”electoral boxes” poorly - the no pretrain model calls it ”elec-toral paperwork” while the pretrained model calls it ”ballots”. These failures may be because of thepoorly worded English source. The human evaluator found them both equally poor.18
Sys6GJqxl
Published as a conference paper at ICLR 2017DELVING INTO TRANSFERABLE ADVERSARIAL EX-AMPLES AND BLACK -BOX ATTACKSYanpei Liu, Xinyun ChenShanghai Jiao Tong UniversityChang Liu, Dawn SongUniversity of the California, BerkeleyABSTRACTAn intriguing property of deep neural networks is the existence of adversarial ex-amples, which can transfer among different architectures. These transferable ad-versarial examples may severely hinder deep neural network-based applications.Previous works mostly study the transferability using small scale datasets. In thiswork, we are the first to conduct an extensive study of the transferability overlarge models and a large scale dataset, and we are also the first to study the trans-ferability of targeted adversarial examples with their target labels. We study bothnon-targeted andtargeted adversarial examples, and show that while transferablenon-targeted adversarial examples are easy to find, targeted adversarial examplesgenerated using existing approaches almost never transfer with their target labels.Therefore, we propose novel ensemble-based approaches to generating transfer-able adversarial examples. Using such approaches, we observe a large proportionof targeted adversarial examples that are able to transfer with their target labels forthe first time. We also present some geometric studies to help understanding thetransferable adversarial examples. Finally, we show that the adversarial examplesgenerated using ensemble-based approaches can successfully attack Clarifai.com,which is a black-box image classification system.1 I NTRODUCTIONRecent research has demonstrated that for a deep architecture, it is easy to generate adversarialexamples, which are close to the original ones but are misclassified by the deep architecture (Szegedyet al. (2013); Goodfellow et al. (2014)). The existence of such adversarial examples may have severeconsequences, which hinders vision-understanding-based applications, such as autonomous driving.Most of these studies require explicit knowledge of the underlying models. It remains an openquestion how to efficiently find adversarial examples for a black-box model.Several works have demonstrated that some adversarial examples generated for one model mayalso be misclassified by another model. Such a property is referred to as transferability , whichcan be leveraged to perform black-box attacks. This property has been exploited by constructinga substitute of the black-box model, and generating adversarial instances against the substitute toattack the black-box system (Papernot et al. (2016a;b)). However, so far, transferability is mostlyexamined over small datasets, such as MNIST (LeCun et al. (1998)) and CIFAR-10 (Krizhevsky &Hinton (2009)). It has yet to be better understood transferability over large scale datasets, such asImageNet (Russakovsky et al. (2015)).In this work, we are the first to conduct an extensive study of the transferability of different adver-sarial instance generation strategies applied to different state-of-the-art models trained over a largescale dataset. In particular, we study two types of adversarial examples: (1) non-targeted adversar-ial examples, which can be misclassified by a network, regardless of what the misclassified labelsmay be; and (2) targeted adversarial examples, which can be classified by a network as a targetlabel. We examine several existing approaches searching for adversarial examples based on a singlemodel. 
While non-targeted adversarial examples are more likely to transfer, we observe few targetedadversarial examples that are able to transfer with their target labels.Work is done while visiting UC Berkeley.1Published as a conference paper at ICLR 2017We further propose a novel strategy to generate transferable adversarial images using an ensembleof multiple models. In our evaluation, we observe that this new strategy can generate non-targetedadversarial instances with better transferability than other methods examined in this work. Also, forthe first time, we observe a large proportion of targeted adversarial examples that are able to transferwith their target labels.We study geometric properties of the models in our evaluation. In particular, we show that thegradient directions of different models are orthogonal to each other. We also show that decisionboundaries of different models align well with each other, which partially illustrates why adversarialexamples can transfer.Last, we study whether generated adversarial images can attack Clarifai.com, a commercial com-pany providing state-of-the-art image classification services. We have no knowledge about the train-ing dataset and the types of models used by Clarifai.com; meanwhile, the label set of Clarifai.comis quite different from ImageNet’s. We show that even in this case, both non-targeted and targetedadversarial images transfer to Clarifai.com. This is the first work documenting the success of gen-erating both non-targeted and targeted adversarial examples for a black-box state-of-the-art onlineimage classification system, whose model and training dataset are unknown to the attacker.Contributions and organization. We summarize our main contributions as follows:For ImageNet models, we show that while existing approaches are effective to generatenon-targeted transferable adversarial examples (Section 3), only few targeted adversarialexamples generated by existing methods can transfer (Section 4).We propose novel ensemble-based approaches to generate adversarial examples (Sec-tion 5). Our approaches enable a large portion of targeted adversarial examples to transferamong multiple models for the first time.We are the first to present that targeted adversarial examples generated for models trainedon ImageNet can transfer to a black-box system, i.e., Clarifai.com, whose model, trainingdata, and label set is unknown to us (Section 7). In particular, Clarifai.com’s label set isvery different from ImageNet’s.We conduct the first analysis of geometric properties for large models trained over Ima-geNet (Section 6), and the results reveal several interesting findings, such as the gradientdirections of different models are orthogonal to each other.In the following, we first discuss related work, and then present the background knowledge andexperiment setup in Section 2. Then we present each of our experiments and conclusions in thecorresponding section as mentioned above.Related work. Transferability of adversarial examples was first examined by Szegedy et al.(2013), which studied the transferability (1) between different models trained over the same dataset;and (2) between the same or different model trained over disjoint subsets of a dataset; However,Szegedy et al. (2013) only studied MNIST.The study of transferability was followed by Goodfellow et al. (2014), which attributed the phe-nomenon of transferability to the reason that the adversarial perturbation is highly aligned with theweight vector of the model. 
Again, this hypothesis was tested using MNIST and CIFAR-10 datasets.We show that this is not the case for models trained over ImageNet.Papernot et al. (2016a;b) examined constructing a substitute model to attack a black-box targetmodel. To train the substitute model, they developed a technique that synthesizes a training set andannotates it by querying the target model for labels. They demonstrate that using this approach,black-box attacks are feasible towards machine learning services hosted by Amazon, Google, andMetaMind. Further, Papernot et al. (2016a) studied the transferability between deep neural networksand other models such as decision tree, kNN, etc.Our work differs from Papernot et al. (2016a;b) in three aspects. First, in these works, only the modeland the training process are a black box, but the training set and the test set are controlled by theattacker; in contrast, we attack Clarifai.com, whose model, training data, training process, and eventhe test label set are unknown to the attacker. Second, the datasets studied in these works are small2Published as a conference paper at ICLR 2017scale, i.e., MNIST and GTSRB (Stallkamp et al. (2012)); in our work, we study the transferabilityover larger models and a larger dataset, i.e., ImageNet. Third, to attack black-box machine learningsystems, we do not query the systems for constructing the substitute model ourselves.In a concurrent and independent work, Moosavi-Dezfooli et al. (2016) showed the existence of auniversal perturbation for each model, which can transfer across different images. They also showthat the adversarial images generated using these universal perturbations can transfer across differentmodels on ImageNet. However, they only examine the non-targeted transferability, while our workstudies both non-targeted and targeted transferability over ImageNet.2 A DVERSARIAL DEEPLEARNING AND TRANSFERABILITY2.1 T HE ADVERSARIAL DEEP LEARNING PROBLEMWe assume a classifier f(x)outputs a category (or a label) as the prediction. Given an originalimagex, with ground truth label y, the adversarial deep learning problem is to seek for adversarialexamples for the classifier f(x). Specifically, we consider two classes of adversarial examples.Anon-targeted adversarial example x?is an instance that is close to x, in which case x?shouldhave the same ground truth as x, whilef(x?)6=y. For the problem to be non-trivial, we assumef(x) =ywithout loss of generality. A targeted adversarial example x?is close toxand satisfiesf(x?) =y?, wherey?is a target label specified by the adversary, and y?6=y.2.2 A PPROACHES FOR GENERATING ADVERSARIAL EXAMPLESIn this work, we consider three classes of approaches for generating adversarial examples:optimization-based approaches, fast gradient approaches, and fast gradient sign approaches. Eachclass has non-targeted and targeted versions respectively.2.2.1 A PPROACHES FOR GENERATING NON -TARGETED ADVERSARIAL EXAMPLESFormally, given an image xwith ground truth y=f(x), searching for a non-targeted adversarialexample can be modeled as searching for an instance x?to satisfy the following constraints:f(x?)6=y (1)d(x;x?)B (2)whered(;)is a metric to quantify the distance between an original image and its adversarial coun-terpart, andB, called distortion , is an upper bound placed on this distance. Without loss of gener-ality, we consider model fis composed of a network J(x), which outputs the probability for eachcategory, so that foutputs the category with the highest probability.Optimization-based approach. 
One approach is to approximate the solution to the following optimization problem:

\arg\min_{x^\star} \; \lambda\, d(x, x^\star) - \ell(1_y, J(x^\star))   (3)

where 1_y is the one-hot encoding of the ground truth label y, \ell is a loss function measuring the distance between the prediction and the ground truth, and \lambda is a constant balancing constraints (2) and (1), which is empirically determined. Here, the loss function \ell is used to approximate constraint (1), and its choice can affect the effectiveness of the search for an adversarial example. In this work, we choose \ell(u, v) = \log(1 - u \cdot v), which is shown to be effective by Carlini & Wagner (2016).

Fast gradient sign (FGS). Goodfellow et al. (2014) proposed the fast gradient sign (FGS) method so that the gradient needs to be computed only once to generate an adversarial example. FGS can be used to generate adversarial images that meet an L_\infty norm bound. Formally, non-targeted adversarial examples are constructed as

x^\star \leftarrow \mathrm{clip}\big(x + B \, \mathrm{sgn}(\nabla_x \ell(1_y, J(x)))\big)

Here, \mathrm{clip}(x) is used to clip each dimension of x to the range of pixel values, i.e., [0, 255] in this work. We make a slight variation to choose \ell(u, v) = \log(1 - u \cdot v), which is the same as used in the optimization-based approach.

Fast gradient (FG). The fast gradient approach (FG) is similar to FGS, but instead of moving along the gradient sign direction, FG moves along the gradient direction. In particular, we have

x^\star \leftarrow \mathrm{clip}\left(x + B \, \frac{\nabla_x \ell(1_y, J(x))}{\lVert \nabla_x \ell(1_y, J(x)) \rVert}\right)

Here, we assume the distance metric in constraint (2), d(x, x^\star) = \lVert x - x^\star \rVert, is a norm of x - x^\star. The term \mathrm{sgn}(\nabla_x \ell) in FGS is replaced by \nabla_x \ell / \lVert \nabla_x \ell \rVert to meet this distance constraint. We call both FGS and FG fast gradient-based approaches.

2.2.2 APPROACHES FOR GENERATING TARGETED ADVERSARIAL EXAMPLES

A targeted adversarial image x^\star is similar to a non-targeted one, but constraint (1) is replaced by

f(x^\star) = y^\star   (4)

where y^\star is the target label given by the adversary. For the optimization-based approach, we approximate the solution by solving the following dual objective:

\arg\min_{x^\star} \; \lambda\, d(x, x^\star) + \ell'(1_{y^\star}, J(x^\star))   (5)

In this work, we choose the standard cross-entropy loss \ell'(u, v) = -\sum_i u_i \log v_i. For FGS and FG, we construct targeted adversarial examples as follows:

x^\star \leftarrow \mathrm{clip}\big(x - B \, \mathrm{sgn}(\nabla_x \ell'(1_{y^\star}, J(x)))\big)   (FGS)

x^\star \leftarrow \mathrm{clip}\left(x - B \, \frac{\nabla_x \ell'(1_{y^\star}, J(x))}{\lVert \nabla_x \ell'(1_{y^\star}, J(x)) \rVert}\right)   (FG)

where \ell' is the same as the one used for the optimization-based approach.
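To make the update rules above concrete, here is a minimal sketch of the non-targeted FGS and FG constructions in PyTorch. It assumes a `model` callable that returns softmax probabilities J(x), a single image (batch size 1) with pixel values in [0, 255], and integer ground-truth labels; the function names and the small stabilizing constant are our own additions, not part of the paper.

```python
import torch
import torch.nn.functional as F

def nontargeted_loss(probs, y):
    # l(1_y, J(x)) = log(1 - 1_y . J(x)); increasing it lowers the true-class probability
    one_hot = F.one_hot(y, num_classes=probs.shape[1]).float()
    return torch.log(1.0 - (one_hot * probs).sum(dim=1) + 1e-12).sum()

def fgs(model, x, y, B):
    """Non-targeted fast gradient sign: x* <- clip(x + B * sgn(grad_x l))."""
    x_var = x.clone().detach().requires_grad_(True)
    loss = nontargeted_loss(model(x_var), y)
    grad, = torch.autograd.grad(loss, x_var)
    return (x + B * grad.sign()).clamp(0.0, 255.0).detach()

def fg(model, x, y, B):
    """Non-targeted fast gradient: x* <- clip(x + B * grad / ||grad||)."""
    x_var = x.clone().detach().requires_grad_(True)
    loss = nontargeted_loss(model(x_var), y)
    grad, = torch.autograd.grad(loss, x_var)
    return (x + B * grad / (grad.norm() + 1e-12)).clamp(0.0, 255.0).detach()
```

The targeted variants follow the same pattern: subtract the update instead of adding it and replace the loss with the cross entropy toward the target label, as in the formulas above.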
2.3 EVALUATION METHODOLOGY

For the rest of the paper, we focus on examining the transferability among state-of-the-art models trained over ImageNet (Russakovsky et al. (2015)). In this section, we detail the models to be examined, the dataset to be evaluated, and the measurements to be used.

Models. We examine five networks: ResNet-50, ResNet-101, ResNet-152 (He et al. (2015))1, GoogLeNet (Szegedy et al. (2014))2, and VGG-16 (Simonyan & Zisserman (2014))3. We retrieve the pre-trained models for each network online. The performance of these models on the ILSVRC 2012 (Russakovsky et al. (2015)) validation set can be found in our online technical report: Liu et al. (2016). We choose these models to study the transferability between homogeneous architectures (i.e., ResNet models) and heterogeneous architectures.

Dataset. It is less meaningful to examine the transferability of an adversarial image between two models which cannot classify the original image correctly. Therefore, from the ILSVRC 2012 validation set, we randomly choose 100 images which can be classified correctly by all five models in our examination. These 100 images form our test set. To perform targeted attacks, we manually choose a target label for each image, so that its semantics is far from the ground truth. The images and target labels in our evaluation can be found on our website4.

Measuring transferability. Given two models, we measure the non-targeted transferability by computing the percentage of the adversarial examples generated for one model that are classified correctly by the other. We refer to this percentage as accuracy. A lower accuracy means better non-targeted transferability. We measure the targeted transferability by computing the percentage of the adversarial examples generated for one model that are classified as the target label by the other model. We refer to this percentage as matching rate. A higher matching rate means better targeted transferability. For clarity, the reported results are based on top-1 accuracy only; the top-5 counterparts can be found in our online technical report: Liu et al. (2016).

1 https://github.com/KaimingHe/deep-residual-networks
2 https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet
3 https://gist.github.com/ksimonyan/211839e770f7b538e2d8
4 https://github.com/sunblaze-ucb/transferability-advdnn-pub

Distortion. Besides transferability, another important factor is the distortion between adversarial images and the original ones. We measure the distortion by root mean square deviation (RMSD), which is computed as d(x^\star, x) = \sqrt{\sum_i (x^\star_i - x_i)^2 / N}, where x^\star and x are the vector representations of an adversarial image and the original one respectively, N is the dimensionality of x and x^\star, and x_i denotes the pixel value of the i-th dimension of x, within range [0, 255], and similarly for x^\star_i.

3 NON-TARGETED ADVERSARIAL EXAMPLES

In this section, we examine different approaches for generating non-targeted adversarial images.

3.1 OPTIMIZATION-BASED APPROACH

To apply the optimization-based approach for a single model, we initialize x^\star to be x and use the Adam optimizer (Kingma & Ba (2014)) to optimize Objective (3). We find that we can tune the RMSD by adjusting the learning rate of Adam and \lambda. For each model, a small learning rate yields adversarial images with small RMSD, i.e., < 2, for any \lambda. In fact, we find that when x^\star is initialized with x, the Adam optimizer will search for an adversarial example around x even when we set \lambda to 0, i.e., without restricting the distance between x^\star and x. Therefore, we set \lambda to 0 for all experiments using optimization-based approaches throughout the paper. Although these adversarial examples with small distortions can successfully fool the target model, they cannot transfer well to other models (details can be found in our online technical report: Liu et al. (2016)).

We increase the learning rate to allow the optimization algorithm to search for adversarial images with larger distortion. In particular, we set the learning rate to 4. We run the Adam optimizer for 100 iterations to generate the adversarial images, and we observe that the loss converges after 100 iterations. An alternative optimization-based approach leading to similar results can be found in our online technical report: Liu et al. (2016).

Non-targeted adversarial examples transfer. We generate non-targeted adversarial examples on one network but evaluate them on another, and Table 1 Panel A presents the results. From the table, we can observe the following. The diagonal contains all 0 values.
This says that all adversarial images generated for onemodel can mislead the same model.A large proportion of non-targeted adversarial images generated for one model using theoptimization-based approach can transfer to another.Although the three ResNet models share similar architectures which differ only in the hy-perparameters, adversarial examples generated against a ResNet model do not necessarilytransfer to another ResNet model better than other non-ResNet models. For example, theadversarial examples generated for VGG-16 have lower accuracy on ResNet-50 than thosegenerated for ResNet-152 or ResNet-101.3.2 F AST GRADIENT -BASED APPROACHESWe then examine the effectiveness of fast gradient-based approaches. A good property of fastgradient-based approaches is that all generated adversarial examples lie in a 1-D subspace. There-fore, we can easily approximate the minimal distortion in this subspace of transferable adversarialexamples between two models. In the following, we first control the RMSD to study fast gradient-based approaches’ effectiveness. Second, we study the transferable minimal distortions of fastgradient-based approaches.3.2.1 E FFECTIVENESS AND TRANSFERABILITY OF THE FAST GRADIENT -BASEDAPPROACHESSince the distortion Band the RMSD of the generated adversarial images are highly correlated, wecan choose this hyperparameter Bto generate adversarial images with a given RMSD. In Table 15Published as a conference paper at ICLR 2017RMSD ResNet-152 ResNet-101 ResNet-50 VGG-16 GoogLeNetResNet-152 22.83 0% 13% 18% 19% 11%ResNet-101 23.81 19% 0% 21% 21% 12%ResNet-50 22.86 23% 20% 0% 21% 18%VGG-16 22.51 22% 17% 17% 0% 5%GoogLeNet 22.58 39% 38% 34% 19% 0%Panel A: Optimization-based approachRMSD ResNet-152 ResNet-101 ResNet-50 VGG-16 GoogLeNetResNet-152 23.45 4% 13% 13% 20% 12%ResNet-101 23.49 19% 4% 11% 23% 13%ResNet-50 23.49 25% 19% 5% 25% 14%VGG-16 23.73 20% 16% 15% 1% 7%GoogLeNet 23.45 25% 25% 17% 19% 1%Panel B: Fast gradient approachTable 1: Transferability of non-targeted adversarial images generated between pairs of models. Thefirst column indicates the average RMSD of all adversarial images generated for the model in thecorresponding row. The cell (i;j)indicates the accuracy of the adversarial images generated formodeli(row) evaluated over model j(column). Results of top-5 accuracy can be found in ouronline technical report: Liu et al. (2016).Panel B, we generate adversarial images using FG such that the average RMSD is almost the sameas those generated using the optimization-based approach. We observe that the diagonal values inthe table are all positive, which means that FG cannot fully mislead the models. A potential reasonis that, FG can be viewed as approximating the optimization, but is tailored for speed over accuracy.On the other hand, the values of non-diagonal cells in the table, which correspond to the accuraciesof adversarial images generated for one model but evaluated on another, are comparable with or lessthan their counterparts in the optimization-based approach. This shows that non-targeted adversarialexamples generated by FG exhibit transferability as well.We also evaluate FGS, but the transferability of the generated images is worse than the ones gen-erated using either FG or optimization-based approaches. The results can be found in our onlinetechnical report: Liu et al. (2016). It shows that when RMSD is around 23, the accuracies of theadversarial images generated by FGS is greater than their counterparts for FG. 
3.2.2 ADVERSARIAL IMAGES WITH MINIMAL TRANSFERABLE RMSD

For an image x and two models M1, M2, we can approximate the minimal distortion B along a direction δ such that x_B = x + Bδ, generated for M1, is adversarial for both M1 and M2. Here δ is the direction, i.e., sgn(∇_x ℓ) for FGS and ∇_x ℓ / ||∇_x ℓ|| for FG.

We refer to the RMSD of a transferable adversarial example x_B with the minimal transferable distortion B from M1 to M2 using FG (or FGS) as the minimal transferable RMSD from M1 to M2 using FG (or FGS). The minimal transferable RMSD illustrates the tradeoff between distortion and transferability.

In the following, we approximate the minimal transferable RMSD through a linear search, sampling B in steps of 0.1. We choose the linear-search method rather than a binary search to determine the minimal transferable RMSD because the adversarial images generated from an original image may come from multiple intervals of B. The experiment can be found in our online technical report: Liu et al. (2016).

Minimal transferable RMSD using FG and FGS. Figure 1 plots the cumulative distribution function (CDF) of the minimal transferable RMSD from VGG-16 to ResNet-152 using non-targeted FG (Figure 1a) and FGS (Figure 1b). From the figures, we observe that both FG and FGS can find 100% transferable adversarial images with RMSD less than 80.91 and 86.56 respectively. Further, the FG method can generate transferable attacks with smaller RMSD than FGS. A potential reason is that while FGS minimizes the distortion's L∞ norm, FG minimizes its L2 norm, which is proportional to the RMSD.

Figure 1: The CDF of the minimal transferable RMSD from VGG-16 to ResNet-152 using FG (a) and FGS (b). The green line marks the median minimal transferable RMSD, while the red line marks the minimal transferable RMSD needed to reach 90%.
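A sketch of the linear search for the minimal transferable RMSD, reusing the fast_gradient and rmsd helpers sketched above; the search cap max_B is an arbitrary bound chosen for illustration, not a value from our experiments.

```python
def minimal_transferable_rmsd(model1, model2, x, y_true, step=0.1, max_B=200.0, sign=False):
    """Approximate the minimal transferable RMSD from model1 to model2.

    Increases the distortion B in steps of 0.1 along the FG/FGS direction of
    model1 until x_B = x + B*delta fools both model1 and model2.
    """
    B = step
    while B <= max_B:
        x_adv = fast_gradient(model1, x, y_true, B, sign=sign)
        fools1 = model1(x_adv).argmax(dim=1).item() != y_true
        fools2 = model2(x_adv).argmax(dim=1).item() != y_true
        if fools1 and fools2:
            return rmsd(x_adv, x).item()
        B += step
    return None   # no transferable example found within the search range
```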
3.3 COMPARISON WITH RANDOM PERTURBATIONS

We also evaluate the test accuracy when adding Gaussian noise to the 100 images in our test set. The concrete results can be found in our online technical report: Liu et al. (2016), where we show that the "transferability" of this approach is significantly worse than that of either the optimization-based or the fast gradient-based approaches.

4 TARGETED ADVERSARIAL EXAMPLES

In this section, we examine the transferability of targeted adversarial images. Table 2 presents the results for the optimization-based approach. We observe that (1) the prediction of targeted adversarial images can match the target labels when they are evaluated on the same model that was used to generate them; but (2) the targeted adversarial images are rarely predicted as the target labels by a different model. We say in the latter case that the target labels do not transfer. Even when we increase the distortion, we still do not observe improvements in making the target labels transfer. Some results can be found in our online technical report: Liu et al. (2016). Even if we compute the matching rate based on top-5 accuracy, the highest matching rate is only 10%. The results can be found in our online technical report: Liu et al. (2016).

             RMSD   ResNet-152  ResNet-101  ResNet-50  VGG-16  GoogLeNet
ResNet-152   23.13     100%         2%          1%        1%       1%
ResNet-101   23.16       3%       100%          3%        2%       1%
ResNet-50    23.06       4%         2%        100%        1%       1%
VGG-16       23.59       2%         1%          2%      100%       1%
GoogLeNet    22.87       1%         1%          0%        1%     100%

Table 2: The matching rate of targeted adversarial images generated using the optimization-based approach. The first column indicates the average RMSD of the generated adversarial images. Cell (i, j) indicates the matching rate of the targeted adversarial images generated for model i (row) when evaluated on model j (column). The top-5 results can be found in our online technical report: Liu et al. (2016).

We also examine the targeted adversarial images generated by fast gradient-based approaches and observe that their target labels do not transfer either. The results can be found in our online technical report: Liu et al. (2016). In fact, most targeted adversarial images cannot even mislead the model for which they were generated into predicting the target label, regardless of how large a distortion is used. We attribute this to the fact that the fast gradient-based approaches only search for attacks in a 1-D subspace. In this subspace, the set of reachable predictions may contain only a small subset of all labels, which usually does not contain the target label. In Section 6, we study decision boundaries with regard to this issue.

We also evaluate the matching rate of images perturbed with Gaussian noise, as described in Section 3.3, and observe that the matching rate on each of the 5 models is 0%. Therefore, we conclude that by adding Gaussian noise, the attacker cannot generate successful targeted adversarial examples at all, let alone achieve targeted transferability.

5 ENSEMBLE-BASED APPROACHES

We hypothesize that if an adversarial image remains adversarial for multiple models, then it is more likely to transfer to other models as well. We therefore develop techniques to generate adversarial images for multiple models. The basic idea is to generate adversarial images for the ensemble of the models. Formally, given k white-box models with softmax outputs J_1, ..., J_k, an original image x, and its ground truth y, the ensemble-based approach solves the following optimization problem (for a targeted attack):

argmin_{x*}  −log( ( Σ_{i=1}^{k} α_i J_i(x*) ) · 1_{y*} ) + λ d(x, x*)        (6)

where y* is the target label specified by the adversary, Σ_i α_i J_i(x*) is the ensemble model, and the α_i are the ensemble weights, with Σ_{i=1}^{k} α_i = 1. Note that (6) is the targeted objective; the non-targeted counterpart can be derived similarly. In doing so, we hope that the generated adversarial images remain adversarial for an additional black-box model J_{k+1}.

We evaluate the effectiveness of the ensemble-based approach as follows. For each of the five models, we treat it as the black-box model to attack and generate adversarial images for the ensemble of the remaining four, which is considered white-box. We evaluate the generated adversarial images over all five models. Throughout the rest of the paper, we refer to the approaches evaluated in Sections 3 and 4 as the approaches using a single model, and to the ensemble-based approaches discussed in this section as the approaches using an ensemble model.

Optimization-based approach. We use Adam to optimize Objective (6) with equal ensemble weights across all models in the ensemble to generate targeted adversarial examples. In particular, we set the learning rate of Adam to 8 for each model. In each iteration, we compute the Adam update for each model, sum up the four updates, and add the aggregate onto the image. We run 100 iterations of updates and observe that the loss converges after 100 iterations.
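A sketch of a targeted attack on the ensemble objective (6) with equal weights α_i = 1/k is shown below. Unlike the exact procedure described above, which sums the per-model Adam updates in every iteration, this simplification applies a single Adam instance to the summed loss; the rmsd helper is the one sketched in Section 3.1.

```python
import torch

def ensemble_targeted_attack(models, x, y_target, lr=8.0, steps=100, lam=0.0):
    """Targeted ensemble-based attack, roughly optimizing Objective (6).

    `models` is a list of white-box models returning softmax probabilities;
    equal ensemble weights alpha_i = 1/k are used.
    """
    k = len(models)
    x_adv = x.clone().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Probability the ensemble assigns to the target label y*.
        ens_prob = sum(model(x_adv)[0, y_target] for model in models) / k
        loss = -torch.log(ens_prob + 1e-12) + lam * rmsd(x_adv, x)
        loss.backward()
        opt.step()
        x_adv.data.clamp_(0, 255)
    return x_adv.detach()
```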
By doing so, for thefirst time, we observe a large proportion of the targeted adversarial images whose target labels cantransfer. The results are presented in Table 3. We observe that not all targeted adversarial imagescan be misclassified to the target labels by the models used in the ensemble. This suggests thatwhile searching for an adversarial example for the ensemble model, there is no direct supervision tomislead any individual model in the ensemble to predict the target label. Further, from the diagonalnumbers of the table, we observe that the transferability to ResNet models is better than to VGG-16or GoogLeNet, when adversarial examples are generated against all models except the target model.We also evaluate non-targeted adversarial images generated by the ensemble-based approach. Weobserve that the generated adversarial images have almost perfect transferability. We use the sameprocedure as for the targeted version, except the objective to generate the adversarial images. Weevaluate the generated adversarial images over all models. The results are presented in Table 4.The generated adversarial images all have RMSDs around 17, which are lower than 22 to 23 ofthe optimization-based approach using a single model (See Table 1 for comparison). When theadversarial images are evaluated over models which are not used to generate the attack, the accuracyis no greater than 6%. For a reference, the corresponding accuracies for all approaches evaluated inSection 3 using one single model are at least 12%. Our experiments demonstrate that the ensemble-based approaches can generate almost perfectly transferable adversarial images.Fast gradient-based approach. The results for non-targeted fast gradient-based approaches ap-plied to the ensemble can be found in our online technical report: Liu et al. (2016). We observethat the diagonal values are not zero, which is the same as we observed in the results for FG and8Published as a conference paper at ICLR 2017RMSD ResNet-152 ResNet-101 ResNet-50 VGG-16 GoogLeNet-ResNet-152 30.68 38% 76% 70% 97% 76%-ResNet-101 30.76 75% 43% 69% 98% 73%-ResNet-50 30.26 84% 81% 46% 99% 77%-VGG-16 31.13 74% 78% 68% 24% 63%-GoogLeNet 29.70 90% 87% 83% 99% 11%Table 3: The matching rate of targeted adversarial images generated using the optimization-basedapproach. The first column indicates the average RMSD of the generated adversarial images. Cell(i;j)indicates that percentage of the targeted adversarial images generated for the ensemble of thefour models except model i(row) is predicted as the target label by model j(column). In each row,the minus sign “” indicates that the model of the row is not used when generating the attacks.Results of top-5 matching rate can be found in our online technical report: Liu et al. (2016).RMSD ResNet-152 ResNet-101 ResNet-50 VGG-16 GoogLeNet-ResNet-152 17.17 0% 0% 0% 0% 0%-ResNet-101 17.25 0% 1% 0% 0% 0%-ResNet-50 17.25 0% 0% 2% 0% 0%-VGG-16 17.80 0% 0% 0% 6% 0%-GoogLeNet 17.41 0% 0% 0% 0% 5%Table 4: Accuracy of non-targeted adversarial images generated using the optimization-based ap-proach. The first column indicates the average RMSD of the generated adversarial images. Cell(i;j)corresponds to the accuracy of the attack generated using four models except model i(row)when evaluated over model j(column). In each row, the minus sign “ ” indicates that the modelof the row is not used when generating the attacks. Results of top-5 accuracy can be found in ouronline technical report: Liu et al. (2016).FGS applied to a single model. 
We hypothesize that a potential reason is that the gradient directions of different models in the ensemble are orthogonal to each other, as we will illustrate in Section 6. In this case, the gradient direction of the ensemble is almost orthogonal to that of each individual model in the ensemble, and searching along this direction may therefore require a large distortion to reach adversarial examples.

For targeted adversarial examples generated using FG and FGS based on an ensemble model, the transferability is no better than that of the ones generated using a single model. The results can be found in our online technical report: Liu et al. (2016). We hypothesize the same reason to explain this: there are only a few possible target labels in total in the 1-D subspace.

6 GEOMETRIC PROPERTIES OF DIFFERENT MODELS

In this section, we examine some geometric properties of the models in order to better understand transferable adversarial examples. Prior works also try to understand the geometric properties of adversarial examples theoretically (Fawzi et al. (2016)) or empirically (Goodfellow et al. (2014)). In this work, we examine large models trained on a large dataset with 1000 labels, whose geometric properties have not been examined before. This allows us to make new observations that help us better understand the models and their adversarial examples.

The gradient directions of different models in our evaluation are almost orthogonal to each other. We study whether the adversarial directions of different models align with each other. We calculate the cosine of the angle between the gradient directions of different models; the results can be found in our online technical report: Liu et al. (2016). We observe that all non-diagonal values are close to 0, which indicates that for most images, the gradient directions with respect to different models are orthogonal to each other.

Decision boundaries of the non-targeted approaches using a single model. We study the decision boundaries of different models to understand why adversarial examples transfer. We choose two normalized orthogonal directions δ1 and δ2, one being the gradient direction of VGG-16 and the other being randomly chosen. Each point (u, v) in this 2-D plane corresponds to the image x + u·δ1 + v·δ2, where x is the pixel value vector of the original image.

Figure 2: The example image used to study the decision boundary. Its ID in the ILSVRC 2012 validation set is 49443, and its ground truth label is "anemone fish."

Figure 3: Decision regions of different models (columns: VGG-16, ResNet-50, ResNet-101, ResNet-152, GoogLeNet; rows: zoom-in and zoom-out views). We pick the same two directions for all plots: one is the gradient direction of VGG-16 (x-axis), and the other is a random orthogonal direction (y-axis). Each point in the spanned plane shows the predicted label of the image generated by adding the corresponding noise to the original image (e.g., the origin corresponds to the predicted label of the original image). The units of both axes are single pixel values. All sub-figures plot the regions on the spanned plane using the same color for the same label. The image is the one shown in Figure 2.
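A sketch of the two measurements used in this section follows: the cosine of the angle between the gradient directions of two models, and a scan of predicted labels over the plane spanned by δ1 (the gradient direction of a reference model, VGG-16 in Figure 3) and a random orthogonal direction δ2. The scan radius and step size are illustrative choices, not the exact plotting ranges.

```python
import torch

def gradient_direction(model, x, y_true):
    # L2-normalized gradient of the true-label cross-entropy w.r.t. the image.
    x = x.clone().requires_grad_(True)
    loss = -torch.log(model(x)[0, y_true] + 1e-12)
    grad, = torch.autograd.grad(loss, x)
    return (grad / grad.norm()).flatten()

def gradient_cosine(model_a, model_b, x, y_true):
    # Cosine of the angle between the two models' gradient directions.
    return torch.dot(gradient_direction(model_a, x, y_true),
                     gradient_direction(model_b, x, y_true)).item()

def decision_plane(model, ref_model, x, y_true, radius=21, step=1.0):
    """Predicted label at each point x + u*d1 + v*d2 of the scanned plane.

    d1 is the gradient direction of `ref_model`, d2 a random direction made
    orthogonal to d1; u and v are measured in pixel values.
    """
    d1 = gradient_direction(ref_model, x, y_true)
    d2 = torch.randn_like(d1)
    d2 = d2 - torch.dot(d2, d1) * d1      # project out d1
    d2 = d2 / d2.norm()
    labels = {}
    with torch.no_grad():
        for u in torch.arange(-radius, radius + step, step):
            for v in torch.arange(-radius, radius + step, step):
                point = (x + (u * d1 + v * d2).view_as(x)).clamp(0, 255)
                labels[(u.item(), v.item())] = model(point).argmax(dim=1).item()
    return labels
```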
For each model, we plot the label of theimage corresponding to each point, and get Figure 3 using the image in Figure 2.We can observe that for all models, the region that each model can predict the image correctlyis limited to the central area. Also, along the gradient direction, the classifiers are soon misled.One interesting finding is that along this gradient direction, the first misclassified label for the threeResNet models (corresponding to the light green region) is the label “orange”. A more detailedstudy can be found in our online technical report: Liu et al. (2016). When we look at the zoom-out figures, however, the labels of images that are far away from the original one are different fordifferent models, even among ResNet models.On the other hand, in Table 5, we show the total number of regions in each plane. In fact, for eachplane, there are at most 21 different regions in all planes. Compared with the 1,000 total categoriesin ImageNet, this is only 2.1% of all categories. That means, for all other 97.9% labels, no targetedadversarial example exists in each plane. Such a phenomenon partially explains why fast gradient-based approaches can hardly find targeted adversarial images.Further, in Figure 4, we draw the decision boundaries of all models on the same plane as describedabove. We can observe that10Published as a conference paper at ICLR 2017Model VGG-16 ResNet-50 ResNet-101 ResNet-152 GoogLeNet# of labels 10 9 21 10 21Table 5: The number of all possible predicted labels for each model in the same plane described in Figure 3.50 0 50 100604020020406080VGG-16ResNet-101ResNet-152ResNet-50GoogLeNetFigure 4: The decision boundary to sep-arate the region within which all pointsare classified as the ground truth label(encircled by each closed curve) fromothers. The plane is the same one de-scribed in Figure 3. The origin ofthe coordinate plane corresponds to theoriginal image. The units of both axisesare 1 pixel values.50 0 50 1006040200204060 ResNet-101VGG-16ResNet-50ResNet-152GoogLeNetFigure 5: The decision boundary to separate theregion within which all points are classified as thetarget label (encircled by each closed curve) fromothers. The plane is spanned by the targeted ad-versarial direction and a random orthogonal di-rection. The targeted adversarial direction is com-puted as the difference between the original imagein Figure 2 and the adversarial image generated bythe optimization-based approach for an ensemble.The ensemble contains all models except ResNet-101. The origin of the coordinate plane corre-sponds to the original image. The units of bothaxises are 1 pixel values.The boundaries align with each other very well. This partially explains why non-targetedadversarial images can transfer among models.The boundary diameters along the gradient direction is less than the ones along the ran-dom direction. A potential reason is that moving a variable along its gradient directioncan change the loss function (i.e., the probability of the ground truth label) significantly.Therefore along the gradient direction it will take fewer steps to move out of the groundtruth region than a random direction.An interesting finding is that even though we move left along the x-axis, which is equivalentto maximizing the ground truth’s prediction probability, it also reaches the boundary muchsooner than moving along a random direction. 
We attribute this to the non-linearity of theloss function: when the distortion is larger, the gradient direction also changes dramatically.In this case, moving along the original gradient direction no longer increases the probabilityto predict the ground truth label (details can be found in our online technical report: Liuet al. (2016)).As for VGG-16 model, there is a small hole within the region corresponding to the groundtruth. This may partially explain why non-targeted adversarial images with small distortionexist, but do not transfer well. This hole does not exist in other models’ decision planes. Inthis case, non-targeted adversarial images in this hole do not transfer.Decision boundaries of the targeted ensemble-based approaches. In addition, we choose thetargeted adversarial direction of the ensemble of all models except ResNet-101 and a random or-thogonal direction, and we plot decision boundaries on the plane spanned by these two directionvectors in Figure 5. We observe that the regions of images, which are predicted as the target label,align well for the four models in the ensemble. However, for the model not used to generate theadversarial image, i.e., ResNet-101, it also has a non-empty region such that the prediction is suc-cessfully misled to the target label, although the area is much smaller. Meanwhile, the region withineach closed curve of the models almost has the same center.11Published as a conference paper at ICLR 20177 R EAL WORLD EXAMPLE :ADVERSARIAL EXAMPLES FOR CLARIFAI .COMClarifai.com is a commercial company providing state-of-the-art image classification services. Wehave no knowledge about the dataset and types of models used behind Clarifai.com, except that wehave black-box access to the services. The labels returned from Clarifai.com are also different fromthe categories in ILSVRC 2012. We submit all 100 original images to Clarifai.com and the returnedlabels are correct based on a subjective measure.We also submit 400 adversarial images in total, where 200 of them are targeted adversarial examples,and the rest 200 are non-targeted ones. As for the 200 targeted adversarial images, 100 of themare generated using the optimization-based approach based on VGG-16 (the same ones evaluatedin Table 2), and the rest 100 are generated using the optimization-based approach based on anensemble of all models except ResNet-152 (the same ones evaluated in Table 3). The 200 non-targeted adversarial examples are generated similarly (the same ones evaluated in Table 1 and 4).For non-targeted adversarial examples, we observe that for both the ones generated using VGG-16and those generated using the ensemble, most of them can transfer to Clarifai.com.More importantly, a large proportion of our targeted adversarial examples are misclassified by Clari-fai.com as well. We observe that 57% of the targeted adversarial examples generated using VGG-16,and76% of the ones generated using the ensemble can mislead Clarifai.com to predict labels irrele-vant to the ground truth.Further, our experiment shows that for targeted adversarial examples, 18% of those generated us-ing the ensemble model can be predicted as labels close to the target label by Clarifai.com. Thecorresponding number for the targeted adversarial examples generated using VGG-16 is 2%. 
Con-sidering that in the case of attacking Clarifai.com, the labels given by the target model are differentfrom those given by our models, it is fairly surprising to see that when using the ensemble-basedapproach, there is still a considerable proportion of our targeted adversarial examples that can mis-lead this black-box model to make predictions semantically similar to our target labels. All thesenumbers are computed based on a subjective measure, and we include some examples in Table 6.More examples can be found in our online technical report: Liu et al. (2016).originalimagetruelabelClarifai.comresults oforiginal imagetargetlabeltargetedadversarialexampleClarifai.com resultsof targetedadversarial exampleviaductbridge,sight,arch,river,skywindowscreenwindow,wall,old,decoration,designhip, rosehip,rosehipfruit,fall,food,little,wildlifestupa,topeBuddha,gold,temple,celebration,artisticdogsled,dogsled,dogsleighgroup together,four,sledge,sled,enjoymenthip, rosehip,rosehipcherry,branch,fruit,food,season12Published as a conference paper at ICLR 2017pug,pug-dogpug,friendship,adorable,purebred,sitsea lionsea seal,ocean,head,sea,cuteOldEnglishsheep-dog,bobtailpoodle,retriever,loyalty,sit,twoabayaveil,spirituality,religion,people,illustrationmaillot,tank suitbeach,woman,adult,wear,portraitamphib-ian,amphibi-ousvehicletransportationsystem,vehicle,man,print,retropatas,hussarmonkey,Erythro-cebuspatasprimate,monkey,safari,sit,lookingbee eaterornithology,avian,beak,wing,featherTable 6: Original images and adversarial images evaluated over Clarifai.com. For labels returnedfrom Clarifai.com, we sort the labels firstly by rareness: how many times a label appears in theClarifai.com results for all adversarial images and original images, and secondly by confidence.Only top 5 labels are provided.8 C ONCLUSIONIn this work, we are the first to conduct an extensive study of the transferability of both non-targetedand targeted adversarial examples generated using different approaches over large models and alarge scale dataset. Our results confirm that the transferability for non-targeted adversarial exam-ples are prominent even for large models and a large scale dataset. On the other hand, we find thatit is hard to use existing approaches to generate targeted adversarial examples whose target labelscan transfer. We develop novel ensemble-based approaches, and demonstrate that they can gen-erate transferable targeted adversarial examples with a high success rate. Meanwhile, these newapproaches exhibit better performance on generating non-targeted transferable adversarial examplesthan previous work. We also show that both non-targeted and targeted adversarial examples gen-erated using our new approaches can successfully attack Clarifai.com, which is a black-box imageclassification system. Furthermore, we study some geometric properties to better understand thetransferable adversarial examples.ACKNOWLEDGMENTSThis material is in part based upon work supported by the National Science Foundation under GrantNo. TWC-1409915. Any opinions, findings, and conclusions or recommendations expressed in thismaterial are those of the author(s) and do not necessarily reflect the views of the National ScienceFoundation.REFERENCESNicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. arXivpreprint arXiv:1608.04644 , 2016.13Published as a conference paper at ICLR 2017Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, and Pascal Frossard. Robustness of classifiers:from adversarial to random noise. 
In Advances in Neural Information Processing Systems , pp.1624–1632, 2016.Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarialexamples. arXiv preprint arXiv:1412.6572 , 2014.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog-nition. arXiv preprint arXiv:1512.03385 , 2015.Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR ,abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980 .Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.Yann LeCun, L ́eon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied todocument recognition. Proceedings of the IEEE , 86(11):2278–2324, 1998.Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. Delving into transferable adversarial exam-ples and black-box attacks. arXiv preprint arXiv:1611.02770 , 2016.Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universaladversarial perturbations. arXiv preprint arXiv:1610.08401 , 2016.Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. Transferability in machine learning: fromphenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277 ,2016a.Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and AnanthramSwami. Practical black-box attacks against deep learning systems using adversarial examples.arXiv preprint arXiv:1602.02697 , 2016b.Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, ZhihengHuang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei.ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision(IJCV) , 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y.Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale imagerecognition. CoRR , abs/1409.1556, 2014. URL http://arxiv.org/abs/1409.1556 .J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel. Man vs. computer: Benchmarking machinelearning algorithms for traffic sign recognition. Neural Networks , (0):–, 2012. ISSN 0893-6080.doi: 10.1016/j.neunet.2012.02.016. URL http://www.sciencedirect.com/science/article/pii/S0893608012000457 .Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow,and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 , 2013.Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott E. Reed, Dragomir Anguelov,Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions.CoRR , abs/1409.4842, 2014. URL http://arxiv.org/abs/1409.4842 .14
Under review as a conference paper at ICLR 2017DYNAMIC NEURAL TURING MACHINE WITH CONTIN -UOUS AND DISCRETE ADDRESSING SCHEMESCaglar Gulcehre, Sarath Chandar, Kyunghyun Choy, Yoshua BengioUniversity of Montreal, name.lastname@umontreal.cayNew York University, name.lastname@nyu.eduABSTRACTIn this paper, we extend neural Turing machine (NTM) into a dynamic neural Turingmachine (D-NTM) by introducing a trainable memory addressing scheme. Thisaddressing scheme maintains for each memory cell two separate vectors, content andaddress vectors. This allows the D-NTM to learn a wide variety of location-basedaddressing strategies including both linear and nonlinear ones. We implementthe D-NTM with both continuous, differentiable and discrete, non-differentiableread/write mechanisms. We investigate the mechanisms and effects for learning toread and write to a memory through experiments on Facebook bAbI tasks using bothafeedforward andGRU -controller. The D-NTM is evaluated on a set of FacebookbAbI tasks and shown to outperform NTM and LSTM baselines. We also providefurther experimental results on sequential MNIST, associative recall and copy tasks.1 I NTRODUCTIONDesigning general-purpose learning algorithms is one of the long-standing goals of artificial intelligence.Despite the success of deep learning in this area (see, e.g., (Goodfellow et al., 2016)) there are still a setof complex tasks that are not well addressed by conventional neural networks. Those tasks often require aneural network to be equipped with an explicit, external memory in which a larger, potentially unbounded,set of facts need to be stored. They include, but are not limited to, episodic question-answering (Westonet al., 2015b; Hermann et al., 2015; Hill et al., 2015), compact algorithms (Zaremba et al., 2015),dialogue (Serban et al., 2016; Vinyals & Le, 2015) and video caption generation (Yao et al., 2015).Recently two promising approaches based on neural networks to this type of tasks have been proposed.Memory networks (Weston et al., 2015b) explicitly store all the facts, or information, available foreach episode in an external memory (as continuous vectors) and use the attention-based mechanismto index them when returning an output. On the other hand, neural Turing machines (NTM, (Graveset al., 2014)) read each fact in an episode and decides whether to read, write the fact or do both tothe external, differentiable memory.A crucial difference between these two models is that the memory network does not have a mechanismto modify the content of the external memory, while the NTM does. In practice, this leads to easierlearning in the memory network, which in turn resulted in it being used more in real tasks (Bordes et al.,2015; Dodge et al., 2015). On the contrary, the NTM has mainly been tested on a series of small-scale,carefully-crafted tasks such as copy and associative recall. The NTM, however is more expressive,precisely because it can store and modify the internal state of the network as it processes an episode.The original NTM supports two modes of addressing (which can be used simultaneously.) They arecontent-based and location-based addressing. We notice that the location-based strategy is based onlinear addressing. 
The distance between each pair of consecutive memory cells is fixed to a constant.We address this limitation, in this paper, by introducing a learnable address vector for each memorycell of the NTM with least recently used memory addressing mechanism, and we call this variant adynamic neural Turing machine (D-NTM).We evaluate the proposed D-NTM on the full set of Facebook bAbI task (Weston et al., 2015b)using either continuous , differentiable attention or discrete , non-differentiable attention (Zaremba &Sutskever, 2015) as an addressing strategy. Our experiments reveal that it is possible to use the discrete,non-differentiable attention mechanism, and in fact, the D-NTM with the discrete attention and GRUcontroller outperforms the one with the continuous attention. After we published our paper on arXiv, anew extension of NTM called DNC (Graves et al., 2016) has also provided results on bAbI task as well.1Under review as a conference paper at ICLR 2017We also provide results on sequential-MNIST and algorithmic tasks proposed by (Graves et al., 2014)in order to investigate the ability of our model when dealing with long-term dependencies.Our Contributions1. We propose a generalization of Neural Turing Machine called a dynamic neural Turing machine(D-NTM) which employs a learnable and location-based addressing.2.We demonstrate the application of neural Turing machines on a more natural and less toyish task:episodic question-answering besides the toy tasks. We provide detailed analysis of our model onthis task.3.We propose to use the discrete attention mechanism and empirically show that, it can outperformthe continuous attention based addressing for episodic QA task.4. We propose a curriculum strategy for our model with the feedforward controller and discreteattention that improves our results significantly.2 D YNAMIC NEURAL TURING MACHINEThe proposed dynamic neural Turing machine (D-NTM) extends the neural Turing machine (NTM,(Graves et al., 2014)) which has a modular design. The NTM consists of two main modules, a controllerand, a memory. The controller, which is often implemented as a recurrent neural network, issues acommand to the memory so as to read, write to and erase a subset of memory cells. Although thememory was originally envisioned as an integrated module, it is not necessary, and the memory maybe an external, black box (Zaremba & Sutskever, 2015).2.1 C ONTROLLERAt each time step t, the controller (1) receives an input value xt, (2) addresses and reads the memory andcreates the content vector t, (3) erases/writes a portion of the memory, (4) updates its own hidden stateht, and (5) outputs a value yt(if needed.) In this paper, we use both a gated recurrent unit (GRU, (Choet al., 2014)) and a feedforward-controller to implement the controller such that for a GRU controllerht=GRU(xt;ht1;t) (1)or for a feedforward-controllerht=(xt;t): (2)2.2 M EMORYWe use a rectangular matrix M2RN(dc+da)to denoteNmemory cells. Unlike the original NTM,we partition each memory cell vector into two parts:M= [A;C]:The first part A2RNdais a learnable address matrix, and the second C2RNdca content matrix.In other words, each memory cell miis nowmi= [ai;ci]:The address part aiis considered a model parameter that is updated during training. During inference,the address part is not overwritten by the controller and remains constant. On the other hand, thecontent part ciis both read and written by the controller both during training and inference. 
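As a minimal PyTorch-style sketch of this memory layout (a trainable address part A and a content part C that is rebuilt for every episode), one could write the following; the dimensions, names and initialization are illustrative and not the settings used in our experiments.

```python
import torch
import torch.nn as nn

class DNTMMemory(nn.Module):
    """Memory M = [A; C]: N cells, each a learnable address part plus a content part."""

    def __init__(self, n_cells=128, d_address=16, d_content=32):
        super().__init__()
        # Address part: a model parameter, updated by the optimizer during
        # training and never overwritten by the controller at inference time.
        self.A = nn.Parameter(0.01 * torch.randn(n_cells, d_address))
        self.d_content = d_content
        self.C = None  # content part, (re)built per episode

    def reset(self):
        # C_0 = 0: the content part is refreshed to zeros at the start of an episode.
        self.C = torch.zeros(self.A.size(0), self.d_content, device=self.A.device)

    def cells(self):
        # Each memory cell is m_i = [a_i; c_i].
        return torch.cat([self.A, self.C], dim=-1)
```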
At the beginning of each episode, the content part of the memory is refreshed to an all-zero matrix, C_0 = 0. This introduction of a learnable address portion for each memory cell allows the model to learn sophisticated location-based addressing strategies. A similar addressing mechanism is also explored in (Reed & de Freitas, 2015) in the context of learning program traces.

Figure 1: A graphical illustration of the proposed dynamic neural Turing machine with the recurrent controller. The controller receives each fact as a continuous vector encoded by a recurrent neural network and computes the read and write weights for addressing the memory. If the D-NTM automatically detects that a query has been received, it returns an answer and terminates.

2.3 MEMORY ADDRESSING

Memory addressing in the D-NTM is equivalent to computing an N-dimensional address vector. The D-NTM computes three such vectors, for reading w_t ∈ R^N, erasing e_t ∈ R^{d_c} and writing u_t ∈ R^N respectively. Specifically for writing, the controller further computes a candidate memory content vector c̄_t ∈ R^{d_c} based on its current hidden state h_t ∈ R^{d_h} and its input scaled by a scalar gate α_t, which is itself a function of the hidden state and the input of the controller; see Eq. (4):

α_t = f(h_t, x_t),   (3)
c̄_t = ReLU(W_m h_t + α_t W_x x_t + b_m).   (4)

Reading. With the read vector w_t, the content vector read from the memory, r_t ∈ R^{d_a + d_c}, is retrieved by

r_t = (w_t)^T M_{t−1},   (5)

where w_t is a row vector.

Erasing and Writing. Given the erase, write and candidate memory content vectors (e_t, u_t^j, and c̄_t respectively), generated by a simple MLP conditioned on the hidden state of the controller h_t, the memory matrix is updated by

C_t[j] = (1 − e_t u_t^j) ⊙ C_{t−1}[j] + u_t^j c̄_t,   (6)

where the subscript j in C_t[j] denotes the j-th row of the content part C_t of the memory matrix M_t.

No Operation (NOP). As found in (Joulin & Mikolov, 2015), an additional NOP action might be beneficial, allowing the controller not to access the memory once in a while. We model this situation by designating one memory cell as a NOP cell: reading from or writing to this memory cell is ignored.

2.4 LEARNING

Once the proposed D-NTM is executed, it returns the output distribution p(y | x_1, ..., x_T). As a result, we define the cost function as the negative log-likelihood:

C(θ) = −(1/N) Σ_{n=1}^{N} log p(y^n | x_1^n, ..., x_T^n),   (7)

where θ is the set of all parameters. As the proposed D-NTM, just like the original NTM, is fully end-to-end differentiable, we can compute the gradient of this cost function by backpropagation and learn the parameters of the model end-to-end with a gradient-based optimization algorithm, such as stochastic gradient descent.

3 ADDRESSING MECHANISM

3.1 ADDRESS VECTORS

Each of the address vectors (both read and write) is computed in the same way, and the way they are computed is very similar to the content-based addressing in (Graves et al., 2014). First, the controller computes a key vector:

k_t = W_k^T h_t + b_k,

where W_k ∈ R^{N×(d_a+d_c)} and b_k ∈ R^{d_a+d_c} if the read head is being computed, and otherwise W_k ∈ R^{N×d_c} and b_k ∈ R^{d_c} if the write head weights are being computed. They can be the parameters of a specific head (either read or write). The sharpening factor β_t ∈ R_{≥1} is computed as:

softplus(x) = log(exp(x) + 1),   (8)
β_t = softplus(u_β^T h_t + b_β) + 1,   (9)

where u_β and b_β are the parameters of the sharpening factor β_t. The address vector is then computed by

z_t^i = β_t S(k_t, m_t^i),   (10)
w_t^i = exp(z_t^i) / Σ_j exp(z_t^j),   (11)

where the similarity function S ∈ R_{≥0} is defined as

S(x, y) = (x · y) / (||x|| ||y|| + ε).
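Below is a sketch of the address computation of Section 3.1 together with the read and erase/write updates of Section 2.3, written for a single (unbatched) memory. Shapes follow the read-head case (d_key = d_a + d_c), and the module and function names are ours rather than part of the model specification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentAddressing(nn.Module):
    """Address weights w_t from the controller state h_t and the memory rows."""

    def __init__(self, d_hidden, d_key):
        super().__init__()
        # For a read head d_key = d_a + d_c (full cells); for a write head d_key = d_c.
        self.key = nn.Linear(d_hidden, d_key)   # k_t = W_k h_t + b_k
        self.beta = nn.Linear(d_hidden, 1)      # pre-activation of the sharpening factor

    def forward(self, h_t, memory, eps=1e-8):
        k_t = self.key(h_t)                                     # (d_key,)
        beta_t = F.softplus(self.beta(h_t)) + 1.0               # beta_t >= 1
        # Smoothed cosine similarity S(k_t, m_i) against every memory cell.
        sim = (memory @ k_t) / (memory.norm(dim=-1) * k_t.norm() + eps)
        return F.softmax(beta_t * sim, dim=-1)                  # address weights w_t

def read(w_t, memory):
    # Content read from memory with the read weights: r_t = w_t^T M_{t-1}.
    return w_t @ memory

def write(C_prev, u_t, e_t, c_bar):
    # Erase/write update of the content part:
    # C_t[j] = (1 - e_t * u_t[j]) * C_{t-1}[j] + u_t[j] * c_bar_t.
    u = u_t.unsqueeze(-1)                    # (N, 1) write weights
    return (1 - e_t.unsqueeze(0) * u) * C_prev + u * c_bar.unsqueeze(0)
```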
3.2 MULTI-STEP ADDRESSING

At each time-step, the controller may require more than one step to access the memory. The original NTM addresses this by implementing multiple sets of read, erase and write heads. In this paper, we explore the option of allowing each head to operate more than once at each time step, similar to the multi-hop mechanism of the end-to-end memory network (Sukhbaatar et al., 2015).

3.3 DYNAMIC LEAST RECENTLY USED ADDRESSING

We introduce a memory addressing scheme that can learn to put more emphasis on the least recently used (LRU) memory locations. As observed in (Santoro et al., 2016; Rae et al., 2016), we find it easier to learn the write operations with the use of LRU addressing.

To learn an LRU-based addressing, we first compute the exponentially moving average of the logits z_t as v_t = 0.1 v_{t−1} + 0.9 z_t. We rescale the accumulated v_t with γ_t, such that the controller adjusts how much previously written memory locations should affect the attention weights of a particular time-step. Next, we subtract v_t from z_t in order to reduce the weights of previously read or written memory locations. γ_t is a shallow MLP with a scalar output, conditioned on the hidden state of the controller and parametrized by u_γ and b_γ:

γ_t = sigmoid(u_γ^T h_t + b_γ),   (12)
w_t = softmax(z_t − γ_t v_{t−1}).   (13)

This addressing method increases the weights of the least recently used rows of the memory. The magnitude of the influence of the least recently used memory locations is learned and adjusted with γ_t. Our LRU addressing is dynamic due to the model's ability to switch between pure content-based addressing and LRU addressing. During training, we do not backpropagate through v_t. Due to the dynamic nature of this addressing mechanism, it can be used for both read and write operations. If needed, the model will automatically learn to disable LRU addressing while reading from the memory.

4 GENERATING DISCRETE ADDRESS VECTORS

In this section, we describe the discrete attention based addressing strategy.

Discrete Addressing. Let us use w to denote an address vector (either read, write or erase) at time t. By definition in Eq. (10), every element in this address vector is positive and the elements sum up to one. In other words, we can treat this vector as the probabilities of a categorical distribution C(w) with dim(w) choices:

p(j) = w_j,

where w_j is the j-th element of w. We can readily sample from this categorical distribution and form a one-hot vector w̃ such that

w̃_k = I(k = j),

where j ∼ C(w) and I is an indicator function.

Training. We use this sampling-based strategy for all the heads during training. This clearly makes the use of backpropagation infeasible for computing the gradient, as the sampling procedure is not differentiable. Thus, we use REINFORCE (Williams, 1992) together with the three variance reduction techniques suggested in (Mnih & Gregor, 2014): a global baseline, an input-dependent baseline, and variance normalization.

Let us define R(x) = log p(y | x_1, ..., x_T) as a reward. We first center and re-scale the reward by

R̃(x) = (R(x) − b) / sqrt(σ² + ε),

where b and σ are the running average and standard deviation of R.
We can further center it for each inputxseparately, i.e.,~R(x) ~R(x)b(x);whereb(x)is computed by a baseline network which takes as input xand predicts its estimated reward.The baseline network is trained to minimize the Huber loss (Huber, 1964) between the true reward~R(x)and the predicted reward b(x). We use the Huber loss, which is defined byH(x) =x2forjxj;(2jxj);otherwise,due to its robustness. As a further measure to reduce the variance, we regularize the negative entropyof all those category distributions to facilitate a better exploration during training (Xu et al., 2015).Then, the cost function for each training example is approximated asCn() =logp(yjx1:T;~w1:J;~u1:J;~e1:J)JXj=1~R(xn)(logp( ~wjjx1:T) + logp(~ujjx1:T) + logp(~ejjx1:T))HJXj=1(H(wjjx1:T) +H(ujjx1:T) +H(ejjx1:T)):whereJis the number of addressing steps, His the entropy regularization coefficient, and Hdenotesthe entropy.Inference Once training is over, we switch to a deterministic strategy. We simply choose an elementofwwith the largest value to be the index of the target memory cell, such that~wk=I(k=argmax (w)):Curriculum Learning for the Discrete Attention Training discrete attention with feed-forwardcontroller and REINFORCE is challenging. We propose to use a curriculum strategy for trainingwith the discrete attention in order to tackle this problem. For each minibatch, we sample from abinomial distribution with the probability pt,tBin(pt). The model will either use the discreteor the continuous-attention based on the t. We start the training procedure with p0= 1and duringthe trainingptis annealed to 0by settingpt=p0p1+t.We can rewrite the weights wtas in Equation 14, where it is expressed as the combination of continuousattention weights wtand discrete attention weights ~wtwithtbeing a binary variable that choosesto use one of them,wt twt+ (1t)~wt: (14)5Under review as a conference paper at ICLR 2017By using this curriculum learning strategy, at the beginning of the training, the model learns to usethe memory mainly with the continuous attention. As we anneal the pt, the model will rely more onthe discrete attention.5 R EGULARIZING DYNAMIC NEURAL TURING MACHINESWhen the controller of D-NTM is a powerful recurrent neural network, it is important to regularizetraining of the D-NTM so as to avoid suboptimal solutions in which the D-NTM ignores the memoryand works as a simple recurrent neural network.Read-Write Consistency Regularizer One such suboptimal solution we have observed in ourpreliminary experiments with the proposed D-NTM is that the D-NTM uses the address part Aofthe memory matrix simply as an additional weight matrix, rather than as a means to accessing thecontent part C. We found that this pathological case can be effectively avoided by encouraging the readhead to point to a memory cell which has also been pointed by the write head. This can be implementedas the following regularization term:Rrw(w;u) =TXt0=1jj1(1t0t0Xt=1ut)>wt0jj22 (15)In the equations above, utis the write and wtis the read weights.Next Input Prediction as Regularization Temporal structure is a strong signal that should beexploited by the controller based on a recurrent neural network. We exploit this structure by lettingthe controller predict the input in the future. We maximize the predictability of the next input by thecontroller during training. This is equivalent to minimizing the following regularizer:Rpred(W) =logp(ft+1jft;wt;ut;Mt;W))whereftis the current input and ft+1is the input at next timestep. 
We found this regularizer to beeffective in our preliminary experiments and use it for bAbI tasks.6 R ELATED WORKA recurrent neural network (RNN), which is used as a controller in the proposed D-NTM, has animplicit memory in the form of recurring hidden states. Even with this implicit memory, a vanillaRNN is however known to have difficulties in storing information for long time-spans (Bengio et al.,1994; Hochreiter, 1991). Long short-term memory (LSTM, (Hochreiter & Schmidhuber, 1997)) andgated recurrent units (GRU, (Cho et al., 2014)) have been found to address this issue. However allthese models based solely on RNNs have been found to be limited when they are used to solve, e.g.,algorithmic tasks and episodic question-answering.In addition to the finite random access memory of the neural Turing machine, based on which theD-NTM is designed, other data structures have been proposed as external memory for neural networks.In (Sun et al., 1997; Grefenstette et al., 2015; Joulin & Mikolov, 2015), a continuous, differentiablestack was proposed. In (Zaremba et al., 2015; Zaremba & Sutskever, 2015), grid and tape storagesare used. These approaches differ from the NTM in that their memory is unbounded and can growindefinitely. On the other hand, they are often not randomly accessible.Memory networks (Weston et al., 2015b) form another family of neural networks with external memory.In this class of neural networks, information is stored explicitly as it is (in the form of its continuousrepresentation) in the memory, without being erased or modified during an episode. Memory networksand their variants have been applied to various tasks successfully (Sukhbaatar et al., 2015; Bordes et al.,2015; Dodge et al., 2015; Xiong et al., 2016). Miller et al. (2016) have also independently proposedthe idea of having separate key and value vectors for memory networks.Another related family of models is the attention-based neural networks. Neural networks withcontinuous or discrete attention over an input have shown promising results on a variety ofchallenging tasks, including machine translation (Bahdanau et al., 2015; Luong et al., 2015), speechrecognition (Chorowski et al., 2015), machine reading comprehension (Hermann et al., 2015) andimage caption generation (Xu et al., 2015).6Under review as a conference paper at ICLR 2017The latter two, the memory network and attention-based networks, are however clearly distinguishablefrom the D-NTM by the fact that they do not modify the content of the memory.7 E XPERIMENTSWe provide experimental results to demonstrate the abilities of our model, first on Facebook bAbItask (Weston et al., 2015a). We give detailed analysis and experimental results on this task. We alsocompare different variations of NTM on bAbI tasks. We have performed experiments on sequentialpermuted MNIST (Le et al., 2015) and on toy tasks to compare other published models on these taskswith a recurrent controller. The details of our experiments are provided in the supplementary material.7.1 E PISODIC QUESTION -ANSWERING :BABI TASKSIn this section, we evaluate the proposed D-NTM on the recently proposed episodic question-answeringtask called Facebook bAbI. We use the dataset with 10k training examples per sub-task provided byFacebook.1For each episode, the D-NTM reads a sequence of factual sentences followed by a question,all of which are given as natural language sentences. 
The D-NTM is expected to store and retrieverelevant information in the memory in order to answer the question based on the presented facts. Exactimplementation details and hyper-parameter settings are provided in the appendix.7.1.1 G OALSThe goal of this experiment is three-fold. First, we present for the first time the performance of amemory-based network that can both read and write dynamically on the Facebook bAbI tasks2. We aimto understand whether a model that has to learn to write an incoming fact to the memory, rather thanstoring it as it is, is able to work well, and to do so, we compare both the original NTM and proposedD-NTM against an LSTM-RNN.Second, we investigate the effect of having to learn how to write. The fact that the NTM needs tolearn to write likely has adverse effect on the overall performance, when compared to, for instance,end-to-end memory networks (MemN2N, (Sukhbaatar et al., 2015)) and dynamic memory network(DMN+, (Xiong et al., 2016)) both of which simply store the incoming facts as they are. We quantifythis effect in this experiment. Lastly, we show the effect of the proposed learnable addressing scheme.We further explore the effect of using a feedforward controller instead of the GRU controller. In additionto the explicit memory, the GRU controller can use its own internal hidden state as the memory. Onthe other hand, the feedforward controller must solely rely on the explicit memory, as it is the onlymemory available.7.1.2 R ESULTS AND ANALYSISIn Table 1, we first observe that the NTMs are indeed capable of solving this type of episodicquestion-answering better than the vanilla LSTM-RNN. Although the availability of explicit memoryin the NTM has already suggested this result, we note that this is the first time neural Turing machineshave been used in this specific task.All the variants of NTM with the GRU controller outperform the vanilla LSTM-RNN. However, not allof them perform equally well. First, it is clear that the proposed dynamic NTM (D-NTM) using the GRUcontroller outperforms the original NTM with the GRU controller (NTM, CBA only NTM vs. continuousD-NTM, Discrete D-NTM). As discussed earlier, the learnable addressing scheme of the D-NTM allowsthe controller to access the memory slots by location in a potentially nonlinear way. We expect it to helpwith tasks that have non-trivial access patterns, and as anticipated, we see a large gain with the D-NTMover the original NTM in the tasks of, for instance, 12 - Conjunction and 17 - Positional Reasoning.Among the recurrent variants of the proposed D-NTM, we notice significant improvements by usingdiscrete addressing over using continuous addressing. We conjecture that this is due to certain typesof tasks that require precise/sharp retrieval of a stored fact, in which case continuous addressingis in disadvantage over discrete addressing. 
This is evident from the observation that the D-NTMwith discrete addressing significantly outperforms that with continuous addressing in the tasks of 8 -1https://research.facebook.com/researchers/15439345391893482Similar experiments were done in the recently published (Graves et al., 2016), but D-NTM results for bAbItasks were already available in arxiv by that time.7Under review as a conference paper at ICLR 20171-step 1-step 1-step 1-step 3-steps 3-steps 3-steps 3-stepsLBACBA Soft Discrete LBACBA Soft DiscreteTask LSTM MemN2N DMN+ NTM NTM D-NTM D-NTM NTM NTM D-NTM D-NTM1 0.00 0.00 0.00 16.30 16.88 5.41 6.66 0.00 0.00 0.00 0.002 81.90 0.30 0.30 57.08 55.70 58.54 56.04 61.67 59.38 46.66 62.293 83.10 2.10 1.10 74.16 55.00 74.58 72.08 83.54 65.21 47.08 41.454 0.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.005 1.20 0.80 0.50 1.46 20.41 1.66 1.04 0.83 1.46 1.25 1.456 51.80 0.10 0.00 23.33 21.04 40.20 44.79 48.13 54.80 20.62 11.047 24.90 2.00 2.40 21.67 21.67 19.16 19.58 7.92 37.70 7.29 5.628 34.10 0.90 0.00 25.76 21.05 12.58 18.46 25.38 8.82 11.02 0.749 20.20 0.30 0.00 24.79 24.17 36.66 34.37 37.80 0.00 39.37 32.5010 30.10 0.00 0.00 41.46 33.13 52.29 50.83 56.25 23.75 20.00 20.8311 10.30 0.10 0.00 18.96 31.88 31.45 4.16 3.96 0.28 30.62 16.8712 23.40 0.00 0.00 25.83 30.00 7.70 6.66 28.75 23.75 5.41 4.5813 6.10 0.00 0.00 6.67 5.63 5.62 2.29 5.83 83.13 7.91 5.0014 81.00 0.10 0.20 58.54 59.17 60.00 63.75 61.88 57.71 58.12 60.2015 78.70 0.00 0.00 36.46 42.30 36.87 39.27 35.62 21.88 36.04 40.2616 51.90 51.80 45.30 71.15 71.15 49.16 51.35 46.15 50.00 46.04 45.4117 50.10 18.60 4.20 43.75 43.75 17.91 16.04 43.75 56.25 21.25 9.1618 6.80 5.30 2.10 3.96 47.50 3.95 3.54 47.50 47.50 6.87 1.6619 90.30 2.30 0.00 75.89 71.51 73.74 64.63 61.56 63.65 75.88 76.6620 2.10 0.00 0.00 1.25 0.00 2.70 3.12 0.40 0.00 3.33 0.00Avg.Err. 36.41 4.24 2.81 31.42 33.60 29.51 27.93 32.85 32.76 24.24 21.79Table 1: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples withthe GRU and feedforward controller. FF stands for the experiments that are conducted with feedforwardcontroller. Let us, note that LBArefers to NTM that uses both LBA and CBA. In this table, wecompare multi-step vs single-step addressing, original NTM with location based+content basedaddressing vs only content based addressing, and discrete vs continuous addressing on bAbI.Lists/Sets and 11 - Basic Coreference. Furthermore, this is in line with an earlier observation in (Xu et al.,2015), where discrete addressing was found to generalize better in the task of image caption generation.In Table 2, we also observe that the D-NTM with the feedforward controller and discrete attentionperforms worse than LSTM and D-NTM with continuous-attention. However, when the proposedcurriculum strategy from Sec. 4 is used, the average test error drops from 68.30 to 37.79.We empirically found training of the feedforward controller more difficult than that of the recurrentcontroller. We train our feedforward controller based models four times longer (in terms of the numberof updates) than the recurrent controller based ones in order to ensure that they are converged for mostof the tasks. On the other hand, the models trained with the GRU controller overfit on bAbI tasksvery quickly. 
For example, on tasks 3 and 16 the feedforward controller based model underfits (i.e.,high training loss) at the end of the training, whereas with the same number of units the model withthe GRU controller can overfit on those tasks after 3,000 updates only.When our results are compared to the variants of the memory network Weston et al. (2015b) (MemN2Nand DMN+), we notice a significant performance gap. We attribute this gap to the difficulty in learningto manipulate and store a complex input.FF FF FFSoft Discrete DiscreteTask D-NTM D-NTM D-NTM1 4.38 81.67 14.792 27.5 76.67 76.673 71.25 79.38 70.834 0.00 78.65 44.065 1.67 83.13 17.716 1.46 48.76 48.137 6.04 54.79 23.548 1.70 69.75 35.629 0.63 39.17 14.3810 19.80 56.25 56.2511 0.00 78.96 39.5812 6.25 82.5 32.0813 7.5 75.0 18.5414 17.5 78.75 24.7915 0.0 71.42 39.7316 49.65 71.46 71.1517 1.25 43.75 43.7518 0.24 48.13 2.9219 39.47 71.46 71.5620 0.0 76.56 9.79Avg.Err. 12.81 68.30 37.79Table 2: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples withfeedforward controller.We also provide further experiments investigating different extensions on D-NTM in the appendix.8Under review as a conference paper at ICLR 20177.2 S EQUENTIAL pMNISTIn sequential MNIST task, the pixels of the MNIST digits are provided to the model in scan line order,left to right and top to bottom (Le et al., 2015). At the end of sequence of pixels, the model predictsthe label of the digit in the sequence of pixels. We experiment D-NTM on the variation of sequentialMNIST where the order of the pixels is randomly shuffled, we call this task as permuted MNIST(pMNIST). An important contribution of this task to our paper, in particular, is to measure the model’sability to perform well when dealing with long-term dependencies. We report our results in Table 33, weobserve improvements over other models that we compare against. In Table 3, ”discrete addressing withMAB” refers to D-NTM model using REINFORCE with baseline computed from moving averages ofthe reward. Discrete addressing with IB refers to D-NTM using REINFORCE with input-based baseline.7.3 NTM T OYTASKSWe explore the possibility of using D-NTM to solve algorithmic tasks such as copy and associativerecall tasks. We train our model on the same lengths of sequences that is experimented in (Graveset al., 2014). We report our results in Table 4. We find out that D-NTM using continuous-attentioncan successfully learn the ”Copy” and ”Associative Recall” tasks.In Table 4, we train our model on sequences of the same length as the experiments in (Graves et al., 2014)and test the model on the sequences of the maximum length seen during the training. We consider modelto be successful on copy or associative recall if its validation cost (binary cross-entropy) is lower than0:02over the sequences of maximum length seen during the training. We set the threshold to 0:02todetermine whether a model is successful on a task. Because empirically we observe that the models havehigher validation costs perform badly in terms of generalization over the longer sequences. 
”D-NTMdiscrete” model in this table is trained with REINFORCE using moving averages to estimate the baseline.Test AccD-NTM discrete MAB 89.6D-NTM discrete IB 92.3Soft D-NTM 93.4NTM 90.9I-RNN (Le et al., 2015) 82.0Zoneout (Krueger et al., 2016) 93.1LSTM (Krueger et al., 2016) 89.8Unitary-RNN (Arjovsky et al., 2015) 91.4Recurrent Dropout (Krueger et al., 2016) 92.5Table 3: Sequential pMNIST.Copy Tasks Associative RecallSoft D-NTM Success SuccessD-NTM discrete Success FailureNTM Success SuccessTable 4: NTM Toy Tasks.8 C ONCLUSION AND FUTURE WORKIn this paper we extend neural Turing machines (NTM) by introducing a learnable addressing schemewhich allows the NTM to be capable of performing highly nonlinear location-based addressing.This extension, to which we refer by dynamic NTM (D-NTM), is extensively tested with variousconfigurations, including different addressing mechanisms (continuous vs. discrete) and differentnumber of addressing steps, on the Facebook bAbI tasks. This is the first time an NTM-type modelwas tested on this task, and we observe that the NTM, especially the proposed D-NTM, performs betterthan vanilla LSTM-RNN. Furthermore, the experiments revealed that the discrete, discrete addressingworks better than the continuous addressing with the GRU controller, and our analysis reveals thatthis is the case when the task requires precise retrieval of memory content.Our experiments show that the NTM-based models can be weaker than other variants of memorynetworks which do not learn but have an explicit mechanism of storing incoming facts as they are. Weconjecture that this is due to the difficulty in learning how to write, manipulate and delete the contentof memory. Despite this difficulty, we find the NTM-based approach, such as the proposed D-NTM,3Let us note that, the current state of art on this task is recurrent batch normalization with LSTM (Cooijmanset al., 2016) with 95.6% accuracy. It is possible to use recurrent batch normalization in our model and potentiallyimprove our results on this task as well.9Under review as a conference paper at ICLR 2017to be a better, future-proof approach, because it can scale to a much longer horizon (where it becomesimpossible to explicitly store all the experiences.)OnpMNIST task, we show that our model can outperform other similar type of approaches proposedto deal with the long-term dependencies. On copy and associative recall tasks, we show that our modelcan solve the algorithmic problems that are proposed to solve with NTM type of models.The success of both the learnable address and the discrete addressing scheme suggests two futureresearch directions. First, we should try both of these schemes in a wider array of memory-based models,as they are not specific to the neural Turing machines. Second, the proposed D-NTM needs to beevaluated on a diverse set of applications, such as text summarization (Rush et al., 2015), visual question-answering (Antol et al., 2015) and machine translation, in order to make a more concrete conclusion.REFERENCESStanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick,and Devi Parikh. VQA: visual question answering. In 2015 IEEE International Conference onComputer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015 , pp. 2425–2433, 2015.Martin Arjovsky, Amar Shah, and Yoshua Bengio. Unitary evolution recurrent neural networks. arXivpreprint arXiv:1511.06464 , 2015.Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Representation Learning (ICLR 2015), 2015.

Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.

Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-based models for speech recognition. arXiv preprint arXiv:1506.07503, 2015.

Tim Cooijmans, Nicolas Ballas, César Laurent, and Aaron Courville. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016.

Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, and Jason Weston. Evaluating prerequisite qualities for learning end-to-end dialog systems. CoRR, abs/1511.06931, 2015.

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. Book in preparation for MIT Press, 2016. URL http://www.deeplearningbook.org.

Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.

Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471–476, 2016.

Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pp. 1819–1827, 2015.

Caglar Gulcehre, Marcin Moczulski, Misha Denil, and Yoshua Bengio. Noisy activation functions. arXiv preprint arXiv:1603.00391, 2016.

Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. arXiv preprint arXiv:1506.03340, 2015.

Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The Goldilocks principle: Reading children's books with explicit memory representations. arXiv preprint arXiv:1511.02301, 2015.

Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Technische Universität München, 1991.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.

Peter J. Huber. Robust estimation of a location parameter. Ann. Math. Statist., 35(1):73–101, 1964.

Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems, pp. 190–198, 2015.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.

David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron Courville, et al. Zoneout: Regularizing RNNs by randomly preserving hidden activations. arXiv preprint arXiv:1606.01305, 2016.

Quoc V. Le, Navdeep Jaitly, and Geoffrey E. Hinton. A simple way to initialize recurrent networks of rectified linear units. arXiv preprint arXiv:1504.00941, 2015.

Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2015), 2015.

Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. CoRR, abs/1606.03126, 2016. URL http://arxiv.org/abs/1606.03126.

Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030, 2014.

Jack W. Rae, Jonathan J. Hunt, Tim Harley, Ivo Danihelka, Andrew Senior, Greg Wayne, Alex Graves, and Timothy P. Lillicrap. Scaling memory-augmented neural networks with sparse reads and writes. In Advances in Neural Information Processing Systems, 2016.

Scott Reed and Nando de Freitas. Neural programmer-interpreters. arXiv preprint arXiv:1511.06279, 2015.

Alexander M. Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pp. 379–389, 2015.

Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. One-shot learning with memory-augmented neural networks. arXiv preprint arXiv:1605.06065, 2016.

Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI-16), 2016.

Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. arXiv preprint arXiv:1503.08895, 2015.

Guo-Zheng Sun, C. Lee Giles, and Hsing-Hen Chen. The neural network pushdown automaton: Architecture, dynamics and training. In Adaptive Processing of Sequences and Data Structures, International Summer School on Neural Networks, pp. 296–345, 1997.

Oriol Vinyals and Quoc Le. A neural conversational model. arXiv preprint arXiv:1506.05869, 2015.

Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015a.

Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In Proceedings of the International Conference on Representation Learning (ICLR 2015), 2015b. In press.

Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256, 1992.

Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. CoRR, abs/1603.01417, 2016.

Kelvin Xu, Jimmy Ba, Ryan Kiros, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the International Conference on Representation Learning (ICLR 2015), 2015.

Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, and Aaron Courville. Describing videos by exploiting temporal structure. In Computer Vision (ICCV), 2015 IEEE International Conference on. IEEE, 2015.

Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural Turing machines. CoRR, abs/1505.00521, 2015.
Wojciech Zaremba, Tomas Mikolov, Armand Joulin, and Rob Fergus. Learning simple algorithms from examples. arXiv preprint arXiv:1511.07275, 2015.

A EXPERIMENTAL DETAILS

A.1 MODEL AND TRAINING DETAILS FOR bAbI

We use the same hyperparameters for all the tasks for a given model.

A.1.1 FACT REPRESENTATION

We use a recurrent neural network with GRU units to encode a variable-length fact into a fixed-size vector representation. This allows the D-NTM to exploit the word ordering in each fact, unlike when facts are encoded as bag-of-words vectors.

A.1.2 CONTROLLER

We experiment with both a recurrent and a feedforward neural network as the controller that generates the read and write weights. The controller has 180 units. We train our feedforward controller with the noisy-tanh activation function (Gulcehre et al., 2016), since we experienced training difficulties with the sigmoid and tanh activation functions. We use both single-step and three-step addressing with our GRU controller.

A.1.3 MEMORY

The memory contains 120 memory cells. Each memory cell consists of a 16-dimensional address part and a 28-dimensional content part.

A.1.4 TRAINING DETAILS

We set aside a random 10% of the training examples as a validation set for each sub-task and use it for early stopping and hyperparameter search. We train one D-NTM for each sub-task, using Adam (Kingma & Ba, 2014) with the learning rate set to 0.003 and 0.007 for the GRU and feedforward controllers, respectively. The size of each minibatch is 160, and each minibatch is constructed uniformly at random from the training set.

A.2 MODEL AND TRAINING DETAILS FOR SEQUENTIAL MNIST

On the sequential MNIST task we keep the capacity of our model close to that of our baselines. We use 100 GRU units in the controller, with content vectors of size 8 and address vectors of size 8. We use a learning rate of 1e-3 and train the model with the Adam optimizer. We did not use the read and write consistency regularization in any of our models.

A.3 MODEL AND TRAINING DETAILS FOR TOY TASKS

On both the copy and associative recall tasks, we keep the capacity of our model close to that of our baselines. We use 100 GRU units in the controller, with content vectors of size 8 and address vectors of size 8. We use a learning rate of 1e-3 and train the model with the Adam optimizer. We did not use the read and write consistency regularization in any of our models. For the model with discrete attention we use REINFORCE with a baseline computed from moving averages of the reward.

B VISUALIZATION OF DISCRETE ATTENTION

We visualize the attention of the D-NTM with a GRU controller and discrete attention in Figure 2. From this example, we can see that the D-NTM has learned to find the correct supporting fact even without any supervision for the particular story in the visualization.

C LEARNING CURVES FOR THE RECURRENT CONTROLLER

In Figure 3, we compare the learning curves of the continuous- and discrete-attention D-NTM models with a recurrent controller on Task 1. Surprisingly, the discrete-attention D-NTM converges faster than the continuous-attention model. The main difficulty of learning with continuous attention is that learning to write with continuous attention can be challenging.
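To make the memory layout of Appendix A.1.3 (which is what Figures 2 and 3 visualize) concrete, below is a minimal PyTorch sketch of a memory whose cells concatenate a learnable 16-dimensional address part with a 28-dimensional content part that is reset per episode. The class and method names are ours; this is an illustration of the described layout, not the authors' implementation.

    import torch
    import torch.nn as nn

    N_CELLS, ADDR_DIM, CONTENT_DIM = 120, 16, 28  # sizes from Appendix A.1.3

    class DNTMMemory(nn.Module):
        """Memory with a learnable address part and an episode-specific content part."""
        def __init__(self):
            super().__init__()
            # The address part is trained end-to-end (the learnable addressing scheme).
            self.address = nn.Parameter(0.01 * torch.randn(N_CELLS, ADDR_DIM))

        def reset_content(self, batch_size):
            # The content part is wiped at the start of each episode/story.
            return torch.zeros(batch_size, N_CELLS, CONTENT_DIM)

        def read(self, content, weights):
            # weights: (batch, N_CELLS) attention over cells (soft or one-hot).
            # Returns the attended cell(s), shape (batch, ADDR_DIM + CONTENT_DIM).
            addr = self.address.unsqueeze(0).expand(content.size(0), -1, -1)
            cells = torch.cat([addr, content], dim=-1)
            return torch.bmm(weights.unsqueeze(1), cells).squeeze(1)

    # Example usage:
    memory = DNTMMemory()
    content = memory.reset_content(batch_size=4)
    weights = torch.softmax(torch.randn(4, N_CELLS), dim=-1)
    read_vector = memory.read(content, weights)  # shape (4, 44)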
Figure 2: An example view of the discrete attention over the memory slots for both the read (left) and write (right) heads. The x-axis denotes the memory locations that are being accessed and the y-axis corresponds to the content in the particular memory location. In this figure, we visualize the discrete-attention model with 3 reading steps on task 20. It is easy to see that the NTM with discrete attention accesses the relevant part of the memory. We only visualize the last of the 3 writing steps, because with discrete attention the model usually just reads the empty slots of the memory.

Figure 3: Learning curves (training negative log-likelihood) of the continuous- and discrete-attention D-NTM models trained on Task 1 using 3 steps. In most tasks, we observe that the discrete-attention model with the GRU controller converges faster than the continuous-attention model.

D A COMPARISON BETWEEN THE LEARNING CURVES OF THE INPUT-BASED BASELINE AND THE REGULAR BASELINE ON pMNIST

In Figure 4, we show the learning curves of the input-based baseline (ibb) and the regular REINFORCE baseline computed from moving averages (mab) on the pMNIST task. We observe that the input-based baseline is in general much easier to optimize and converges faster, but it can also quickly overfit to the task.

Figure 4: Training and validation learning curves of our D-NTM model using discrete attention on the pMNIST task with the input-based baseline (ibb) and the regular REINFORCE baseline (mab). The x-axis is the number of epochs and the y-axis is the loss.

E TRAINING WITH CONTINUOUS ATTENTION AND TESTING WITH DISCRETE ATTENTION

In Table 5, we provide results investigating the effect of using discrete attention at test time for a model trained with a feedforward controller and continuous attention. The Discrete* D-NTM model bootstraps the discrete attention with the continuous attention, using the curriculum method that we introduced in the section "Curriculum Learning for the Discrete Attention". The Discrete† D-NTM model is the continuous-attention model which uses discrete attention at test time. We observe that the Discrete† D-NTM model, which is trained with continuous attention, outperforms the Discrete D-NTM model.

Task   Continuous D-NTM   Discrete D-NTM   Discrete* D-NTM   Discrete† D-NTM
1      4.38               81.67            14.79             72.28
2      27.5               76.67            76.67             81.67
3      71.25              79.38            70.83             78.95
4      0.00               78.65            44.06             79.69
5      1.67               83.13            17.71             68.54
6      1.46               48.76            48.13             31.67
7      6.04               54.79            23.54             49.17
8      1.70               69.75            35.62             79.32
9      0.63               39.17            14.38             37.71
10     19.80              56.25            56.25             25.63
11     0.00               78.96            39.58             82.08
12     6.25               82.5             32.08             74.38
13     7.5                75.0             18.54             47.08
14     17.5               78.75            24.79             77.08
15     0.0                71.42            39.73             73.96
16     49.65              71.46            71.15             53.02
17     1.25               43.75            43.75             30.42
18     0.24               48.13            2.92              11.46
19     39.47              71.46            71.56             76.05
20     0.0                76.56            9.79              13.96
Avg    12.81              68.30            37.79             57.21

Table 5: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples with the feedforward controller. The Discrete* D-NTM model bootstraps the discrete attention with the continuous attention, using the curriculum method introduced in Section 4. The Discrete† D-NTM model is the continuous-attention model which uses discrete attention at test time.
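For illustration, the Discrete† setting above amounts to replacing the softmax attention weights of the continuously trained model with a one-hot vector at their argmax during evaluation. A minimal sketch follows; the function and variable names are ours, not the authors' code.

    import torch

    def discretize_attention(soft_weights):
        """Convert soft attention weights (batch x n_cells) into one-hot weights.

        This mimics evaluating a model trained with continuous attention using
        discrete addressing at test time: each head attends to exactly one
        memory cell, the argmax of its softmax distribution.
        """
        idx = soft_weights.argmax(dim=-1, keepdim=True)
        return torch.zeros_like(soft_weights).scatter_(-1, idx, 1.0)

    # Example: two heads attending over 120 memory cells.
    soft = torch.softmax(torch.randn(2, 120), dim=-1)
    hard = discretize_attention(soft)
    assert torch.allclose(hard.sum(dim=-1), torch.ones(2))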
F D-NTM WITH BOW FACT REPRESENTATION

In Table 6, we provide results for the D-NTM using BoW with positional encoding (PE) (Sukhbaatar et al., 2015) as the representation of the input facts. The fact representations are provided as input to the GRU controller. In agreement with our results using the GRU fact representation, with the BoW fact representation we observe improvements from multi-step addressing over single-step addressing and from discrete addressing over continuous addressing.

Task   Soft D-NTM (1-step)   Discrete D-NTM (1-step)   Soft D-NTM (3-steps)   Discrete D-NTM (3-steps)
1      0.00                  0.00                      0.00                   0.00
2      61.04                 59.37                     56.87                  55.62
3      55.62                 57.5                      62.5                   57.5
4      27.29                 24.89                     26.45                  27.08
5      13.55                 12.08                     15.83                  14.78
6      13.54                 14.37                     21.87                  13.33
7      8.54                  6.25                      8.75                   14.58
8      1.69                  1.36                      3.01                   3.02
9      17.7                  16.66                     37.70                  17.08
10     26.04                 27.08                     26.87                  23.95
11     20.41                 3.95                      2.5                    2.29
12     0.41                  0.83                      0.20                   4.16
13     3.12                  1.04                      4.79                   5.83
14     62.08                 58.33                     61.25                  60.62
15     31.66                 26.25                     0.62                   0.05
16     54.47                 48.54                     48.95                  48.95
17     43.75                 31.87                     43.75                  30.62
18     33.75                 39.37                     36.66                  36.04
19     64.63                 69.21                     67.23                  65.46
20     1.25                  0.00                      1.45                   0.00
Avg    27.02                 24.98                     26.36                  24.05

Table 6: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples with the GRU controller, where the representations of facts are obtained with BoW using positional encoding.
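As a reference point for the BoW-with-positional-encoding representation used above, here is a small sketch following the position-weighting formula of Sukhbaatar et al. (2015); the function names and the example dimensions are ours.

    import numpy as np

    def positional_encoding(J, d):
        """Position-weighting matrix l (J words x d dims), following Sukhbaatar et al. (2015):
        l[j, k] = (1 - j/J) - (k/d) * (1 - 2j/J), with 1-based word index j and dimension k."""
        l = np.zeros((J, d))
        for j in range(1, J + 1):
            for k in range(1, d + 1):
                l[j - 1, k - 1] = (1.0 - j / J) - (k / d) * (1.0 - 2.0 * j / J)
        return l

    def encode_fact(word_embeddings):
        """BoW-with-PE fact vector: weight each word embedding element-wise by its
        position encoding, then sum over the words of the fact."""
        J, d = word_embeddings.shape
        return (positional_encoding(J, d) * word_embeddings).sum(axis=0)

    # Example: a 5-word fact with 28-dimensional word embeddings.
    fact = encode_fact(np.random.randn(5, 28))
    assert fact.shape == (28,)

Unlike the GRU encoder of Appendix A.1.1, this representation captures word order only through the fixed position weights.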
Hk8N3Sclg
"Published as a conference paper at ICLR 2017MULTI -AGENT COOPERATIONAND THE EMERGENCE OF (NATURAL )(...TRUNCATED)
SygGlIBcel
"Under review as a conference paper at ICLR 2017OPENING THE VOCABULARY OF NEURAL LANGUAGEMODELS WITH(...TRUNCATED)
HJtN5K9gx
"Under review as a conference paper at ICLR 2017LEARNING DISENTANGLED REPRESENTATIONSINDEEPGENERATIV(...TRUNCATED)
BkSqjHqxg
"Under review as a conference paper at ICLR 2017SKIP-GRAPH : LEARNING GRAPH EMBEDDINGS WITHAN ENCODE(...TRUNCATED)
H1oRQDqlg
"Under review as a conference paper at ICLR 2017LEARNING TO DRAW SAMPLES : W ITHAPPLICATIONTOAMORTIZ(...TRUNCATED)
S1Y0td9ee
"Under review as a conference paper at ICLR 2017SHIFT AGGREGATE EXTRACT NETWORKSFrancesco Orsini12, (...TRUNCATED)