Published as a conference paper at ICLR 2017

SEMI-SUPERVISED CLASSIFICATION WITH GRAPH CONVOLUTIONAL NETWORKS

Thomas N. Kipf, University of Amsterdam, T.N.Kipf@uva.nl
Max Welling, University of Amsterdam, Canadian Institute for Advanced Research (CIFAR), M.Welling@uva.nl

ABSTRACT

We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.

1 INTRODUCTION

We consider the problem of classifying nodes (such as documents) in a graph (such as a citation network), where labels are only available for a small subset of nodes. This problem can be framed as graph-based semi-supervised learning, where label information is smoothed over the graph via some form of explicit graph-based regularization (Zhu et al., 2003; Zhou et al., 2004; Belkin et al., 2006; Weston et al., 2012), e.g. by using a graph Laplacian regularization term in the loss function:

$$\mathcal{L} = \mathcal{L}_0 + \lambda \mathcal{L}_{\mathrm{reg}}\,, \quad \text{with} \quad \mathcal{L}_{\mathrm{reg}} = \sum_{i,j} A_{ij}\, \| f(X_i) - f(X_j) \|^2 = f(X)^\top \Delta f(X)\,. \qquad (1)$$

Here, $\mathcal{L}_0$ denotes the supervised loss w.r.t. the labeled part of the graph, $f(\cdot)$ can be a neural network-like differentiable function, $\lambda$ is a weighing factor and $X$ is a matrix of node feature vectors $X_i$. $\Delta = D - A$ denotes the unnormalized graph Laplacian of an undirected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ with $N$ nodes $v_i \in \mathcal{V}$, edges $(v_i, v_j) \in \mathcal{E}$, an adjacency matrix $A \in \mathbb{R}^{N \times N}$ (binary or weighted) and a degree matrix $D_{ii} = \sum_j A_{ij}$. The formulation of Eq. 1 relies on the assumption that connected nodes in the graph are likely to share the same label. This assumption, however, might restrict modeling capacity, as graph edges need not necessarily encode node similarity, but could contain additional information.

In this work, we encode the graph structure directly using a neural network model $f(X, A)$ and train on a supervised target $\mathcal{L}_0$ for all nodes with labels, thereby avoiding explicit graph-based regularization in the loss function. Conditioning $f(\cdot)$ on the adjacency matrix of the graph will allow the model to distribute gradient information from the supervised loss $\mathcal{L}_0$ and will enable it to learn representations of nodes both with and without labels.

Our contributions are two-fold. Firstly, we introduce a simple and well-behaved layer-wise propagation rule for neural network models which operate directly on graphs and show how it can be motivated from a first-order approximation of spectral graph convolutions (Hammond et al., 2011). Secondly, we demonstrate how this form of a graph-based neural network model can be used for fast and scalable semi-supervised classification of nodes in a graph. Experiments on a number of datasets demonstrate that our model compares favorably both in classification accuracy and efficiency (measured in wall-clock time) against state-of-the-art methods for semi-supervised learning.

2 FAST APPROXIMATE CONVOLUTIONS ON GRAPHS

In this section, we provide theoretical motivation for a specific graph-based neural network model $f(X, A)$ that we will use in the rest of this paper. We consider a multi-layer Graph Convolutional Network (GCN) with the following layer-wise propagation rule:

$$H^{(l+1)} = \sigma\!\left( \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)} \right). \qquad (2)$$

Here, $\tilde{A} = A + I_N$ is the adjacency matrix of the undirected graph $\mathcal{G}$ with added self-connections. $I_N$ is the identity matrix, $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$ and $W^{(l)}$ is a layer-specific trainable weight matrix. $\sigma(\cdot)$ denotes an activation function, such as the $\mathrm{ReLU}(\cdot) = \max(0, \cdot)$. $H^{(l)} \in \mathbb{R}^{N \times D}$ is the matrix of activations in the $l$-th layer; $H^{(0)} = X$. In the following, we show that the form of this propagation rule can be motivated via a first-order approximation of localized spectral filters on graphs (Hammond et al., 2011; Defferrard et al., 2016). An alternative interpretation of this propagation rule, based on the Weisfeiler-Lehman algorithm (Weisfeiler & Lehmann, 1968), is provided in Appendix A.
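For concreteness, the propagation rule of Eq. 2 can be written out in a few lines of NumPy/SciPy. This is an illustrative sketch with variable names of our own choosing, not the authors' released TensorFlow implementation:

```python
import numpy as np
import scipy.sparse as sp

def gcn_layer(A, H, W):
    """One GCN propagation step, Eq. 2: sigma(D~^-1/2 A~ D~^-1/2 H W).

    A: (N, N) scipy.sparse adjacency matrix (binary or weighted, no self-loops)
    H: (N, D) activations of the previous layer (H^(0) = X)
    W: (D, D') trainable weight matrix
    """
    N = A.shape[0]
    A_tilde = A + sp.eye(N)                      # add self-connections
    d = np.asarray(A_tilde.sum(axis=1)).ravel()  # degrees of A~ (all >= 1)
    D_inv_sqrt = sp.diags(d ** -0.5)
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt    # renormalized adjacency
    return np.maximum(A_hat @ H @ W, 0)          # ReLU activation
```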
2.1 SPECTRAL GRAPH CONVOLUTIONS

We consider spectral convolutions on graphs defined as the multiplication of a signal $x \in \mathbb{R}^N$ (a scalar for every node) with a filter $g_\theta = \mathrm{diag}(\theta)$ parameterized by $\theta \in \mathbb{R}^N$ in the Fourier domain, i.e.:

$$g_\theta \star x = U g_\theta U^\top x\,, \qquad (3)$$

where $U$ is the matrix of eigenvectors of the normalized graph Laplacian $L = I_N - D^{-\frac{1}{2}} A D^{-\frac{1}{2}} = U \Lambda U^\top$, with $\Lambda$ a diagonal matrix of its eigenvalues and $U^\top x$ being the graph Fourier transform of $x$. We can understand $g_\theta$ as a function of the eigenvalues of $L$, i.e. $g_\theta(\Lambda)$. Evaluating Eq. 3 is computationally expensive, as multiplication with the eigenvector matrix $U$ is $\mathcal{O}(N^2)$. Furthermore, computing the eigendecomposition of $L$ in the first place might be prohibitively expensive for large graphs. To circumvent this problem, it was suggested in Hammond et al. (2011) that $g_\theta(\Lambda)$ can be well-approximated by a truncated expansion in terms of Chebyshev polynomials $T_k(x)$ up to $K$-th order:

$$g_{\theta'}(\Lambda) \approx \sum_{k=0}^{K} \theta'_k\, T_k(\tilde{\Lambda})\,, \qquad (4)$$

with a rescaled $\tilde{\Lambda} = \frac{2}{\lambda_{\max}} \Lambda - I_N$. $\lambda_{\max}$ denotes the largest eigenvalue of $L$. $\theta' \in \mathbb{R}^{K}$ is now a vector of Chebyshev coefficients. The Chebyshev polynomials are recursively defined as $T_k(x) = 2x\, T_{k-1}(x) - T_{k-2}(x)$, with $T_0(x) = 1$ and $T_1(x) = x$. The reader is referred to Hammond et al. (2011) for an in-depth discussion of this approximation.

Going back to our definition of a convolution of a signal $x$ with a filter $g_{\theta'}$, we now have:

$$g_{\theta'} \star x \approx \sum_{k=0}^{K} \theta'_k\, T_k(\tilde{L})\, x\,, \qquad (5)$$

with $\tilde{L} = \frac{2}{\lambda_{\max}} L - I_N$; as can easily be verified by noticing that $(U \Lambda U^\top)^k = U \Lambda^k U^\top$. Note that this expression is now $K$-localized since it is a $K$-th-order polynomial in the Laplacian, i.e. it depends only on nodes that are at maximum $K$ steps away from the central node ($K$-th-order neighborhood). The complexity of evaluating Eq. 5 is $\mathcal{O}(|\mathcal{E}|)$, i.e. linear in the number of edges. Defferrard et al. (2016) use this $K$-localized convolution to define a convolutional neural network on graphs.
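To make the recurrence behind Eqs. 4–5 concrete, the following sketch applies a truncated Chebyshev filter to a signal $x$; it is a schematic re-implementation under our own naming, not code from Defferrard et al. (2016):

```python
import numpy as np
import scipy.sparse as sp

def chebyshev_filter(L, x, theta, lmax):
    """Approximate spectral filtering, Eq. 5: sum_k theta_k T_k(L~) x.

    L: (N, N) sparse normalized graph Laplacian
    x: (N,) signal (one scalar per node)
    theta: sequence of K+1 Chebyshev coefficients
    lmax: largest eigenvalue of L
    """
    N = L.shape[0]
    L_tilde = (2.0 / lmax) * L - sp.eye(N)  # rescale spectrum into [-1, 1]
    Tx_prev, Tx = x, L_tilde @ x            # T_0(L~) x  and  T_1(L~) x
    out = theta[0] * Tx_prev
    for k in range(1, len(theta)):
        out = out + theta[k] * Tx
        # Chebyshev recurrence: T_{k+1}(x) = 2 x T_k(x) - T_{k-1}(x)
        Tx, Tx_prev = 2 * (L_tilde @ Tx) - Tx_prev, Tx
    return out  # each step costs one sparse matrix-vector product, O(|E|)
```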
2.2 LAYER-WISE LINEAR MODEL

A neural network model based on graph convolutions can therefore be built by stacking multiple convolutional layers of the form of Eq. 5, each layer followed by a point-wise non-linearity. Now, imagine we limited the layer-wise convolution operation to $K = 1$ (see Eq. 5), i.e. a function that is linear w.r.t. $L$ and therefore a linear function on the graph Laplacian spectrum.

In this way, we can still recover a rich class of convolutional filter functions by stacking multiple such layers, but we are not limited to the explicit parameterization given by, e.g., the Chebyshev polynomials. We intuitively expect that such a model can alleviate the problem of overfitting on local neighborhood structures for graphs with very wide node degree distributions, such as social networks, citation networks, knowledge graphs and many other real-world graph datasets. Additionally, for a fixed computational budget, this layer-wise linear formulation allows us to build deeper models, a practice that is known to improve modeling capacity on a number of domains (He et al., 2016).

In this linear formulation of a GCN we further approximate $\lambda_{\max} \approx 2$, as we can expect that neural network parameters will adapt to this change in scale during training. Under these approximations Eq. 5 simplifies to:

$$g_{\theta'} \star x \approx \theta'_0\, x + \theta'_1 (L - I_N)\, x = \theta'_0\, x - \theta'_1 D^{-\frac{1}{2}} A D^{-\frac{1}{2}} x\,, \qquad (6)$$

with two free parameters $\theta'_0$ and $\theta'_1$. The filter parameters can be shared over the whole graph. Successive application of filters of this form then effectively convolves the $k$-th-order neighborhood of a node, where $k$ is the number of successive filtering operations or convolutional layers in the neural network model.

In practice, it can be beneficial to constrain the number of parameters further to address overfitting and to minimize the number of operations (such as matrix multiplications) per layer. This leaves us with the following expression:

$$g_\theta \star x \approx \theta \left( I_N + D^{-\frac{1}{2}} A D^{-\frac{1}{2}} \right) x\,, \qquad (7)$$

with a single parameter $\theta = \theta'_0 = -\theta'_1$. Note that $I_N + D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$ now has eigenvalues in the range $[0, 2]$. Repeated application of this operator can therefore lead to numerical instabilities and exploding/vanishing gradients when used in a deep neural network model. To alleviate this problem, we introduce the following renormalization trick: $I_N + D^{-\frac{1}{2}} A D^{-\frac{1}{2}} \rightarrow \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}$, with $\tilde{A} = A + I_N$ and $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$.

We can generalize this definition to a signal $X \in \mathbb{R}^{N \times C}$ with $C$ input channels (i.e. a $C$-dimensional feature vector for every node) and $F$ filters or feature maps as follows:

$$Z = \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} X \Theta\,, \qquad (8)$$

where $\Theta \in \mathbb{R}^{C \times F}$ is now a matrix of filter parameters and $Z \in \mathbb{R}^{N \times F}$ is the convolved signal matrix. This filtering operation has complexity $\mathcal{O}(|\mathcal{E}| F C)$, as $\tilde{A} X$ can be efficiently implemented as a product of a sparse matrix with a dense matrix.

3 SEMI-SUPERVISED NODE CLASSIFICATION

Having introduced a simple, yet flexible model $f(X, A)$ for efficient information propagation on graphs, we can return to the problem of semi-supervised node classification. As outlined in the introduction, we can relax certain assumptions typically made in graph-based semi-supervised learning by conditioning our model $f(X, A)$ both on the data $X$ and on the adjacency matrix $A$ of the underlying graph structure. We expect this setting to be especially powerful in scenarios where the adjacency matrix contains information not present in the data $X$, such as citation links between documents in a citation network or relations in a knowledge graph. The overall model, a multi-layer GCN for semi-supervised learning, is schematically depicted in Figure 1.

[Figure 1: Left: Schematic depiction of a multi-layer Graph Convolutional Network (GCN) for semi-supervised learning with $C$ input channels and $F$ feature maps in the output layer. The graph structure (edges shown as black lines) is shared over layers, labels are denoted by $Y_i$. Right: t-SNE (Maaten & Hinton, 2008) visualization of hidden layer activations of a two-layer GCN trained on the Cora dataset (Sen et al., 2008) using 5% of labels. Colors denote document class.]

3.1 EXAMPLE

In the following, we consider a two-layer GCN for semi-supervised node classification on a graph with a symmetric adjacency matrix $A$ (binary or weighted). We first calculate $\hat{A} = \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}$ in a pre-processing step. Our forward model then takes the simple form:

$$Z = f(X, A) = \mathrm{softmax}\!\left( \hat{A}\ \mathrm{ReLU}\!\left( \hat{A} X W^{(0)} \right) W^{(1)} \right). \qquad (9)$$

Here, $W^{(0)} \in \mathbb{R}^{C \times H}$ is an input-to-hidden weight matrix for a hidden layer with $H$ feature maps. $W^{(1)} \in \mathbb{R}^{H \times F}$ is a hidden-to-output weight matrix. The softmax activation function, defined as $\mathrm{softmax}(x_i) = \frac{1}{\mathcal{Z}} \exp(x_i)$ with $\mathcal{Z} = \sum_i \exp(x_i)$, is applied row-wise.
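Written out, the forward pass of Eq. 9 is only a few lines. The sketch below assumes $\hat{A}$ has been pre-computed as described above (e.g. with the normalization steps from the earlier layer sketch); it is again illustrative rather than the released TensorFlow implementation:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # row-wise, numerically stable
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def gcn_forward(A_hat, X, W0, W1):
    """Two-layer GCN of Eq. 9: softmax(A_hat ReLU(A_hat X W0) W1).

    A_hat: pre-computed D~^-1/2 A~ D~^-1/2 (sparse or dense)
    X:     (N, C) input features;  W0: (C, H);  W1: (H, F)
    """
    H1 = np.maximum(A_hat @ X @ W0, 0)     # hidden layer with ReLU
    return softmax(A_hat @ H1 @ W1)        # (N, F) class distributions Z
```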
For semi-supervised multi-class classification, we then evaluate the cross-entropy error over all labeled examples:

$$\mathcal{L} = -\sum_{l \in \mathcal{Y}_L} \sum_{f=1}^{F} Y_{lf}\, \ln Z_{lf}\,, \qquad (10)$$

where $\mathcal{Y}_L$ is the set of node indices that have labels.

The neural network weights $W^{(0)}$ and $W^{(1)}$ are trained using gradient descent. In this work, we perform batch gradient descent using the full dataset for every training iteration, which is a viable option as long as datasets fit in memory. Using a sparse representation for $A$, the memory requirement is $\mathcal{O}(|\mathcal{E}|)$, i.e. linear in the number of edges. Stochasticity in the training process is introduced via dropout (Srivastava et al., 2014). We leave memory-efficient extensions with mini-batch stochastic gradient descent for future work.
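Eq. 10 is simply a cross-entropy restricted to the labeled nodes. A minimal sketch (the masking convention via an index array is our own):

```python
import numpy as np

def masked_cross_entropy(Z, Y, labeled_idx):
    """Eq. 10: L = -sum_{l in Y_L} sum_f Y_lf ln Z_lf.

    Z: (N, F) predicted class distributions from the GCN
    Y: (N, F) one-hot label matrix (rows of unlabeled nodes are ignored)
    labeled_idx: indices of labeled nodes (the set Y_L)
    """
    eps = 1e-12  # guard against log(0)
    return -np.sum(Y[labeled_idx] * np.log(Z[labeled_idx] + eps))
```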
3.2 IMPLEMENTATION

In practice, we make use of TensorFlow (Abadi et al., 2015) for an efficient GPU-based implementation of Eq. 9 using sparse-dense matrix multiplications. Code to reproduce our experiments is available at https://github.com/tkipf/gcn. The computational complexity of evaluating Eq. 9 is then $\mathcal{O}(|\mathcal{E}| C H F)$, i.e. linear in the number of graph edges.

4 RELATED WORK

Our model draws inspiration both from the field of graph-based semi-supervised learning and from recent work on neural networks that operate on graphs. In what follows, we provide a brief overview of related work in both fields.

4.1 GRAPH-BASED SEMI-SUPERVISED LEARNING

A large number of approaches for semi-supervised learning using graph representations have been proposed in recent years, most of which fall into two broad categories: methods that use some form of explicit graph Laplacian regularization and graph embedding-based approaches. Prominent examples of graph Laplacian regularization include label propagation (Zhu et al., 2003), manifold regularization (Belkin et al., 2006) and deep semi-supervised embedding (Weston et al., 2012).

Recently, attention has shifted to models that learn graph embeddings with methods inspired by the skip-gram model (Mikolov et al., 2013). DeepWalk (Perozzi et al., 2014) learns embeddings via the prediction of the local neighborhood of nodes, sampled from random walks on the graph. LINE (Tang et al., 2015) and node2vec (Grover & Leskovec, 2016) extend DeepWalk with more sophisticated random walk or breadth-first search schemes. For all these methods, however, a multi-step pipeline including random walk generation and semi-supervised training is required, where each step has to be optimized separately. Planetoid (Yang et al., 2016) alleviates this by injecting label information in the process of learning embeddings.

4.2 NEURAL NETWORKS ON GRAPHS

Neural networks that operate on graphs have previously been introduced in Gori et al. (2005); Scarselli et al. (2009) as a form of recurrent neural network. Their framework requires the repeated application of contraction maps as propagation functions until node representations reach a stable fixed point. This restriction was later alleviated in Li et al. (2016) by introducing modern practices for recurrent neural network training to the original graph neural network framework. Duvenaud et al. (2015) introduced a convolution-like propagation rule on graphs and methods for graph-level classification. Their approach requires learning node degree-specific weight matrices, which does not scale to large graphs with wide node degree distributions. Our model instead uses a single weight matrix per layer and deals with varying node degrees through an appropriate normalization of the adjacency matrix (see Section 3.1).

A related approach to node classification with a graph-based neural network was recently introduced in Atwood & Towsley (2016). They report $\mathcal{O}(N^2)$ complexity, limiting the range of possible applications. In a different yet related model, Niepert et al. (2016) convert graphs locally into sequences that are fed into a conventional 1D convolutional neural network, which requires the definition of a node ordering in a pre-processing step.

Our method is based on spectral graph convolutional neural networks, introduced in Bruna et al. (2014) and later extended by Defferrard et al. (2016) with fast localized convolutions. In contrast to these works, we consider here the task of transductive node classification within networks of significantly larger scale. We show that in this setting, a number of simplifications (see Section 2.2) can be introduced to the original frameworks of Bruna et al. (2014) and Defferrard et al. (2016) that improve scalability and classification performance in large-scale networks.

5 EXPERIMENTS

We test our model in a number of experiments: semi-supervised document classification in citation networks, semi-supervised entity classification in a bipartite graph extracted from a knowledge graph, an evaluation of various graph propagation models, and a run-time analysis on random graphs.

5.1 DATASETS

We closely follow the experimental setup in Yang et al. (2016). Dataset statistics are summarized in Table 1. In the citation network datasets—Citeseer, Cora and Pubmed (Sen et al., 2008)—nodes are documents and edges are citation links. Label rate denotes the number of labeled nodes that are used for training divided by the total number of nodes in each dataset. NELL (Carlson et al., 2010; Yang et al., 2016) is a bipartite graph dataset extracted from a knowledge graph with 55,864 relation nodes and 9,891 entity nodes.

Table 1: Dataset statistics, as reported in Yang et al. (2016).

Dataset  | Type             | Nodes  | Edges   | Classes | Features | Label rate
Citeseer | Citation network | 3,327  | 4,732   | 6       | 3,703    | 0.036
Cora     | Citation network | 2,708  | 5,429   | 7       | 1,433    | 0.052
Pubmed   | Citation network | 19,717 | 44,338  | 3       | 500      | 0.003
NELL     | Knowledge graph  | 65,755 | 266,144 | 210     | 5,414    | 0.001

Citation networks. We consider three citation network datasets: Citeseer, Cora and Pubmed (Sen et al., 2008). The datasets contain sparse bag-of-words feature vectors for each document and a list of citation links between documents. We treat the citation links as (undirected) edges and construct a binary, symmetric adjacency matrix $A$. Each document has a class label. For training, we only use 20 labels per class, but all feature vectors.
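Constructing the binary, symmetric adjacency matrix from a citation edge list can be sketched as follows (the function name and edge-list format are conventions of ours, not the released preprocessing code):

```python
import numpy as np
import scipy.sparse as sp

def citation_adjacency(edges, num_nodes):
    """Build a binary, symmetric adjacency matrix from citation links.

    edges: iterable of (i, j) document-index pairs (direction is discarded)
    """
    rows, cols = zip(*edges)
    A = sp.coo_matrix((np.ones(len(rows)), (rows, cols)),
                      shape=(num_nodes, num_nodes))
    A = A + A.T              # symmetrize: treat citations as undirected
    A.data[:] = 1.0          # clip reciprocal citations to binary entries
    return A.tocsr()
```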
NELL. NELL is a dataset extracted from the knowledge graph introduced in Carlson et al. (2010). A knowledge graph is a set of entities connected with directed, labeled edges (relations). We follow the pre-processing scheme described in Yang et al. (2016). We assign separate relation nodes $r_1$ and $r_2$ for each entity pair $(e_1, r, e_2)$ as $(e_1, r_1)$ and $(e_2, r_2)$. Entity nodes are described by sparse feature vectors. We extend the number of features in NELL by assigning a unique one-hot representation for every relation node, effectively resulting in a 61,278-dim sparse feature vector per node. The semi-supervised task here considers the extreme case of only a single labeled example per class in the training set. We construct a binary, symmetric adjacency matrix from this graph by setting entries $A_{ij} = 1$ if one or more edges are present between nodes $i$ and $j$.

Random graphs. We simulate random graph datasets of various sizes for experiments where we measure training time per epoch. For a dataset with $N$ nodes we create a random graph assigning $2N$ edges uniformly at random. We take the identity matrix $I_N$ as input feature matrix $X$, thereby implicitly taking a featureless approach where the model is only informed about the identity of each node, specified by a unique one-hot vector. We add dummy labels $Y_i = 1$ for every node.
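Under our reading of this description, the random-graph datasets can be generated as in the following sketch (self-loops arising from the uniform sampling are not explicitly removed here):

```python
import numpy as np
import scipy.sparse as sp

def random_graph_dataset(N, seed=0):
    """N-node random graph with 2N uniformly random edges, X = I_N, Y_i = 1."""
    rng = np.random.default_rng(seed)
    rows = rng.integers(0, N, size=2 * N)
    cols = rng.integers(0, N, size=2 * N)
    A = sp.coo_matrix((np.ones(2 * N), (rows, cols)), shape=(N, N))
    A = ((A + A.T) > 0).astype(np.float32)   # binary, symmetric
    X = sp.eye(N, format="csr")              # featureless: one-hot node ids
    y = np.ones(N, dtype=np.int64)           # dummy labels
    return A.tocsr(), X, y
```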
5.2 EXPERIMENTAL SET-UP

Unless otherwise noted, we train a two-layer GCN as described in Section 3.1 and evaluate prediction accuracy on a test set of 1,000 labeled examples. We provide additional experiments using deeper models with up to 10 layers in Appendix B. We choose the same dataset splits as in Yang et al. (2016) with an additional validation set of 500 labeled examples for hyperparameter optimization (dropout rate for all layers, L2 regularization factor for the first GCN layer and number of hidden units). We do not use the validation set labels for training.

For the citation network datasets, we optimize hyperparameters on Cora only and use the same set of parameters for Citeseer and Pubmed. We train all models for a maximum of 200 epochs (training iterations) using Adam (Kingma & Ba, 2015) with a learning rate of 0.01 and early stopping with a window size of 10, i.e. we stop training if the validation loss does not decrease for 10 consecutive epochs. We initialize weights using the initialization described in Glorot & Bengio (2010) and accordingly (row-)normalize input feature vectors. On the random graph datasets, we use a hidden layer size of 32 units and omit regularization (i.e. neither dropout nor L2 regularization).

5.3 BASELINES

We compare against the same baseline methods as in Yang et al. (2016), i.e. label propagation (LP) (Zhu et al., 2003), semi-supervised embedding (SemiEmb) (Weston et al., 2012), manifold regularization (ManiReg) (Belkin et al., 2006) and skip-gram based graph embeddings (DeepWalk) (Perozzi et al., 2014). We omit TSVM (Joachims, 1999), as it does not scale to the large number of classes in one of our datasets.

We further compare against the iterative classification algorithm (ICA) proposed in Lu & Getoor (2003) in conjunction with two logistic regression classifiers, one for local node features alone and one for relational classification using local features and an aggregation operator as described in Sen et al. (2008). We first train the local classifier using all labeled training set nodes and use it to bootstrap class labels of unlabeled nodes for relational classifier training. We run iterative classification (relational classifier) with a random node ordering for 10 iterations on all unlabeled nodes (bootstrapped using the local classifier). The L2 regularization parameter and the aggregation operator (count vs. prop, see Sen et al. (2008)) are chosen based on validation set performance for each dataset separately.

Lastly, we compare against Planetoid (Yang et al., 2016), where we always choose their best-performing model variant (transductive vs. inductive) as a baseline.

6 RESULTS

6.1 SEMI-SUPERVISED NODE CLASSIFICATION

Results are summarized in Table 2. Reported numbers denote classification accuracy in percent. For ICA, we report the mean accuracy of 100 runs with random node orderings. Results for all other baseline methods are taken from the Planetoid paper (Yang et al., 2016). Planetoid* denotes the best model for the respective dataset out of the variants presented in their paper.

Table 2: Summary of results in terms of classification accuracy (in percent).

Method             | Citeseer   | Cora       | Pubmed     | NELL
ManiReg [3]        | 60.1       | 59.5       | 70.7       | 21.8
SemiEmb [28]       | 59.6       | 59.0       | 71.1       | 26.7
LP [32]            | 45.3       | 68.0       | 63.0       | 26.5
DeepWalk [22]      | 43.2       | 67.2       | 65.3       | 58.1
ICA [18]           | 69.1       | 75.1       | 73.9       | 23.1
Planetoid* [29]    | 64.7 (26s) | 75.7 (13s) | 77.2 (25s) | 61.9 (185s)
GCN (this paper)   | 70.3 (7s)  | 81.5 (4s)  | 79.0 (38s) | 66.0 (48s)
GCN (rand. splits) | 67.9 ± 0.5 | 80.1 ± 0.5 | 78.9 ± 0.7 | 58.4 ± 1.7

We further report wall-clock training time in seconds until convergence (in brackets) for our method (incl. evaluation of validation error) and for Planetoid. For the latter, we used an implementation provided by the authors (https://github.com/kimiyoung/planetoid) and trained on the same hardware (with GPU) as our GCN model. We trained and tested our model on the same dataset splits as in Yang et al. (2016) and report mean accuracy of 100 runs with random weight initializations. We used the following sets of hyperparameters for Citeseer, Cora and Pubmed: 0.5 (dropout rate), 5·10⁻⁴ (L2 regularization) and 16 (number of hidden units); and for NELL: 0.1 (dropout rate), 1·10⁻⁵ (L2 regularization) and 64 (number of hidden units).

In addition, we report performance of our model on 10 randomly drawn dataset splits of the same size as in Yang et al. (2016), denoted by GCN (rand. splits). Here, we report mean and standard error of prediction accuracy on the test set split in percent.

6.2 EVALUATION OF PROPAGATION MODEL

We compare different variants of our proposed per-layer propagation model on the citation network datasets. We follow the experimental set-up described in the previous section. Results are summarized in Table 3. The propagation model of our original GCN model is denoted by renormalization trick. In all other cases, the propagation model of both neural network layers is replaced with the model specified under propagation model. Reported numbers denote mean classification accuracy for 100 repeated runs with random weight matrix initializations. In case of multiple variables $\Theta_i$ per layer, we impose L2 regularization on all weight matrices of the first layer.

Table 3: Comparison of propagation models.

Description                    | Propagation model                                                   | Citeseer | Cora | Pubmed
Chebyshev filter (Eq. 5), K=3  | $\sum_{k=0}^{K} T_k(\tilde{L}) X \Theta_k$                          | 69.8     | 79.5 | 74.4
Chebyshev filter (Eq. 5), K=2  | (as above)                                                          | 69.6     | 81.2 | 73.8
1st-order model (Eq. 6)        | $X\Theta_0 + D^{-\frac{1}{2}} A D^{-\frac{1}{2}} X \Theta_1$        | 68.3     | 80.0 | 77.5
Single parameter (Eq. 7)       | $(I_N + D^{-\frac{1}{2}} A D^{-\frac{1}{2}}) X \Theta$              | 69.3     | 79.2 | 77.4
Renormalization trick (Eq. 8)  | $\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} X \Theta$ | 70.3  | 81.5 | 79.0
1st-order term only            | $D^{-\frac{1}{2}} A D^{-\frac{1}{2}} X \Theta$                      | 68.7     | 80.5 | 77.8
Multi-layer perceptron         | $X\Theta$                                                           | 46.5     | 55.1 | 71.4

6.3 TRAINING TIME PER EPOCH

[Figure 2: Wall-clock time per epoch for random graphs (log-log plot of number of edges, 1k–10M, vs. seconds per epoch, 10⁻³–10¹, for GPU and CPU). (*) indicates an out-of-memory error.]

Here, we report results for the mean training time per epoch (forward pass, cross-entropy calculation, backward pass) for 100 epochs on simulated random graphs, measured in seconds of wall-clock time. See Section 5.1 for a detailed description of the random graph dataset used in these experiments. We compare results on a GPU and on a CPU-only implementation in TensorFlow (Abadi et al., 2015); hardware used: 16-core Intel Xeon CPU E5-2640 v3 @ 2.60GHz, GeForce GTX TITAN X. Figure 2 summarizes the results.
7 DISCUSSION

7.1 SEMI-SUPERVISED MODEL

In the experiments demonstrated here, our method for semi-supervised node classification outperforms recent related methods by a significant margin. Methods based on graph-Laplacian regularization (Zhu et al., 2003; Belkin et al., 2006; Weston et al., 2012) are most likely limited due to their assumption that edges encode mere similarity of nodes. Skip-gram based methods on the other hand are limited by the fact that they are based on a multi-step pipeline which is difficult to optimize. Our proposed model can overcome both limitations, while still comparing favorably in terms of efficiency (measured in wall-clock time) to related methods. Propagation of feature information from neighboring nodes in every layer improves classification performance in comparison to methods like ICA (Lu & Getoor, 2003), where only label information is aggregated.

We have further demonstrated that the proposed renormalized propagation model (Eq. 8) offers both improved efficiency (fewer parameters and operations, such as multiplications or additions) and better predictive performance on a number of datasets compared to a naïve 1st-order model (Eq. 6) or higher-order graph convolutional models using Chebyshev polynomials (Eq. 5).

7.2 LIMITATIONS AND FUTURE WORK

Here, we describe several limitations of our current model and outline how these might be overcome in future work.

Memory requirement. In the current setup with full-batch gradient descent, memory requirement grows linearly in the size of the dataset. We have shown that for large graphs that do not fit in GPU memory, training on CPU can still be a viable option. Mini-batch stochastic gradient descent can alleviate this issue. The procedure of generating mini-batches, however, should take into account the number of layers in the GCN model, as the $K$-th-order neighborhood for a GCN with $K$ layers has to be stored in memory for an exact procedure. For very large and densely connected graph datasets, further approximations might be necessary.

Directed edges and edge features. Our framework currently does not naturally support edge features and is limited to undirected graphs (weighted or unweighted). Results on NELL however show that it is possible to handle both directed edges and edge features by representing the original directed graph as an undirected bipartite graph with additional nodes that represent edges in the original graph (see Section 5.1 for details).

Limiting assumptions. Through the approximations introduced in Section 2, we implicitly assume locality (dependence on the $K$-th-order neighborhood for a GCN with $K$ layers) and equal importance of self-connections vs. edges to neighboring nodes. For some datasets, however, it might be beneficial to introduce a trade-off parameter $\lambda$ in the definition of $\tilde{A}$:

$$\tilde{A} = A + \lambda I_N\,. \qquad (11)$$

This parameter now plays a similar role as the trade-off parameter between supervised and unsupervised loss in the typical semi-supervised setting (see Eq. 1). Here, however, it can be learned via gradient descent.
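A sketch of this variant: lam below corresponds to $\lambda$ in Eq. 11 and is an ordinary scalar here; in an automatic-differentiation framework it would instead be registered as a trainable parameter.

```python
import numpy as np
import scipy.sparse as sp

def normalized_adjacency(A, lam=1.0):
    """Renormalized propagation matrix with A~ = A + lam * I_N (Eq. 11).

    lam = 1 recovers the standard renormalization trick; letting lam be
    learned weights self-connections against edges to neighboring nodes.
    """
    N = A.shape[0]
    A_tilde = A + lam * sp.eye(N)
    d = np.asarray(A_tilde.sum(axis=1)).ravel()
    D_inv_sqrt = sp.diags(d ** -0.5)
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt
```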
8 CONCLUSION

We have introduced a novel approach for semi-supervised classification on graph-structured data. Our GCN model uses an efficient layer-wise propagation rule that is based on a first-order approximation of spectral convolutions on graphs. Experiments on a number of network datasets suggest that the proposed GCN model is capable of encoding both graph structure and node features in a way useful for semi-supervised classification. In this setting, our model outperforms several recently proposed methods by a significant margin, while being computationally efficient.

ACKNOWLEDGMENTS

We would like to thank Christos Louizos, Taco Cohen, Joan Bruna, Zhilin Yang, Dave Herman, Pramod Sinha and Abdul-Saboor Sheikh for helpful discussions. This research was funded by SAP.

REFERENCES

Martín Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015.

James Atwood and Don Towsley. Diffusion-convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), 2016.

Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research (JMLR), 7(Nov):2399–2434, 2006.

Ulrik Brandes, Daniel Delling, Marco Gaertler, Robert Görke, Martin Hoefer, Zoran Nikoloski, and Dorothea Wagner. On modularity clustering. IEEE Transactions on Knowledge and Data Engineering, 20(2):172–188, 2008.

Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. In International Conference on Learning Representations (ICLR), 2014.

Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R. Hruschka Jr, and Tom M. Mitchell. Toward an architecture for never-ending language learning. In AAAI, volume 5, pp. 3, 2010.

Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems (NIPS), 2016.

Brendan L. Douglas. The Weisfeiler-Lehman method and graph isomorphism testing. arXiv preprint arXiv:1101.5211, 2011.

David K. Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P. Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems (NIPS), pp. 2224–2232, 2015.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, volume 9, pp. 249–256, 2010.

Marco Gori, Gabriele Monfardini, and Franco Scarselli. A new model for learning in graph domains. In Proceedings of the 2005 IEEE International Joint Conference on Neural Networks, volume 2, pp. 729–734. IEEE, 2005.

Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2016.

David K. Hammond, Pierre Vandergheynst, and Rémi Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30(2):129–150, 2011.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

Thorsten Joachims. Transductive inference for text classification using support vector machines. In International Conference on Machine Learning (ICML), volume 99, pp. 200–209, 1999.

Diederik P. Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015.

Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. In International Conference on Learning Representations (ICLR), 2016.
Qing Lu and Lise Getoor. Link-based classification. In International Conference on Machine Learning (ICML), volume 3, pp. 496–503, 2003.

Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research (JMLR), 9(Nov):2579–2605, 2008.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems (NIPS), pp. 3111–3119, 2013.

Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In International Conference on Machine Learning (ICML), 2016.

Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 701–710. ACM, 2014.

Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2009.

Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI Magazine, 29(3):93, 2008.

Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research (JMLR), 15(1):1929–1958, 2014.

Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. LINE: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, pp. 1067–1077. ACM, 2015.

Boris Weisfeiler and A. A. Lehmann. A reduction of a graph to a canonical form and an algebra arising during this reduction. Nauchno-Technicheskaya Informatsia, 2(9):12–16, 1968.

Jason Weston, Frédéric Ratle, Hossein Mobahi, and Ronan Collobert. Deep learning via semi-supervised embedding. In Neural Networks: Tricks of the Trade, pp. 639–655. Springer, 2012.

Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. In International Conference on Machine Learning (ICML), 2016.

Wayne W. Zachary. An information flow model for conflict and fission in small groups. Journal of Anthropological Research, pp. 452–473, 1977.

Dengyong Zhou, Olivier Bousquet, Thomas Navin Lal, Jason Weston, and Bernhard Schölkopf. Learning with local and global consistency. In Advances in Neural Information Processing Systems (NIPS), volume 16, pp. 321–328, 2004.

Xiaojin Zhu, Zoubin Ghahramani, and John Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In International Conference on Machine Learning (ICML), volume 3, pp. 912–919, 2003.

A RELATION TO WEISFEILER-LEHMAN ALGORITHM

A neural network model for graph-structured data should ideally be able to learn representations of nodes in a graph, taking both the graph structure and feature description of nodes into account.
A well-studied framework for the unique assignment of node labels given a graph and (optionally) discrete initial node labels is provided by the 1-dim Weisfeiler-Lehman (WL-1) algorithm (Weisfeiler & Lehmann, 1968):

Algorithm 1: WL-1 algorithm (Weisfeiler & Lehmann, 1968)
  Input: Initial node coloring $(h_1^{(0)}, h_2^{(0)}, \dots, h_N^{(0)})$
  Output: Final node coloring $(h_1^{(T)}, h_2^{(T)}, \dots, h_N^{(T)})$
  $t \leftarrow 0$
  repeat
    for $v_i \in \mathcal{V}$ do
      $h_i^{(t+1)} \leftarrow \mathrm{hash}\big( \sum_{j \in \mathcal{N}_i} h_j^{(t)} \big)$
    $t \leftarrow t + 1$
  until stable node coloring is reached

Here, $h_i^{(t)}$ denotes the coloring (label assignment) of node $v_i$ (at iteration $t$) and $\mathcal{N}_i$ is its set of neighboring node indices (irrespective of whether the graph includes self-connections for every node or not). $\mathrm{hash}(\cdot)$ is a hash function. For an in-depth mathematical discussion of the WL-1 algorithm see, e.g., Douglas (2011).

We can replace the hash function in Algorithm 1 with a neural network layer-like differentiable function with trainable parameters as follows:

$$h_i^{(l+1)} = \sigma\!\left( \sum_{j \in \mathcal{N}_i} \frac{1}{c_{ij}}\, h_j^{(l)} W^{(l)} \right), \qquad (12)$$

where $c_{ij}$ is an appropriately chosen normalization constant for the edge $(v_i, v_j)$. Further, we can take $h_i^{(l)}$ now to be a vector of activations of node $i$ in the $l$-th neural network layer. $W^{(l)}$ is a layer-specific weight matrix and $\sigma(\cdot)$ denotes a differentiable, non-linear activation function.

By choosing $c_{ij} = \sqrt{d_i d_j}$, where $d_i = |\mathcal{N}_i|$ denotes the degree of node $v_i$, we recover the propagation rule of our Graph Convolutional Network (GCN) model in vector form (see Eq. 2); note that we here implicitly assume that self-connections have already been added to every node in the graph (for a clutter-free notation).

This—loosely speaking—allows us to interpret our GCN model as a differentiable and parameterized generalization of the 1-dim Weisfeiler-Lehman algorithm on graphs.

A.1 NODE EMBEDDINGS WITH RANDOM WEIGHTS

From the analogy with the Weisfeiler-Lehman algorithm, we can understand that even an untrained GCN model with random weights can serve as a powerful feature extractor for nodes in a graph. As an example, consider the following 3-layer GCN model:

$$Z = \tanh\!\left( \hat{A}\, \tanh\!\left( \hat{A}\, \tanh\!\left( \hat{A} X W^{(0)} \right) W^{(1)} \right) W^{(2)} \right), \qquad (13)$$

with weight matrices $W^{(l)}$ initialized at random using the initialization described in Glorot & Bengio (2010). $\hat{A}$, $X$ and $Z$ are defined as in Section 3.1.

We apply this model on Zachary's karate club network (Zachary, 1977). This graph contains 34 nodes, connected by 154 (undirected and unweighted) edges. Every node is labeled by one of four classes, obtained via modularity-based clustering (Brandes et al., 2008). See Figure 3a for an illustration.

[Figure 3: Left: Zachary's karate club network (Zachary, 1977), colors denote communities obtained via modularity-based clustering (Brandes et al., 2008). Right: Embeddings obtained from an untrained 3-layer GCN model (Eq. 13) with random weights applied to the karate club network. Best viewed on a computer screen.]

We take a featureless approach by setting $X = I_N$, where $I_N$ is the $N$ by $N$ identity matrix. $N$ is the number of nodes in the graph. Note that nodes are randomly ordered (i.e. the ordering contains no information). Furthermore, we choose a hidden layer dimensionality of 4 and a two-dimensional output (so that the output can immediately be visualized in a 2-dim plot). We originally experimented with a hidden layer dimensionality of 2 (i.e. the same as the output layer), but observed that a dimensionality of 4 resulted in less frequent saturation of $\tanh(\cdot)$ units and therefore visually more pleasing results.

Figure 3b shows a representative example of node embeddings (outputs $Z$) obtained from an untrained GCN model applied to the karate club network. These results are comparable to embeddings obtained from DeepWalk (Perozzi et al., 2014), which uses a more expensive unsupervised training procedure.
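This experiment is easy to reproduce in outline. The sketch below uses networkx (≥ 2.8, for to_scipy_sparse_array) for the karate club graph and Glorot-style uniform initialization; the layer sizes follow the text (hidden dimensionality 4, output dimensionality 2), while all other details are our own simplifications:

```python
import numpy as np
import networkx as nx
import scipy.sparse as sp

def glorot(fan_in, fan_out, rng):
    s = np.sqrt(6.0 / (fan_in + fan_out))  # Glorot & Bengio (2010) range
    return rng.uniform(-s, s, size=(fan_in, fan_out))

G = nx.karate_club_graph()                 # 34 nodes, Zachary (1977)
A = nx.to_scipy_sparse_array(G, format="csr", dtype=float)
N = A.shape[0]
A_tilde = A + sp.eye(N)                    # add self-connections
d = np.asarray(A_tilde.sum(axis=1)).ravel()
A_hat = sp.diags(d ** -0.5) @ A_tilde @ sp.diags(d ** -0.5)

rng = np.random.default_rng(0)
X = np.eye(N)                              # featureless: X = I_N
Z = X
for fan_out in (4, 4, 2):                  # three tanh layers, Eq. 13
    Z = np.tanh(A_hat @ Z @ glorot(Z.shape[1], fan_out, rng))
# Z is now a (34, 2) embedding that can be scatter-plotted directly.
```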
A.2 SEMI-SUPERVISED NODE EMBEDDINGS

On this simple example of a GCN applied to the karate club network it is interesting to observe how embeddings react during training on a semi-supervised classification task. Such a visualization (see Figure 4) provides insights into how the GCN model can make use of the graph structure (and of features extracted from the graph structure at later layers) to learn embeddings that are useful for a classification task.

We consider the following semi-supervised learning setup: we add a softmax layer on top of our model (Eq. 13) and train using only a single labeled example per class (i.e. a total number of 4 labeled nodes). We train for 300 training iterations using Adam (Kingma & Ba, 2015) with a learning rate of 0.01 on a cross-entropy loss.

Figure 4 shows the evolution of node embeddings over a number of training iterations. The model succeeds in linearly separating the communities based on minimal supervision and the graph structure alone. A video of the full training process can be found on our website (http://tkipf.github.io/graph-convolutional-networks/).

[Figure 4: Evolution of karate club network node embeddings obtained from a GCN model after 25, 50, 75, 100, 200 and 300 semi-supervised training iterations. Colors denote class. Nodes of which labels were provided during training (one per class) are highlighted (grey outline). Grey links between nodes denote graph edges. Best viewed on a computer screen.]

B EXPERIMENTS ON MODEL DEPTH

In these experiments, we investigate the influence of model depth (number of layers) on classification performance. We report results on a 5-fold cross-validation experiment on the Cora, Citeseer and Pubmed datasets (Sen et al., 2008) using all labels. In addition to the standard GCN model (Eq. 2), we report results on a model variant where we use residual connections (He et al., 2016) between hidden layers to facilitate training of deeper models by enabling the model to carry over information from the previous layer's input:

$$H^{(l+1)} = \sigma\!\left( \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)} \right) + H^{(l)}\,. \qquad (14)$$

On each cross-validation split, we train for 400 epochs (without early stopping) using the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 0.01. Other hyperparameters are chosen as follows: 0.5 (dropout rate, first and last layer), 5·10⁻⁴ (L2 regularization, first layer) and 16 (number of units for each hidden layer). Results are summarized in Figure 5.

[Figure 5: Influence of model depth (number of layers, 1–10) on classification accuracy for Citeseer, Cora and Pubmed. Markers denote mean classification accuracy (training vs. testing) for 5-fold cross-validation; shaded areas denote standard error. Results are shown both for a standard GCN model (dashed lines) and a model with added residual connections (He et al., 2016) between hidden layers (solid lines).]
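For reference, the residual variant of Eq. 14 used in these depth experiments differs from the plain GCN layer only by a skip connection; a minimal sketch (the function name is ours, and the layer's input and output widths must match — 16 hidden units here):

```python
import numpy as np

def gcn_residual_layer(A_hat, H, W):
    """Eq. 14: H^(l+1) = sigma(A_hat H W) + H (widths of H and output match)."""
    return np.maximum(A_hat @ H @ W, 0) + H
```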
For the datasets considered here, best results are obtained with a 2- or 3-layer model. We observe that for models deeper than 7 layers, training without the use of residual connections can become difficult, as the effective context size for each node increases by the size of its $K$-th-order neighborhood (for a model with $K$ layers) with each additional layer. Furthermore, overfitting can become an issue as the number of parameters increases with model depth.
Under review as a conference paper at ICLR 2017

UNSUPERVISED PRETRAINING FOR SEQUENCE TO SEQUENCE LEARNING

Prajit Ramachandran, University of Illinois at Urbana-Champaign, prajitram@gmail.com (work done as an intern on Google Brain)
Peter J. Liu, Quoc V. Le, Google Brain, {peterjliu,qvl}@google.com

ABSTRACT

This work presents a general unsupervised learning method to improve the accuracy of sequence to sequence (seq2seq) models. In our method, the weights of the encoder and decoder of a seq2seq model are initialized with the pretrained weights of two language models and then fine-tuned with labeled data. We apply this method to challenging benchmarks in machine translation and abstractive summarization and find that it significantly improves the subsequent supervised models. Our main result is that the pretraining accelerates training and improves generalization of seq2seq models, achieving state-of-the-art results on the WMT English→German task, surpassing a range of methods using both phrase-based machine translation and neural machine translation. Our method achieves an improvement of 1.3 BLEU over the previous best models on both WMT'14 and WMT'15 English→German. On summarization, our method beats the supervised learning baseline.

1 INTRODUCTION

Sequence to sequence (seq2seq) models (Sutskever et al., 2014; Cho et al., 2014; Kalchbrenner & Blunsom, 2013; Allen, 1987; Ñeco & Forcada, 1997) are extremely effective on a variety of tasks that require a mapping between a variable-length input sequence and a variable-length output sequence. The main weakness of sequence to sequence models, and deep networks in general, lies in the fact that they can easily overfit when the amount of supervised training data is small.

In this work, we propose a simple and effective technique for using unsupervised pretraining to improve seq2seq models. Our proposal is to initialize both encoder and decoder networks with pretrained weights of two language models. These pretrained weights are then fine-tuned with the labeled corpus.

We benchmark this method on machine translation for English→German and abstractive summarization on CNN and Daily Mail articles. Our main result is that a seq2seq model, with pretraining, exceeds the strongest possible baseline in both neural machine translation and phrase-based machine translation. Our model obtains an improvement of 1.3 BLEU over the previous best models on both WMT'14 and WMT'15 English→German. On abstractive summarization, our method achieves competitive results to the strongest baselines.

We also perform an ablation study to understand the behaviors of the pretraining method. Our study confirms that among many other possible choices of using a language model in seq2seq with attention, the above proposal works best. Our study also shows that, for translation, the main gains come from the improved generalization due to the pretrained features, whereas for summarization the gains come from the improved optimization due to pretraining the encoder, which has been unrolled for hundreds of timesteps. On both tasks, our proposed method always improves generalization on the test sets.

2 UNSUPERVISED PRETRAINING FOR SEQUENCE TO SEQUENCE LEARNING

In the following section, we describe our basic unsupervised pretraining procedure for sequence to sequence learning and how to modify sequence to sequence learning to effectively make use of the pretrained weights. We then show several extensions to improve the basic model.
2.1 BASIC PROCEDURE

Given an input sequence $x_1, x_2, \dots, x_m$ and an output sequence $y_n, y_{n-1}, \dots, y_1$, the objective of sequence to sequence learning is to maximize the likelihood $p(y_n, y_{n-1}, \dots, y_1 \mid x_1, x_2, \dots, x_m)$. Common sequence to sequence learning methods decompose this objective as

$$p(y_n, y_{n-1}, \dots, y_1 \mid x_1, x_2, \dots, x_m) = \prod_{t=1}^{n} p(y_t \mid y_{t-1}, \dots, y_1; x_1, x_2, \dots, x_m)\,.$$

In sequence to sequence learning, an RNN encoder is used to represent $x_1, \dots, x_m$ as a hidden vector, which is given to an RNN decoder to produce the output sequence. Our method is based on the observation that without the encoder, the decoder essentially acts like a language model on the $y$'s. Similarly, the encoder with an additional output layer also acts like a language model. Thus it is natural to use trained language models to initialize the encoder and decoder.

Therefore, the basic procedure of our approach is to pretrain both the seq2seq encoder and decoder networks with language models, which can be trained on large amounts of unlabeled text data. This can be seen in Figure 1, where the parameters in the shaded boxes are pretrained. In the following we will describe the method in detail using machine translation as an example application.

[Figure 1: Pretrained sequence to sequence model. The red parameters are the encoder and the blue parameters are the decoder. All parameters in a shaded box are pretrained, either from the source side (light red) or target side (light blue) language model. Otherwise, they are randomly initialized.]

First, two monolingual datasets are collected, one for the source side language, and one for the target side language. A language model (LM) is trained on each dataset independently, giving an LM trained on the source side corpus and an LM trained on the target side corpus.

After the two language models are trained, a multi-layer seq2seq model M is constructed. The embedding and first LSTM layers of the encoder and decoder are initialized with the pretrained weights. To be even more efficient, the softmax of the decoder is initialized with the softmax of the pretrained target side LM.

2.2 IMPROVING THE MODEL

We also employ three additional methods to further improve the model above. The three methods are: a) monolingual language modeling losses, b) residual connections and c) attention over multiple layers (see Figure 2).

[Figure 2: Two improvements to the baseline model: (a) residual connection, and (b) attention over multiple layers.]

Monolingual language modeling losses: After the seq2seq model M is initialized with the two LMs, it is fine-tuned with a labeled dataset. To ensure that the model does not overfit the labeled data, we regularize the parameters that were pretrained by continuing to train with the monolingual language modeling losses. The seq2seq and language modeling losses are weighted equally.

Residual connections: As described, the input vector to the decoder softmax layer is a random vector because the high level (non-first) layers of the LSTM are randomly initialized. This slows down training and introduces random gradients to the pretrained parameters, reducing the effectiveness of pretraining. To circumvent this issue, we use a residual connection from the output of the first LSTM layer directly to the input of the softmax (see Figure 2-a).
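Schematically, this residual connection adds the pretrained first layer's output to the top layer's output before the softmax projection. The sketch below is a shape-level illustration only; all names and the assumption of equal layer widths are ours:

```python
import numpy as np

def decoder_logits(h_first, h_top, W_softmax, b_softmax):
    """Figure 2-a sketch: add the pretrained first LSTM layer's output to the
    top layer's output, so the softmax sees useful features even while the
    randomly initialized upper layers are still adapting during fine-tuning.
    h_first, h_top: (d,) hidden states (equal widths assumed here)."""
    return (h_top + h_first) @ W_softmax + b_softmax  # input to the softmax
```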
Attention over multiple layers: In all our models, we use an attention mechanism (Bahdanau et al., 2015), where the model attends over both the top and first layers (see Figure 2-b). More concretely, given a query vector $q_t$ from the decoder, encoder states from the first layer $h_1^1, \dots, h_T^1$, and encoder states from the last layer $h_1^N, \dots, h_T^N$, we compute the attention context vector $c_t$ as follows:

$$\alpha_i = \frac{\exp(q_t \cdot h_i^N)}{\sum_{j=1}^{T} \exp(q_t \cdot h_j^N)}\,, \qquad c_t^1 = \sum_{i=1}^{T} \alpha_i\, h_i^1\,, \qquad c_t^N = \sum_{i=1}^{T} \alpha_i\, h_i^N\,, \qquad c_t = [c_t^1; c_t^N]\,.$$

Note that the attention weights $\alpha_i$ are only computed once, using the top level encoder states.

We also experiment with passing the attention vector $c_t$ as input into the next timestep (Luong et al., 2015b). Instead of passing $c_t$ into the first LSTM layer, we pass it as input to the second LSTM layer by concatenating it with the output of the first LSTM layer.

We use all three improvements in our experiments. However, in general we notice that the benefits of the attention modifications are minor in comparison with the benefits of the additional language modeling objectives and residual connections.
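The same computation in NumPy form (an illustrative sketch with our own variable names; T is the source length and d the state dimensionality):

```python
import numpy as np

def multi_layer_attention(q_t, h_first, h_top):
    """Attention over both the first and top encoder layers (Figure 2-b).

    q_t:     (d,)   decoder query vector
    h_first: (T, d) encoder states from the first layer, h^1_1..h^1_T
    h_top:   (T, d) encoder states from the last layer,  h^N_1..h^N_T
    """
    scores = h_top @ q_t                  # alphas use the top layer only
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()           # softmax over source positions
    c1 = alpha @ h_first                  # context from the first layer
    cN = alpha @ h_top                    # context from the top layer
    return np.concatenate([c1, cN])       # c_t = [c^1_t ; c^N_t]
```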
3 EXPERIMENTS

In the following section, we apply our approach to two important tasks in seq2seq learning: machine translation and abstractive summarization. On each task, we compare against the previous best systems. We also perform ablation experiments to understand the behavior of each component of our method.

3.1 MACHINE TRANSLATION

Dataset and Evaluation: For machine translation, we evaluate our method on the WMT English→German task (Bojar et al., 2015). We used the WMT 14 training dataset, which is slightly smaller than the WMT 15 dataset. Because the dataset has some noisy examples, we used a language detection system to filter the training examples. Sentence pairs where either the source was not English or the target was not German were thrown away. This resulted in around 4 million training examples. Following Sennrich et al. (2015b), we use subword units (Sennrich et al., 2015a) with 89500 merge operations, giving a vocabulary size around 90000. The validation set is the concatenated newstest2012 and newstest2013, and our test sets are newstest2014 and newstest2015. Evaluation on the validation set was with case-sensitive BLEU (Papineni et al., 2002) on tokenized text using multi-bleu.perl. Evaluation on the test sets was with case-sensitive BLEU on detokenized text using mteval-v13a.pl. The monolingual training datasets are the News Crawl English and German corpora, each of which has more than a billion tokens.

Experimental settings: The language models were trained in the same fashion as Jozefowicz et al. (2016). We used a 1 layer 4096 dimensional LSTM with the hidden state projected down to 1024 units (Sak et al., 2014) and trained for one week on 32 Tesla K40 GPUs. Our seq2seq model was a 3 layer model, where the second and third layers each have 1000 hidden units. The monolingual objectives, residual connection, and the modified attention were all used. We used the Adam optimizer (Kingma & Ba, 2015) and train with asynchronous SGD on 16 GPUs for speed. We used a learning rate of 5e-5 which is multiplied by 0.8 every 50K steps after an initial 400K steps, gradient clipping with norm 5.0 (Pascanu et al., 2013), and dropout of 0.2 on non-recurrent connections (Zaremba et al., 2014). We used early stopping on validation set perplexity. A beam size of 10 was used for decoding. Our ensemble is constructed with the 5 best performing models on the validation set, which are trained with different hyperparameters.

Results: Table 1 shows the results of our method in comparison with other baselines. Our method achieves a new state-of-the-art for single model performance on both newstest2014 and newstest2015, significantly outperforming the competitive semi-supervised backtranslation technique (Sennrich et al., 2015b). Equally impressive is the fact that our best single model outperforms the previous state-of-the-art ensemble of 4 models. Our ensemble of 5 models matches or exceeds the previous best ensemble of 12 models.

Table 1: English→German performance on WMT test sets. Our pretrained model outperforms all other models. Note that the model without pretraining uses the LM objective.

System                                                | ensemble?   | newstest2014 | newstest2015
Phrase Based MT (Williams et al., 2016)               | -           | 21.9         | 23.7
Supervised NMT (Jean et al., 2015)                    | single      | -            | 22.4
Edit Distance Transducer NMT (Stahlberg et al., 2016) | single      | 21.7         | 24.1
Edit Distance Transducer NMT (Stahlberg et al., 2016) | ensemble 8  | 22.9         | 25.7
Backtranslation (Sennrich et al., 2015b)              | single      | 22.7         | 25.7
Backtranslation (Sennrich et al., 2015b)              | ensemble 4  | 23.8         | 26.5
Backtranslation (Sennrich et al., 2015b)              | ensemble 12 | 24.7         | 27.6
No pretraining                                        | single      | 21.3         | 24.3
Pretrained seq2seq                                    | single      | 24.0         | 27.0
Pretrained seq2seq                                    | ensemble 5  | 24.7         | 28.1

Ablation study: In order to better understand the effects of pretraining, we conducted an ablation study by modifying the pretraining scheme. Figure 3 shows the drop in validation BLEU of various ablations compared with the full model. The full model uses LMs trained with monolingual data to initialize the encoder and decoder, in addition to the language modeling objective. In the following, we interpret the findings of the study. Note that some findings are specific to the translation task.

[Figure 3: English→German ablation study measuring the difference in validation BLEU between various ablations and the full model; more negative is worse. Drops: pretrain on parallel corpus −2.1; no pretraining −2.0; only pretrain embeddings −2.0; no LM objective −2.0; only pretrain encoder −1.6; only pretrain embeddings & LSTM −1.6; only pretrain decoder −1.0; pretrain on Wikipedia −0.3. The full model uses LMs trained with monolingual data to initialize the encoder and decoder, plus the language modeling objective.]

Given the results from the ablation study, we can make the following observations:

- Pretraining the decoder is better than pretraining the encoder: only pretraining the encoder leads to a 1.6 BLEU point drop, while only pretraining the decoder leads to a 1.0 BLEU point drop.
- Pretrain as much as possible because the benefits compound: given the drops of no pretraining at all (−2.0) and only pretraining the encoder (−1.6), the additive estimate of the drop of only pretraining the decoder side is −2.0 − (−1.6) = −0.4; however the actual drop is −1.0, which is a much larger drop than the additive estimate.
- Pretraining the softmax is important: pretraining only the embeddings and first LSTM layer gives a large drop of 1.6 BLEU points.
- The language modeling objective is a strong regularizer: the drop in BLEU points from pretraining the entire model and not using the LM objective is as bad as using the LM objective without pretraining.
- Pretraining on a lot of unlabeled data is essential for learning to extract powerful features: if the model is initialized with LMs that are pretrained on the source part and target part of the parallel corpus, the drop in performance is as large as not pretraining at all. However, performance remains strong when pretrained on the large, non-news Wikipedia corpus.

To understand the contributions of unsupervised pretraining vs. supervised training, we track the performance of pretraining as a function of dataset size. For this, we trained a model with and without pretraining on random subsets of the English→German corpus. Both models use the additional LM objective. The results are summarized in Figure 4. When 100% of the labeled data is used, the gap between the pretrained and no-pretrain models is 2.0 BLEU points. However, that gap grows when less data is available. When trained on 20% of the labeled data, the gap becomes 3.8 BLEU points. This demonstrates that the pretrained models degrade less as the labeled dataset becomes smaller.

[Figure 4: Validation BLEU of pretraining vs. no pretraining when trained on a subset (20–100%) of the entire labeled dataset for English→German translation.]

3.2 ABSTRACTIVE SUMMARIZATION

Dataset and Evaluation: For a low-resource abstractive summarization task, we use the CNN/Daily Mail corpus from Hermann et al. (2015). Following Nallapati et al. (2016), we modify the data collection scripts to restore the bullet point summaries. The task is to predict the bullet point summaries from a news article. The dataset has fewer than 300K document-summary pairs. To compare against Nallapati et al. (2016), we used the anonymized corpus. However, for our ablation study, we used the non-anonymized corpus. (We encourage future researchers to use the non-anonymized version because it is a more realistic summarization setting with a larger vocabulary. Our numbers on the non-anonymized test set are 35.56 ROUGE-1, 14.60 ROUGE-2, and 25.08 ROUGE-L. We did not consider highlights as separate sentences.) We evaluate our system using full length ROUGE (Lin, 2004). For the anonymized corpus in particular, we considered each highlight as a separate sentence, following Nallapati et al. (2016). In this setting, we used the English Gigaword corpus (Napoles et al., 2012) as our larger, unlabeled "monolingual" corpus, although all data used in this task is in English.

Experimental settings: We use subword units (Sennrich et al., 2015a) with 31500 merges, resulting in a vocabulary size of about 32000. We use up to the first 600 tokens of the document and predict the entire summary. Only one language model is trained and it is used to initialize both the encoder and decoder, since the source and target languages are the same. However, the encoder and decoder are not tied. The LM is a one-layer LSTM of size 1024 trained in a similar fashion to Jozefowicz et al. (2016). For the seq2seq model, we use the same settings as the machine translation experiments. The only differences are that we use a 2 layer model with the second layer having 1024 hidden units, and that the learning rate is multiplied by 0.8 every 30K steps after an initial 100K steps.
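The stepwise decay described here can be captured in a small helper; the constants below follow the summarization settings in the text (the translation experiments used an initial 400K steps with 50K-step decay instead), and the function name is ours:

```python
def learning_rate(step, base_lr=5e-5, warm_steps=100_000,
                  decay_every=30_000, decay=0.8):
    """Hold the learning rate at base_lr for warm_steps, then multiply it
    by `decay` once every `decay_every` subsequent steps."""
    if step <= warm_steps:
        return base_lr
    return base_lr * decay ** ((step - warm_steps) // decay_every)
```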
Results: Table 2 summarizes our results on the anonymized version of the corpus. Our pretrained model is only able to match the previous baseline seq2seq of Nallapati et al. (2016). However, our model is a unidirectional LSTM while they use a bidirectional LSTM. They also use a longer context of 800 tokens, whereas we used a context of 600 tokens due to GPU memory issues. Furthermore, they use pretrained word2vec (Mikolov et al., 2013) vectors to initialize their word embeddings. As we show in our ablation study, just pretraining the embeddings itself gives a large improvement.

System                                                     ROUGE-1  ROUGE-2  ROUGE-L
Seq2seq + pretrained embeddings (Nallapati et al., 2016)   32.49    11.84    29.47
+ temporal attention (Nallapati et al., 2016)              35.46    13.30    32.65
Pretrained seq2seq                                         32.56    11.89    29.44

Table 2: Results on the anonymized CNN/Daily Mail dataset.

Ablation study: We performed an ablation study similar to the one performed on the machine translation model. The results are reported in Figure 5. Here we report the drops in ROUGE-1, ROUGE-2, and ROUGE-L on the non-anonymized validation set.

[Figure 5: bar chart of the difference in validation ROUGE-1, ROUGE-2, and ROUGE-L relative to the full model for the ablations No pretraining, Only pretrain decoder, No LM objective, Only pretrain embeddings, Only pretrain embeddings & LSTM, Only pretrain encoder, and Pretrain on parallel corpus.]

Figure 5: Summarization ablation study measuring the difference in validation ROUGE between various ablations and the full model. More negative is worse. The full model uses LMs trained with unlabeled data to initialize the encoder and decoder, plus the language modeling objective.

Given the results from our ablation study, we can make the following observations:

- Pretraining improves optimization: in contrast with the machine translation model, it is more beneficial to only pretrain the encoder than only the decoder of the summarization model. One interpretation is that pretraining enables the gradient to flow much further back in time than randomly initialized weights. This may also explain why pretraining on the parallel corpus is no worse than pretraining on a larger monolingual corpus.
- The language modeling objective is a strong regularizer: a model without the LM objective has a significant drop in ROUGE scores.

Human evaluation: As ROUGE may not be able to capture the quality of summarization, we also performed a small qualitative study to understand the human impression of the summaries produced by different models. We took 200 random documents and compared the performance of a pretrained and a non-pretrained system. The document, the gold summary, and the two system outputs were presented to a human evaluator, who was asked to rate each system output on a scale of 1-5, with 5 being the best score. The system outputs were presented in random order, and the evaluator did not know the identity of either output. The evaluator also noted whether there were repetitive phrases or sentences in either system's outputs. Unwanted repetition was also noticed by Nallapati et al. (2016).
Tables 3 and 4 show the results of the study. In both cases, the pretrained system outperforms the system without pretraining in a statistically significant manner. The better optimization enabled by pretraining improves the generated summaries and decreases unwanted repetition in the output.

NP > P    NP = P    NP < P
29        88        83

Table 3: The count of how often the no pretrain system (NP) achieves a higher, equal, and lower score than the pretrained system (P) in the side-by-side study, where the human evaluator gave each system a score from 1-5. The sign test gives a p-value of < 0.0001 for rejecting the null hypothesis that there is no difference in the score obtained by either system.

                          No pretrain
                          No repeats    Repeats
Pretrain   No repeats     67            65
           Repeats        24            44

Table 4: The count of how often the pretrain and no pretrain systems contain repeated phrases or sentences in their outputs in the side-by-side study. McNemar's test gives a p-value of < 0.0001 for rejecting the null hypothesis that the two systems repeat the same proportion of times. The pretrained system clearly repeats less than the system without pretraining.
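Both captions name standard tests that can be reproduced from the published counts alone. A minimal sketch, assuming scipy (>= 1.7) and statsmodels, neither of which the paper itself claims to use:

```python
from scipy.stats import binomtest
from statsmodels.stats.contingency_tables import mcnemar

# Sign test for Table 3: ties (NP = P, 88 documents) are discarded; under
# the null, NP wins a non-tied comparison with probability 0.5.
wins_np, wins_p = 29, 83
print(binomtest(wins_np, n=wins_np + wins_p, p=0.5).pvalue)  # p < 0.0001

# McNemar's test for Table 4: only the off-diagonal cells (65 and 24), the
# documents where exactly one system repeats itself, carry information.
table = [[67, 65],
         [24, 44]]
print(mcnemar(table, exact=True).pvalue)  # p < 0.0001
```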
4 RELATED WORK

Unsupervised pretraining has been intensively studied in the past years, most notably in the work by Dahl et al. (2012), who found that pretraining with deep belief networks improved feedforward acoustic models. More recent acoustic models have found pretraining unnecessary (Xiong et al., 2016; Zhang et al., 2016; Chan et al., 2015), probably because the reconstruction objective of deep belief networks is too easy. In contrast, we find that pretraining language models by next step prediction significantly improves seq2seq on challenging real world datasets.

Despite its appeal, unsupervised learning is rarely shown to improve supervised training. Dai & Le (2015) was amongst the rare studies which showed the benefits of pretraining in a semi-supervised learning setting. Their method is similar to our method, except that they did not have a decoder network and thus could not apply it to seq2seq learning. Similarly, Zhang & Zong (2016) found it useful to add an additional task of sentence reordering of source-side monolingual data for neural machine translation. Various forms of transfer or multitask learning with the seq2seq framework also have the flavor of our algorithm (Zoph et al., 2016; Luong et al., 2015a; Firat et al., 2016).

Perhaps most closely related to our method is the work by Gulcehre et al. (2015), who combined a language model with an already trained seq2seq model by fine-tuning additional deep output layers. Empirically, their method produces small improvements over the supervised baseline. We suspect that their method does not produce significant gains because (i) the models are trained independently of each other and are not jointly fine-tuned, (ii) the LM is combined with the seq2seq model after the last layer, wasting the benefit of the low-level LM features, and (iii) the LM is only used on the decoder side. Venugopalan et al. (2016) addressed (i) but still experienced minor improvements; using pretrained GloVe embedding vectors (Pennington et al., 2014) had more impact.

Related to our approach in principle is the work by Chen et al. (2016), who proposed a two-term, theoretically motivated unsupervised objective for unpaired input-output samples. Though they did not apply their method to seq2seq learning, their framework can be modified to do so. In that case, the first term pushes the output to be highly probable under some scoring model, and the second term ensures that the output depends on the input. In the seq2seq setting, we interpret the first term as a pretrained language model scoring the output sequence. In our work, we fold the pretrained language model into the decoder. We believe that using the pretrained language model only for scoring is less efficient than using all the pretrained weights. Our use of labeled examples satisfies the second term. These connections provide a theoretical grounding for our work.

In our experiments, we benchmark our method on machine translation, where other unsupervised methods have been shown to give promising results (Sennrich et al., 2015b; Cheng et al., 2016). In backtranslation (Sennrich et al., 2015b), the trained model is used to decode unlabeled data to yield extra labeled data. One can argue that this method may not have a natural analogue to other tasks such as summarization. We note that their technique is complementary to ours, and may lead to additional gains in machine translation. The method of using autoencoders in Cheng et al. (2016) is promising, though it can be argued that autoencoding is an easy objective and language modeling may force the unsupervised models to learn better features.

5 CONCLUSION

We presented a novel unsupervised pretraining method to improve sequence to sequence learning. The method can aid in both generalization and optimization. Our scheme involves pretraining two language models in the source and target domain, and initializing the embeddings, first LSTM layers, and softmax of a sequence to sequence model with the weights of the language models. Using our method, we achieved state-of-the-art machine translation results on both WMT'14 and WMT'15 English to German.

A key advantage of this technique is that it is flexible and can be applied to a large variety of tasks, such as summarization, where it surpasses the supervised learning baseline.

ACKNOWLEDGMENTS

We thank George Dahl, Andrew Dai, Laurent Dinh, Stephan Gouws, Geoffrey Hinton, Rafal Jozefowicz, Pooya Khorrami, Phillip Louis, Ramesh Nallapati, Arvind Neelakantan, Xin Pan, Abi See, Rico Sennrich, Luke Vilnis, Yuan Yu and the Google Brain team for their help with the project.

REFERENCES

Robert B. Allen. Several studies on natural language and back-propagation. IEEE First International Conference on Neural Networks, 1987.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.

Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. Findings of the 2015 workshop on statistical machine translation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, 2015.

William Chan, Navdeep Jaitly, Quoc V. Le, and Oriol Vinyals. Listen, attend and spell. arXiv preprint arXiv:1508.01211, 2015.

Jianshu Chen, Po-Sen Huang, Xiaodong He, Jianfeng Gao, and Li Deng. Unsupervised learning of predictors from unpaired input-output samples. arXiv preprint arXiv:1606.04646, 2016.

Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. Semi-supervised learning for neural machine translation. arXiv preprint arXiv:1606.04596, 2016.

Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, 2014.
G. E. Dahl, D. Yu, L. Deng, and A. Acero. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Transactions on Audio, Speech, and Language Processing, 20(1):30-42, 2012. ISSN 1558-7916.

Andrew M. Dai and Quoc V. Le. Semi-supervised sequence learning. In NIPS, 2015.

Orhan Firat, Baskaran Sankaran, Yaser Al-Onaizan, Fatos T. Yarman-Vural, and Kyunghyun Cho. Zero-resource translation with multi-lingual neural machine translation. arXiv preprint arXiv:1606.04164, 2016.

Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. On using monolingual corpora in neural machine translation. arXiv preprint arXiv:1503.03535, 2015.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In NIPS, 2015.

Sébastien Jean, Orhan Firat, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. Montreal neural machine translation systems for WMT'15. In Proceedings of the Tenth Workshop on Statistical Machine Translation, 2015.

Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.

Nal Kalchbrenner and Phil Blunsom. Recurrent continuous translation models. In EMNLP, 2013.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.

Chin-Yew Lin. ROUGE: a package for automatic evaluation of summaries. In Proceedings of the Workshop on Text Summarization Branches Out (WAS 2004), 2004.

Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. Multi-task sequence to sequence learning. In ICLR, 2015a.

Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. In EMNLP, 2015b.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In NIPS, 2013.

Ramesh Nallapati, Bing Xiang, and Bowen Zhou. Sequence-to-sequence RNNs for text summarization. arXiv preprint arXiv:1602.06023, 2016.

Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. Annotated gigaword. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction. ACL, 2012.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: A method for automatic evaluation of machine translation. In ACL, 2002.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In ICML, 2013.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global vectors for word representation. In EMNLP, 2014.

Hasim Sak, Andrew W. Senior, and Françoise Beaufays. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In INTERSPEECH, 2014.

Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909, 2015a.

Rico Sennrich, Barry Haddow, and Alexandra Birch. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709, 2015b.

Felix Stahlberg, Eva Hasler, and Bill Byrne. The edit distance transducer in action: The University of Cambridge English-German system at WMT16. In Proceedings of the First Conference on Machine Translation, pp. 377-384, Berlin, Germany, August 2016. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/W/W16/W16-2324.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014.

Subhashini Venugopalan, Lisa Anne Hendricks, Raymond Mooney, and Kate Saenko. Improving LSTM-based video description with linguistic knowledge mined from text. arXiv preprint arXiv:1604.01729, 2016.

Philip Williams, Rico Sennrich, Maria Nadejde, Matthias Huck, Barry Haddow, and Ondřej Bojar. Edinburgh's statistical machine translation systems for WMT16. In Proceedings of the First Conference on Machine Translation, pp. 399-410, Berlin, Germany, August 2016. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/W/W16/W16-2327.

W. Xiong, Jasha Droppo, Xuedong Huang, Frank Seide, Mike Seltzer, Andreas Stolcke, Dong Yu, and Geoffrey Zweig. Achieving human parity in conversational speech recognition. arXiv preprint arXiv:1610.05256, 2016.

Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.

Jiajun Zhang and Chengqing Zong. Exploiting source-side monolingual data in neural machine translation. In EMNLP, 2016.

Yu Zhang, William Chan, and Navdeep Jaitly. Very deep convolutional networks for end-to-end speech recognition. arXiv preprint arXiv:1610.03022, 2016. URL http://arxiv.org/abs/1610.03022.

Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. Transfer learning for low-resource neural machine translation. In EMNLP, 2016.

Ramón P. Ñeco and Mikel L. Forcada. Asynchronous translations with recurrent neural nets. Neural Networks, 1997.

APPENDIX

SELECTED SUMMARIZATION OUTPUTS

Source Document
( cnn ) like phone booths and typewriters , record stores are a vanishing breed – another victim of the digital age . camelot music . virgin megastores . wherehouse music . tower records . all of them gone . corporate america has largely abandoned brick - and - mortar music retailing to a scattering of independent stores , many of them in scruffy urban neighborhoods . and that s not necessarily a bad thing . yes , it s harder in the spotify era to find a place to go buy physical music . but many of the remaining record stores are succeeding – even thriving – by catering to a passionate core of customers and collectors . on saturday , hundreds of music retailers will hold events to commemorate record store day , an annual celebration of , well , your neighborhood record store . many stores will host live performances , drawings , book signings , special sales of rare or autographed vinyl and other happenings . some will even serve beer . to their diehard customers , these places are more than mere stores : they are cultural institutions that celebrate music history ( the entire duran duran oeuvre , all in one place ! ) , display artifacts ( aretha franklin on vinyl ! ) , and nurture the local music scene ( hey , here s a cd by your brother s metal band ! ) . they also employ knowledgeable clerks who will be happy to debate the relative merits of blood on the tracks and blonde on blonde . or maybe , like jack black in high fidelity , just mock your lousy taste in music . so if you re a music geek , drop by . but you might think twice before asking if they stock i just called to say i love you .

Ground Truth summary
saturday is record store day , celebrated at music stores around the world .
many stores will host live performances , drawings and special sales of rare vinyl .

No pretrain
corporate america has largely abandoned brick - brick - mortar music . many of the remaining record stores are succeeding – even thriving – by catering to a passionate core of customers .

Pretrained
hundreds of music retailers will hold events to commemorate record store day . many stores will host live performances , drawings , book signings , special sales of rare or autographed vinyl .

Table 5: The pretrained model outputs a highly informative summary, while the no pretrain model outputs irrelevant details.

Source Document
( cnn ) hey , look what i did . that small boast on social media can trigger a whirlwind that spins into real - life grief , as a texas veterinarian found out after shooting a cat . dr. kristen lindsey allegedly shot an arrow into the back of an orange tabby s head and posted a proud photo this week on facebook of herself smiling , as she dangled its limp body by the arrow s shaft . lindsey added a comment , cnn affiliate kbtx reported . my first bow kill , lol . the only good feral tomcat is one with an arrow through it s head ! vet of the year award ... gladly accepted . callers rang the phones hot at washington county s animal clinic , where lindsey worked , to vent their outrage . web traffic crashed its website . high price of public shaming on the internet then an animal rescuer said that lindsey s prey was probably not a feral cat but the pet of an elderly couple , who called him tiger . he had gone missing on wednesday , the same day that lindsey posted the photo of the slain cat . cnn has not been able to confirm the claim . as the firestorm grew , lindsey wrote in the comments underneath her post : no i did not lose my job . lol . psshh . like someone would get rid of me . i m awesome ! that prediction was wrong . the clinic fired lindsey , covered her name on its marquee with duct tape , and publicly distanced itself from her actions . our goal now is to go on and try to fix our black eye and hope that people are reasonable and understand that those actions do nt anyway portray what we re for here at washington animal clinic , said dr. bruce buenger . we put our heart and soul into this place . the clinic told wbtx that lindsey was not available for comment . cnn is reaching out to her . she removed her controversial post then eventually shut down her facebook page . callers also complained to the brenham police department and washington county animal control , as her facebook post went viral . the sheriff s office in austin county , where the cat was apparently shot , is investigating , and lindsey could face charges . its dispatchers were overloaded with calls , the sheriff posted on facebook . we are asking you to please take it easy on our dispatchers . as soon as the investigation is complete , we will post the relevant information here on this page , the post read . animal rights activists are pushing for charges . animal cruelty must be taken seriously , and the guilty parties should be punished to the fullest extent of the law , said cat advocacy activist becky robinson . her organization , alley cat allies , is offering a $ 7,500 reward for evidence leading to the arrest and conviction of the person who shot the cat . but others stood up for lindsey . she s amazing . she s caring , said customer shannon stoddard . she s a good vet , so maybe her bad choice of posting something on facebook was not good .
but i do nt think she should be judged for it . she dropped off balloons at the animal clinic for lindsey with a thank you note . cnn s jeremy grisham contributed to this report .

Ground Truth summary
dr. kristen lindsey has since removed the post of her holding the dead cat by an arrow . her employer fired her ; the sheriff s office is investigating . activist offers $ 7,500 reward .

No pretrain
dr. kristen lindsey allegedly shot an arrow into the back of an orange orange tabby s head . it s the only good good tomcat is one with an arrow through it s head ! vet vet of the year award .

Pretrained
lindsey lindsey , a texas veterinarian , shot an arrow into the back of an orange tabby s head . she posted a photo of herself smiling , as she dangled its limp body by the arrow s shaft . lindsey could face charges , the sheriff s department says .

Table 6: The pretrained model outputs a highly relevant summary but makes a mistake on the feline executioner's name. The no pretrain model degenerates into irrelevant details and repeats itself.

Source Document
eugenie bouchard s run of poor form continued as the top seed was beaten 6 - 3 , 6 - 1 by american lauren davis in the second round at the family circle cup in charleston on wednesday . davis , 21 , had lost her only career meeting with bouchard , but was in control this time against the world no 7 . davis won nine of the final 11 games of the match and broke bouchard s serve twice in the final set to pull off the upset . eugenie bouchard fires down a serve during her second - round match at the family circle cup bouchard shows her frustrations during her straight - sets defeat by lauren davis on wednesday i ve never beaten her before , so i came out knowing i had nothing to lose , said davis , ranked 66th in the world . bouchard was a semi-finalist last year but had struggled in the lead - up to the charleston green - clay event , losing three of her last five matches to lower - ranked opponents . davis used precise groundstrokes to keep bouchard on her heels throughout the second set . davis broke bouchard s serve to love to take a 3 - 1 lead as the 21 - year - old canadian had a double fault , then sailed a forehand long to fall behind . two games later , davis hit a backhand to send back bouchard s 102mph serve for a winner down the left sideline and a 5 - 1 lead . davis hits a forehand on her way to an impressive win against the canadian top seed bouchard has struggled recently , this time slumping to her fourth defeat in six matches the match ended on bouchard s forehand into the net , davis waving to the cheering crowd . bouchard said : as soon as it was over , definitely a bit of anger , but also kind of this confusion , slash , like quest to find what s wrong , like this kind of searching feeling that , ok , like i know something s off . i know something s not right . so i want to find it . i was definitely a little bit slow today , overpowered . usually , i m the one dominating . so it was definitely , just , i do nt know , just not good .

Ground Truth summary
eugenie bouchard suffered her fourth defeat in six matches . the canadian top seed lost to lauren davis at the family circle cup . world no 66 davis won 6 - 3 , 6 - 1 in the second round in charleston . davis won nine of the final 11 games of the match to seal victory . click here for all the latest news from charleston .

No pretrain
bouchard beat american lauren davis 6 - 3 , 6 - 1 in the second round of the family circle cup in charleston on wednesday .
bouchard had lost her only career meeting with bouchard but was in control this time against the world no 7 . bouchard won nine of the final 11 games of the match and broke bouchard s serve twice in the final set to pull off the upset .

Pretrained
eugenie bouchard was beaten 6 - 3 , 6 - 1 by american lauren davis in the second round . davis had lost her only career meeting with bouchard , but was in control this time against the world no 7 . davis hit a backhand to send back bouchard s 102mph serve for a winner down the left sideline .

Table 7: Both models output a relevant summary, but the no pretrain model uses the same name to refer to both players.

Source Document
( cnn ) mike rowe is coming to a river near you . sometimes , you hear about a person who makes you feel good about humanity , but bad about yourself , rowe says . on thursday s episode of somebody s got ta do it , rowe meets up with chad pregracke , the founder of living lands & waters , who does just that . pregracke wants to clean up the nation s rivers one piece of detritus at a time . his quota ? always more . read mike rowe s facebook post on how to break our litter habit . since he founded the nonprofit in 1998 at the ripe age of 23 , pregracke and more than 87,000 volunteers have collected 8.4 million pounds of trash from u.s. waterways . those efforts helped him earn the 2013 cnn hero of the year award , along with numerous other honors . wherever you are , no matter if there s a stream , a creek , a lake , whatever , that needs to be cleaned up , you can do it . just organize it and do it , he told cnn s anderson cooper after his win . pregracke also gives rowe a tour of the 150 - foot , solar - powered barge that the living lands & waters staff calls home during lengthy cleanups . the part - home , part - office , part - dumpster has seven bedrooms , two bathrooms , a classroom and a kitchen – and just happens to be made from a recycled strip club . according to the organization s latest annual report , pregracke has made it his mission in 2015 to remove 500,000 more pounds of trash . if you d like to help achieve this goal , visit his website to learn how to help : livinglandsandwaters.org / get - involved / .

Ground Truth summary
chad pregracke was the 2013 cnn hero of the year . mike rowe visited pregracke for an episode of somebody s got ta do it .

No pretrain
rowe meets up with chad pregracke , founder of living lands & waters . pregracke and more than 87,000 volunteers collected 8.4 million pounds of trash from u.s. waterways .

Pretrained
rowe is the founder of living lands & waters , who does just that . pregracke also gives rowe a tour of the 150 - foot barge that the living lands & waters gets .

Table 8: A failure case. The pretrained model outputs irrelevant details while the no pretrain model successfully summarizes the document.

SELECTED ENGLISH→GERMAN OUTPUTS

Source
Mayor Bloomberg told reporters that, because of that court order, the city had suspended the reopening of the public space and protesters were informed, however, that local laws do not allow them to re-install with camping shops and sleeping bags.

Ground Truth
Bürgermeister Bloomberg stellt vor der Presse klar , das aufgrund dieser richterlichen Anordnung die erneute Öffnung des Platzes für den Publikumsverkehr und die Demonstranten aufgehoben worden sei .
Die Demonstranten wies er darauf hin , dass die Stadtgesetze ihnen nicht erlaubten , sich erneut mit Zelten und Schlafsäcken an diesem Ort einzurichten .

No pretrain
Der Bürgermeister Bloomberg sagte den Reportern , dass die Stadt aufgrund dieser Gerichtsentscheidung die Wiedereröffnung des öffentlichen Raumes und die Information der Demonstranten ausgesetzt habe , dass die lokalen Gesetze ihnen nicht erlauben , mit den Campingplätzen und Schlafsäcken neu zu installieren .

Pretrained
Bürgermeister Bloomberg erklärte gegenüber Journalisten , dass die Stadt aufgrund dieser Gerichtsentscheidung die Wiedereröffnung des öffentlichen Raums ausgesetzt habe und dass die Demonstranten darüber informiert wurden , dass die örtlichen Gesetze es ihnen nicht erlauben würden , sich mit Campingplätzen und Schlafsälen neu zu installieren .

Table 9: The no pretrain model makes a complete mistranslation when outputting "und die Information der Demonstranten ausgesetzt habe". That translates to "the reopening of the public space and the information [noun] of the protesters were suspended", instead of informing the protesters. Furthermore, it wrongly separated the two sentences, so the first sentence has extra words and the second sentence is left without a subject. The pretrained model does not make any of these mistakes. However, both models make a vocabulary mistake with "zu installieren", which is typically only used to refer to installing software. A human evaluator fluent in both German and English said that the pretrained version was better.

Source
The low February temperatures, not only did they cause losses of millions for the agricultural sector, but they limited the possibilities of the state economy to grow, causing a contraction of the economic activity in general of 3.6 percent in the first half of the year, mainly supported by the historic fall of 31.16 per cent in agriculture, which affected the dynamics of other economic sectors.

Ground Truth
Die niedrigen Temperaturen im Februar verursachten nicht nur Verluste in Millionenhöhe in der Landwirtschaft , sondern steckten darüber hinaus dem Wachstum der Staatswirtschaft enge Grenzen und verursachten im ersten Vierteljahr einen allgemeinen Rückgang der Wirtschaftstätigkeit um 3,6 Prozent . Dieser geht hauptsächlich auf den historischen Abbau der landwirtschaftlichen Entwicklung um 31,16 Prozent zurück , der sich bremsend auf weitere Wirtschaftssektoren auswirkte .

No pretrain
Die niedrigen Temperaturen im Februar führten nicht nur zu Verlusten für die Landwirtschaft , sondern sie beschränkten die Möglichkeiten der staatlichen Wirtschaft , wachsen zu wachsen , wodurch die Wirtschaftstätigkeit insgesamt von 3,6 Prozent in der ersten Hälfte des Jahres , hauptsächlich durch den historischen Rückgang von 31.16 % in der Landwirtschaft , beeinflusst wurde , was die Dynamik anderer Wirtschaftssektoren betraf .

Pretrained
Die niedrigen Temperaturen im Februar führten nicht nur zu Verlusten von Millionen für den Agrarsektor , sondern beschränkten die Möglichkeiten der Staatswirtschaft , zu wachsen , was zu einer Schrumpfung der Wirtschaftstätigkeit im Allgemeinen von 3,6 Prozent in der ersten Hälfte des Jahres führte , hauptsächlich durch den historischen Einbruch von 316 Prozent in der Landwirtschaft , der die Dynamik anderer Wirtschaftsbereiche beeinflusst hatte .
Table 10: The human evaluator noted that the pretrained version is better, as it correctly captures the meaning and sentence structure of the middle. The no pretrain model misses translating the word "million", repeats itself in "wachsen zu wachsen", and puts the verb "beeinflusst wurde" in an unnatural position. However, the pretrained model makes a mistake in the percentage (316% instead of 31.16%).

Source
To facilitate the inception of the Second World War, they allowed bankers and politicians to create a latent conflict situation by saddling Germany with huge war reparations, thereby making a radicalist example of the impoverished masses, it remained only to introduce a sufficiently convincing culprit and a leader with a simple solution, while also creating a multi-racial Czechoslovakia with a strong German minority to play, and indeed did, the role of a fifth colony, once the war had been ignited.

Ground Truth
Um den Zweiten Weltkrieg einfacher entfachen zu können , ließen die Banker durch die Politik eine latente Konfliktsituation schaffen , indem sie Deutschland mit gigantischen Kriegsreparationen belegten ; dadurch schufen sie die Voraussetzung verarmter Massen , so dass sie den Deutschen nur noch einen ausreichend starken Führer unterjubeln mussten , der die Schuldigen benannte und einfache Lösungen anbot ; ein weiterer Faktor war die Schaffung des Vielvölkerstaates Tschechoslowakei mit einer starken deutschen Minderheit , die die Funktion einer fünften Kolonne einnehmen sollte und auch einnahm , um den Kriegsbrand zu entfachen .

No pretrain
Um die Gründung des Zweiten Weltkriegs zu erleichtern , ermöglichte es den Bankern und Politikern , eine latente Konfliktlage zu schaffen , indem sie Deutschland mit enormen Reparationsforderungen konfrontierte , wodurch ein radikalislamistisches Beispiel der verarmten Massen entstand , es blieb nur , einen ausreichend aussagekräftigen Schuldigen und einen Führer mit einer einfachen Lösung zu etablieren , während gleichzeitig eine multi-ethnische Tschechoslowakei mit einer starken deutschen Minderheit zu spielen war und tatsächlich die Rolle einer fünften Kolonie war .

Pretrained
Um die Einführung des Zweiten Weltkrieges zu erleichtern , ließen sie Banker und Politiker eine latente Konfliktlage schaffen , indem sie Deutschland mit riesigen Reparationszahlungen belieferten , wodurch ein radikalislamistisches Beispiel der verarmten Massen entstand , es blieb nur , einen ausreichend überzeugenden Schuldigen und einen Führer mit einer einfachen Lösung zu präsentieren , während gleichzeitig eine multiethnische Tschechoslowakei mit einer starken deutschen Minderheit geschaffen wurde , um zu spielen , und tatsächlich , die Rolle einer fünften Kolonie , sobald der Krieg entfacht worden war .
Table 11: An example where the English source is poorly worded. Both models output poor translations, but the evaluator noted that the pretrained version is still better than the no pretrain version. Interestingly, both models mistranslate "radical" as "radikalislamistisches", which means "radical Islam"; this is probably a bias in the training data.

Source
The total vote count will also be done if at the end of the ordinary calculation is established that the difference between the winner and the candidate placed on second position is equal to or less than one percentage point, as long as there is a request of the representative of the political party whose candidate came on the second position, case in which there will be excluded the electoral boxes that have been considered during the partial recount.

Ground Truth
Die Stimmenauszählung kann auch in ihrer Gesamtheit erfolgen , wenn nach Abschluss der ordentlichen Berechnung festgestellt wird , dass der Unterschied zwischen dem mutmaßlichen Gewinner und dem Kandidaten auf dem zweiten Platz gleich oder geringer als ein Prozent ist , vorausgesetzt es liegt ein ausdrücklicher Antrag von einem Vertreter der Partei , deren Kandidat Zweiter geworden ist , vor . In diesem Fall würden die Wahlpakete , die einer teilweisen Auszählung ausgesetzt wurden , ausgeschlossen .

No pretrain
Die gesamte Stimmenanzahl wird auch dann erreicht , wenn am Ende der ordentlichen Berechnung festgestellt wird , dass der Unterschied zwischen dem Sieger und dem Kandidaten , der auf der zweiten Position liegt , gleich oder weniger als einen Prozentpunkt beträgt , vorausgesetzt , dass der Vertreter der Partei , deren Kandidat auf der zweiten Position ist , der Fall ist , in dem die Wahlunterlagen , die während der teilweisen Rückzählung berücksichtigt wurden , ausgeschlossen werden .

Pretrained
Die Gesamtzahl der Stimmzettel wird auch dann durchgeführt , wenn am Ende der ordentlichen Berechnung festgestellt wird , dass der Unterschied zwischen dem Gewinner und dem auf den zweiten Platz platzierten Kandidaten gleich oder weniger als einen Prozentpunkt beträgt , solange es einen Antrag des Vertreters der politischen Partei gibt , dessen Kandidat auf die zweite Position kam , in dem es die Wahlzettel ausklammert , die während der Teilzählung berücksichtigt wurden .

Table 12: Another example where the English source is poorly worded. Both models get the structure right, but have a variety of problematic translations. Both models miss the meaning of "total vote count". They both also translate "electoral boxes" poorly: the no pretrain model calls it "electoral paperwork" while the pretrained model calls it "ballots". These failures may be because of the poorly worded English source. The human evaluator found them both equally poor.
Published as a conference paper at ICLR 2017

DELVING INTO TRANSFERABLE ADVERSARIAL EXAMPLES AND BLACK-BOX ATTACKS

Yanpei Liu, Xinyun Chen (Shanghai Jiao Tong University)*
Chang Liu, Dawn Song (University of California, Berkeley)

*Work done while visiting UC Berkeley.

ABSTRACT

An intriguing property of deep neural networks is the existence of adversarial examples, which can transfer among different architectures. These transferable adversarial examples may severely hinder deep neural network-based applications. Previous works mostly study transferability using small-scale datasets. In this work, we are the first to conduct an extensive study of the transferability over large models and a large-scale dataset, and we are also the first to study the transferability of targeted adversarial examples with their target labels. We study both non-targeted and targeted adversarial examples, and show that while transferable non-targeted adversarial examples are easy to find, targeted adversarial examples generated using existing approaches almost never transfer with their target labels. Therefore, we propose novel ensemble-based approaches to generating transferable adversarial examples. Using such approaches, we observe a large proportion of targeted adversarial examples that are able to transfer with their target labels for the first time. We also present some geometric studies to help understand the transferable adversarial examples. Finally, we show that the adversarial examples generated using ensemble-based approaches can successfully attack Clarifai.com, which is a black-box image classification system.

1 INTRODUCTION

Recent research has demonstrated that for a deep architecture, it is easy to generate adversarial examples, which are close to the original ones but are misclassified by the deep architecture (Szegedy et al. (2013); Goodfellow et al. (2014)). The existence of such adversarial examples may have severe consequences, which hinders vision-understanding-based applications, such as autonomous driving. Most of these studies require explicit knowledge of the underlying models. It remains an open question how to efficiently find adversarial examples for a black-box model.

Several works have demonstrated that some adversarial examples generated for one model may also be misclassified by another model. Such a property is referred to as transferability, which can be leveraged to perform black-box attacks. This property has been exploited by constructing a substitute of the black-box model, and generating adversarial instances against the substitute to attack the black-box system (Papernot et al. (2016a;b)). However, so far, transferability is mostly examined over small datasets, such as MNIST (LeCun et al. (1998)) and CIFAR-10 (Krizhevsky & Hinton (2009)). Transferability over large-scale datasets, such as ImageNet (Russakovsky et al. (2015)), has yet to be better understood.

In this work, we are the first to conduct an extensive study of the transferability of different adversarial instance generation strategies applied to different state-of-the-art models trained over a large-scale dataset. In particular, we study two types of adversarial examples: (1) non-targeted adversarial examples, which can be misclassified by a network, regardless of what the misclassified labels may be; and (2) targeted adversarial examples, which can be classified by a network as a target label. We examine several existing approaches searching for adversarial examples based on a single model.
While non-targeted adversarial examples are more likely to transfer, we observe few targeted adversarial examples that are able to transfer with their target labels.

We further propose a novel strategy to generate transferable adversarial images using an ensemble of multiple models. In our evaluation, we observe that this new strategy can generate non-targeted adversarial instances with better transferability than the other methods examined in this work. Also, for the first time, we observe a large proportion of targeted adversarial examples that are able to transfer with their target labels.

We study geometric properties of the models in our evaluation. In particular, we show that the gradient directions of different models are orthogonal to each other. We also show that decision boundaries of different models align well with each other, which partially illustrates why adversarial examples can transfer.

Last, we study whether generated adversarial images can attack Clarifai.com, a commercial company providing state-of-the-art image classification services. We have no knowledge about the training dataset and the types of models used by Clarifai.com; meanwhile, the label set of Clarifai.com is quite different from ImageNet's. We show that even in this case, both non-targeted and targeted adversarial images transfer to Clarifai.com. This is the first work documenting the success of generating both non-targeted and targeted adversarial examples for a black-box state-of-the-art online image classification system, whose model and training dataset are unknown to the attacker.

Contributions and organization. We summarize our main contributions as follows:

- For ImageNet models, we show that while existing approaches are effective at generating non-targeted transferable adversarial examples (Section 3), only a few targeted adversarial examples generated by existing methods can transfer (Section 4).
- We propose novel ensemble-based approaches to generate adversarial examples (Section 5). Our approaches enable a large portion of targeted adversarial examples to transfer among multiple models for the first time.
- We are the first to show that targeted adversarial examples generated for models trained on ImageNet can transfer to a black-box system, i.e., Clarifai.com, whose model, training data, and label set are unknown to us (Section 7). In particular, Clarifai.com's label set is very different from ImageNet's.
- We conduct the first analysis of geometric properties for large models trained over ImageNet (Section 6), and the results reveal several interesting findings, e.g., that the gradient directions of different models are orthogonal to each other.

In the following, we first discuss related work, and then present the background knowledge and experiment setup in Section 2. We then present each of our experiments and conclusions in the corresponding sections mentioned above.

Related work. Transferability of adversarial examples was first examined by Szegedy et al. (2013), which studied the transferability (1) between different models trained over the same dataset, and (2) between the same or different models trained over disjoint subsets of a dataset. However, Szegedy et al. (2013) only studied MNIST.

The study of transferability was followed by Goodfellow et al. (2014), which attributed the phenomenon of transferability to the reason that the adversarial perturbation is highly aligned with the weight vector of the model.
Again, this hypothesis was tested using the MNIST and CIFAR-10 datasets. We show that this is not the case for models trained over ImageNet.

Papernot et al. (2016a;b) examined constructing a substitute model to attack a black-box target model. To train the substitute model, they developed a technique that synthesizes a training set and annotates it by querying the target model for labels. They demonstrate that using this approach, black-box attacks are feasible towards machine learning services hosted by Amazon, Google, and MetaMind. Further, Papernot et al. (2016a) studied the transferability between deep neural networks and other models such as decision trees, kNN, etc.

Our work differs from Papernot et al. (2016a;b) in three aspects. First, in these works, only the model and the training process are a black box, but the training set and the test set are controlled by the attacker; in contrast, we attack Clarifai.com, whose model, training data, training process, and even the test label set are unknown to the attacker. Second, the datasets studied in these works are small scale, i.e., MNIST and GTSRB (Stallkamp et al. (2012)); in our work, we study the transferability over larger models and a larger dataset, i.e., ImageNet. Third, to attack black-box machine learning systems, we do not query the systems for constructing the substitute model ourselves.

In a concurrent and independent work, Moosavi-Dezfooli et al. (2016) showed the existence of a universal perturbation for each model, which can transfer across different images. They also show that the adversarial images generated using these universal perturbations can transfer across different models on ImageNet. However, they only examine non-targeted transferability, while our work studies both non-targeted and targeted transferability over ImageNet.

2 ADVERSARIAL DEEP LEARNING AND TRANSFERABILITY

2.1 THE ADVERSARIAL DEEP LEARNING PROBLEM

We assume a classifier $f(x)$ outputs a category (or a label) as the prediction. Given an original image $x$ with ground truth label $y$, the adversarial deep learning problem is to seek adversarial examples for the classifier $f(x)$. Specifically, we consider two classes of adversarial examples. A non-targeted adversarial example $x^\star$ is an instance that is close to $x$, in which case $x^\star$ should have the same ground truth as $x$, while $f(x^\star) \neq y$. For the problem to be non-trivial, we assume $f(x) = y$ without loss of generality. A targeted adversarial example $x^\star$ is close to $x$ and satisfies $f(x^\star) = y^\star$, where $y^\star$ is a target label specified by the adversary, and $y^\star \neq y$.

2.2 APPROACHES FOR GENERATING ADVERSARIAL EXAMPLES

In this work, we consider three classes of approaches for generating adversarial examples: optimization-based approaches, fast gradient approaches, and fast gradient sign approaches. Each class has non-targeted and targeted versions respectively.

2.2.1 APPROACHES FOR GENERATING NON-TARGETED ADVERSARIAL EXAMPLES

Formally, given an image $x$ with ground truth $y = f(x)$, searching for a non-targeted adversarial example can be modeled as searching for an instance $x^\star$ that satisfies the following constraints:

$$f(x^\star) \neq y \quad (1)$$
$$d(x, x^\star) \leq B \quad (2)$$

where $d(\cdot, \cdot)$ is a metric to quantify the distance between an original image and its adversarial counterpart, and $B$, called the distortion, is an upper bound placed on this distance. Without loss of generality, we consider model $f$ to be composed of a network $J(x)$, which outputs the probability for each category, so that $f$ outputs the category with the highest probability.
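As a concrete reading of these definitions, the sketch below checks whether a candidate $x^\star$ satisfies constraints (1)-(2), or the targeted condition $f(x^\star) = y^\star$; taking the $L_2$ norm for $d$ is one assumption among the metrics the text allows.

```python
import numpy as np

def is_adversarial(f, x, x_adv, y, B, y_target=None):
    """Check the constraints above. f returns the predicted label; the L2
    norm is one concrete choice for the distance metric d."""
    if np.linalg.norm(x_adv - x) > B:   # constraint (2): d(x, x*) <= B
        return False
    prediction = f(x_adv)
    if y_target is None:
        return prediction != y          # constraint (1): f(x*) != y
    return prediction == y_target       # targeted condition: f(x*) = y*
```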
Optimization-based approach. One approach is to approximate the solution to the following optimization problem:

$$\mathop{\mathrm{argmin}}_{x^\star} \; \lambda d(x, x^\star) - \ell(\mathbf{1}_y, J(x^\star)) \quad (3)$$

where $\mathbf{1}_y$ is the one-hot encoding of the ground truth label $y$, $\ell$ is a loss function to measure the distance between the prediction and the ground truth, and $\lambda$ is a constant to balance constraints (2) and (1), which is empirically determined. Here, the loss function $\ell$ is used to approximate constraint (1), and its choice can affect the effectiveness of searching for an adversarial example. In this work, we choose $\ell(u, v) = \log(1 - u \cdot v)$, which is shown to be effective by Carlini & Wagner (2016).

Fast gradient sign (FGS). Goodfellow et al. (2014) proposed the fast gradient sign (FGS) method so that the gradient needs to be computed only once to generate an adversarial example. FGS can be used to generate adversarial images that meet the $L_\infty$ norm bound. Formally, non-targeted adversarial examples are constructed as

$$x^\star \leftarrow \mathrm{clip}\big(x + B \,\mathrm{sgn}(\nabla_x \ell(\mathbf{1}_y, J(x)))\big)$$

Here, $\mathrm{clip}(x)$ is used to clip each dimension of $x$ to the range of pixel values, i.e., $[0, 255]$ in this work. We make a slight variation to choose $\ell(u, v) = \log(1 - u \cdot v)$, which is the same as used in the optimization-based approach.

Fast gradient (FG). The fast gradient approach (FG) is similar to FGS, but instead of moving along the gradient sign direction, FG moves along the gradient direction. In particular, we have

$$x^\star \leftarrow \mathrm{clip}\left(x + B \,\frac{\nabla_x \ell(\mathbf{1}_y, J(x))}{\|\nabla_x \ell(\mathbf{1}_y, J(x))\|}\right)$$

Here, we assume the distance metric in constraint (2), $d(x, x^\star) = \|x - x^\star\|$, is a norm of $x - x^\star$. The term $\mathrm{sgn}(\nabla_x \ell)$ in FGS is replaced by $\frac{\nabla_x \ell}{\|\nabla_x \ell\|}$ to meet this distance constraint. We call both FGS and FG fast gradient-based approaches.

2.2.2 APPROACHES FOR GENERATING TARGETED ADVERSARIAL EXAMPLES

A targeted adversarial image $x^\star$ is similar to a non-targeted one, but constraint (1) is replaced by

$$f(x^\star) = y^\star \quad (4)$$

where $y^\star$ is the target label given by the adversary. For the optimization-based approach, we approximate the solution by solving the following dual objective:

$$\mathop{\mathrm{argmin}}_{x^\star} \; \lambda d(x, x^\star) + \ell'(\mathbf{1}_{y^\star}, J(x^\star)) \quad (5)$$

In this work, we choose the standard cross entropy loss $\ell'(u, v) = -\sum_i u_i \log v_i$. For FGS and FG, we construct adversarial examples as follows:

$$x^\star \leftarrow \mathrm{clip}\big(x - B \,\mathrm{sgn}(\nabla_x \ell'(\mathbf{1}_{y^\star}, J(x)))\big) \quad \text{(FGS)}$$
$$x^\star \leftarrow \mathrm{clip}\left(x - B \,\frac{\nabla_x \ell'(\mathbf{1}_{y^\star}, J(x))}{\|\nabla_x \ell'(\mathbf{1}_{y^\star}, J(x))\|}\right) \quad \text{(FG)}$$

where $\ell'$ is the same as the one used for the optimization-based approach.
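A minimal NumPy sketch of the update rules above; computing the gradients themselves requires a deep learning framework, so the gradient is taken as an input here, and the pixel range [0, 255] follows the text.

```python
import numpy as np

def fgs(x, grad, B, targeted=False):
    """Fast gradient sign: one step of size B along sgn(grad), clipped to the
    pixel range. For non-targeted attacks, grad is the gradient of
    l(1_y, J(x)) w.r.t. x; for targeted attacks, pass the gradient of
    l'(1_{y*}, J(x)) and set targeted=True to move against it."""
    step = -B * np.sign(grad) if targeted else B * np.sign(grad)
    return np.clip(x + step, 0, 255)

def fg(x, grad, B, targeted=False):
    """Fast gradient: the same budget B, but along the L2-normalized gradient."""
    direction = grad / np.linalg.norm(grad)
    step = -B * direction if targeted else B * direction
    return np.clip(x + step, 0, 255)
```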
2.3 EVALUATION METHODOLOGY

For the rest of the paper, we focus on examining the transferability among state-of-the-art models trained over ImageNet (Russakovsky et al. (2015)). In this section, we detail the models to be examined, the dataset to be evaluated, and the measurements to be used.

Models. We examine five networks: ResNet-50, ResNet-101, ResNet-152 (He et al. (2015)),[1] GoogLeNet (Szegedy et al. (2014)),[2] and VGG-16 (Simonyan & Zisserman (2014)).[3] We retrieve the pre-trained models for each network online. The performance of these models on the ILSVRC 2012 (Russakovsky et al. (2015)) validation set can be found in our online technical report: Liu et al. (2016). We choose these models to study the transferability between homogeneous architectures (i.e., ResNet models) and heterogeneous architectures.

Dataset. It is less meaningful to examine the transferability of an adversarial image between two models which cannot classify the original image correctly. Therefore, from the ILSVRC 2012 validation set, we randomly choose 100 images which can be classified correctly by all five models in our examination. These 100 images form our test set. To perform targeted attacks, we manually choose a target label for each image, so that its semantics is far from the ground truth. The images and target labels in our evaluation can be found on our website.[4]

[1] https://github.com/KaimingHe/deep-residual-networks
[2] https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet
[3] https://gist.github.com/ksimonyan/211839e770f7b538e2d8
[4] https://github.com/sunblaze-ucb/transferability-advdnn-pub

Measuring transferability. Given two models, we measure the non-targeted transferability by computing the percentage of the adversarial examples generated for one model that can be classified correctly by the other. We refer to this percentage as accuracy. A lower accuracy means better non-targeted transferability. We measure the targeted transferability by computing the percentage of the adversarial examples generated for one model that are classified as the target label by the other model. We refer to this percentage as matching rate. A higher matching rate means better targeted transferability. For clarity, the reported results are based only on top-1 accuracy. Top-5 accuracy's counterparts can be found in our online technical report: Liu et al. (2016).

Distortion. Besides transferability, another important factor is the distortion between adversarial images and the original ones. We measure the distortion by root mean square deviation, i.e., RMSD, which is computed as $d(x^\star, x) = \sqrt{\sum_i (x^\star_i - x_i)^2 / N}$, where $x^\star$ and $x$ are the vector representations of an adversarial image and the original one respectively, $N$ is the dimensionality of $x$ and $x^\star$, and $x_i$ denotes the pixel value of the $i$-th dimension of $x$, within range $[0, 255]$, and similarly for $x^\star_i$.
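The RMSD above is a one-liner; note that dividing by N inside the square root is exactly a mean over pixel dimensions.

```python
import numpy as np

def rmsd(x_adv, x):
    """Root mean square deviation, exactly as defined above:
    sqrt(sum_i (x*_i - x_i)^2 / N)."""
    x = np.asarray(x, dtype=np.float64)
    x_adv = np.asarray(x_adv, dtype=np.float64)
    return np.sqrt(np.mean((x_adv - x) ** 2))
```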
3 NON-TARGETED ADVERSARIAL EXAMPLES

In this section, we examine different approaches for generating non-targeted adversarial images.

3.1 OPTIMIZATION-BASED APPROACH

To apply the optimization-based approach for a single model, we initialize $x^\star$ to be $x$ and use Adam Optimizer (Kingma & Ba (2014)) to optimize Objective (3). We find that we can tune the RMSD by adjusting the learning rate of Adam and $\lambda$. For each model, we can use a small learning rate to generate adversarial images with small RMSD, i.e., $< 2$, with any $\lambda$. In fact, we find that when initializing $x^\star$ with $x$, Adam Optimizer will search for an adversarial example around $x$, even when we set $\lambda$ to be 0, i.e., not restricting the distance between $x^\star$ and $x$. Therefore, we set $\lambda$ to be 0 for all experiments using optimization-based approaches throughout the paper. Although these adversarial examples with small distortions can successfully fool the target model, they cannot transfer well to other models (details can be found in our online technical report: Liu et al. (2016)).

We increase the learning rate to allow the optimization algorithm to search for adversarial images with larger distortion. In particular, we set the learning rate to be 4. We run Adam Optimizer for 100 iterations to generate the adversarial images, and we observe that the loss converges after 100 iterations. An alternative optimization-based approach leading to similar results can be found in our online technical report: Liu et al. (2016).

Non-targeted adversarial examples transfer. We generate non-targeted adversarial examples on one network, but evaluate them on another; Table 1 Panel A presents the results. From the table, we can observe that

- The diagonal contains all 0 values. This says that all adversarial images generated for one model can mislead the same model.
- A large proportion of non-targeted adversarial images generated for one model using the optimization-based approach can transfer to another.
- Although the three ResNet models share similar architectures which differ only in the hyperparameters, adversarial examples generated against a ResNet model do not necessarily transfer to another ResNet model better than to non-ResNet models. For example, the adversarial examples generated for VGG-16 have lower accuracy on ResNet-50 than those generated for ResNet-152 or ResNet-101.

             RMSD   ResNet-152  ResNet-101  ResNet-50  VGG-16  GoogLeNet
ResNet-152   22.83  0%          13%         18%        19%     11%
ResNet-101   23.81  19%         0%          21%        21%     12%
ResNet-50    22.86  23%         20%         0%         21%     18%
VGG-16       22.51  22%         17%         17%        0%      5%
GoogLeNet    22.58  39%         38%         34%        19%     0%
Panel A: Optimization-based approach

             RMSD   ResNet-152  ResNet-101  ResNet-50  VGG-16  GoogLeNet
ResNet-152   23.45  4%          13%         13%        20%     12%
ResNet-101   23.49  19%         4%          11%        23%     13%
ResNet-50    23.49  25%         19%         5%         25%     14%
VGG-16       23.73  20%         16%         15%        1%      7%
GoogLeNet    23.45  25%         25%         17%        19%     1%
Panel B: Fast gradient approach

Table 1: Transferability of non-targeted adversarial images generated between pairs of models. The first column indicates the average RMSD of all adversarial images generated for the model in the corresponding row. The cell (i, j) indicates the accuracy of the adversarial images generated for model i (row) evaluated over model j (column). Results for top-5 accuracy can be found in our online technical report: Liu et al. (2016).

3.2 FAST GRADIENT-BASED APPROACHES

We then examine the effectiveness of fast gradient-based approaches. A good property of fast gradient-based approaches is that all generated adversarial examples lie in a 1-D subspace. Therefore, we can easily approximate the minimal distortion in this subspace of transferable adversarial examples between two models. In the following, we first control the RMSD to study the fast gradient-based approaches' effectiveness. Second, we study the minimal transferable distortions of fast gradient-based approaches.

3.2.1 EFFECTIVENESS AND TRANSFERABILITY OF THE FAST GRADIENT-BASED APPROACHES

Since the distortion B and the RMSD of the generated adversarial images are highly correlated, we can choose this hyperparameter B to generate adversarial images with a given RMSD. In Table 1 Panel B, we generate adversarial images using FG such that the average RMSD is almost the same as for those generated using the optimization-based approach. We observe that the diagonal values in the table are all positive, which means that FG cannot fully mislead the models. A potential reason is that FG can be viewed as approximating the optimization, but is tailored for speed over accuracy. On the other hand, the values of the non-diagonal cells in the table, which correspond to the accuracies of adversarial images generated for one model but evaluated on another, are comparable with or less than their counterparts in the optimization-based approach. This shows that non-targeted adversarial examples generated by FG exhibit transferability as well.

We also evaluate FGS, but the transferability of the generated images is worse than for the ones generated using either FG or optimization-based approaches. The results can be found in our online technical report: Liu et al. (2016). They show that when the RMSD is around 23, the accuracies of the adversarial images generated by FGS are greater than their counterparts for FG. We attribute the worse transferability of FGS to this fact.
We hypothesize that this difference explains why the transferability of FGS is worse.

3.2.2 ADVERSARIAL IMAGES WITH MINIMAL TRANSFERABLE RMSD

For an image $x$ and two models $M_1, M_2$, we can approximate the minimal distortion $B$ along a direction $\delta$ such that $x_B = x + B\delta$, generated for $M_1$, is adversarial for both $M_1$ and $M_2$. Here $\delta$ is the direction, i.e., $\mathrm{sgn}(\nabla_x \ell)$ for FGS and $\nabla_x \ell / \lVert \nabla_x \ell \rVert$ for FG.

We refer to the minimal transferable RMSD from $M_1$ to $M_2$ using FG (or FGS) as the RMSD of a transferable adversarial example $x_B$ with the minimal transferable distortion $B$ from $M_1$ to $M_2$ using FG (or FGS). The minimal transferable RMSD illustrates the tradeoff between distortion and transferability.

In the following, we approximate the minimal transferable RMSD through a linear search, sampling $B$ in steps of 0.1. We choose the linear-search method rather than a binary search to determine the minimal transferable RMSD because the adversarial images generated from a given original image may come from multiple disjoint intervals of $B$. The experiment can be found in our online technical report: Liu et al. (2016). A sketch of this search is given below.
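A minimal sketch of the linear search, assuming the `fast_gradient` helper from the previous sketch and a `predict` helper returning the top-1 label; the step size 0.1 follows the description above, while `B_max` is an illustrative cutoff.

```python
# Minimal sketch of the linear search for the minimal transferable
# distortion B from model m1 to model m2.
import torch

def predict(model, x):
    return model(x.unsqueeze(0)).argmax(dim=1).item()

def minimal_transferable_B(m1, m2, x, y_true, sign=False, B_max=100.0):
    for k in range(1, int(B_max / 0.1) + 1):
        B = 0.1 * k
        x_B = fast_gradient(m1, x, y_true, B, sign=sign)
        # Adversarial for both models: neither predicts the ground truth.
        if (predict(m1, x_B) != y_true.item()
                and predict(m2, x_B) != y_true.item()):
            return B
    return None  # no transferable example found up to B_max
```

The minimal transferable RMSD is then `rmsd(fast_gradient(m1, x, y_true, B), x)` for the returned `B`.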
Minimal transferable RMSD using FG and FGS. Figure 1 plots the cumulative distribution function (CDF) of the minimal transferable RMSD from VGG-16 to ResNet-152 using non-targeted FG (Figure 1a) and FGS (Figure 1b). From the figures, we observe that both FG and FGS can find 100% transferable adversarial images with RMSD less than 80.91 and 86.56 respectively. Further, the FG method can generate transferable attacks with smaller RMSD than FGS. A potential reason is that while FGS minimizes the distortion's $L_\infty$ norm, FG minimizes its $L_2$ norm, which is proportional to the RMSD.

[Figure 1: The CDF of the minimal transferable RMSD from VGG-16 to ResNet-152 using FG (a) and FGS (b). The green line marks the median minimal transferable RMSD, while the red line marks the minimal transferable RMSD needed to reach 90%.]

3.3 COMPARISON WITH RANDOM PERTURBATIONS

We also evaluate the test accuracy when we add Gaussian noise to the 100 images in our test set. The concrete results can be found in our online technical report: Liu et al. (2016), where we show that the "transferability" of this approach is significantly worse than that of either the optimization-based or the fast gradient-based approaches.

4 TARGETED ADVERSARIAL EXAMPLES

In this section, we examine the transferability of targeted adversarial images. Table 2 presents the results for the optimization-based approach. We observe that (1) the prediction of targeted adversarial images can match the target labels when evaluated on the same model that was used to generate them; but (2) the targeted adversarial images are rarely predicted as the target labels by a different model. We say in the latter case that the target labels do not transfer.

             RMSD   ResNet-152  ResNet-101  ResNet-50  VGG-16  GoogLeNet
ResNet-152  23.13      100%          2%         1%        1%        1%
ResNet-101  23.16        3%        100%         3%        2%        1%
ResNet-50   23.06        4%          2%       100%        1%        1%
VGG-16      23.59        2%          1%         2%      100%        1%
GoogLeNet   22.87        1%          1%         0%        1%      100%

Table 2: The matching rate of targeted adversarial images generated using the optimization-based approach. The first column indicates the average RMSD of the generated adversarial images. Cell (i, j) indicates the matching rate of the targeted adversarial images generated for model i (row) when evaluated on model j (column). The top-5 results can be found in our online technical report: Liu et al. (2016).

Even when we increase the distortion, we still do not observe improvement in making the target labels transfer; some results can be found in our online technical report: Liu et al. (2016). Even if we compute the matching rate based on top-5 accuracy, the highest matching rate is only 10%; these results can also be found in the technical report.

We also examine the targeted adversarial images generated by fast gradient-based approaches, and we observe that the target labels do not transfer either; the results can be found in our online technical report: Liu et al. (2016). In fact, most targeted adversarial images cannot even mislead the model for which they were generated into predicting the target labels, regardless of how large a distortion is used. We attribute this to the fact that the fast gradient-based approaches only search for attacks in a 1-D subspace; the set of predictions reachable within this subspace may cover only a small subset of all labels, which usually does not contain the target label. In Section 6, we study decision boundaries with respect to this issue.

We also evaluate the matching rate of images perturbed with Gaussian noise, as described in Section 3.3, and observe that the matching rate for each of the 5 models is 0%. Therefore, we conclude that by adding Gaussian noise, an attacker cannot generate successful targeted adversarial examples at all, let alone transferable ones.

5 ENSEMBLE-BASED APPROACHES

We hypothesize that if an adversarial image remains adversarial for multiple models, it is more likely to transfer to other models as well. We develop techniques to generate adversarial images for multiple models. The basic idea is to generate adversarial images for the ensemble of the models. Formally, given $k$ white-box models with softmax outputs $J_1, \ldots, J_k$, an original image $x$, and its ground truth $y$, the ensemble-based approach solves the following optimization problem (for a targeted attack):

$\arg\min_{x^\star} \; -\log\Big(\big(\textstyle\sum_{i=1}^{k} \alpha_i J_i(x^\star)\big) \cdot 1_{y^\star}\Big) + \lambda\, d(x, x^\star)$   (6)

where $y^\star$ is the target label specified by the adversary, $\sum_i \alpha_i J_i(x^\star)$ is the ensemble model, and the $\alpha_i$ are the ensemble weights with $\sum_{i=1}^k \alpha_i = 1$. Note that (6) is the targeted objective; the non-targeted counterpart can be derived similarly. In doing so, we hope the generated adversarial images remain adversarial for an additional black-box model $J_{k+1}$.

We evaluate the effectiveness of the ensemble-based approach. For each of the five models, we treat it as the black-box model to attack and generate adversarial images for the ensemble of the remaining four, which are treated as white-box. We evaluate the generated adversarial images on all five models. Throughout the rest of the paper, we refer to the approaches evaluated in Sections 3 and 4 as the approaches using a single model, and to the ensemble-based approaches discussed in this section as the approaches using an ensemble model.

Optimization-based approach. We use Adam to optimize Objective (6), with equal ensemble weights across all models in the ensemble, to generate targeted adversarial examples. In particular, we set the learning rate of Adam to 8 for each model. In each iteration, we compute the Adam update for each model, sum the four updates, and add the aggregate onto the image. We run 100 iterations of updates and observe that the loss converges by then. A sketch of the ensemble objective is given below.
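A minimal sketch of the targeted ensemble objective of Eq. (6), assuming PyTorch models returning logits, equal weights $\alpha_i$, and $\lambda = 0$ as in the single-model experiments; names are illustrative, not the authors' implementation.

```python
# Minimal sketch of the targeted ensemble loss of Eq. (6): -log of the
# probability the weighted softmax ensemble assigns to the target label.
import torch
import torch.nn.functional as F

def ensemble_targeted_loss(models, x_adv, y_target, alphas=None):
    k = len(models)
    alphas = alphas or [1.0 / k] * k
    # Weighted sum of the softmax outputs J_i(x*), i.e. the ensemble model.
    probs = sum(a * F.softmax(m(x_adv.unsqueeze(0)), dim=1)
                for a, m in zip(alphas, models))
    return -torch.log(probs[0, y_target])
```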
By doing so, for the first time, we observe a large proportion of targeted adversarial images whose target labels transfer. The results are presented in Table 3. We observe that not all targeted adversarial images are misclassified as the target labels even by the models used in the ensemble. This suggests that while searching for an adversarial example for the ensemble model, there is no direct supervision to mislead any individual model in the ensemble into predicting the target label. Further, from the diagonal numbers of the table, we observe that transferability to the ResNet models is better than to VGG-16 or GoogLeNet when the adversarial examples are generated against the ensemble of all models except the target model.

We also evaluate non-targeted adversarial images generated by the ensemble-based approach and observe that they have almost perfect transferability. We use the same procedure as for the targeted version, except for the objective used to generate the adversarial images, and evaluate the generated images on all models. The results are presented in Table 4. The generated adversarial images all have RMSDs around 17, which is lower than the 22 to 23 of the optimization-based approach using a single model (see Table 1 for comparison). When the adversarial images are evaluated on models not used to generate the attack, the accuracy is no greater than 6%. For reference, the corresponding accuracies for all approaches evaluated in Section 3 using a single model are at least 12%. Our experiments demonstrate that the ensemble-based approaches can generate almost perfectly transferable adversarial images.

              RMSD   ResNet-152  ResNet-101  ResNet-50  VGG-16  GoogLeNet
-ResNet-152  30.68       38%         76%        70%       97%       76%
-ResNet-101  30.76       75%         43%        69%       98%       73%
-ResNet-50   30.26       84%         81%        46%       99%       77%
-VGG-16      31.13       74%         78%        68%       24%       63%
-GoogLeNet   29.70       90%         87%        83%       99%       11%

Table 3: The matching rate of targeted adversarial images generated using the ensemble-based optimization approach. The first column indicates the average RMSD of the generated adversarial images. Cell (i, j) indicates the percentage of the targeted adversarial images generated for the ensemble of the four models excluding model i (row) that are predicted as the target label by model j (column). In each row, the minus sign "-" indicates that the model of the row is not used when generating the attacks. Results for the top-5 matching rate can be found in our online technical report: Liu et al. (2016).

              RMSD   ResNet-152  ResNet-101  ResNet-50  VGG-16  GoogLeNet
-ResNet-152  17.17        0%          0%         0%        0%        0%
-ResNet-101  17.25        0%          1%         0%        0%        0%
-ResNet-50   17.25        0%          0%         2%        0%        0%
-VGG-16      17.80        0%          0%         0%        6%        0%
-GoogLeNet   17.41        0%          0%         0%        0%        5%

Table 4: Accuracy of non-targeted adversarial images generated using the ensemble-based optimization approach. The first column indicates the average RMSD of the generated adversarial images. Cell (i, j) corresponds to the accuracy of the attack generated using the four models excluding model i (row) when evaluated on model j (column). In each row, the minus sign "-" indicates that the model of the row is not used when generating the attacks. Results for top-5 accuracy can be found in our online technical report: Liu et al. (2016).

Fast gradient-based approach. The results for non-targeted fast gradient-based approaches applied to the ensemble can be found in our online technical report: Liu et al. (2016). We observe that the diagonal values are not zero, as we also observed in the results for FG and FGS applied to a single model.
We hypothesize that a potential reason is that the gradient directions of different models in the ensemble are almost orthogonal to one another, as we will illustrate in Section 6. In this case, the gradient direction of the ensemble is almost orthogonal to that of each model in the ensemble, so searching along this direction may require a large distortion to reach adversarial examples.

For targeted adversarial examples generated using FG and FGS based on an ensemble model, the transferability is no better than for those generated using a single model; the results can be found in our online technical report: Liu et al. (2016). We hypothesize the same reason as before: there are only a few possible target labels in total within the 1-D subspace.

6 GEOMETRIC PROPERTIES OF DIFFERENT MODELS

In this section, we examine some geometric properties of the models in order to better understand transferable adversarial examples. Prior works have tried to understand the geometric properties of adversarial examples theoretically (Fawzi et al. (2016)) or empirically (Goodfellow et al. (2014)). In this work, we examine large models trained on a large dataset with 1000 labels, whose geometric properties have not been examined before. This allows us to make new observations that help us better understand the models and their adversarial examples.

The gradient directions of different models in our evaluation are almost orthogonal to each other. We study whether the adversarial directions of different models align with each other. We calculate the cosine of the angle between the gradient directions of different models; the results can be found in our online technical report: Liu et al. (2016). We observe that all non-diagonal values are close to 0, which indicates that for most images, the gradient directions with respect to different models are orthogonal to each other.

Decision boundaries of the non-targeted approaches using a single model. We study the decision boundaries of different models to understand why adversarial examples transfer. We choose two normalized orthogonal directions $\delta_1, \delta_2$, one being the gradient direction of VGG-16 and the other chosen at random. Each point $(u, v)$ in this 2-D plane corresponds to the image $x + u\,\delta_1 + v\,\delta_2$, where $x$ is the pixel value vector of the original image.

[Figure 2: The example image used to study the decision boundary. Its ID in the ILSVRC 2012 validation set is 49443, and its ground-truth label is "anemone fish."]

[Figure 3 (zoom-in and zoom-out decision-region plots for VGG-16, ResNet-50, ResNet-101, ResNet-152, and GoogLeNet; axis ticks omitted): Decision regions of different models. We pick the same two directions for all plots: one is the gradient direction of VGG-16 (x-axis) and the other is a random orthogonal direction (y-axis). Each point in the spanned plane shows the predicted label of the image generated by adding the corresponding noise to the original image (e.g., the origin corresponds to the predicted label of the original image). The units of both axes are 1 pixel value. All sub-figures plot the regions on the spanned plane using the same color for the same label. The image is the one in Figure 2.]

A sketch of this plane construction is given below.
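A minimal sketch of the decision-region scan behind Figure 3, assuming a PyTorch `model`, the image from Figure 2 as a tensor `x`, and grid units of 1 pixel value matching the figure axes; names are illustrative.

```python
# Minimal sketch of building the 2-D span plane (gradient direction plus a
# random orthogonal direction) and querying each model's label on a grid.
import torch
import torch.nn.functional as F

def plane_directions(model, x, y_true):
    # delta1: normalized gradient direction (VGG-16 in the paper).
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x.unsqueeze(0)), y_true.unsqueeze(0)).backward()
    d1 = x.grad.flatten()
    d1 = d1 / d1.norm()
    # delta2: a random direction, made orthogonal to d1 and normalized.
    r = torch.randn_like(d1)
    d2 = r - (r @ d1) * d1
    return d1, d2 / d2.norm()

def label_grid(model, x, d1, d2, extent=20):
    # Predicted label at each point x + u*d1 + v*d2 of the span plane.
    labels = {}
    for u in range(-extent, extent + 1):
        for v in range(-extent, extent + 1):
            pt = (x.flatten() + u * d1 + v * d2).view_as(x).clamp(0, 255)
            labels[(u, v)] = model(pt.unsqueeze(0)).argmax(dim=1).item()
    return labels
```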
For each model, we plot the label of the image corresponding to each point, producing Figure 3 for the image in Figure 2.

We observe that for all models, the region within which the model predicts the image correctly is limited to the central area. Also, along the gradient direction, the classifiers are quickly misled. One interesting finding is that along this gradient direction, the first misclassified label for the three ResNet models (corresponding to the light green region) is the label "orange". A more detailed study can be found in our online technical report: Liu et al. (2016). When we look at the zoomed-out figures, however, the labels of images far away from the original one differ across models, even among the ResNet models.

On the other hand, in Table 5 we show the total number of distinct regions in each plane. In fact, each plane contains at most 21 distinct regions. Compared with the 1,000 total categories in ImageNet, this is only 2.1% of all categories. That is, for the remaining 97.9% of labels, no targeted adversarial example exists within the plane. This phenomenon partially explains why fast gradient-based approaches can hardly find targeted adversarial images.

Model        VGG-16  ResNet-50  ResNet-101  ResNet-152  GoogLeNet
# of labels      10          9          21          10         21

Table 5: The number of all possible predicted labels for each model in the plane described in Figure 3.

Further, in Figure 4, we draw the decision boundaries of all models on the same plane as described above.

[Figure 4: The decision boundary separating the region within which all points are classified as the ground-truth label (enclosed by each closed curve) from the rest. The plane is the same as the one described in Figure 3. The origin of the coordinate plane corresponds to the original image. The units of both axes are 1 pixel value.]

[Figure 5: The decision boundary separating the region within which all points are classified as the target label (enclosed by each closed curve) from the rest. The plane is spanned by the targeted adversarial direction and a random orthogonal direction. The targeted adversarial direction is computed as the difference between the original image in Figure 2 and the adversarial image generated by the optimization-based approach for an ensemble containing all models except ResNet-101. The origin of the coordinate plane corresponds to the original image. The units of both axes are 1 pixel value.]

We can observe that:

- The boundaries align with each other very well. This partially explains why non-targeted adversarial images transfer among models.
- The boundary diameters along the gradient direction are smaller than those along the random direction. A potential reason is that moving along the gradient direction changes the loss function (i.e., the probability of the ground-truth label) significantly; therefore, along the gradient direction it takes fewer steps to move out of the ground-truth region than along a random direction.
- An interesting finding is that even when we move left along the x-axis, which is equivalent to maximizing the prediction probability of the ground truth, we also reach the boundary much sooner than when moving along a random direction.
We attribute this to the non-linearity of the loss function: when the distortion is larger, the gradient direction itself changes dramatically, and moving along the original gradient direction no longer increases the probability of the ground-truth label (details can be found in our online technical report: Liu et al. (2016)).

- For the VGG-16 model, there is a small hole within the region corresponding to the ground truth. This may partially explain why non-targeted adversarial images with small distortion exist but do not transfer well: this hole does not exist in the decision planes of the other models, so non-targeted adversarial images lying in it do not transfer.

Decision boundaries of the targeted ensemble-based approaches. In addition, we choose the targeted adversarial direction of the ensemble of all models except ResNet-101 and a random orthogonal direction, and we plot the decision boundaries on the plane spanned by these two direction vectors in Figure 5. We observe that the regions of images predicted as the target label align well for the four models in the ensemble. For the model not used to generate the adversarial image, i.e., ResNet-101, there is also a non-empty region within which the prediction is successfully misled to the target label, although this region is much smaller. Meanwhile, the regions enclosed by the closed curves of the different models share almost the same center.

7 REAL-WORLD EXAMPLE: ADVERSARIAL EXAMPLES FOR CLARIFAI.COM

Clarifai.com is a commercial company providing state-of-the-art image classification services. We have no knowledge of the dataset or the types of models behind Clarifai.com; we only have black-box access to its services. The labels returned by Clarifai.com also differ from the categories in ILSVRC 2012. We submitted all 100 original images to Clarifai.com, and the returned labels are correct by a subjective measure.

We also submitted 400 adversarial images in total: 200 targeted and 200 non-targeted. Of the 200 targeted adversarial images, 100 were generated using the optimization-based approach on VGG-16 (the same ones evaluated in Table 2), and the other 100 using the optimization-based approach on an ensemble of all models except ResNet-152 (the same ones evaluated in Table 3). The 200 non-targeted adversarial examples were generated similarly (the same ones evaluated in Tables 1 and 4).

For the non-targeted adversarial examples, we observe that most of them, whether generated using VGG-16 or using the ensemble, transfer to Clarifai.com.

More importantly, a large proportion of our targeted adversarial examples are misclassified by Clarifai.com as well. We observe that 57% of the targeted adversarial examples generated using VGG-16, and 76% of those generated using the ensemble, mislead Clarifai.com into predicting labels irrelevant to the ground truth.

Further, our experiments show that for targeted adversarial examples, 18% of those generated using the ensemble model are predicted by Clarifai.com as labels close to the target label. The corresponding number for the targeted adversarial examples generated using VGG-16 is 2%.
Considering that in the case of attacking Clarifai.com the labels given by the target model differ from those given by our models, it is fairly surprising that, when using the ensemble-based approach, a considerable proportion of our targeted adversarial examples can still mislead this black-box model into making predictions semantically similar to our target labels. All these numbers are computed based on a subjective measure, and we include some examples in Table 6. More examples can be found in our online technical report: Liu et al. (2016).

true label                  Clarifai.com results          target label        Clarifai.com results
                            (original image)                                  (targeted adversarial example)
viaduct                     bridge, sight, arch,          window screen       window, wall, old,
                            river, sky                                        decoration, design
hip, rose hip, rosehip      fruit, fall, food,            stupa, tope         Buddha, gold, temple,
                            little, wildlife                                  celebration, artistic
dogsled, dog sled,          group together, four,         hip, rose hip,      cherry, branch, fruit,
dog sleigh                  sledge, sled, enjoyment       rosehip             food, season
pug, pug-dog                pug, friendship, adorable,    sea lion            sea seal, ocean, head,
                            purebred, sit                                     sea, cute
Old English sheepdog,       poodle, retriever, loyalty,   abaya               veil, spirituality, religion,
bobtail                     sit, two                                          people, illustration
maillot, tank suit          beach, woman, adult,          amphibian,          transportation system,
                            wear, portrait                amphibious vehicle  vehicle, man, print, retro
patas, hussar monkey,       primate, monkey, safari,      bee eater           ornithology, avian, beak,
Erythrocebus patas          sit, looking                                      wing, feather

Table 6: Original images and adversarial images evaluated on Clarifai.com (the image columns of the original table are omitted here). The labels returned from Clarifai.com are sorted first by rareness, i.e., how many times a label appears in the Clarifai.com results for all adversarial and original images, and second by confidence. Only the top 5 labels are shown.

8 CONCLUSION

In this work, we are the first to conduct an extensive study of the transferability of both non-targeted and targeted adversarial examples generated using different approaches over large models and a large-scale dataset. Our results confirm that the transferability of non-targeted adversarial examples is prominent even for large models and a large-scale dataset. On the other hand, we find it hard to use existing approaches to generate targeted adversarial examples whose target labels transfer. We develop novel ensemble-based approaches and demonstrate that they can generate transferable targeted adversarial examples with a high success rate. Meanwhile, these new approaches also exhibit better performance than previous work on generating non-targeted transferable adversarial examples. We further show that both non-targeted and targeted adversarial examples generated using our new approaches can successfully attack Clarifai.com, a black-box image classification system. Finally, we study some geometric properties of the models to better understand transferable adversarial examples.

ACKNOWLEDGMENTS

This material is in part based upon work supported by the National Science Foundation under Grant No. TWC-1409915. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

REFERENCES

Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. arXiv preprint arXiv:1608.04644, 2016.

Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, and Pascal Frossard. Robustness of classifiers: from adversarial to random noise.
In Advances in Neural Information Processing Systems, pp. 1624–1632, 2016.

Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770, 2016.

Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. arXiv preprint arXiv:1610.08401, 2016.

Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277, 2016a.

Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. Practical black-box attacks against deep learning systems using adversarial examples. arXiv preprint arXiv:1602.02697, 2016b.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014. URL http://arxiv.org/abs/1409.1556.

J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel. Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition. Neural Networks, 2012. ISSN 0893-6080. doi: 10.1016/j.neunet.2012.02.016. URL http://www.sciencedirect.com/science/article/pii/S0893608012000457.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott E. Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. CoRR, abs/1409.4842, 2014. URL http://arxiv.org/abs/1409.4842.
Under review as a conference paper at ICLR 2017

DYNAMIC NEURAL TURING MACHINE WITH CONTINUOUS AND DISCRETE ADDRESSING SCHEMES

Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho†, Yoshua Bengio
University of Montreal, name.lastname@umontreal.ca
† New York University, name.lastname@nyu.edu

ABSTRACT

In this paper, we extend the neural Turing machine (NTM) into a dynamic neural Turing machine (D-NTM) by introducing a trainable memory addressing scheme. This addressing scheme maintains, for each memory cell, two separate vectors: a content vector and an address vector. It allows the D-NTM to learn a wide variety of location-based addressing strategies, both linear and nonlinear. We implement the D-NTM with both continuous, differentiable and discrete, non-differentiable read/write mechanisms. We investigate the mechanisms and effects of learning to read from and write to a memory through experiments on the Facebook bAbI tasks, using both a feedforward and a GRU controller. The D-NTM is evaluated on a set of Facebook bAbI tasks and shown to outperform NTM and LSTM baselines. We also provide further experimental results on the sequential MNIST, associative recall, and copy tasks.

1 INTRODUCTION

Designing general-purpose learning algorithms is one of the long-standing goals of artificial intelligence. Despite the success of deep learning in this area (see, e.g., (Goodfellow et al., 2016)), there is still a set of complex tasks that is not well addressed by conventional neural networks. These tasks often require a neural network to be equipped with an explicit, external memory in which a larger, potentially unbounded, set of facts needs to be stored. They include, but are not limited to, episodic question answering (Weston et al., 2015b; Hermann et al., 2015; Hill et al., 2015), compact algorithms (Zaremba et al., 2015), dialogue (Serban et al., 2016; Vinyals & Le, 2015), and video caption generation (Yao et al., 2015).

Recently, two promising neural-network-based approaches to this type of task have been proposed. Memory networks (Weston et al., 2015b) explicitly store all the facts, or information, available in each episode in an external memory (as continuous vectors) and use an attention-based mechanism to index them when returning an output. Neural Turing machines (NTM, (Graves et al., 2014)), on the other hand, read each fact in an episode and decide whether to read from, write to, or do both with the external, differentiable memory.

A crucial difference between these two models is that the memory network has no mechanism to modify the content of the external memory, while the NTM does. In practice, this leads to easier learning in the memory network, which in turn has resulted in its wider use in real tasks (Bordes et al., 2015; Dodge et al., 2015). By contrast, the NTM has mainly been tested on a series of small-scale, carefully crafted tasks such as copy and associative recall. The NTM, however, is more expressive, precisely because it can store and modify the internal state of the network as it processes an episode.

The original NTM supports two modes of addressing, which can be used simultaneously: content-based and location-based addressing. We notice that the location-based strategy is based on linear addressing.
The distance between each pair of consecutive memory cells is fixed to a constant. We address this limitation by introducing, for each memory cell of the NTM, a learnable address vector together with a least-recently-used memory addressing mechanism; we call this variant a dynamic neural Turing machine (D-NTM).

We evaluate the proposed D-NTM on the full set of Facebook bAbI tasks (Weston et al., 2015b) using either continuous, differentiable attention or discrete, non-differentiable attention (Zaremba & Sutskever, 2015) as the addressing strategy. Our experiments reveal that it is possible to use the discrete, non-differentiable attention mechanism, and in fact the D-NTM with discrete attention and a GRU controller outperforms the one with continuous attention. After we published our paper on arXiv, a new extension of the NTM called the DNC (Graves et al., 2016) also reported results on the bAbI tasks. We also provide results on sequential MNIST and the algorithmic tasks proposed by Graves et al. (2014) in order to investigate the ability of our model to deal with long-term dependencies.

Our contributions:

1. We propose a generalization of the neural Turing machine, called the dynamic neural Turing machine (D-NTM), which employs a learnable, location-based addressing scheme.
2. We demonstrate the application of neural Turing machines to a more natural and less toy-like task, episodic question answering, in addition to the toy tasks, and provide a detailed analysis of our model on this task.
3. We propose to use a discrete attention mechanism and empirically show that it can outperform continuous attention-based addressing on the episodic QA task.
4. We propose a curriculum strategy for our model with the feedforward controller and discrete attention that improves our results significantly.

2 DYNAMIC NEURAL TURING MACHINE

The proposed dynamic neural Turing machine (D-NTM) extends the neural Turing machine (NTM, (Graves et al., 2014)), which has a modular design. The NTM consists of two main modules: a controller and a memory. The controller, often implemented as a recurrent neural network, issues commands to the memory to read from, write to, and erase a subset of memory cells. Although the memory was originally envisioned as an integrated module, this is not necessary, and the memory may be an external black box (Zaremba & Sutskever, 2015).

2.1 CONTROLLER

At each time step t, the controller (1) receives an input value $x_t$, (2) addresses and reads the memory and creates the content vector $\phi_t$, (3) erases/writes a portion of the memory, (4) updates its own hidden state $h_t$, and (5) outputs a value $y_t$ (if needed). In this paper, we implement the controller with either a gated recurrent unit (GRU, (Cho et al., 2014)),

$h_t = \mathrm{GRU}(x_t, h_{t-1}, \phi_t),$   (1)

or a feedforward network,

$h_t = \sigma(x_t, \phi_t).$   (2)

2.2 MEMORY

We use a rectangular matrix $M \in \mathbb{R}^{N \times (d_c + d_a)}$ to denote $N$ memory cells. Unlike the original NTM, we partition each memory cell vector into two parts:

$M = [A; C].$

The first part $A \in \mathbb{R}^{N \times d_a}$ is a learnable address matrix, and the second part $C \in \mathbb{R}^{N \times d_c}$ is a content matrix. In other words, each memory cell $m_i$ is now $m_i = [a_i; c_i]$. The address part $a_i$ is a model parameter that is updated during training; during inference, the address part is not overwritten by the controller and remains constant. The content part $c_i$, on the other hand, is both read and written by the controller during training and inference alike. At the beginning of each episode, the content part of the memory is refreshed to an all-zero matrix, $C_0 = 0$. A minimal sketch of this memory layout is given below.
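The following sketch shows the memory layout under stated assumptions: PyTorch, $N$ cells, address width $d_a$, content width $d_c$; all names and the initialization scale are illustrative, not the authors' implementation.

```python
# Minimal sketch of the D-NTM memory: a trainable address part A and a
# content part C that is reset to zeros at the start of every episode.
import torch
import torch.nn as nn

class DNTMMemory(nn.Module):
    def __init__(self, n_cells, d_a, d_c):
        super().__init__()
        # Address part A: a trainable parameter, held fixed at inference.
        self.A = nn.Parameter(torch.randn(n_cells, d_a) * 0.01)
        self.d_c = d_c

    def reset(self, batch_size):
        # Content part C is refreshed to all zeros at episode start (C_0 = 0).
        self.C = torch.zeros(batch_size, self.A.size(0), self.d_c)

    def cells(self):
        # Each memory cell is m_i = [a_i; c_i].
        A = self.A.unsqueeze(0).expand(self.C.size(0), -1, -1)
        return torch.cat([A, self.C], dim=-1)  # (batch, N, d_a + d_c)
```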
This introduction of a learnable address portion for each memory cell allows the model to learn sophisticated location-based addressing strategies. A similar addressing mechanism is also explored in (Reed & de Freitas, 2015) in the context of learning program traces.

2.3 MEMORY ADDRESSING

Memory addressing in the D-NTM amounts to computing an N-dimensional address vector. The D-NTM computes three such vectors: for reading, $w_t \in \mathbb{R}^N$; for erasing, $e_t \in \mathbb{R}^{d_c}$; and for writing, $u_t \in \mathbb{R}^N$. Specifically for writing, the controller further computes a candidate memory content vector $\bar{c}_t \in \mathbb{R}^{d_c}$ based on its current hidden state $h_t \in \mathbb{R}^{d_h}$ and on the controller input scaled by a scalar gate $\alpha_t$, itself a function of the hidden state and the input (see Eqn. 4):

$\alpha_t = f(h_t, x_t),$   (3)
$\bar{c}_t = \mathrm{ReLU}(W_m h_t + \alpha_t W_x x_t + b_m).$   (4)

[Figure 1: A graphical illustration of the proposed dynamic neural Turing machine with the recurrent controller. The controller receives the fact as a continuous vector encoded by a recurrent neural network and computes the read and write weights for addressing the memory. If the D-NTM automatically detects that a query has been received, it returns an answer and terminates.]

Reading. With the read vector $w_t$, the content vector $\phi_t \in \mathbb{R}^{d_a + d_c}$ read from the memory is retrieved by

$\phi_t = (w_t)^\top M_{t-1},$   (5)

where $w_t \in \mathbb{R}^N$.

Erasing and writing. Given the erase, write, and candidate memory content vectors ($e_t$, $u_t^j$, and $\bar{c}_t$ respectively), generated by a simple MLP conditioned on the hidden state of the controller $h_t$, the memory matrix is updated by

$C_t[j] = (1 - e_t u_t^j) \odot C_{t-1}[j] + u_t^j \bar{c}_t,$   (6)

where the subscript $j$ in $C_t[j]$ denotes the $j$-th row of the content part $C_t$ of the memory matrix $M_t$.

No Operation (NOP). As found in (Joulin & Mikolov, 2015), an additional NOP action can be beneficial, allowing the controller not to access the memory once in a while. We model this by designating one memory cell as a NOP cell; reading from or writing to this memory cell is ignored.

2.4 LEARNING

Once the proposed D-NTM is executed, it returns the output distribution $p(y \mid x_1, \ldots, x_T)$. Accordingly, we define the cost function as the negative log-likelihood

$\mathcal{C}(\theta) = -\frac{1}{N} \sum_{n=1}^{N} \log p(y^n \mid x_1^n, \ldots, x_T^n),$   (7)

where $\theta$ is the set of all parameters. As the proposed D-NTM, just like the original NTM, is fully end-to-end differentiable, we can compute the gradient of this cost function by backpropagation and train the model end-to-end with a gradient-based optimization algorithm, such as stochastic gradient descent.

3 ADDRESSING MECHANISM

3.1 ADDRESS VECTORS

Each of the address vectors (both read and write) is computed in the same way, very similarly to the content-based addressing of (Graves et al., 2014). First, the controller computes a key vector

$k_t = W_k^\top h_t + b_k,$

where $W_k \in \mathbb{R}^{N \times (d_a + d_c)}$ and $b_k \in \mathbb{R}^{d_a + d_c}$ if the read head is being computed, and $W_k \in \mathbb{R}^{N \times d_c}$ and $b_k \in \mathbb{R}^{d_c}$ if the write head weights are being computed; they can be parameters of a specific head (either read or write). Also, a sharpening factor $\beta_t \in \mathbb{R}_{\ge 1}$ is computed as

$\mathrm{softplus}(x) = \log(\exp(x) + 1),$   (8)
$\beta_t = \mathrm{softplus}(u_\beta^\top h_t + b_\beta) + 1,$   (9)

where $u_\beta$ and $b_\beta$ are the parameters of the sharpening factor $\beta_t$. The address vector is then computed by

$z_t^i = \beta_t\, S(k_t, m_t^i),$   (10)
$w_t^i = \frac{\exp(z_t^i)}{\sum_j \exp(z_t^j)},$   (11)

where the similarity function $S \ge 0$ is defined as

$S(x, y) = \frac{x \cdot y}{\lVert x \rVert\, \lVert y \rVert + \epsilon}.$

A minimal sketch of this addressing step is given below.
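A sketch of content-based addressing with the learned sharpening factor (Eqs. 8-11), assuming PyTorch with the batch dimension omitted; parameter shapes here are illustrative and may differ from the ones stated above.

```python
# Minimal sketch of content-based addressing: sharpened cosine similarity
# between a key and each memory cell, normalized with a softmax.
import torch
import torch.nn.functional as F

def address(h_t, memory, W_k, b_k, u_beta, b_beta, eps=1e-6):
    # Key vector compared against every memory cell (W_k: (d_h, d)).
    k_t = W_k.t() @ h_t + b_k
    # Sharpening factor beta_t >= 1 via softplus (Eqs. 8-9).
    beta_t = F.softplus(u_beta @ h_t + b_beta) + 1.0
    # Cosine similarity between the key and each memory cell m_i.
    sim = (memory @ k_t) / (memory.norm(dim=1) * k_t.norm() + eps)
    # Sharpened logits, normalized with a softmax (Eqs. 10-11).
    return F.softmax(beta_t * sim, dim=0)  # address vector w_t, shape (N,)
```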
3.2 MULTI-STEP ADDRESSING

At each time step, the controller may require more than one step to access the memory. The original NTM addresses this by implementing multiple sets of read, erase, and write heads. In this paper, we explore the option of allowing each head to operate more than once per time step, similar to the multi-hop mechanism of the end-to-end memory network (Sukhbaatar et al., 2015).

3.3 DYNAMIC LEAST RECENTLY USED ADDRESSING

We introduce a memory addressing scheme that can learn to put more emphasis on the least recently used (LRU) memory locations. As observed in (Santoro et al., 2016; Rae et al., 2016), we find it easier to learn write operations with LRU addressing.

To learn LRU-based addressing, we first compute an exponential moving average $v_t$ of the logits $z_t$, as $v_t = 0.1\, v_{t-1} + 0.9\, z_t$. We rescale the accumulated $v_t$ with $\gamma_t$ so that the controller can adjust how much the previously written memory locations should affect the attention weights at a given time step; $\gamma_t$ is a shallow MLP with a scalar output, conditioned on the hidden state of the controller and parametrized by $u_\gamma$ and $b_\gamma$. We then subtract the rescaled $v_t$ from $z_t$ in order to reduce the weights of previously read or written memory locations:

$\gamma_t = \mathrm{sigmoid}(u_\gamma^\top h_t + b_\gamma),$   (12)
$w_t = \mathrm{softmax}(z_t - \gamma_t v_{t-1}).$   (13)

This addressing method increases the weights of the least recently used rows of the memory. The magnitude of the influence of the least recently used locations is learned and adjusted via $\gamma_t$. Our LRU addressing is dynamic because the model can switch between pure content-based addressing and LRU addressing. During training, we do not backpropagate through $v_t$. Due to its dynamic nature, this mechanism can be used for both read and write operations; if needed, the model automatically learns to disable LRU when reading from the memory. A minimal sketch is given below.
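This sketch directly implements Eqs. (12)-(13), assuming PyTorch and the logits `z_t` from the addressing step sketched above; names are illustrative.

```python
# Minimal sketch of dynamic LRU addressing: a learned gate rescales an
# exponential moving average of past logits, which is subtracted before
# the softmax to down-weight recently used memory locations.
import torch
import torch.nn.functional as F

def lru_address(z_t, v_prev, h_t, u_g, b_g):
    # Learned scalar gate controlling the LRU influence (Eq. 12).
    gamma_t = torch.sigmoid(u_g @ h_t + b_g)
    # Down-weight recently used locations tracked by the moving average v.
    w_t = F.softmax(z_t - gamma_t * v_prev, dim=0)  # Eq. 13
    # Exponential moving average of the logits; no gradient flows through v.
    v_t = (0.1 * v_prev + 0.9 * z_t).detach()
    return w_t, v_t
```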
4 GENERATING DISCRETE ADDRESS VECTORS

In this section, we describe the discrete attention-based addressing strategy.

Discrete addressing. Let $w$ denote an address vector (read, write, or erase) at time $t$. By the definition in Eq. (10), every element of this address vector is positive and the elements sum to one. In other words, we can treat this vector as the probabilities of a categorical distribution $\mathcal{C}(w)$ with $\dim(w)$ choices: $p(j) = w_j$, where $w_j$ is the $j$-th element of $w$. We can readily sample from this categorical distribution and form a one-hot vector $\tilde{w}$ such that $\tilde{w}_k = \mathbb{I}(k = j)$, where $j \sim \mathcal{C}(w)$ and $\mathbb{I}$ is the indicator function.

Training. We use this sampling-based strategy for all heads during training. This clearly makes the use of backpropagation infeasible for computing the gradient, as the sampling procedure is not differentiable. We therefore use REINFORCE (Williams, 1992) together with the three variance-reduction techniques suggested in (Mnih & Gregor, 2014): a global baseline, an input-dependent baseline, and variance normalization.

Let us define the reward $R(x) = \log p(y \mid x_1, \ldots, x_T)$. We first center and rescale the reward by

$\tilde{R}(x) = \frac{R(x) - b}{\sqrt{\sigma^2 + \epsilon}},$

where $b$ and $\sigma$ are the running average and standard deviation of $R$. We can further center it for each input $x$ separately, i.e., $\tilde{R}(x) \leftarrow \tilde{R}(x) - b(x)$, where $b(x)$ is computed by a baseline network which takes $x$ as input and predicts the estimated reward. The baseline network is trained to minimize the Huber loss (Huber, 1964) between the true reward $\tilde{R}(x)$ and the predicted reward $b(x)$. We use the Huber loss, defined by

$H_\delta(x) = \begin{cases} x^2 & \text{for } |x| \le \delta, \\ \delta\,(2|x| - \delta) & \text{otherwise,} \end{cases}$

due to its robustness. As a further measure to reduce variance, we regularize the negative entropy of all these categorical distributions to facilitate better exploration during training (Xu et al., 2015). The cost function for each training example is then approximated as

$\mathcal{C}^n(\theta) = -\log p(y \mid x_{1:T}, \tilde{w}_{1:J}, \tilde{u}_{1:J}, \tilde{e}_{1:J}) - \sum_{j=1}^{J} \tilde{R}(x^n)\big(\log p(\tilde{w}_j \mid x_{1:T}) + \log p(\tilde{u}_j \mid x_{1:T}) + \log p(\tilde{e}_j \mid x_{1:T})\big) - \lambda_H \sum_{j=1}^{J}\big(\mathcal{H}(w_j \mid x_{1:T}) + \mathcal{H}(u_j \mid x_{1:T}) + \mathcal{H}(e_j \mid x_{1:T})\big),$

where $J$ is the number of addressing steps, $\lambda_H$ is the entropy regularization coefficient, and $\mathcal{H}$ denotes the entropy.

Inference. Once training is over, we switch to a deterministic strategy: we simply choose the element of $w$ with the largest value as the index of the target memory cell, i.e., $\tilde{w}_k = \mathbb{I}(k = \arg\max(w))$.

Curriculum learning for the discrete attention. Training discrete attention with a feedforward controller and REINFORCE is challenging. We propose a curriculum strategy to tackle this problem. For each minibatch, we sample $\pi_t$ from a binomial distribution with probability $p_t$, $\pi_t \sim \mathrm{Bin}(p_t)$; the model then uses either the discrete or the continuous attention, depending on $\pi_t$. We start the training procedure with $p_0 = 1$, and during training $p_t$ is annealed towards 0 by setting $p_t = \frac{p_0}{\sqrt{1 + t}}$. We can write the resulting weights $w_t$ as in Equation 14, expressed as a combination of the continuous attention weights $w_t$ and the discrete attention weights $\tilde{w}_t$, with the binary variable $\pi_t$ choosing between them:

$w_t \leftarrow \pi_t w_t + (1 - \pi_t)\,\tilde{w}_t.$   (14)

With this curriculum strategy, the model learns to use the memory mainly with the continuous attention at the beginning of training; as we anneal $p_t$, the model relies more and more on the discrete attention.

5 REGULARIZING DYNAMIC NEURAL TURING MACHINES

When the controller of the D-NTM is a powerful recurrent neural network, it is important to regularize training so as to avoid suboptimal solutions in which the D-NTM ignores the memory and works as a simple recurrent neural network.

Read-write consistency regularizer. One such suboptimal solution we observed in preliminary experiments is that the D-NTM uses the address part $A$ of the memory matrix simply as an additional weight matrix, rather than as a means of accessing the content part $C$. We found that this pathological case can be effectively avoided by encouraging the read head to point to memory cells that have also been pointed to by the write head. This is implemented as the following regularization term:

$R_{rw}(w, u) = \lambda \sum_{t'=1}^{T} \Big\lVert 1 - \Big(\frac{1}{t'} \sum_{t=1}^{t'} u_t\Big)^{\!\top} w_{t'} \Big\rVert_2^2,$   (15)

where $u_t$ are the write weights and $w_t$ the read weights. A sketch of this regularizer is given in the code after this section.

Next-input prediction as regularization. Temporal structure is a strong signal that a recurrent controller should exploit. We exploit this structure by letting the controller predict future input: we maximize the predictability of the next input during training, which is equivalent to minimizing the regularizer

$R_{pred}(\mathbf{W}) = -\log p(f_{t+1} \mid f_t, w_t, u_t, M_t; \mathbf{W}),$

where $f_t$ is the current input and $f_{t+1}$ is the input at the next time step. We found this regularizer to be effective in our preliminary experiments and use it for the bAbI tasks.
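This sketch directly implements the read-write consistency term of Eq. (15), assuming PyTorch and read/write weights collected over an episode; names are illustrative.

```python
# Minimal sketch of the read-write consistency regularizer of Eq. (15).
# `reads` and `writes` are (T, N) tensors of read and write weights;
# `lam` is the regularizer coefficient.
import torch

def read_write_consistency(reads, writes, lam=1.0):
    T = reads.size(0)
    reg = reads.new_zeros(())
    for t in range(T):
        # Running average of the write weights up to step t (inclusive).
        u_mean = writes[: t + 1].mean(dim=0)
        # Penalize read weights that ignore previously written locations.
        reg = reg + (1.0 - u_mean @ reads[t]) ** 2
    return lam * reg
```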
6 RELATED WORK

A recurrent neural network (RNN), which is used as the controller in the proposed D-NTM, has an implicit memory in the form of recurring hidden states. Even with this implicit memory, a vanilla RNN is known to have difficulty storing information over long time spans (Bengio et al., 1994; Hochreiter, 1991). Long short-term memory (LSTM, (Hochreiter & Schmidhuber, 1997)) and gated recurrent units (GRU, (Cho et al., 2014)) have been found to address this issue. However, all models based solely on RNNs have been found to be limited when used to solve, e.g., algorithmic tasks and episodic question answering.

In addition to the finite random-access memory of the neural Turing machine, on which the D-NTM is based, other data structures have been proposed as external memory for neural networks. In (Sun et al., 1997; Grefenstette et al., 2015; Joulin & Mikolov, 2015), a continuous, differentiable stack was proposed. In (Zaremba et al., 2015; Zaremba & Sutskever, 2015), grid and tape storage are used. These approaches differ from the NTM in that their memory is unbounded and can grow indefinitely; on the other hand, it is often not randomly accessible.

Memory networks (Weston et al., 2015b) form another family of neural networks with external memory. In this class of neural networks, information is stored explicitly as it is (in the form of its continuous representation) in the memory, without being erased or modified during an episode. Memory networks and their variants have been applied successfully to various tasks (Sukhbaatar et al., 2015; Bordes et al., 2015; Dodge et al., 2015; Xiong et al., 2016). Miller et al. (2016) have also independently proposed the idea of having separate key and value vectors for memory networks.

Another related family of models is that of attention-based neural networks. Neural networks with continuous or discrete attention over an input have shown promising results on a variety of challenging tasks, including machine translation (Bahdanau et al., 2015; Luong et al., 2015), speech recognition (Chorowski et al., 2015), machine reading comprehension (Hermann et al., 2015), and image caption generation (Xu et al., 2015).

The latter two, the memory network and attention-based networks, are however clearly distinguishable from the D-NTM by the fact that they do not modify the content of the memory.

7 EXPERIMENTS

We provide experimental results to demonstrate the abilities of our model, first on the Facebook bAbI task (Weston et al., 2015a), for which we give a detailed analysis and compare different variations of the NTM. We have also performed experiments on sequential permuted MNIST (Le et al., 2015) and on toy tasks, with a recurrent controller, in order to compare against other published models. The details of our experiments are provided in the supplementary material.

7.1 EPISODIC QUESTION-ANSWERING: BABI TASKS

In this section, we evaluate the proposed D-NTM on the recently proposed episodic question-answering task, Facebook bAbI. We use the dataset with 10k training examples per sub-task provided by Facebook.1 For each episode, the D-NTM reads a sequence of factual sentences followed by a question, all of which are given as natural language sentences.
The D-NTM is expected to store and retrieve relevant information in the memory in order to answer the question based on the presented facts. Exact implementation details and hyperparameter settings are provided in the appendix.

7.1.1 GOALS

The goal of this experiment is three-fold. First, we present, for the first time, the performance on the Facebook bAbI tasks of a memory-based network that can both read and write dynamically.2 We aim to understand whether a model that has to learn to write an incoming fact to the memory, rather than storing it as is, is able to work well, and to do so, we compare both the original NTM and the proposed D-NTM against an LSTM-RNN.

Second, we investigate the effect of having to learn how to write. The fact that the NTM needs to learn to write likely has an adverse effect on overall performance when compared to, for instance, end-to-end memory networks (MemN2N, (Sukhbaatar et al., 2015)) and the dynamic memory network (DMN+, (Xiong et al., 2016)), both of which simply store the incoming facts as they are. We quantify this effect in this experiment. Lastly, we show the effect of the proposed learnable addressing scheme.

We further explore the effect of using a feedforward controller instead of the GRU controller. In addition to the explicit memory, the GRU controller can use its own internal hidden state as memory, whereas the feedforward controller must rely solely on the explicit memory, as it is the only memory available.

7.1.2 RESULTS AND ANALYSIS

In Table 1, we first observe that the NTMs are indeed capable of solving this type of episodic question answering better than the vanilla LSTM-RNN. Although the availability of explicit memory in the NTM already suggests this result, we note that this is the first time neural Turing machines have been used on this specific task.

All the NTM variants with the GRU controller outperform the vanilla LSTM-RNN. However, not all of them perform equally well. First, it is clear that the proposed dynamic NTM (D-NTM) with the GRU controller outperforms the original NTM with the GRU controller (NTM and CBA-only NTM vs. continuous D-NTM and discrete D-NTM). As discussed earlier, the learnable addressing scheme of the D-NTM allows the controller to access the memory slots by location in a potentially nonlinear way. We expect it to help with tasks that have non-trivial access patterns, and as anticipated, we see a large gain of the D-NTM over the original NTM on, for instance, task 12 (Conjunction) and task 17 (Positional Reasoning).

Among the recurrent variants of the proposed D-NTM, we notice significant improvements from using discrete addressing over continuous addressing. We conjecture that this is due to certain types of tasks that require precise/sharp retrieval of a stored fact, in which case continuous addressing is at a disadvantage relative to discrete addressing.
This is evident from the observation that the D-NTM with discrete addressing significantly outperforms the one with continuous addressing on task 8 (Lists/Sets) and task 11 (Basic Coreference). Furthermore, this is in line with an earlier observation in (Xu et al., 2015), where discrete addressing was found to generalize better in the task of image caption generation.

1 https://research.facebook.com/researchers/1543934539189348
2 Similar experiments were reported in the recently published (Graves et al., 2016), but the D-NTM results for the bAbI tasks were already available on arXiv by that time.

                                 1-step addressing                3-step addressing
Task  LSTM   MemN2N  DMN+  | NTM*   CBA    Soft   Discr. | NTM*   CBA    Soft   Discr.
 1     0.00   0.00   0.00  | 16.30  16.88   5.41   6.66  |  0.00   0.00   0.00   0.00
 2    81.90   0.30   0.30  | 57.08  55.70  58.54  56.04  | 61.67  59.38  46.66  62.29
 3    83.10   2.10   1.10  | 74.16  55.00  74.58  72.08  | 83.54  65.21  47.08  41.45
 4     0.20   0.00   0.00  |  0.00   0.00   0.00   0.00  |  0.00   0.00   0.00   0.00
 5     1.20   0.80   0.50  |  1.46  20.41   1.66   1.04  |  0.83   1.46   1.25   1.45
 6    51.80   0.10   0.00  | 23.33  21.04  40.20  44.79  | 48.13  54.80  20.62  11.04
 7    24.90   2.00   2.40  | 21.67  21.67  19.16  19.58  |  7.92  37.70   7.29   5.62
 8    34.10   0.90   0.00  | 25.76  21.05  12.58  18.46  | 25.38   8.82  11.02   0.74
 9    20.20   0.30   0.00  | 24.79  24.17  36.66  34.37  | 37.80   0.00  39.37  32.50
10    30.10   0.00   0.00  | 41.46  33.13  52.29  50.83  | 56.25  23.75  20.00  20.83
11    10.30   0.10   0.00  | 18.96  31.88  31.45   4.16  |  3.96   0.28  30.62  16.87
12    23.40   0.00   0.00  | 25.83  30.00   7.70   6.66  | 28.75  23.75   5.41   4.58
13     6.10   0.00   0.00  |  6.67   5.63   5.62   2.29  |  5.83  83.13   7.91   5.00
14    81.00   0.10   0.20  | 58.54  59.17  60.00  63.75  | 61.88  57.71  58.12  60.20
15    78.70   0.00   0.00  | 36.46  42.30  36.87  39.27  | 35.62  21.88  36.04  40.26
16    51.90  51.80  45.30  | 71.15  71.15  49.16  51.35  | 46.15  50.00  46.04  45.41
17    50.10  18.60   4.20  | 43.75  43.75  17.91  16.04  | 43.75  56.25  21.25   9.16
18     6.80   5.30   2.10  |  3.96  47.50   3.95   3.54  | 47.50  47.50   6.87   1.66
19    90.30   2.30   0.00  | 75.89  71.51  73.74  64.63  | 61.56  63.65  75.88  76.66
20     2.10   0.00   0.00  |  1.25   0.00   2.70   3.12  |  0.40   0.00   3.33   0.00
Avg.  36.41   4.24   2.81  | 31.42  33.60  29.51  27.93  | 32.85  32.76  24.24  21.79

Table 1: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples with the GRU controller. NTM* refers to the NTM that uses both location-based addressing (LBA) and content-based addressing (CBA); CBA refers to the NTM with content-based addressing only; Soft and Discr. refer to the D-NTM with continuous and discrete addressing respectively. In this table, we compare multi-step vs. single-step addressing, the original NTM with location-based plus content-based addressing vs. content-based addressing only, and discrete vs. continuous addressing on bAbI.

In Table 2, we also observe that the D-NTM with the feedforward controller and discrete attention performs worse than the LSTM and the D-NTM with continuous attention. However, when the proposed curriculum strategy from Sec. 4 is used, the average test error drops from 68.30 to 37.79.

We empirically found training the feedforward controller more difficult than training the recurrent controller. We train our feedforward-controller-based models four times longer (in terms of the number of updates) than the recurrent-controller-based ones in order to ensure that they converge on most of the tasks. On the other hand, the models trained with the GRU controller overfit on the bAbI tasks very quickly.
For example, on tasks 3 and 16 the feedforward-controller-based model underfits (i.e., has high training loss) at the end of training, whereas with the same number of units the model with the GRU controller can overfit on those tasks after only 3,000 updates.

When our results are compared to the variants of the memory network (Weston et al., 2015b) (MemN2N and DMN+), we notice a significant performance gap. We attribute this gap to the difficulty of learning to manipulate and store a complex input.

Task  Soft FF D-NTM  Discrete FF D-NTM  Discrete FF D-NTM (curriculum)
 1         4.38           81.67                14.79
 2        27.5            76.67                76.67
 3        71.25           79.38                70.83
 4         0.00           78.65                44.06
 5         1.67           83.13                17.71
 6         1.46           48.76                48.13
 7         6.04           54.79                23.54
 8         1.70           69.75                35.62
 9         0.63           39.17                14.38
10        19.80           56.25                56.25
11         0.00           78.96                39.58
12         6.25           82.5                 32.08
13         7.5            75.0                 18.54
14        17.5            78.75                24.79
15         0.0            71.42                39.73
16        49.65           71.46                71.15
17         1.25           43.75                43.75
18         0.24           48.13                 2.92
19        39.47           71.46                71.56
20         0.0            76.56                 9.79
Avg.      12.81           68.30                37.79

Table 2: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples with the feedforward (FF) controller; the last column uses the curriculum strategy of Sec. 4.

We also provide, in the appendix, further experiments investigating different extensions of the D-NTM.

7.2 SEQUENTIAL pMNIST

In the sequential MNIST task, the pixels of MNIST digits are provided to the model in scan-line order, left to right and top to bottom (Le et al., 2015); at the end of the sequence of pixels, the model predicts the label of the digit in the sequence. We experiment with the D-NTM on the variation of sequential MNIST where the order of the pixels is randomly shuffled; we call this task permuted MNIST (pMNIST). An important contribution of this task to our paper, in particular, is to measure the model's ability to perform well when dealing with long-term dependencies. We report our results in Table 3,3 and we observe improvements over the other models that we compare against. In Table 3, "discrete addressing with MAB" refers to the D-NTM model using REINFORCE with a baseline computed from moving averages of the reward, and "discrete addressing with IB" refers to the D-NTM using REINFORCE with an input-based baseline.

7.3 NTM TOY TASKS

We explore the possibility of using the D-NTM to solve algorithmic tasks such as the copy and associative recall tasks. We train our model on the same sequence lengths as the experiments in (Graves et al., 2014). We report our results in Table 4 and find that the D-NTM using continuous attention can successfully learn the copy and associative recall tasks.

In Table 4, we train our model on sequences of the same length as the experiments in (Graves et al., 2014) and test the model on sequences of the maximum length seen during training. We consider a model successful on copy or associative recall if its validation cost (binary cross-entropy) is lower than 0.02 on sequences of the maximum length seen during training. We set the threshold to 0.02 because we empirically observe that models with higher validation costs generalize badly to longer sequences.
The "D-NTM discrete" model in this table is trained with REINFORCE using moving averages to estimate the baseline.

Model                                       Test Acc
D-NTM discrete MAB                          89.6
D-NTM discrete IB                           92.3
Soft D-NTM                                  93.4
NTM                                         90.9
I-RNN (Le et al., 2015)                     82.0
Zoneout (Krueger et al., 2016)              93.1
LSTM (Krueger et al., 2016)                 89.8
Unitary-RNN (Arjovsky et al., 2015)         91.4
Recurrent Dropout (Krueger et al., 2016)    92.5

Table 3: Sequential pMNIST.

Model            Copy Task   Associative Recall
Soft D-NTM       Success     Success
D-NTM discrete   Success     Failure
NTM              Success     Success

Table 4: NTM toy tasks.

3 Let us note that the current state of the art on this task is recurrent batch normalization with LSTM (Cooijmans et al., 2016), at 95.6% accuracy. It is possible to use recurrent batch normalization in our model and potentially improve our results on this task as well.

8 CONCLUSION AND FUTURE WORK

In this paper, we extend neural Turing machines (NTM) by introducing a learnable addressing scheme which allows the NTM to perform highly nonlinear, location-based addressing. This extension, which we refer to as the dynamic NTM (D-NTM), is extensively tested with various configurations, including different addressing mechanisms (continuous vs. discrete) and different numbers of addressing steps, on the Facebook bAbI tasks. This is the first time an NTM-type model has been tested on this task, and we observe that the NTM, and especially the proposed D-NTM, performs better than a vanilla LSTM-RNN. Furthermore, the experiments reveal that discrete addressing works better than continuous addressing with the GRU controller, and our analysis reveals that this is the case when the task requires precise retrieval of memory content.

Our experiments show that NTM-based models can be weaker than other variants of memory networks which do not learn to write but have an explicit mechanism for storing incoming facts as they are. We conjecture that this is due to the difficulty of learning how to write, manipulate, and delete the content of memory. Despite this difficulty, we find NTM-based approaches, such as the proposed D-NTM, to be a better, more future-proof approach, because they can scale to a much longer horizon, where it becomes impossible to explicitly store all the experiences.

On the pMNIST task, we show that our model can outperform other similar approaches proposed to deal with long-term dependencies. On the copy and associative recall tasks, we show that our model can solve the algorithmic problems that NTM-type models were proposed to solve.

The success of both the learnable addressing and the discrete addressing scheme suggests two future research directions. First, both schemes should be tried in a wider array of memory-based models, as they are not specific to neural Turing machines. Second, the proposed D-NTM should be evaluated on a diverse set of applications, such as text summarization (Rush et al., 2015), visual question answering (Antol et al., 2015), and machine translation, in order to draw more concrete conclusions.

REFERENCES

Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. VQA: visual question answering. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pp. 2425–2433, 2015.

Martin Arjovsky, Amar Shah, and Yoshua Bengio. Unitary evolution recurrent neural networks. arXiv preprint arXiv:1511.06464, 2015.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio.
Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Representation Learning (ICLR 2015), 2015.
Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 5(2):157-166, 1994.
Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-based models for speech recognition. arXiv preprint arXiv:1506.07503, 2015.
Tim Cooijmans, Nicolas Ballas, César Laurent, and Aaron Courville. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016.
Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, and Jason Weston. Evaluating prerequisite qualities for learning end-to-end dialog systems. CoRR, abs/1511.06931, 2015.
Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. Book in preparation for MIT Press, 2016. URL http://www.deeplearningbook.org.
Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.
Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471-476, 2016.
Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pp. 1819-1827, 2015.
Caglar Gulcehre, Marcin Moczulski, Misha Denil, and Yoshua Bengio. Noisy activation functions. arXiv preprint arXiv:1603.00391, 2016.
Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. arXiv preprint arXiv:1506.03340, 2015.
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The Goldilocks principle: Reading children's books with explicit memory representations. arXiv preprint arXiv:1511.02301, 2015.
Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Technische Universität München, pp. 91, 1991.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
Peter J. Huber. Robust estimation of a location parameter. Ann. Math. Statist., 35(1):73-101, 03 1964.
Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems, pp. 190-198, 2015.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron Courville, et al. Zoneout: Regularizing RNNs by randomly preserving hidden activations. arXiv preprint arXiv:1606.01305, 2016.
Quoc V Le, Navdeep Jaitly, and Geoffrey E Hinton.
A simple way to initialize recurrent networks of rectified linear units. arXiv preprint arXiv:1504.00941, 2015.
Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-based neural machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2015), 2015.
Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. CoRR, abs/1606.03126, 2016. URL http://arxiv.org/abs/1606.03126.
Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030, 2014.
Jack W Rae, Jonathan J Hunt, Tim Harley, Ivo Danihelka, Andrew Senior, Greg Wayne, Alex Graves, and Timothy P Lillicrap. Scaling memory-augmented neural networks with sparse reads and writes. In Advances in NIPS, 2016.
Scott Reed and Nando de Freitas. Neural programmer-interpreters. arXiv preprint arXiv:1511.06279, 2015.
Alexander M. Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pp. 379-389, 2015.
Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. One-shot learning with memory-augmented neural networks. arXiv preprint arXiv:1605.06065, 2016.
Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI-16), 2016.
Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. arXiv preprint arXiv:1503.08895, 2015.
Guo-Zheng Sun, C. Lee Giles, and Hsing-Hen Chen. The neural network pushdown automaton: Architecture, dynamics and training. In Adaptive Processing of Sequences and Data Structures, International Summer School on Neural Networks, pp. 296-345, 1997.
Oriol Vinyals and Quoc Le. A neural conversational model. arXiv preprint arXiv:1506.05869, 2015.
Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. Towards AI-complete question answering: a set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015a.
Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In Proceedings of the International Conference on Representation Learning (ICLR 2015), 2015b. In press.
Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229-256, 1992.
Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. CoRR, abs/1603.01417, 2016.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the International Conference on Representation Learning (ICLR 2015), 2015.
Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, and Aaron Courville. Describing videos by exploiting temporal structure. In Computer Vision (ICCV), 2015 IEEE International Conference on. IEEE, 2015.
Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural Turing machines.
CoRR, abs/1505.00521, 2015.
Wojciech Zaremba, Tomas Mikolov, Armand Joulin, and Rob Fergus. Learning simple algorithms from examples. arXiv preprint arXiv:1511.07275, 2015.

A EXPERIMENTAL DETAILS
A.1 MODEL AND TRAINING DETAILS FOR BABI
We use the same hyperparameters for all the tasks for a given model.

A.1.1 FACT REPRESENTATION
We use a recurrent neural network with GRU units to encode a variable-length fact into a fixed-size vector representation. This allows the D-NTM to exploit the word ordering in each fact, unlike when facts are encoded as bag-of-words vectors.

A.1.2 CONTROLLER
We experiment with both recurrent and feedforward neural networks as the controller that generates the read and write weights. The controller has 180 units. We train our feedforward controller using the noisy-tanh activation function (Gulcehre et al., 2016), since we experienced training difficulties with the sigmoid and tanh activation functions. We use both single-step and three-step addressing with our GRU controller.

A.1.3 MEMORY
The memory contains 120 memory cells. Each memory cell consists of a 16-dimensional address part and a 28-dimensional content part.

A.1.4 TRAINING DETAILS
We set aside a random 10% of the training examples as a validation set for each sub-task and use it for early stopping and hyperparameter search. We train one D-NTM for each sub-task, using Adam (Kingma & Ba, 2014) with its learning rate set to 0.003 and 0.007 for the GRU and feedforward controllers, respectively. The size of each minibatch is 160, and each minibatch is constructed uniformly at random from the training set.

A.2 MODEL AND TRAINING DETAILS FOR SEQUENTIAL MNIST
On the sequential MNIST task we try to keep the capacity of our model close to that of our baselines. We use 100 GRU units in the controller, with content vectors of size 8 and address vectors of size 8. We use a learning rate of 1e-3 and train the model with the Adam optimizer. We did not use the read and write consistency regularization in any of our models.

A.3 MODEL AND TRAINING DETAILS FOR TOY TASKS
On both the copy and associative recall tasks, we try to keep the capacity of our model close to that of our baselines. We use 100 GRU units in the controller, with content vectors of size 8 and address vectors of size 8. We use a learning rate of 1e-3 and train the model with the Adam optimizer. We did not use the read and write consistency regularization in any of our models. For the model with discrete attention we use REINFORCE, with the baseline computed using moving averages.

B VISUALIZATION OF DISCRETE ATTENTION
We visualize the attention of the D-NTM with GRU controller and discrete attention in Figure 2. From this example, we can see that the D-NTM has learned to find the correct supporting fact, even without any supervision, for the particular story in the visualization.

C LEARNING CURVES FOR THE RECURRENT CONTROLLER
In Figure 3, we compare the learning curves of the continuous- and discrete-attention D-NTM models with a recurrent controller on Task 1. Surprisingly, the discrete-attention D-NTM converges faster than the continuous-attention model. The main difficulty of learning with continuous attention is due to the fact that learning to write with continuous attention can be challenging.

Figure 2: An example view of the discrete attention over the memory slots for both read (left) and write (right) heads.
The x-axis denotes the memory locations that are being accessed, and the y-axis corresponds to the content in each particular memory location. In this figure, we visualize the discrete-attention model with 3 reading steps on task 20. It is easy to see that the NTM with discrete attention accesses the relevant part of the memory. We only visualize the last of the 3 writing steps, because with discrete attention the model usually just reads the empty slots of the memory.

Figure 3: A visualization of the learning curves of the continuous (soft) and discrete (hard) attention D-NTM models trained on Task 1 using 3 steps (training negative log-likelihood over updates, for the hard- and soft-attention models). In most tasks, we observe that the discrete-attention model with GRU controller converges faster than the continuous-attention model.

D A COMPARISON BETWEEN THE LEARNING CURVES OF THE INPUT-BASED BASELINE AND THE REGULAR BASELINE ON pMNIST
In Figure 4, we show the learning curves of the input-based baseline (ibb) and regular REINFORCE with moving-averages baseline (mab) on the pMNIST task. We observe that the input-based baseline is in general much easier to optimize and converges faster as well, but it can also quickly overfit to the task.

Figure 4: We compare the learning curves of our D-NTM model using discrete attention on the pMNIST task with the input-based baseline and the regular REINFORCE baseline (validation and training curves for both). The x-axis is the number of epochs and the y-axis is the loss.

E TRAINING WITH CONTINUOUS ATTENTION AND TESTING WITH DISCRETE ATTENTION
In Table 5, we provide results investigating the effect of using the discrete-attention model at test time for a model trained with a feedforward controller and continuous attention. The Discrete* D-NTM model bootstraps the discrete attention with the continuous attention, using the curriculum method that we introduced in Section "Curriculum Learning for the Discrete Attention". The Discrete† D-NTM model is the continuous-attention model which uses discrete attention at test time. We observe that the Discrete† D-NTM model, which is trained with continuous attention, outperforms the Discrete D-NTM model.

Task | Continuous D-NTM | Discrete D-NTM | Discrete* D-NTM | Discrete† D-NTM
1 | 4.38 | 81.67 | 14.79 | 72.28
2 | 27.5 | 76.67 | 76.67 | 81.67
3 | 71.25 | 79.38 | 70.83 | 78.95
4 | 0.00 | 78.65 | 44.06 | 79.69
5 | 1.67 | 83.13 | 17.71 | 68.54
6 | 1.46 | 48.76 | 48.13 | 31.67
7 | 6.04 | 54.79 | 23.54 | 49.17
8 | 1.70 | 69.75 | 35.62 | 79.32
9 | 0.63 | 39.17 | 14.38 | 37.71
10 | 19.80 | 56.25 | 56.25 | 25.63
11 | 0.00 | 78.96 | 39.58 | 82.08
12 | 6.25 | 82.5 | 32.08 | 74.38
13 | 7.5 | 75.0 | 18.54 | 47.08
14 | 17.5 | 78.75 | 24.79 | 77.08
15 | 0.0 | 71.42 | 39.73 | 73.96
16 | 49.65 | 71.46 | 71.15 | 53.02
17 | 1.25 | 43.75 | 43.75 | 30.42
18 | 0.24 | 48.13 | 2.92 | 11.46
19 | 39.47 | 71.46 | 71.56 | 76.05
20 | 0.0 | 76.56 | 9.79 | 13.96
Avg | 12.81 | 68.30 | 37.79 | 57.21

Table 5: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples with the feedforward controller. The Discrete* D-NTM model bootstraps the discrete attention with the continuous attention, using the curriculum method that we introduced in Section 4. The Discrete† D-NTM model is the continuous-attention model which uses discrete attention at test time.

F D-NTM WITH BOW FACT REPRESENTATION
In Table 6, we provide results for the D-NTM using BoW with positional encoding (PE) (Sukhbaatar et al., 2015) as the representation of the input facts.
The fact representations are provided as input to the GRU controller. In agreement with our results using the GRU fact representation, with the BoW fact representation we observe improvements from multi-step addressing over single-step addressing, and from discrete addressing over continuous addressing.

Task | Soft D-NTM (1-step) | Discrete D-NTM (1-step) | Soft D-NTM (3-steps) | Discrete D-NTM (3-steps)
1 | 0.00 | 0.00 | 0.00 | 0.00
2 | 61.04 | 59.37 | 56.87 | 55.62
3 | 55.62 | 57.5 | 62.5 | 57.5
4 | 27.29 | 24.89 | 26.45 | 27.08
5 | 13.55 | 12.08 | 15.83 | 14.78
6 | 13.54 | 14.37 | 21.87 | 13.33
7 | 8.54 | 6.25 | 8.75 | 14.58
8 | 1.69 | 1.36 | 3.01 | 3.02
9 | 17.7 | 16.66 | 37.70 | 17.08
10 | 26.04 | 27.08 | 26.87 | 23.95
11 | 20.41 | 3.95 | 2.5 | 2.29
12 | 0.41 | 0.83 | 0.20 | 4.16
13 | 3.12 | 1.04 | 4.79 | 5.83
14 | 62.08 | 58.33 | 61.25 | 60.62
15 | 31.66 | 26.25 | 0.62 | 0.05
16 | 54.47 | 48.54 | 48.95 | 48.95
17 | 43.75 | 31.87 | 43.75 | 30.62
18 | 33.75 | 39.37 | 36.66 | 36.04
19 | 64.63 | 69.21 | 67.23 | 65.46
20 | 1.25 | 0.00 | 1.45 | 0.00
Avg | 27.02 | 24.98 | 26.36 | 24.05

Table 6: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples with the GRU controller, where the representations of facts are obtained with BoW using positional encoding.
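For reference, the snippet below is a minimal sketch of a BoW-with-positional-encoding fact encoder in the style of Sukhbaatar et al. (2015), as used for Table 6; the embedding matrix and the tokenized fact are illustrative placeholders, and the weighting follows the formula given in the end-to-end memory networks paper.

```python
import numpy as np

def positional_weights(num_words, dim):
    """l[j, k] = (1 - j/J) - (k/d) * (1 - 2*j/J), with 1-based j, k
    as in the end-to-end memory networks positional encoding."""
    J, d = num_words, dim
    j = np.arange(1, J + 1)[:, None]   # word positions 1..J
    k = np.arange(1, d + 1)[None, :]   # embedding dimensions 1..d
    return (1 - j / J) - (k / d) * (1 - 2 * j / J)

def encode_fact(word_ids, E):
    """BoW fact vector with positional encoding: sum_j l_j * E[w_j]."""
    emb = E[word_ids]                  # (J, d) word embeddings for the fact
    return (positional_weights(len(word_ids), E.shape[1]) * emb).sum(axis=0)

# Toy usage: a 50-word vocabulary, 16-dim embeddings, a 5-word fact.
rng = np.random.default_rng(0)
E = rng.normal(size=(50, 16))
fact = np.array([3, 17, 9, 42, 8])
print(encode_fact(fact, E).shape)      # (16,)
```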
Hk8N3Sclg
Published as a conference paper at ICLR 2017

MULTI-AGENT COOPERATION AND THE EMERGENCE OF (NATURAL) LANGUAGE

Angeliki Lazaridou1, Alexander Peysakhovich2, Marco Baroni2,3
1 Google DeepMind, 2 Facebook AI Research, 3 University of Trento
angeliki@google.com, {alexpeys,mbaroni}@fb.com

ABSTRACT
The current mainstream approach to training natural language systems is to expose them to large amounts of text. This passive learning is problematic if we are interested in developing interactive machines, such as conversational agents. We propose a framework for language learning that relies on multi-agent communication. We study this learning in the context of referential games. In these games, a sender and a receiver see a pair of images. The sender is told one of them is the target and is allowed to send a message from a fixed, arbitrary vocabulary to the receiver. The receiver must rely on this message to identify the target. Thus, the agents develop their own language interactively out of the need to communicate. We show that two networks with simple configurations are able to learn to coordinate in the referential game. We further explore how to make changes to the game environment to cause the "word meanings" induced in the game to better reflect intuitive semantic properties of the images. In addition, we present a simple strategy for grounding the agents' code into natural language. Both of these are necessary steps towards developing machines that are able to communicate with humans productively.

1 INTRODUCTION
I tried to break it to him gently [...] the only way to learn an unknown language is to interact with a native speaker [...] asking questions, holding a conversation, that sort of thing [...] If you want to learn the aliens' language, someone [...] will have to talk with an alien. Recordings alone aren't sufficient.
Ted Chiang, Story of Your Life

One of the main aims of AI is to develop agents that can cooperate with others to achieve goals (Wooldridge, 2009). Such coordination requires communication. If the coordination partners are to include humans, the most obvious channel of communication is natural language. Thus, handling natural-language-based communication is a key step toward the development of AI that can thrive in a world populated by other agents.

Given the success of deep learning models in related domains such as image captioning or machine translation (e.g., Sutskever et al., 2014; Xu et al., 2015), it would seem reasonable to cast the problem of training conversational agents as an instance of supervised learning (Vinyals & Le, 2015). However, training on "canned" conversations does not allow learners to experience the interactive aspects of communication. Supervised approaches, which focus on the structure of language, are an excellent way to learn general statistical associations between sequences of symbols. However, they do not capture the functional aspects of communication, i.e., that humans use words to coordinate with others and make things happen (Austin, 1962; Clark, 1996; Wittgenstein, 1953).

This paper introduces the first steps of a research program based on multi-agent coordination communication games. These games place agents in simple environments where they need to develop a language to coordinate and earn payoffs.
Importantly, the agents start as blank slates, but, by playing a game together, they can develop and bootstrap knowledge on top of each other, leading to the emergence of a language.

(Work done while at Facebook AI Research.)

The central problem of our program, then, is the following: How do we design environments that foster the development of a language that is portable to new situations and to new communication partners (in particular humans)?

We start from the most basic challenge of using a language in order to refer to things in the context of a two-agent game. We focus on two questions. First, whether tabula rasa agents succeed in communication. Second, what features of the environment lead to the development of codes resembling human language.

We assess this latter question in two ways. First, we consider whether the agents associate general conceptual properties, such as broad object categories (as opposed to low-level visual properties), with the symbols they learn to use. Second, we examine whether the agents' "word usage" is partially interpretable by humans in an online experiment.

Other researchers have proposed communication-based environments for the development of coordination-capable AI. Work in multi-agent systems has focused on the design of pre-programmed communication systems to solve specific tasks (e.g., robot soccer, Stone & Veloso 1998). Most related to our work, Sukhbaatar et al. (2016) and Foerster et al. (2016) show that neural networks can evolve communication in the context of games without a pre-coded protocol. We pursue the same question, but further ask how we can change our environment to make the emergent language more interpretable.

Others (e.g., the SHRDLU program of Winograd 1971 or the game in Wang et al. 2016) propose building a communicating AI by putting humans in the loop from the very beginning. This approach has benefits but faces serious scalability issues, as active human intervention is required at each step. An attractive component of our game-based paradigm is that humans may be added as players, but do not need to be there all the time.

A third branch of research focuses on "Wizard-of-Oz" environments, where agents learn to play games by interacting with a complex scripted environment (Mikolov et al., 2015). This approach gives the designer tight control over the learning curriculum, but imposes a heavy engineering burden on developers. We also stress the importance of the environment (game setup), but we focus on simpler environments with multiple agents that force them to get smarter by bootstrapping on top of each other.

We leverage ideas from work in linguistics, cognitive science and game theory on the emergence of language (Wagner et al., 2003; Skyrms, 2010; Crawford & Sobel, 1982; Crawford, 1998). Our game is a variation of Lewis' signaling game (Lewis, 1969). There is a rich tradition of linguistic and cognitive studies using similar setups (e.g., Briscoe, 2002; Cangelosi & Parisi, 2002; Spike et al., 2016; Steels & Loetzsch, 2012). What distinguishes us from this literature is our aim to, eventually, develop practical AI.
This motivates our focus on more realistic input data (a large collection of noisy natural images) and on trying to align the agents' language with human intuitions.

Lewis' classic games have been studied extensively in game theory under the name of "cheap talk". These games have been used as models to study the evolution of language both theoretically and experimentally (Crawford, 1998; Blume et al., 1998; Crawford & Sobel, 1982). A major question in game theory is whether equilibrium actually occurs in a game, as convergence in learning is not guaranteed (Fudenberg & Peysakhovich, 2014; Roth & Erev, 1995); and, if an equilibrium is reached, which one it will be (since they are typically not unique). This is particularly true for cheap talk games, which exhibit Nash equilibria in which precise language emerges, others where vague language emerges, and others where no language emerges at all (Crawford & Sobel, 1982). In addition, because in these games language has no ex-ante meaning and only emerges in the context of the equilibrium, some of the emergent languages may not be very natural. Our results speak both to the convergence question and to the question of what features of the game cause the appearance of different types of languages. Thus, our results are also of interest to game theorists.

An evolutionary perspective has recently been advocated as a way to mitigate the data hunger of traditional supervised approaches (Goodfellow et al., 2014; Silver et al., 2016). This research confirms that learning can be bootstrapped from competition between agents. We focus, however, on cooperation between agents as a way to foster learning while reducing the need for annotated data.

2 GENERAL FRAMEWORK
Our general framework includes K players, each parametrized by $\theta_k$, a collection of tasks/games that the players have to perform, a communication protocol V that enables the players to communicate with each other, and payoffs assigned to the players as a deterministic function of a well-defined goal. In this paper we focus on a particular version of this: referential games. These games are structured as follows (a schematic round is sketched in code below):

1. There is a set of images represented by vectors $\{i_1, \dots, i_N\}$; two images are drawn at random from this set, call them $(i_L, i_R)$, and one of them is chosen to be the "target" $t \in \{L, R\}$.
2. There are two players, a sender and a receiver, each seeing the images; the sender receives input $\theta_S(i_L, i_R, t)$.
3. There is a vocabulary V of size K, and the sender chooses one symbol to send to the receiver; we call this the sender's policy $s(\theta_S(i_L, i_R, t)) \in V$.
4. The receiver does not know the target, but sees the sender's symbol and tries to guess the target image. We call this the receiver's policy $r(i_L, i_R, s(\theta_S(i_L, i_R, t))) \in \{L, R\}$.
5. If $r(i_L, i_R, s(\theta_S(i_L, i_R, t))) = t$, that is, if the receiver guesses the target, both players receive a payoff of 1 (win); otherwise they receive a payoff of 0 (lose).

Many extensions to the basic referential game explored here are possible. There can be more images, or a more sophisticated communication protocol (e.g., communication of a sequence of symbols or multi-step communication requiring back-and-forth interaction[1]), rotation of the sender and receiver roles, having a human occasionally playing one of the roles, etc.

3 EXPERIMENTAL SETUP
Images We use McRae et al.'s (2005) set of 463 base-level concrete concepts (e.g., cat, apple, car ...) spanning 20 general categories (e.g., animal, fruit/vegetable, vehicle ...).
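As promised above, here is a minimal, framework-free sketch of one round of the referential game defined in Section 2; `sender_policy` and `receiver_policy` are placeholders for the trained networks described below, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def play_round(images, sender_policy, receiver_policy):
    """One round of the referential game: sample an image pair and a
    target, let the sender emit a symbol, let the receiver guess,
    and return the shared 0/1 payoff."""
    i_l, i_r = rng.choice(len(images), size=2, replace=False)
    target = int(rng.integers(2))               # 0 = left, 1 = right
    symbol = sender_policy(images[i_l], images[i_r], target)
    guess = receiver_policy(images[i_l], images[i_r], symbol)
    return 1 if guess == target else 0          # both players get this reward

# Toy usage with random "image vectors" and trivial placeholder policies.
images = rng.normal(size=(10, 4096))
send = lambda left, right, t: 0                 # always emits symbol 0
recv = lambda left, right, s: int(rng.integers(2))  # guesses at random
print(np.mean([play_round(images, send, recv) for _ in range(1000)]))  # ~0.5
```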
We randomly sample 100 images of each concept from ImageNet (Deng et al., 2009). To create target/distractor pairs, we randomly sample two concepts, one image for each concept, and whether the first or second image will serve as target. We apply to each image a forward pass through the pre-trained VGG ConvNet (Simonyan & Zisserman, 2014), and represent it with the activations from either the top 1000-D softmax layer (sm) or the second-to-last 4096-D fully connected layer (fc).

Agent Players Both sender and receiver are simple feed-forward networks. For the sender, we experiment with the two architectures depicted in Figure 1. Both sender architectures take as input the target (marked with a green square in Figure 1) and distractor representations, always in this order, so that they are implicitly informed of which image is the target (the receiver, instead, sees the two images in random order).

The agnostic sender is a generic neural network that maps the original image vectors onto a "game-specific" embedding space (in the sense that the embedding is learned while playing the game), followed by a sigmoid nonlinearity. Fully-connected weights are applied to the embedding concatenation to produce scores over vocabulary symbols.

The informed sender also first embeds the images into a "game-specific" space. It then applies 1-D convolutions ("filters") on the image embeddings by treating them as different channels. The informed sender uses convolutions with kernel size 2x1 applied dimension-by-dimension to the two image embeddings (in Figure 1, there are 4 such filters). This is followed by the sigmoid nonlinearity. The resulting feature maps are combined through another filter (kernel size fx1, where f is the number of filters on the image embeddings) to produce scores for the vocabulary symbols. Intuitively, the informed sender has an inductive bias towards combining the two images dimension-by-dimension, whereas the agnostic sender does not (though we note that the agnostic architecture nests the informed one).

[1] For example, Jorge et al. (2016) explore agents playing a "Guess Who" game to learn about the emergence of question-asking and answering in language.

Figure 1: Architectures of agent players (left to right: informed sender, agnostic sender, receiver).

For both senders, motivated by the discrete nature of language, we enforce a strong communication bottleneck that discretizes the communication protocol. Activations on the top (vocabulary) layer are converted to a Gibbs distribution (with temperature parameter τ), and then a single symbol s is sampled from the resulting probability distribution.

The receiver takes as input the target and distractor image vectors in random order, as well as the symbol produced by the sender (as a one-hot vector over the vocabulary). It embeds the images and the symbol into its own "game-specific" space. It then computes dot products between the symbol and image embeddings. Ideally, dot similarity should be higher for the image that is better denoted by the symbol (a schematic version of this receiver is sketched below).
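The sketch referenced above is a minimal illustration of the receiver's scoring, including the Gibbs (tempered softmax) conversion described in the next paragraph; the embedding matrices and dimensions are illustrative placeholders, and whether the temperature divides or multiplies the scores is an implementation detail we simply assume here.

```python
import numpy as np

rng = np.random.default_rng(0)

def receiver_choice(img_left, img_right, symbol_onehot, W_img, W_sym, tau=10.0):
    """Embed both images and the symbol, score each image by a dot
    product with the symbol embedding, and sample a guess from the
    resulting Gibbs (tempered softmax) distribution."""
    e_left, e_right = W_img @ img_left, W_img @ img_right
    e_sym = W_sym @ symbol_onehot
    scores = np.array([e_left @ e_sym, e_right @ e_sym]) / tau
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return int(rng.choice(2, p=probs))  # 0 = left, 1 = right

# Toy usage: 4096-d fc image vectors, a 10-symbol vocabulary, 50-d embeddings.
W_img = rng.normal(scale=0.01, size=(50, 4096))
W_sym = rng.normal(scale=0.01, size=(50, 10))
symbol = np.eye(10)[3]
print(receiver_choice(rng.normal(size=4096), rng.normal(size=4096),
                      symbol, W_img, W_sym))
```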
The two dot products are converted to a Gibbs distribution (with temperature τ), and the receiver "points" to an image by sampling from the resulting distribution.

General Training Details We set the following hyperparameters without tuning: embedding dimensionality: 50; number of filters applied to the embeddings by the informed sender: 20; temperature of the Gibbs distributions: 10. We explore two vocabulary sizes: 10 and 100 symbols.

The sender and receiver parameters $\theta = \langle \theta_R, \theta_S \rangle$ are learned while playing the game. No weights are shared, and the only supervision used is communication success, i.e., whether the receiver pointed at the right referent.

This setup is naturally modeled with Reinforcement Learning (Sutton & Barto, 1998). As outlined in Section 2, the sender follows policy $s(\theta_S(i_L, i_R, t)) \in V$ and the receiver policy $r(i_L, i_R, s(\theta_S(i_L, i_R, t))) \in \{L, R\}$. The loss function that the two agents must minimize is $-\mathbb{E}_{\tilde r}[R(\tilde r)]$, where R is the reward function returning 1 iff $r(i_L, i_R, s(\theta_S(i_L, i_R, t))) = t$. Parameters are updated through the Reinforce rule (Williams, 1992). We apply mini-batch updates, with a batch size of 32, for a total of 50k iterations (games). At test time, we compile a set of 10k games using the same method as for the training games.

We now turn to our main questions. The first is whether the agents can learn to successfully coordinate in a reasonable amount of time. The second is whether the agents' language can be thought of as "natural language", i.e., whether symbols are assigned to meanings that make intuitive sense in terms of our conceptualization of the world.

4 LEARNING TO COMMUNICATE
Our first question is whether agents converge to successful communication at all. We see that they do: agents almost perfectly coordinate in the 1k rounds following the 10k training games for every architecture and parameter choice (Table 1).

We see, though, some differences between the sender architectures. Figure 2 (left) shows performance on a sample of the test set as a function of the first 5,000 rounds of training.

Figure 2: Left: Communication success as a function of training iterations; we see that informed senders converge faster than agnostic ones. Right: Spectrum of an example symbol usage matrix; the first few dimensions capture only partial variance, suggesting that the usage of more symbols by the informed sender is not just due to synonymy.

id | sender | vis rep | voc size | used symbols | comm success (%) | purity (%) | obs-chance purity (%)
1 | informed | sm | 100 | 58 | 100 | 46 | 27
2 | informed | fc | 100 | 38 | 100 | 41 | 23
3 | informed | sm | 10 | 10 | 100 | 35 | 18
4 | informed | fc | 10 | 10 | 100 | 32 | 17
5 | agnostic | sm | 100 | 2 | 99 | 21 | 15
6 | agnostic | fc | 10 | 2 | 99 | 21 | 15
7 | agnostic | sm | 10 | 2 | 99 | 20 | 15
8 | agnostic | fc | 100 | 2 | 99 | 19 | 15

Table 1: Playing the referential game: test results after 50K training games. The used symbols column reports the number of distinct vocabulary symbols that were produced at least once in the test phase. See text for an explanation of comm success and purity. All purity values are highly significant (p < 0.001) compared to simulated chance symbol assignment when matching observed symbol usage. The obs-chance purity column reports the difference between observed and expected purity under chance.
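Before turning to these results, here is a minimal sketch of the Reinforce rule (Williams, 1992) invoked in the training details above, for a single sampled symbol with a 0/1 reward and no baseline; it is not the authors' implementation, and it omits batching and the receiver's symmetric update.

```python
import numpy as np

def reinforce_loss_grad(logits, action, reward):
    """Gradient w.r.t. the logits of the Reinforce loss
    -reward * log pi(action) for a categorical (softmax) policy.
    Descending this gradient raises the probability of rewarded symbols."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    grad_logp = -probs
    grad_logp[action] += 1.0   # d log pi(action) / d logits = onehot - probs
    return -reward * grad_logp

# Toy usage: 5 symbols, the sampled symbol was 2, the round was won (reward 1).
g = reinforce_loss_grad(np.zeros(5), action=2, reward=1.0)
print(g)  # negative on the chosen symbol, positive elsewhere
```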
The agents converge to coordination quite fast, but the informed sender reaches higher levels more quickly than the agnostic one.

The informed sender makes use of more symbols from the available vocabulary, while the agnostic sender consistently uses a compact 2-symbol vocabulary. This suggests that the informed sender is using more varied and word-like symbols (recall that the images depict 463 distinct objects, so we would expect a natural-language-endowed sender to use a wider array of symbols to discriminate among them). However, it could also be the case that the informed sender's vocabulary simply contains higher redundancy/synonymy. To check this, we construct a (sampled) matrix where rows are game image pairs, columns are symbols, and entries represent how often that symbol is used for that pair. We then decompose the matrix through SVD. If the sender were indeed just using a strategy with few effective symbols but high synonymy, we should expect a 1- or 2-dimensional decomposition. Figure 2 (right) plots the normalized spectrum of this matrix. While there is some redundancy in the matrix (thus potentially implying synonymy in the usage), the language still requires multiple dimensions to summarize (cross-validated SVD suggests 50 dimensions).

We now turn to investigating the semantic properties of the emergent communication protocol. Recall that the vocabulary that agents use is arbitrary and has no initial meaning. One way to understand its emerging semantics is by looking at the relationship between symbols and the sets of images they refer to.

Figure 3: t-SNE plots of object fc vectors color-coded by the majority symbols assigned to them by the informed sender. Object class names shown for a random subset. Left: configuration of the 4th row of Table 1. Right: 2nd row of Table 2.

The objects in our images were categorized into 20 broader categories (such as weapon and mammal) by McRae et al. (2005).
If the agents converged to higher-level semantic meanings for the symbols, we would expect objects belonging to the same category to activate the same symbols, e.g., that when the target images depict bayonets and guns, the sender would use the same symbol to refer to them, whereas cows and guns should not share a symbol.

To quantify this, we form clusters by grouping objects by the symbols that are most often activated when target images contain them. We then assess the quality of the resulting clusters by measuring their purity with respect to the McRae categories. Purity (Zhao & Karypis, 2003) is a standard measure of cluster "quality": the purity of a clustering solution is the proportion of category labels in the clusters that agree with the respective cluster majority category. This number reaches 100% for perfect clustering, and we always compare the observed purity to the score that would be obtained from a random permutation of symbol assignments to objects. Table 1 shows that purity, while far from perfect, is significantly above chance in all cases. We confirm, moreover, that the informed sender produces symbols that are more semantically natural than those of the agnostic one.

Still, surprisingly, purity is significantly above chance even when the latter only uses two symbols. From our qualitative evaluations, in this case the agents converge to a (noisy) characterization of objects as "living vs. non-living" which, intriguingly, has been recognized as the most basic distinction in the human semantic system (Caramazza & Shelton, 1998).

Rather than using hard clusters, we can also ask whether symbol usage reflects the semantics of the visual space. To do so, we construct vector representations for each object (defined by its ImageNet label) by averaging the CNN fc representations of all category images in our dataset (see Section 3 above). Note that the fc layer, being near the top of a deep CNN, is expected to capture high-level visual properties of objects (Zeiler & Fergus, 2014). Moreover, since we average across many specific images, our vectors should capture rather general, high-level properties of objects.

We map these average object vectors to 2 dimensions via t-SNE (Van der Maaten & Hinton, 2008) and color-code them by the majority symbol the sender used for images containing the corresponding object. Figure 3 (left) shows the results for the current experiment. We see that objects that are close in CNN space (thus, presumably, visually similar) are associated with the same symbol (same color). However, there still appears to be quite a bit of variation.

4.1 OBJECT-LEVEL REFERENCE
We established that our agents can solve the coordination problem, and we have at least tentative evidence that they do so by developing symbol meanings that align with our semantic intuition. We turn now to a simple way to tweak the game setup in order to encourage the agents to further pursue high-level semantics.

id | sender | vis rep | voc size | used symbols | comm success (%) | purity (%) | obs-chance purity (%)
1 | informed | fc | 100 | 43 | 100 | 45 | 21
2 | informed | fc | 10 | 10 | 100 | 37 | 19
3 | agnostic | fc | 100 | 2 | 92 | 23 | 7
4 | agnostic | fc | 10 | 3 | 98 | 28 | 12

Table 2: Playing the referential game with image-level targets: test results after 50K training plays. Columns as in Table 1. All purity values significant at p < 0.001.

The strategy is to remove some aspects of "common knowledge" from the game.
Common knowledge, in game-theoretic parlance, consists of facts that everyone knows, that everyone knows that everyone knows, and so on (Brandenburger et al., 2014). Coordination can only occur if the basis of the coordination is common knowledge (Rubinstein, 1989); therefore, if we remove some facts from common knowledge, we preclude our agents from coordinating on them. In our case, we want to remove facts pertaining to the details of the input images, thus forcing the agents to coordinate on more abstract properties. We can remove all low-level common knowledge by letting the agents play using only class-level properties of the objects. We achieve this by modifying the game to show the agents different pairs of images while maintaining the ImageNet class of both the target and the distractor (e.g., if the target is dog, the sender is shown a picture of a Chihuahua and the receiver that of a Boston Terrier).

Table 2 reports results for various configurations. We see that the agents are still able to coordinate. Moreover, we observe a small increase in symbol usage purity, as expected, since agents can now only coordinate on general properties of object classes rather than on the specific properties of each image. This effect is clearer in Figure 3 (right), where we repeat the t-SNE-based visualization of the relationship that emerges between visual embeddings and the words used to refer to them in this new experiment.

5 GROUNDING AGENTS' COMMUNICATION IN HUMAN LANGUAGE
The results in Section 4 show communication robustly arising in our game, and that we can change the environment to nudge agents towards developing symbol meanings which are more closely related to the visual or class-based semantics of the images. Still, we would like agents to converge on a language fully understandable by humans, as our ultimate goal is to develop conversational machines. To do this, we will need to ground the communication.

Taking inspiration from AlphaGo (Silver et al., 2016), an AI that reached the Go master level by combining interactive learning in games of self-play with passive supervised learning from a large set of human games, we combine the usual referential game, in which agents interactively develop their communication protocol, with a supervised image-labeling task, where the sender must learn to assign objects their conventional names. This way, the sender will naturally be encouraged to use such names with their conventional meaning to discriminate target images when playing the game, making communication more transparent to humans.

In this experiment, the sender switches, equiprobably, between game playing and a supervised image classification task using ImageNet classes. Note that the supervised objective does not aim at improving the agents' coordination performance. Instead, supervision provides them with a basic grounding in natural language (in the form of image-label associations), while concurrent interactive game playing should teach them how to effectively use this grounding to communicate.

We use the informed sender, fc image representations, and a vocabulary size of 100. Supervised training is based on 100 labels that are a subset of the object names in our dataset (see Section 3 above). When predicting object names, the sender uses the usual game-embedding layer coupled with a softmax layer of dimensionality 100 corresponding to the object names. Importantly, the game-embedding layers used in object classification and in the reference game are shared.
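To make this alternation concrete, here is a minimal sketch of the shared-embedding setup; the layer sizes, the softmax helper and the toy gold label are illustrative assumptions, and only the equiprobable switch and the shared game-embedding layer come from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared game-embedding layer (illustrative sizes: 4096-d fc vector -> 50-d).
W_shared = rng.normal(scale=0.01, size=(50, 4096))
W_game = rng.normal(scale=0.01, size=(100, 50))    # head for 100 game symbols
W_label = rng.normal(scale=0.01, size=(100, 50))   # head for 100 object names

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sender_probs(image, head):
    """Both tasks reuse the same shared game-embedding of the image."""
    return softmax(head @ (W_shared @ image))

for step in range(4):
    image = rng.normal(size=4096)
    if rng.random() < 0.5:                             # equiprobable switch
        symbol = rng.choice(100, p=sender_probs(image, W_game))
        print(step, "game play, emitted symbol", symbol)  # Reinforce update omitted
    else:
        loss = -np.log(sender_probs(image, W_label)[7])   # toy gold label 7
        print(step, "supervised naming, cross-entropy", round(loss, 3))
```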
Consequently, we hope that, when playing, the sender will produce symbols aligned with the object names acquired in the supervised phase.

Figure 4: Example pairs from the ReferItGame set, with the word produced by the sender. Target images framed in green.

The supervised objective has no negative effect on communication success: the agents are still able to reach full coordination after 10k training trials (corresponding to 5k trials of reference game playing). The sender uses many more symbols after training than in any previous experiment (88), and symbol purity dramatically increases to 70% (the obs-chance purity difference also increases, to 37%).

Even more importantly, many symbols have now become directly interpretable, thanks to their direct correspondence to labels. Considering the 632 image pairs where the target gold-standard label corresponds to one of the labels used in the supervised phase, in 47% of these cases the sender produced exactly the symbol corresponding to the correct supervised label for the target image (chance: 1%).

For image pairs where the target image belongs to one of the directly supervised categories, it is not surprising that the sender adopted the "conventional" supervised label to signal the target. However, a very interesting effect of supervision is that it improves the interpretability of the code even when agents must communicate about images that do not contain objects in the supervised category set. This emerged in a follow-up experiment in which, during training, the sender was again exposed (with equal probability) to the same supervised classification task as above, but the agents now played the referential game on a different dataset of images derived from ReferItGame (Kazemzadeh et al., 2014). In its general format, ReferItGame contains annotations of bounding boxes in real images with referring expressions produced by humans when playing the game. For our purposes, we constructed 10k pairs by randomly sampling two bounding boxes to act as target and distractor.

Again, the agents converged to perfect communication after 15k trials, and this time used all 100 available symbols in some trial.

We then asked whether this language is human-interpretable. For each symbol used by the trained sender, we randomly extracted 3 image pairs in which the sender picked that symbol and the receiver pointed at the right target (for two symbols, only 2 pairs matched these criteria, leading to a set of 298 image pairs). We annotated each pair with the word corresponding to the symbol in the supervised set. Out of the 298 pairs, only 25 (8%) included one of the 100 words among the corresponding referring expressions in ReferItGame. So, in the large majority of cases, the sender had been faced with a pair not (saliently) containing the categories used in the supervised phase of its training, and it had to produce a word that could, at best, only indirectly refer to what is depicted in the target image. We then tested whether this code would be understandable by humans. In essence, it is as if we replaced the trained receiver agent with a human.

We prepared a crowdsourced survey using the CrowdFlower platform. For each pair, human participants were shown the two images and the sender-emitted word (that is, the ImageNet label associated with the symbol produced by the sender; see examples in Figure 4). The participants were asked to pick the picture that they thought was most related to the word.
We collected 10 ratings for each pair. We found that in 68% of the cases the subjects were able to guess the right image. A logistic regression predicting subject image choice from ground-truth target images, with subjects and words as random effects, confirmed the highly significant correlation between the true and guessed images (z = 16.75, p < 0.0001). Thus, while far from perfect, we find that supervised learning on a separate dataset does provide some grounding for communication with humans that generalizes beyond the conventional word denotations learned in the supervised phase.

Looking at the results qualitatively, we found that sender-subject communication very often succeeded when the sender established a sort of "metonymic" link between the words at its disposal and the contents of an image. Figure 4 shows an example where the sender produced dolphin to refer to a picture showing a stretch of sea, and fence for a patch of land. Similar semantic shifts are a core characteristic of natural language (e.g., Pustejovsky, 1995), and thus subjects were, in many cases, able to successfully play the referential game with our sender (10/10 subjects guessed the dolphin target, and 8/10 the fence). This is very encouraging. Although the language developed in referential games will initially be very limited, if both agents and humans possess the sort of flexibility displayed in this last experiment, the noisy but shared common ground might suffice to establish basic communication.

6 DISCUSSION
Our results confirm that fairly simple neural-network agents can learn to coordinate in a referential game in which they need to communicate about a large number of real pictures. They also suggest that the meanings agents come to assign to symbols in this setup capture general conceptual properties of the objects depicted in the images, rather than low-level visual properties. We also showed a path to grounding the communication in natural language by mixing the game with a supervised task.

In future work, encouraged by our preliminary experiments with object naming, we want to study how to ensure that the emergent communication stays close to human natural language. Predictive learning should be retained as an important building block of intelligent agents, focusing on teaching them structural properties of language (e.g., lexical choice, syntax or style). However, it is also important to learn the function-driven facets of language, such as how to hold a conversation, and interactive games are a potentially fruitful method to achieve this goal.

REFERENCES
John Langshaw Austin. How to Do Things with Words. Harvard University Press, Cambridge, MA, 1962.
Andreas Blume, Douglas V. DeJong, Yong-Gwan Kim, and Geoffrey B. Sprinkle. Experimental evidence on the evolution of meaning of messages in sender-receiver games. The American Economic Review, 88(5):1323-1340, 1998.
Adam Brandenburger, Eddie Dekel, et al. Hierarchies of beliefs and common knowledge. The Language of Game Theory: Putting Epistemics into the Mathematics of Games, 5:31, 2014.
Ted Briscoe (ed.). Linguistic Evolution through Language Acquisition. Cambridge University Press, Cambridge, UK, 2002.
Angelo Cangelosi and Domenico Parisi (eds.). Simulating the Evolution of Language. Springer, New York, 2002.
Alfonso Caramazza and Jennifer Shelton. Domain-specific knowledge systems in the brain: the animate-inanimate distinction. Journal of Cognitive Neuroscience, 10(1):1-34, 1998.
Herbert H. Clark. Using Language. Cambridge University Press, Cambridge, UK, 1996.
Vincent Crawford. A survey of experiments on communication via cheap talk. Journal of Economic Theory, 78(2):286-298, 1998.
Vincent P. Crawford and Joel Sobel. Strategic information transmission. Econometrica: Journal of the Econometric Society, pp. 1431-1451, 1982.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Proceedings of CVPR, pp. 248-255, Miami Beach, FL, 2009.
Jakob N. Foerster, Yannis M. Assael, Nando de Freitas, and Shimon Whiteson. Learning to communicate to solve riddles with deep distributed recurrent Q-networks. Technical Report arXiv:1602.02672, 2016. URL http://arxiv.org/pdf/1602.02672v1.
Drew Fudenberg and Alexander Peysakhovich. Recency, records and recaps: learning and non-equilibrium behavior in a simple decision problem. In Proceedings of the Fifteenth ACM Conference on Economics and Computation, pp. 971-986. ACM, 2014.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.
Emilio Jorge, Mikael Kågebäck, and Emil Gustavsson. Learning to play Guess Who? and inventing a grounded language as a consequence. https://arxiv.org/abs/1611.03218, 2016.
Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara L. Berg. ReferItGame: Referring to objects in photographs of natural scenes. In EMNLP, pp. 787-798, 2014.
David Lewis. Convention. Harvard University Press, Cambridge, MA, 1969.
Ken McRae, George Cree, Mark Seidenberg, and Chris McNorgan. Semantic feature production norms for a large set of living and nonliving things. Behavior Research Methods, 37(4):547-559, 2005.
Tomas Mikolov, Armand Joulin, and Marco Baroni. A roadmap towards machine intelligence. arXiv preprint arXiv:1511.08130, 2015.
James Pustejovsky. The Generative Lexicon. MIT Press, Cambridge, MA, 1995.
Alvin E. Roth and Ido Erev. Learning in extensive-form games: Experimental data and simple dynamic models in the intermediate term. Games and Economic Behavior, 8(1):164-212, 1995.
Ariel Rubinstein. The electronic mail game: Strategic behavior under 'almost common knowledge'. The American Economic Review, pp. 385-391, 1989.
David Silver, Aja Huang, Christopher Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529:484-503, 2016.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Brian Skyrms. Signals: Evolution, Learning, and Information. Oxford University Press, 2010.
Matthew Spike, Kevin Stadler, Simon Kirby, and Kenny Smith. Minimal requirements for the emergence of learned signaling. Cognitive Science, 2016. In press.
Luc Steels and Martin Loetzsch. The grounded naming game. In Luc Steels (ed.), Experiments in Cultural Language Evolution, pp. 41-59. John Benjamins, Amsterdam, 2012.
Peter Stone and Manuela Veloso. Towards collaborative and adversarial learning: A case study in robotic soccer.
International Journal of Human-Computer Studies, 48(1):83-104, 1998.
Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. Learning multiagent communication with backpropagation. arXiv preprint arXiv:1605.07736, 2016.
Ilya Sutskever, Oriol Vinyals, and Quoc Le. Sequence to sequence learning with neural networks. In Proceedings of NIPS, pp. 3104-3112, Montreal, Canada, 2014.
Richard Sutton and Andrew Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579-2605, 2008.
Oriol Vinyals and Quoc Le. A neural conversational model. In Proceedings of the ICML Deep Learning Workshop, Lille, France, 2015. Published online: https://sites.google.com/site/deeplearning2015/accepted-papers.
Kyle Wagner, James A. Reggia, Juan Uriagereka, and Gerald S. Wilkinson. Progress in the simulation of emergent communication and language. Adaptive Behavior, 11(1):37-69, 2003.
S. I. Wang, P. Liang, and C. Manning. Learning language games through interaction. In Association for Computational Linguistics (ACL), 2016.
Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.
Terry Winograd. Procedures as a representation for data in a computer program for understanding natural language. Technical Report AI 235, Massachusetts Institute of Technology, 1971.
Ludwig Wittgenstein. Philosophical Investigations. Blackwell, Oxford, UK, 1953. Translated by G.E.M. Anscombe.
Michael Wooldridge. An Introduction to Multiagent Systems. John Wiley & Sons, 2009.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of ICML, pp. 2048-2057, Lille, France, 2015.
Matthew Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Proceedings of ECCV (Part 1), pp. 818-833, Zurich, Switzerland, 2014.
Ying Zhao and George Karypis. Criterion functions for document clustering: Experiments and analysis. Technical Report 01-40, University of Minnesota Department of Computer Science, 2003.
SygGlIBcel
Under review as a conference paper at ICLR 2017

OPENING THE VOCABULARY OF NEURAL LANGUAGE MODELS WITH CHARACTER-LEVEL WORD REPRESENTATIONS

Matthieu Labeau, LIMSI-CNRS / Orsay, France, labeau@limsi.fr
Alexandre Allauzen, LIMSI-CNRS / Orsay, France, allauzen@limsi.fr

ABSTRACT
This paper introduces an architecture for an open-vocabulary neural language model. Word representations are computed on-the-fly by a convolution network followed by a pooling layer. This allows the model to consider any word, in the context or for the prediction. The training objective is derived from Noise-Contrastive Estimation to circumvent the lack of a fixed vocabulary. We test the ability of our model to build representations of unknown words on the MT task of IWSLT-2016 from English to Czech, in a reranking setting. Experiments show promising results, with a gain of up to 0.7 BLEU point. They also emphasize the difficulty and instability of training such models with character-based representations for the predicted words.

1 INTRODUCTION
Most neural language models, such as the n-gram models of Bengio et al. (2003), are word-based and rely on the definition of a finite vocabulary V. As a consequence, a look-up table is associated with V, in which each word $w \in V$ is mapped to a vector of $d_E$ real-valued features stored in a matrix $L \in \mathbb{R}^{|V| \times d_E}$. While this approach has proven successful for a variety of tasks and languages, see for instance Schwenk (2007) in speech recognition and Le et al. (2012); Devlin et al. (2014); Bahdanau et al. (2014) in machine translation, it induces several limitations.

For morphologically rich languages, like Czech or German, lexical coverage is still an important issue, since there is a combinatorial explosion of word forms, most of which are hardly observed in training data. On the one hand, growing the look-up table is not a solution, since it would increase the number of parameters without enough training examples for a proper estimation. On the other hand, rare words can be replaced by a special token; nevertheless, this acts as a word class, merging very different words without any distinction, and using different word classes to handle out-of-vocabulary words (Allauzen & Gauvain, 2005) does not really solve this issue, since rare words are difficult to classify.

Moreover, for most inflected or agglutinative forms, as well as for compound words, the word structure is overlooked, wasting parameters on modeling forms that could be more efficiently handled by word decomposition. While the use of subword units (Botha & Blunsom, 2014; Sennrich et al., 2016) could improve the generalization power of such models, it relies on a proper and efficient method to induce these subword units.

To overcome these issues, we propose to investigate a word-based language model with an open vocabulary. Since most existing models and training criteria rely on the assumption of a finite vocabulary, the definition of an open-vocabulary model, along with a training criterion, constitutes a scientific challenge. Our goal is to build word representations for every word: a word's representation is inferred on-the-fly from its character sequence, using convolution filters which implicitly capture subword patterns, as described in Section 2. The architecture is based on a neural n-gram model inspired by Bengio et al. (2003), although the idea can be extended to other kinds of models.
By relaxing the normalization constraint, the objective function borrows from noise-contrastive estimation (Gutmann & Hyvärinen, 2012) to allow our model to consider a possibly infinite vocabulary. This paper focuses on this challenge and its related training issues. To assess the efficiency of this approach, the experimental setup described in Section 3 uses a large-scale translation task in a reranking setting. The experimental results summarized in Section 4 show promising results as well as training issues.

2 MODEL DESCRIPTION
Word embeddings are parameters stored in a look-up matrix L. The embedding e^{word}_w of a word w is simply the column of L corresponding to its index in the vocabulary:

e^{word}_w = [L]_w

2.1 CHARACTER-LEVEL WORD EMBEDDINGS
To infer a word embedding from its character embeddings, we use a convolution layer (Waibel et al., 1990; Collobert et al., 2011), similar to the layers used in Santos & Zadrozny (2014) and Kim et al. (2015). As illustrated in Figure 1, a word w is a character sequence {c_1, .., c_{|w|}} represented by the embeddings {C_{c_1}, .., C_{c_{|w|}}}, where C_{c_i} denotes the vector associated to the character c_i. A convolution filter W^{conv} in R^{d_e x n_c d_c} is applied over a sliding window of n_c characters, producing local features:

x_n = W^{conv} (C_{c_{n-n_c+1}} : \cdots : C_{c_n})^{T} + b^{conv}

where x_n is a vector of size d_e obtained for each position n in the word,¹ and the notation (C_{c_{n-1}} : C_{c_n}) denotes the concatenation of two embeddings. The i-th element of the embedding of w is the mean over the i-th elements of the feature vectors, passed through an activation function phi:

[e^{char}_w]_i = \phi\left( \frac{1}{|w| - n_c + 1} \sum_{n=1}^{|w|-n_c+1} [x_n]_i \right)   (1)

Using a mean after a sliding convolution window ensures that the embedding combines local features from the whole word, and that the gradient is redistributed at scale to each character n-gram. The parameters of the layer are the matrices C and W^{conv} and the bias b^{conv}.

¹ Two padding character tokens are used to deal with border effects. The first is added at the beginning of the word and the second at the end, as many times as necessary to obtain the same number of windows as the length of the word. Their embeddings are added to C.

2.2 MODELS
Our model follows the classic n-gram feedforward architecture. The input of the network is an n-word context H_i = (w_{i-1}, ..., w_{i-N+1}), and its output is the probability P(w | H_i) for each word w in V. The embeddings of the words in the context are concatenated and fed into a hidden layer:

h_{H_i} = \phi( W^{hidden} (e_{i-1} : \cdots : e_{i-N+1}) + b^{hidden} )

A second hidden layer may be added. Finally, the output layer computes a score for each word:

s_{H_i} = \exp( W^{out} h_{H_i} + b^{out} )

W^{hidden}, b^{hidden}, W^{out} and b^{out} are the parameters of the model. Like the input look-up matrix L, the output weight matrix W^{out} contains word embeddings, which are output representations of the words in the vocabulary:

e^{out}_w = [W^{out}]_w

The output probabilities are then expressed as:

P(w | H_i) = \frac{\exp(e^{out}_w \cdot h_{H_i})}{\sum_{1<j<|V|} \exp(e^{out}_j \cdot h_{H_i})}
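To make Sections 2.1 and 2.2 concrete, here is a minimal numpy sketch of the character-level embedding of Equation (1) and the context layer. This is our own illustration rather than the paper's C++ implementation; the toy alphabet size, the random initialisation and the helper names (char_embed, context_vector) are assumptions.

    import numpy as np

    d_c, d_e, n_c = 32, 128, 5               # char dim, word dim, char n-gram size
    V_c = 60                                  # toy character inventory, incl. padding
    rng = np.random.default_rng(0)

    C = rng.normal(0.0, 0.1, (V_c, d_c))      # character look-up table
    W_conv = rng.normal(0.0, 0.1, (d_e, n_c * d_c))
    b_conv = np.zeros(d_e)
    BOW, EOW = V_c - 2, V_c - 1               # ids of the two padding tokens

    def relu(v):
        return np.maximum(0.0, v)

    def char_embed(char_ids, pads=2):
        """Eq. (1): convolution over character n_c-grams, mean pooling, activation."""
        ids = [BOW] * pads + list(char_ids) + [EOW] * pads
        feats = [W_conv @ np.concatenate([C[i] for i in ids[n:n + n_c]]) + b_conv
                 for n in range(len(ids) - n_c + 1)]       # local features x_n
        return relu(np.mean(feats, axis=0))                # mean over positions

    W_hid = rng.normal(0.0, 0.1, (d_e, 3 * d_e))
    b_hid = np.zeros(d_e)

    def context_vector(e_prev1, e_prev2, e_prev3):
        """h_{H_i}: concatenated context embeddings through one hidden layer."""
        return relu(W_hid @ np.concatenate([e_prev1, e_prev2, e_prev3]) + b_hid)

A CWE input representation would simply concatenate char_embed(w) with the look-up column [L]_w.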
Later, we will use three different input layers to obtain word representations:

[Figure 1: CWE model architecture. The character look-up table C feeds the convolution W^{conv} and mean pooling that produce e^{char}; the word look-up table L produces e^{word}; the concatenated context representation C_i = (w_i : w_{i-1} : w_{i-2}) feeds the hidden layer and the output layer W^{out}.]

- A classic NLM using word-level embeddings only, denoted WE, which uses |V| x d_e parameters.
- A NLM using embeddings constructed from character n-grams by convolution + pooling, denoted CE, which uses |V_c| x d_c + d_c x n_c x d_e parameters.
- A NLM using the concatenation of these two types of embeddings as word representation, denoted CWE.

2.3 OBJECTIVE FUNCTION FOR OPEN-VOCABULARY MODELS
Usually, such a model is trained by maximizing the log-likelihood: for each word given its context, the model parameters theta are estimated to maximize the following function over all n-grams observed in the training data D:

LL(\theta) = \sum_{1<i<|D|} \log P_\theta(w_i | H_i).

This objective raises two important issues. For conventional word models, it implies a very costly summation imposed by the softmax activation of the output layer. More importantly, it requires the definition of a finite vocabulary, while the proposed model may use character-based word embeddings, especially at the output, making the notion of vocabulary obsolete.

Parameter estimation therefore relies on Noise Contrastive Estimation (NCE), introduced in Gutmann & Hyvärinen (2012) and Mnih & Teh (2012). This criterion allows us to train models based on conventional word embeddings as well as character-based embeddings. The NCE objective discriminates between examples sampled from the real data and from a noise distribution. When presented with examples coming from a mixture of one sample from the data distribution P_d and k from the noise distribution P_n, P^H(w in D) denotes the posterior probability that a word w, given its context H, was sampled from the training data D. This probability can be expressed as:

P^H(w \in D) = \frac{P^H_d(w)}{P^H_d(w) + k P_n(w)}

As suggested in Mnih & Teh (2012), P_n depends only on w, since we chose the unigram distribution estimated on the training data. If

s^H_\theta(w) = \exp( e^{out}_w \cdot h_H + b^{out} )   (2)

denotes the unnormalized score given by the model to a specific word w, as a function of the parameters theta and the context H, the final NCE objective takes the following form (Gutmann & Hyvärinen, 2012):

J^H_\theta = E_{P^H_d}\left[ \log \frac{s^H_\theta(w)}{s^H_\theta(w) + k P_n(w)} \right] + k\, E_{P_n}\left[ \log \frac{k P_n(w)}{s^H_\theta(w) + k P_n(w)} \right],

where s^H_\theta will tend to P^H_d without the need for an explicit normalization.

2.4 CHARACTER-BASED OUTPUT WEIGHTS WITH NOISE-CONTRASTIVE ESTIMATION
The output weights e^{out} representing each word in the vocabulary can also be replaced by embeddings computed by a convolution layer on character n-grams. In this case the model can efficiently represent and score any word, observed during training or not, whereas with conventional word embeddings all out-of-vocabulary words share the same representation and distribution. Instead of using a parameter matrix W^{out} to estimate the score as in Equation 2, the output representation e^{out}_w of a word w can be replaced by a vector e^{char,out}_w estimated on-the-fly from its character sequence as described in Equation 1, using |V_c| x d_c + d_c x n_c x d_h parameters. With this extension the model no longer relies on a vocabulary, which motivates our choice of NCE: the unnormalized objective handles an open vocabulary, since only k + 1 word representations need to be computed for each training example.
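The resulting training criterion can be sketched as follows, reusing the hypothetical char_embed helper from the previous sketch so that output representations are computed on-the-fly (Section 2.4). This is again a simplified illustration of our own: the objective is accumulated over the k noise samples rather than written as an expectation, and the output bias is omitted.

    def nce_loss(h, target_chars, noise_chars, p_n_target, p_n_noise, k=25):
        """NCE objective of Section 2.3 with on-the-fly output embeddings.

        h is the context vector h_H; every word is given by its character ids,
        so a score exists for any word, in or out of vocabulary.
        """
        def score(chars):                    # s_theta^H(w) = exp(e_out(w) . h)
            return np.exp(char_embed(chars) @ h)

        s = score(target_chars)              # the one sample from the data
        loss = -np.log(s / (s + k * p_n_target))
        for chars, p_n in zip(noise_chars, p_n_noise):   # k samples from P_n
            s = score(chars)
            loss -= np.log(k * p_n / (s + k * p_n))
        return loss                          # only k + 1 embeddings computed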
Models that use character-based embeddings both for input and output words are denoted CWE-CWE.

Moreover, with this extension, the representations of words sharing character n-grams are tied. This is an important property that lets the model generalize to unseen words, but it can also be an issue: the limited number of updates for output representations (k + 1 words per example) has a "rich get richer" effect. The most frequent words are usually short and receive most of the updates; they may therefore "contaminate" the representations of longer words with which they share character n-grams, even if these words are not related. This issue is further addressed in Section 4.1.

3 EXPERIMENTAL SET-UP
The impact of the models described in Section 2 is evaluated on the machine translation (MT) shared task of IWSLT-2016² from English to Czech. This language pair is highly challenging, since Czech is a morphologically-rich language. Neural language models are integrated in a two-step approach: the first step uses a conventional MT system to produce an n-best list (the n most likely translations); in the second step, these hypotheses are re-ranked by adding the score of the neural language model. To better benefit from the open-vocabulary models introduced in Section 2.1, a more complex system is also used: an MT system first translates from English to a simplified form of Czech, which is then reinflected. With this pipeline we expect n-best lists with more diversity, including words unseen during the training process. The neural language models are then used to re-rank the reinflected n-best lists.

² http://workshop2016.iwslt.org

3.1 DATA
The IWSLT16 MT task focuses on the translation of TED talks. The translation systems are trained on parallel data from TED, QED and europarl. Our neural language models are trained on the same data, but training examples are sampled from these corpora with weights computed to balance in-domain parallel data (TED), out-of-domain parallel data, and additional monolingual data. Finally, we use the concatenation of TED.dev2010, TED.dev2011 and TED.tst2010 as development set, while TED.tst2012 and TED.tst2013 provide the test set.

3.2 CZECH RE-INFLECTION
In Czech, a morphologically-rich language, each lemma can take many possible word forms. Most of them do not appear in the training data, or appear with a very low frequency. For an important part of the words found in the test data and unseen during training, the lemma can however be observed, but with a different morphological derivation.

A word form that was never observed cannot be generated by the translation system, and one seen too rarely will not be used in a relevant way. To circumvent this limitation, in a similar fashion to the method described in Marie et al. (2015), each noun, pronoun and adjective in the training corpora is replaced by its lemma along with some morphological features. These word forms are considered in a factored way, where some of the POS tags are discarded to reduce the vocabulary.
After the translation process, a cascade of Conditional Random Fields (CRFs) is used to reintroduce the discarded features, such as gender, number and case, and to generate a new word form. Formally, the MT system translates English into a simplified version of Czech, which is then reinflected. Within this process, the MT system can produce an n-best list, which can be extended to an n x k-best list by considering, for each translation hypothesis, the k best reinflected sentences given by the factorized CRF. Intuitively, this process can introduce word forms not seen in the training data but based on known paradigms, which should give an advantage to language models able to build a word representation from character n-grams.

3.3 BASELINE TRANSLATION SYSTEM
Our baseline is a statistical machine translation system based on bilingual n-grams, NCODE³, described in Crego et al. (2011). We follow the same setup as in Marie et al. (2015).

³ http://ncode.limsi.fr

3.4 NLM TRAINING AND OPTIMIZATION
First, comparative experiments on a smaller dataset were carried out to better understand how open-vocabulary NLMs behave and to set the hyper-parameters. When first training with stochastic gradient descent, we observed a quite unstable training process, which restricted a proper choice of hyper-parameters. We found that the embedding dimensions and the activation functions in particular could make the NCE objective hard to optimize. This was aggravated in Czech, which we found more difficult to work with than other morphologically complex languages such as German and Russian. The use of Adagrad (Duchi et al., 2010) clearly helps to solve most of these issues, but adds considerable computation time. Following preliminary results with a similar model on a different task (Labeau et al., 2015), we chose not to use LSTMs to obtain character-level word representations: they gave similar results, at the cost of unstable training and extended computation time. We then train WE, CWE and CWE-CWE models with batches of 128, for various context sizes. The ReLU activation function is used, along with an embedding size of d_e = 128. When relevant, we used a character embedding size of d_c = 32 and a convolution on character n_c = 5-grams for all experiments.⁴ For NCE training, we sampled k = 25 examples from the unigram distribution estimated on the training data for each example sampled from the data. The models were implemented in C++.⁵

⁴ Results did not differ significantly when increasing these embedding sizes, which mainly affect convergence speed and computation time.
⁵ The implementation will be made available.

3.5 RERANKING
The re-ranking step uses additional features to find a better translation among the n best generated by the decoder (in our case, n = 300): we use the score (probability) given to each sentence by our WE, CWE and CWE-CWE models as such a feature. Tuning for re-ranking was performed with KB-MIRA (Cherry & Foster, 2012), and evaluation uses the BLEU score.
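In this setting the reranking step reduces to a weighted combination of the decoder score with the language-model score. A minimal sketch under our own simplifying assumption of a single feature weight lam, standing in for the feature weights that KB-MIRA would actually tune:

    def rerank(nbest, nlm_logprob, lam=0.5):
        """Pick the hypothesis maximizing decoder score + lam * NLM log-score.

        nbest: list of (hypothesis_tokens, decoder_score) pairs from the decoder.
        nlm_logprob: callable summing the model's log-scores over the n-grams
        of a hypothesis (any of the WE / CWE / CWE-CWE models).
        """
        return max(nbest, key=lambda hs: hs[1] + lam * nlm_logprob(hs[0]))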
4 EXPERIMENTAL RESULTS
The first set of experiments investigates the impact of the padding design on the character-level representations, followed by a study of the learning behavior of the proposed models and training criterion. Then, the proposed models are evaluated on the MT task. The final set of experiments analyzes the issues of the model based on character-level representations for output words, in order to propose remedies.

4.1 TIES BETWEEN CHARACTER-LEVEL REPRESENTATIONS OF OUTPUT WORDS
Preliminary results on a smaller dataset are quite poor for models using character-level representations, and far worse when they are used for the output layer. We suspect that groups of characters are updated together far too often, yielding a "contamination" of several character n-grams by very frequent short words. Indeed, our simple padding scheme, shown in the left part of Table 1, makes words sharing their first or last letter(s) systematically share at least one character n-gram; we supposed this would give the model more chances to detect similarities between word forms sharing prefixes and suffixes. The representations of the character n-grams included in frequent words are thus re-used in a large part of the other words in the corpus. A huge number of word forms are affected: a little more than one third of the training data shares its first character n-gram with one of the ten most frequent words, and a little more than one quarter shares its last.

While considering varying sizes of character n-grams when building word representations, as in Kim et al. (2015), would certainly help, it would increase computation time. We therefore alleviate our padding scheme, as shown in the right part of Table 1: we add only one padding token at the beginning of the word and one at the end.⁶ While this may inhibit the capacity of the model to link words sharing prefixes or suffixes, it improves results drastically, especially when using character-level outputs, as shown in Figure 3. This limited padding scheme is used in the following experiments.

Table 1: Padding for word decomposition into character 5-grams (<s> marks the beginning of the word, </s> the end), illustrated on frequent Czech words such as a, ale, na, aby, za, byla, ani, dva, asi and treba. Left: the original padding scheme, e.g. byla -> <s><s>byla</s></s>, which makes very different words share character 5-grams, especially with short, frequent words. Right: the alleviated padding scheme, e.g. byla -> <s>byla</s>.

⁶ For short words, we add as many padding tokens as necessary for the word to have at least n_c = 5 characters, as shown in Table 1.
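The two schemes are easy to state precisely; the sketch below is our own illustration, writing the padding tokens as "<s>" and "</s>" and padding very short words on the right, which is one plausible reading of footnote 6:

    def ngrams(word, n_c=5, pads=2):
        """Decompose a word into character n_c-grams after padding.

        pads=2 reproduces the original scheme (as many windows as characters);
        pads=1 is the alleviated scheme, which shares far fewer n-grams across
        words.
        """
        chars = ["<s>"] * pads + list(word) + ["</s>"] * pads
        while len(chars) < n_c:              # very short words: keep padding
            chars.append("</s>")
        return ["".join(chars[i:i + n_c]) for i in range(len(chars) - n_c + 1)]

    print(ngrams("byla", pads=2))
    # ['<s><s>byl', '<s>byla', 'byla</s>', 'yla</s></s>']
    print(ngrams("byla", pads=1))
    # ['<s>byla', 'byla</s>']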
4.2 NLM TRAINING
While the perplexity of our language models is not our main focus, it is related to the quantity our training seeks to optimize, since the NCE gradient approaches the maximum-likelihood gradient (Mnih & Teh, 2012). Figure 2 shows the perplexity of each model during training. These values are based on a vocabulary containing the 250K most frequent words of the training data (also the vocabulary used in the model when relevant) and are computed on the development set after each epoch. An epoch includes 2.5M n-grams sampled from the training data. Table 2 reports the best perplexity obtained by each model on the development set during training.

Table 2: Best perplexity reached on the development set, with a 250K output vocabulary, after 15 epochs of 2.5M n-grams.

    Context size (words):    3     6
    WE                      227   193
    CWE                     207   185
    CWE-CWE                 308   243

Table 2 shows that a character-level word representation helps to decrease the perplexity, even if a larger context closes the gap. To compute the perplexity of CWE-CWE models, we use the same vocabulary as for the other models, with 'unknown' tokens for the word- and character-based representations; the resulting perplexity is therefore difficult to interpret.

[Figures 2 and 3: Model perplexity measured on the development set during training, with a context size of 3 words. Figure 3 shows the models based on character-level word representations, with and without complete padding, trained on the same data as Figure 2 but with smaller epochs (250K n-grams).]

The main downside of Adagrad is that the learning rate determined by accumulating the history of past gradients is usually too aggressive and stops learning rather early. We simply reset this history every five epochs to give the model a chance to improve, which explains the flattening followed by small improvements we see for the WE and CWE models. Based on previous experiments, we chose to perform this reset twice. Despite the adaptive gradient, training of CWE-CWE models remains unstable.

4.3 RERANKING

Table 3: Best BLEU score obtained after n-best reranking of the hypotheses produced by the translation and translation + k-best reinflection systems; n is the context size (in number of words).

    System to be re-ranked                          Ref.   CWE          CWE-CWE      WE
                                                           n=3   n=6    n=3   n=6    n=3   n=6
    En->Cz   Baseline system                        19.6   20.1  20.3   19.8  20.0   20.0  20.2
    En->Simplified Cz
             Reinflected baseline system            19.5   20.0  20.2   19.6  20.1   20.1  20.0
             3-best reinflected baseline system       -    19.9  20.3   19.6  20.0   20.1  20.1
             5-best reinflected baseline system       -    19.9  20.3   19.5  19.9   20.0  20.1

The reranking results are shown in Table 3. The first line corresponds to direct translation from English to Czech, where the n-best lists generated by the MT system are simply rescored by our models. The best result is given by the longest-context CWE model, which yields a +0.7 BLEU improvement. CWE models give on average +0.1 BLEU point compared to WE models, while CWE-CWE models are 0.2 BLEU point below. Doubling the context size consistently improves results by +0.2 BLEU point.

Experimental results on reinflected Czech follow a similar trend: CWE models behave a little better than WE models, while CWE-CWE models lag behind. While simply reranking n-best lists is not as efficient as reranking directly in Czech, reranking n x k-best lists extended by the factorized CRF gives a small improvement, also reaching +0.7 BLEU point. As a general rule, small-context models seem to have difficulties with reinflected Czech; the main advantage of the CWE model is a better ability to rerank n x k-best lists. These results suggest that, while the normalization + reinflection procedure may introduce diversity in the output to be reranked, our models are not able to draw any significant advantage from it.
4.4 ANALYSIS OF CHARACTER-LEVEL OUTPUT REPRESENTATION PERFORMANCE
Models using character-level output representations gave sub-par results on re-ranking. This is surprising, especially for re-inflected Czech: such a model is supposed to behave better on unknown words, and should thus benefit from the diversity introduced by generating new words. However, as Table 4 shows, re-inflection does not add that much diversity (about 0.1% more OOV words, and about 0.001% more words never seen by the model before). Diversity is also inhibited by our training algorithm: while we train open-vocabulary models, the negative examples used with noise-contrastive estimation come from a closed vocabulary.

Table 4: Ratio of unknown words in system outputs, measured on the test set.

                                              Full training vocabulary   250K-word vocabulary
    Reference                                        0.131 %                   0.995 %
    En->Cz (300-best)                                0.566 %                   1.173 %
    En->Simplified Cz + reinflection                 0.567 %                   1.263 %
    En->Simplified Cz + 3-best reinflection          0.567 %                   1.277 %
    En->Simplified Cz + 5-best reinflection          0.568 %                   1.285 %

This can be related to the nature of the unigram distribution used to sample negative examples. As explained in Section 4.1, it lets frequent short words completely outweigh the others in number of updates, and we are forced to reduce the ability of the model to find common morphological attributes between words in order to avoid 'contamination' of character n-gram representations.

5 RELATED WORK
There are a number of strategies to efficiently train NNLMs with large vocabularies, such as variants of the hierarchical softmax (Mnih & Hinton, 2009; Le et al., 2011), importance sampling (Bengio & Sénécal, 2003), and noise-contrastive estimation (Gutmann & Hyvärinen, 2012; Mnih & Teh, 2012). Vaswani et al. (2013) showed the interest of training an NLM with NCE to re-rank k-best lists, while Devlin et al. (2014) use self-normalization. Recently, Chen et al. (2016) presented a comparative study of how to deal with a large vocabulary. The purpose of this paper, however, is to explore models with an open vocabulary rather than a large one.

There is a surge of interest in using character-level information for a wide range of NLP tasks, with improved results in POS tagging (Santos & Zadrozny, 2014), text classification (Zhang & LeCun, 2015), parsing (Ballesteros et al., 2015) and named entity recognition (Lample et al., 2016). In language modeling, the first applications used characters exclusively and performed worse than word-level models (Mikolov et al., 2012), while showing impressive results for text generation (Sutskever et al., 2011; Graves, 2013), including with bi-directional LSTMs (Graves et al., 2013). Recently, Ling et al. (2015) used bi-directional LSTMs to build word representations from characters, with improvements in language modeling and POS tagging.

The recent work of Kim et al. (2015), which uses convolutional networks and pooling to construct a word representation from character n-grams, coupled with highway networks (Srivastava et al., 2015), showed on various languages that using characters improves results on the language modeling task (for a small corpus), even more so for languages with complex morphology. A similar architecture was used by Józefowicz et al. (2016) on a larger dataset, jointly with bi-directional LSTMs and trained with importance sampling, with strong results.

On the study of NNLMs in the context of machine translation, we can mention the work of Luong et al. (2015) on the effect of the number of layers when reranking n-best lists. Finally, while not directly related to our work, Luong & Manning (2016) very recently showed large improvements on a translation task by handling rare words with character-level recurrent networks within a neural translation model.

6 CONCLUSION
In this work, we addressed the challenge of designing an open-vocabulary neural language model, in which word representations are estimated on-the-fly from character n-grams.
Two kinds of models are introduced: first, NLMs using word- and character-level embeddings to represent the input context (CWE); then its extension to an open vocabulary even for the predicted words (CWE-CWE). These models were used to re-rank the outputs of translation systems from English to Czech. We also carried out experiments on translation systems from English to a simplified Czech, which is then re-inflected into Czech before re-ranking.

We obtained a slight improvement in BLEU score using a CWE model, which, given the small variety of the words generated by the translation systems, suggests there is room for more. We plan to investigate with more complex translation systems, as well as other applications such as morphological re-inflection. While the performance of our open-vocabulary models is to some extent disappointing, it raises questions about the learned representations that we will explore. We also plan to investigate a better-fitted noise distribution for NCE when training open-vocabulary models.

REFERENCES
A. Allauzen and J.-L. Gauvain. Open vocabulary ASR for audiovisual document indexation. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), April 2005.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014.
Miguel Ballesteros, Chris Dyer, and Noah A. Smith. Improved transition-based parsing by modeling characters instead of words with LSTMs. In EMNLP, pp. 349–359, 2015.
Yoshua Bengio and Jean-Sébastien Sénécal. Quick training of probabilistic neural nets by importance sampling. In Proceedings of the conference on Artificial Intelligence and Statistics (AISTATS), 2003.
Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155, 2003.
Jan A. Botha and Phil Blunsom. Compositional morphology for word representations and language modelling. In Proceedings of the International Conference on Machine Learning (ICML), Beijing, China, June 2014.
Wenlin Chen, David Grangier, and Michael Auli. Strategies for training large vocabulary neural language models. In Proceedings of ACL 2016 (Volume 1: Long Papers), Berlin, Germany, 2016.
Colin Cherry and George Foster. Batch tuning strategies for statistical machine translation. In Proceedings of NAACL-HLT, pp. 427–436, Montréal, Canada, June 2012.
Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537, 2011.
Josep Maria Crego, François Yvon, and José B. Mariño. N-code: an open-source bilingual n-gram SMT toolkit. Prague Bulletin of Mathematical Linguistics, 96:49–58, 2011.
Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard M. Schwartz, and John Makhoul. Fast and robust neural network joint models for statistical machine translation. In Proceedings of ACL 2014 (Volume 1: Long Papers), pp. 1370–1380, Baltimore, MD, USA, 2014.
John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Technical Report UCB/EECS-2010-24, EECS Department, University of California, Berkeley, March 2010.
Alex Graves. Generating sequences with recurrent neural networks. CoRR, abs/1308.0850, 2013.
Alex Graves, Navdeep Jaitly, and Abdel-rahman Mohamed. Hybrid speech recognition with deep bidirectional LSTM. In IEEE Workshop on Automatic Speech Recognition and Understanding, pp. 273–278, Olomouc, Czech Republic, 2013.
Michael U. Gutmann and Aapo Hyvärinen. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. Journal of Machine Learning Research, 13(1):307–361, 2012.
Rafal Józefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. CoRR, abs/1602.02410, 2016.
Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. Character-aware neural language models. arXiv preprint arXiv:1508.06615, 2015.
Matthieu Labeau, Kevin Löser, and Alexandre Allauzen. Non-lexical neural architecture for fine-grained POS tagging. In Proceedings of EMNLP, pp. 232–237, Lisbon, Portugal, September 2015.
Guillaume Lample, Miguel Ballesteros, Kazuya Kawakami, Sandeep Subramanian, and Chris Dyer. Neural architectures for named entity recognition. In Proceedings of NAACL-HLT 2016, San Diego, USA, 2016.
Hai Son Le, Ilya Oparin, Alexandre Allauzen, Jean-Luc Gauvain, and François Yvon. Structured output layer neural network language model. In Proceedings of ICASSP 2011, pp. 5524–5527, Prague, Czech Republic, 2011.
Hai-Son Le, Alexandre Allauzen, and François Yvon. Continuous space translation models with neural networks. In Proceedings of NAACL-HLT, pp. 39–48, Montréal, Canada, June 2012.
Wang Ling, Chris Dyer, Alan W. Black, Isabel Trancoso, Ramon Fermandez, Silvio Amir, Luís Marujo, and Tiago Luís. Finding function in form: Compositional character models for open vocabulary word representation. In EMNLP, pp. 1520–1530, 2015.
Minh-Thang Luong and Christopher D. Manning. Achieving open vocabulary neural machine translation with hybrid word-character models. CoRR, abs/1604.00788, 2016.
Thang Luong, Michael Kayser, and Christopher D. Manning. Deep neural language models for machine translation. In Proceedings of CoNLL 2015, pp. 305–309, Beijing, China, 2015.
Benjamin Marie, Alexandre Allauzen, Franck Burlot, Quoc-Khanh Do, Julia Ive, Elena Knyazeva, Matthieu Labeau, Thomas Lavergne, Kevin Löser, Nicolas Pécheux, and François Yvon. LIMSI@WMT'15: Translation task. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pp. 145–151, Lisbon, Portugal, September 2015.
Tomas Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and Jan Cernocky. Subword language modeling with neural networks. Unpublished manuscript, 2012.
Andriy Mnih and Geoffrey Hinton. A scalable hierarchical distributed language model. In Advances in Neural Information Processing Systems, volume 21, pp. 1081–1088, 2009.
Andriy Mnih and Yee Whye Teh. A fast and simple algorithm for training neural probabilistic language models. In ICML, 2012.
Cicero D. Santos and Bianca Zadrozny. Learning character-level representations for part-of-speech tagging. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pp. 1818–1826, 2014.
Holger Schwenk. Continuous space language models. Computer Speech and Language, 21(3):492–518, July 2007.
Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In Proceedings of ACL, pp. 1715–1725, Berlin, Germany, August 2016.
Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Training very deep networks. CoRR, abs/1507.06228, 2015.
Ilya Sutskever, James Martens, and Geoffrey Hinton. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 1017–1024, New York, NY, USA, June 2011.
Ashish Vaswani, Yinggong Zhao, Victoria Fossum, and David Chiang. Decoding with large-scale neural language models improves translation. In EMNLP, pp. 1387–1392, 2013.
Alexander Waibel, Toshiyuki Hanazawa, Geoffrey Hinton, Kiyohiro Shikano, and Kevin J. Lang. Phoneme recognition using time-delay neural networks. In Readings in Speech Recognition, pp. 393–404. Morgan Kaufmann, San Francisco, CA, USA, 1990.
Xiang Zhang and Yann LeCun. Text understanding from scratch. CoRR, abs/1502.01710, 2015.
HJtN5K9gx
Under review as a conference paper at ICLR 2017

LEARNING DISENTANGLED REPRESENTATIONS IN DEEP GENERATIVE MODELS

N. Siddharth, Brooks Paige, Alban Desmaison, Frank Wood & Philip Torr (Department of Engineering Science, University of Oxford, Oxford OX1 3PJ, UK; {nsid,brooks,alban,fwood,phst}@robots.ox.ac.uk)
Jan-Willem van de Meent (College of Computer Science, Northeastern University, MA 02115, USA; j.vandemeent@northeastern.edu)
Pushmeet Kohli (Microsoft Research, WA 98052, USA; pkohli@microsoft.com)
Noah D. Goodman (Department of Psychology, Stanford University, CA 94305, USA; ngoodman@stanford.edu)

ABSTRACT
Deep generative models provide a powerful and flexible means to learn complex distributions over data by incorporating neural networks into latent-variable models. Variational approaches to training such models introduce a probabilistic encoder that casts data, typically without supervision, into an entangled representation space. While unsupervised learning is often desirable, and sometimes even necessary when we lack prior knowledge about what to represent, the ability to incorporate domain knowledge characterising certain aspects of variation in the data can often help learn better-disentangled representations. Here, we introduce a new formulation of semi-supervised learning in variational autoencoders that allows precisely this. It permits flexible specification of probabilistic encoders as directed graphical models via a stochastic computation graph, containing both continuous and discrete latent variables, with conditional distributions parametrised by neural networks. We demonstrate how the provision of dependency structures, along with a few labelled examples indicating plausible values for some components of the latent space, can help quickly learn disentangled representations. We then evaluate the ability to do so, both qualitatively by exploring the generative capacity and quantitatively by using the disentangled representation to perform classification, on a variety of models and datasets.

1 INTRODUCTION
Reasoning in complex perceptual domains such as vision often requires the ability to learn flexible representations of high-dimensional data, to interpret those representations in some form, and to understand how the representations can be used to reconstruct the data. The ability to learn representations measures how well one can capture relevant information in the data; being able to interpret them measures whether consistent meaning can be extracted; and being able to reliably reconstruct the data, a tool for predictive synthesis, can aid model diagnosis, enable successful transfer learning, and improve generality. Such tasks are typically best addressed by generative models, as they exhibit the flexibility required to satisfy all three facets. Discriminative models primarily attend to the first two, learning flexible representations conforming to some interpretable space (e.g. a classification domain), but do not perform the predictive-synthesis task.

Probabilistic graphical models (Koller & Friedman, 2009; Murphy, 2012) are a framework for generative modelling that enables specifying a joint probability distribution on a richly semantic representation space. As good a fit as they are for specification and representation, the learning process for both the analysis and synthesis tasks typically suffers in complex perceptual domains such as vision.
This is because constructing a generative model requires explicitly specifying the conditional distribution of the observed data given the latent variables of interest. In practice, designing such likelihood functions by hand is incredibly challenging, and applying generative models to vision data often requires extensive and significant feature engineering to be successful. One approach to alleviate this hardship is the development of deep generative models: generative models that employ neural networks to learn, automatically from data, the unknown conditional distribution in the model. They function as flexible feature learners, where the features are encoded in the posterior distribution over the latent variables of the model. Recent work exploring the effectiveness of such models (e.g. Kingma & Welling (2014); Kulkarni et al. (2015b); Goodfellow et al. (2014)) has shown considerable promise in addressing the fundamental issues of this task. These models, however, are typically unsupervised, learning representations that are not directly amenable to human interpretation: any interpretability or disentanglement of the learned representation is observed or extracted after learning has been performed, by exploring the latent space along its non-specific axes of variation. A more recent approach by Chen et al. (2016) imposes information-theoretic constraints to better separate factors of variation, but here too interpretability is only established post facto.

[Figure 1: Variation along (top) lighting and (bottom) identity axes.]

While such approaches have considerable merit, particularly in the absence of any information about the data, when there are aspects of variation in the data that can be characterised effectively, using and being able to express them is often desirable. For example, when learning representations for images of house numbers, having an explicit "digit" latent variable helps capture a meaningful axis of variation, independent of other aspects. We also often want to interpret the same data in different ways depending on context: for a given image of a person, do we care about the identity, the lighting, or indeed any other facet of the scene (cf. Figure 1)? In these situations, not being able to enforce context is something of a handicap.

In this paper, we seek to combine the best of both worlds: providing the facility to describe the structural constraints under which we would like to interpret the data, while using neural nets to capture variation for aspects we cannot, or choose not to, model explicitly. By structural constraints, we refer to the (arbitrary) dependencies one would like to employ in the recognition model, particularly with regard to the variables in the model having consistent, interpretable semantics. In particular, we set up our framework in the context of variational autoencoders (VAE; Kingma & Welling (2014); Rezende et al. (2014)), as a means for semi-supervised learning in deep generative models (Kingma et al., 2014). We provide an alternate formulation of the variational objective and a modified training procedure which permit us to explore a wide space of recognition networks to use as probabilistic encoders.
In particular, we make no mean-field assumptions for our recognition networks, allowing arbitrary hierarchical and structured-graphical-model representations, employing both continuous and discrete latent variables that can alternately be observed or left unobserved.

2 BACKGROUND AND RELATED WORK
Variational autoencoders (Kingma & Welling, 2014; Rezende et al., 2014) simultaneously train both a probabilistic encoder and decoder for a dataset x. The central idea is that an encoding z can be considered a latent variable, which allows describing a decoder as a conditional probability density p_theta(x | z). This is typically a distribution whose parameters are the output of a deterministic multi-layer neural network (itself with parameters theta) that takes z as input. Placing a weak prior over z, the corresponding probabilistic encoder can be interpreted as the posterior distribution p_theta(z | x), proportional to p_theta(x | z) p(z). Estimating the parameters theta in this model is challenging, as is performing the posterior inference necessary to encode data. The variational Bayes approach learns an approximate encoder q_phi(z | x), called an "inference network" or a "recognition network", which aims to approximate the posterior distribution p_theta(z | x). Then, rather than fitting parameters by maximizing the marginal likelihood p_theta(x), the variational approach maximizes an evidence lower bound (ELBO), L(theta, phi; x) <= log p_theta(x), defined with respect to both decoder and encoder parameters:

\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z|x)}\left[ \log p_\theta(x, z) - \log q_\phi(z|x) \right].   (1)

One line of work embeds structure into the latent space z, so that it exhibits disentangled features, through partial supervision, either in terms of labelled data (Sohn et al., 2015) or via curriculum-learning schemes (Kulkarni et al., 2015b) that explicitly disentangle different factors. Kingma et al. (2014) explore semi-supervised learning in the VAE setting by factoring the latent space to learn a joint classification model q_phi(y | x) and recognition model q_phi(z | x). This is done by separating the latent space into structured, interpretable components y and unstructured components z, analytically marginalising variables out where discrete. Sohn et al. (2015) perform fully-supervised learning in VAEs by transforming an unconditional objective into one where the data conditions both the (unstructured) latents and the (structured) labels; in contrast to Kingma et al. (2014), the learning objective is a lower bound on the conditional marginal likelihood p_theta(x | y), conditioning the learned VAE on the values of the labelled data. Both of these approaches effectively require the label space y to be discrete and finite. Kulkarni et al. (2015b) attend to weakly-supervised learning with VAEs through a novel training procedure that uses data clustered into equivalence classes along different axes of variation, constraining different parts of the latent space to account for changes along a single axis by training with data from a particular equivalence class. An advantage of this approach is that it requires no explicit labels on the latent space, though it does require independence assumptions on the structured components, as well as carefully curated data.

An alternative approach biases towards interpretable representations by introducing structure in the prior distribution p(z) over the latent space. Johnson et al. (2016) explore the combination of graphical models and VAEs, using classical conjugate exponential-family statistical models as structured priors over the latent space.
They consider relaxing the conjugacy constraints in the likelihood model using neural-network approximations, with a training scheme resembling traditional mean-field coordinate-ascent algorithms. The recognition network, rather than proposing values outright, proposes the parameters of a conjugate-likelihood approximation to the true non-conjugate likelihood. From a specific-instance perspective, Eslami et al. (2016) use a recurrent neural network (RNN) coupled with a spatial transformer network (STN; Jaderberg et al. (2015)), inducing a particular state-space representation in the approximating distribution of a VAE to parse images into scene constituents. Kulkarni et al. (2015a) also explore a specific instance related to a 3D graphics engine, where a programmatic description provides structure and neural networks serve as surrogates for the perceptual-matching problem. Andreas et al. (2016) explore a more general formulation of structure with compositional neural-network models derived from linguistic dependency parses.

3 FRAMEWORK AND FORMULATION
Our method synthesises the semi-supervised and structured-graphical-model approaches. Like Johnson et al. (2016), we incorporate graphical-model structure, but rather than placing it within the generative model p_theta(z, x), we incorporate it into the encoder model q_phi(z | x). For many perceptual problems in domains such as vision, complex dependencies arise in the posterior due to deterministic interactions during rendering; a mean-field approximation in q_phi(z | x) is a poor fit even when all the interpretable latent variables are a priori independent. This is an important reason for our choice of where to embed structure. The use of a structured, multilevel probabilistic model to define the encoder can also be interpreted as a hierarchical variational model (Ranganath et al., 2015). Interpretability is enforced by occasionally supplying labels to latent variables expected to have an interpretable meaning in the final encoded representation.

Our framework provides an embedded domain-specific language (EDSL) in Torch (Collobert et al., 2011) that can be used to specify a wide variety of graphical models in the form of a stochastic computation graph (Schulman et al., 2015). An example is shown in Figure 2. These graphical models describe the structure of latent, observable, and partially observable random variables which exist in an idealized representation space.

[Figure 2: An example graphical model (an input x with a discrete latent l and a Gaussian latent n) and its expression in our framework; further details are given in the Appendix.]

    function labelNoise()
       -- create the node connecting to the input
       local x = nn.Identity()()
       -- connect a discrete RV to the input
       local l = pp.Discrete({torch.Tensor(1, 10)})({x})
       -- connect a standard Gaussian RV to the input
       local n = pp.Gaussian({zeros(1, 2), zeros(1, 2)})({pp.r(x), pp.r(x)})
       nngraph.annotateNodes()
       -- return the stochastic computation graph
       return pp.gModule({x}, {l, n})
    end

Specifically, we assume a model structure of the form p_theta(x, z, y) = p_theta(x | z, y) p(z, y), where the likelihood p_theta(x | z, y) of the data x is conditioned on a set of structured variables y and unstructured variables z, for which we define some appropriately structured prior p(z, y). The likelihood itself is typically unstructured (e.g. a multivariate normal distribution). This model structure allows us to optimize the parameters theta, learning a likelihood function constrained by the structured latents, but crucially does not require that these latents completely explain the data.
The approximation to the true posterior is nominally taken to have the form of the prior distribution, q_phi(z, y | x), with parameters phi, but can often include additional structure and alternate factorisations as appropriate. Models with such factoring are useful when interpretability is required, or informative, for some axes of variation in the data. They are also useful when we wish to interpret the same data from different contexts, or when we cannot conceivably capture all the variation in the data due to its complexity and settle for particular restrictions, as is often the case with real-world data.

A particular challenge lies in choosing how to incorporate labelled data for some of the y into a training scheme. For example, choosing q_phi(z, y | x) = q_{phi_z}(z | y, x) q_{phi_y}(y | x) decomposes the problem into simultaneously learning a classifier q_{phi_y}(y | x) alongside the generative model parameters theta and the encoder q_{phi_z}(z | x, y). In the fully unsupervised setting, the contribution of a particular data point x_i to the ELBO can be expressed, with minor adjustments of Equation (1), as

\mathcal{L}(\theta, \phi; x_i) = \mathbb{E}_{q_\phi(z,y|x_i)}\left[ \log \frac{p_\theta(x_i|z,y)\, p(z,y)}{q_\phi(z,y|x_i)} \right],   (2)

a Monte Carlo approximation of which samples y^s from q_{phi_y}(y | x) and z^s from q_{phi_z}(z | y^s, x).

By contrast, in the fully supervised setting the values y are treated as observed and become fixed inputs into the computation graph, instead of being sampled from q_phi. When the label y is observed along with the data, for fixed pairs (x_i, y_i), the lower bound on the conditional log-marginal likelihood log p_theta(x | y) is

\mathcal{L}^{x|y}(\theta, \phi_z; x_i, y_i) = \mathbb{E}_{q_{\phi_z}(z|x_i,y_i)}\left[ \log \frac{p_\theta(x_i|z,y_i)\, p(z|y_i)}{q_{\phi_z}(z|x_i,y_i)} \right].   (3)

This quantity can be optimized directly to learn the model parameters theta and phi_z simultaneously via SGD. However, it does not contain the encoder parameters phi_y. This difficulty was also encountered in a related context by Kingma et al. (2014); their solution was to augment the loss function with an explicit additional term for learning a classifier directly on the supervised points.

An alternative approach extends the model with an auxiliary variable y~. Defining p(y~, y, z | x) = p(y~ | y) p_theta(x, y, z) and q(y~, y, z | x) = p(y~ | y) q_phi(y, z | x), with likelihood p(y~ | y) = delta_{y~}(y), we obtain a model for which marginalization over y~ reproduces the ELBO in Equation (2), while treating y~ as observed gives the supervised objective

\mathcal{L}(\theta, \phi; x_i)\big|_{\tilde{y} = y_i}
  = \mathbb{E}_{q_{\phi_y}}\left[ \delta_{y_i}(y)\, \mathbb{E}_{q_{\phi_z}}\left[ \log \frac{p_\theta(x_i|z,y)\, p(z,y)}{q_{\phi_y}(y|x_i)\, q_{\phi_z}(z|y,x_i)} \right] \right]
  = q_{\phi_y}(y_i|x_i)\, \mathbb{E}_{q_{\phi_z}}\left[ \log \frac{p_\theta(x_i|z,y_i)\, p(z,y_i)}{q_{\phi_y}(y_i|x_i)\, q_{\phi_z}(z|y_i,x_i)} \right]
  = q_{\phi_y}(y_i|x_i)\left[ \mathcal{L}^{x|y}(\theta, \phi_z; x_i, y_i) + \log p(y_i) - \log q_{\phi_y}(y_i|x_i) \right].   (4)

This formulation enables a range of capabilities for semi-supervised learning in deep generative models. To begin with, it extends partial supervision to latent variables with continuous support, effectively learning a regressor instead of a classifier in the same formulation. Next, it automatically balances the trade-off between learning a classifier/regressor and learning the parameters of the generative model and the remainder of the recognition network: the classifier q_{phi_y}(y | x) is always present and learned, in contrast to the hyperparameter-driven approach of Kingma et al. (2014). Finally, it allows for easy, automatic implementation of a wide variety of models, separating the labelled and unlabelled variables to derive a unified objective over both the supervised and unsupervised cases. When unsupervised, the value of the label y_i is sampled from q_{phi_y}(y | x) and scored under that distribution; when supervised, it is set to the given value and scored under the same distribution. This is in the same spirit as approaches such as automatic differentiation (AD) and probabilistic-program inference, where the choice of representation enables ease of automation for a great variety of different cases.
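A minimal Python sketch of the resulting single-sample estimator, under our own simplifying assumptions (a hypothetical model container bundling the networks, and one Monte Carlo sample):

    def objective(x, model, y_observed=None):
        """Unified single-sample estimate of Eqs. (2)-(4).

        model is a hypothetical container for the networks: sample_q_y,
        sample_q_z, log_q_y, log_q_z, log_p_x, log_p_zy. If y_observed is
        None, y is sampled from q(y|x) (unsupervised, Eq. 2); otherwise it is
        clamped to the label and scored in the same q(y|x) (supervised, Eq. 4).
        """
        y = model.sample_q_y(x) if y_observed is None else y_observed
        z = model.sample_q_z(x, y)                 # z ~ q(z | y, x)
        return (model.log_p_x(x, z, y) + model.log_p_zy(z, y)
                - model.log_q_y(y, x) - model.log_q_z(z, y, x))

Training then alternates wholly supervised and wholly unsupervised batches, governed by the supervision rate discussed next.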
Supervision rate. While learning with this objective, we observe data in batches that are either wholly supervised or wholly unsupervised. This typically obviates the need to construct complicated estimators for partially observed cases, while also reducing the variance of the learning and gradient computation (details are provided in the Appendix). It also presents a choice of how often we observe labelled data in a complete sweep through the dataset, referred to as the supervision rate r. Practically, the rate represents a clear trade-off in learning the generative and recognition-network parameters under interpretability constraints: if the rate is too low, supervision can be insufficient to help disentangle the representation in the recognition network, and if it is too high, the generative model can overfit to just the (few) supervised data points. The rate also relates naturally to the variance of the objective function and its gradients: as can be seen from Equation (4), evaluating the objective for a given y_i involves an unsupervised estimation of the conditional ELBO L^{x|y}. The rate implicitly affects the number of such estimations for any given y_i, and thus the variance of the objective, and of its gradients, with respect to that label.

Plug-in estimation for discrete variables. In targeting a general class of models, another particular difficulty is the ubiquity of discrete latent variables. To obtain a differentiable objective, one can either marginalize over discrete variables directly (as done by Kingma et al. (2014) and in the STAN probabilistic programming system (Stan Development Team, 2013)), which does not scale with the number of variables, or use a REINFORCE-style estimator (Williams, 1992; Mnih & Gregor, 2014), which tends to have high variance. A third approach, related to Bengio et al. (2013), represents discrete latent variables defined on a finite domain using a one-hot encoding, then relaxes them to the continuous probability simplex when used as input to a recognition network. For example, when y is a one-hot encoding of a discrete value used in a recognition network that factors as q(y | x) q(z | y, x), then q(y | x) is itself a discrete distribution with probability vector rho = g_phi(x) for some deterministic function g_phi, and the value y is an input to a second function h_phi(x, y) producing the parameters of q(z | y, x). Instead of evaluating h_phi(x, y) at a sampled value y (or enumerating the entire domain), we simply evaluate it at the single point rho, noting that rho = E_{q(y|x)}[y]. This may seem a crude approximation, replacing integration with a single evaluation and claiming E_{q(y|x)}[h_phi(x, y)] ~ h_phi(x, E_{q(y|x)}[y]), which is not true in general for h_phi(.). However, when rho is actually a one-hot encoding, i.e. when E_{q(y|x)}[y] has a single non-zero value, the two are in fact equal. For our experiments we employ this plug-in estimator where applicable, although our framework can express any of the above methods.
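A tiny Python sketch of the plug-in estimator, where g and h stand for the (hypothetical) recognition sub-networks g_phi and h_phi:

    import numpy as np

    def plug_in_params(x, g, h):
        """Feed rho = E_q(y|x)[y] into h instead of a sampled y.

        g(x) returns the class probabilities rho of q(y|x); h(x, y) returns
        the parameters of q(z|y,x). No sampling of y occurs, so the path
        stays differentiable.
        """
        rho = g(x)              # rho = E_{q(y|x)}[y] for a one-hot y
        return h(x, rho)        # exact whenever rho is itself one-hot

    # toy check: with a deterministic class, plug-in equals the expectation
    g = lambda x: np.array([0.0, 1.0, 0.0])   # q(y|x) puts all mass on class 1
    h = lambda x, y: y * 2.0                  # some downstream computation
    assert np.allclose(plug_in_params(None, g, h), np.array([0.0, 2.0, 0.0]))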
4 EXPERIMENTS
We evaluate our framework along a number of different axes, pertaining to its ability to (i) learn disentangled representations from a little supervision, (ii) perform well at a relevant classification or regression task, (iii) successfully learn the generative model as well, and (iv) admit latent spaces of varying dimensionality. Note that we do not set out to build the best possible classifier for these tasks; the classification task is a means to the end of demonstrating that the learned representation is indeed disentangled, often with minimal supervision. Details of the neural-network architectures, the graphical models for the recognition networks, dataset characteristics, and hyper-parameter settings are provided in the Appendix.

4.1 MNIST AND SVHN
To begin with, we explore the facets of our model on the standard MNIST and Google Street-View House Numbers (SVHN) datasets. We use this example to highlight how the provision of even the slightest structure, coupled with minimal supervision, is often sufficient to induce the emergence of disentangled representations in the recognition network. Figure 3 shows the structure of the generative and recognition models for this experiment.

[Figure 3: (left) Generative and (right) recognition model, with digit d and style n.]

[Figure 4: (a) Visual analogies for the MNIST data, with the inferred style latent variable fixed and the label varied. (b) Exploration of the "style" space for a 2D latent Gaussian random variable. Visual analogies for the SVHN data when (c) fully supervised and (d) supervised with just 100 labels per digit.]

Figure 4(a) and (c) show the effect of first transforming a given input (leftmost column) into the disentangled latent space and then, with the style latent variable fixed, manipulating the digit through the generative model to produce appropriately modified reconstructions. These were derived with full supervision over a 50- and 100-dimensional Gaussian latent space for the styles, respectively. Figure 4(b) shows the transformation for a fixed digit when the style latent is varied, derived with a simple 2D Gaussian latent space for the style. Figure 4(d) shows the ability of the network to begin disentangling the latent space with just 100 labelled samples per digit (the training set contains 73000 points); separation between style and class is clearly evident even with such little supervision.

We compute the classification accuracy of the label-prediction task with this model for both datasets; the results are reported at the bottom of Figure 5 and compared to those of Kingma et al. (2014).

Figure 5: (Top) Classification-error graphs over different labelled-set sizes (per class) and supervision rates for MNIST (left) and SVHN (right); note the steep drop in error rate with just a handful of labels per class (l), seen just a few times (r). (Bottom) Classification error rates (%) for different per-class labelled-set sizes l, over different runs:

                   MNIST                           SVHN
    l      Ours            Kingma et al.    Ours             Kingma et al.
    10     12.2 (1.38)     11.97 (1.71)     -                -
    60     5.28 (0.76)     4.94 (0.13)      -                -
    100    4.23 (0.68)     3.60 (0.56)      30.32 (2.74)     36.02 (0.10)
    300    3.94 (0.77)     3.92 (0.63)      23.98 (1.83)     -

For the MNIST dataset, we compare against model M2, as we run directly on the data without a preliminary feature-extraction step. For the SVHN dataset, we compare against model M1+M2, even though we run directly on the data, using a CNN to simultaneously learn to extract features. Confidence estimates for both were computed over 10 runs. We fare comparably with these models and, when employing a CNN for feature extraction on SVHN, comfortably exceed them.
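The visual-analogy protocol of Figure 4 can be summarised in a few lines; the encoder and decoder below are hypothetical wrappers of our own around q_phi and p_theta:

    def visual_analogies(x, encoder, decoder, num_classes=10):
        """Fix the inferred style n, sweep the digit label d, and reconstruct."""
        n = encoder.style(x)                  # n ~ q(n | x), held fixed
        rows = []
        for d in range(num_classes):          # vary the label
            rows.append(decoder(d, n))        # x_hat ~ p(x | d, n)
        return rows                           # one reconstruction per digit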
For the MNIST dataset, we compare against model M2, as we run directly on the data without performing a preliminary feature-extraction step. For the SVHN dataset, we compare against model M1+M2, even though we run directly on the data, using a CNN to simultaneously learn to extract features. Confidence estimates for both were computed over 10 runs. We note that we fare comparably with these models and, in particular, when employing a CNN for feature extraction on the SVHN dataset, comfortably exceed them.

           Ours (Full Supervision)   Ours (Semi-Supervised)   Jampani et al. (2015)
Identity   4.2 (±0.84)               10.3 (±2.36)             30
Lighting   14.2 (±1.12)              28.4 (±4.12)             10

Figure 7: (Top) Exploring the generative capacity of the model. Column 1: input image. Column 2: reconstruction. Columns 3-7: reconstructions with fixed (inferred) lighting and varying identities. (Bottom) Classification and regression error rates for the identity and lighting latent variables, fully-supervised, and semi-supervised with 20 distinct labelled examples per variation axis (60 total). Classification is a direct 1-out-of-38 choice, whereas for the comparison, error is a nearest-neighbour loss based on the inferred reflectance. Regression loss for lighting is measured as cosine angle distance. Results for Jampani et al. (2015) are estimated from plot asymptotes.

Figure 5 shows the effect of the supervision rate $r$ on the error rate. As evident from the graph, the rate has a strong effect on how quickly one learns an effective classifier. This indicates that when labels are sparse or hard to come by, a training regime that runs largely unsupervised, even only occasionally looking at the supervised data, still learns to disentangle the latent-space representations.

4.2 INTRINSIC FACES

We next move to a harder problem involving a generative model of faces, attempting to highlight how the introduction of stronger dependency structures in the recognition model helps disentangle latents, particularly when the generative model assumes conditional independence between the latents. Here, we use the "Yale B" dataset as processed by Jampani et al. (2015) to train the models shown in Figure 6. The primary tasks we are interested in here are (i) the ability to manipulate the inferred latents to evaluate whether they qualitatively achieve semantically meaningful disentangled representations, (ii) the classification of person identity, and (iii) the regression for lighting direction.

Figure 6: (Top) Generative and (Bottom) recognition model with identity i, lighting l, reflectance r, and shading s.

Figure 7 presents both qualitative and quantitative evaluation of the framework jointly learning the structured recognition model and the generative model parameters. A particular point of note is that we explicitly encode "identity" as a categorical random variable, since we have knowledge about the domain and the relevant axis to explore. Since we also learn the generative model, which in the domain of the actual dataset is simply the expression $(\mathbf{n} \cdot \mathbf{l})\, r + \epsilon$, we can afford to weakly specify the structure, allowing some neural-network component to take up the requisite slack in order to reconstruct the input. This allows us to directly address the task of predicting identity, instead of approaching it through surrogate evaluation methods (e.g.,
nearest-neighbour classification based on inferred reflectance). While this formulation allows us to perform the identity classification task, the fact that our recognition model never supervises the reflectance means that this variable can typically absorb some of the representational power of other, semi-supervised nodes. This is particularly the case when dealing with high-dimensional latent spaces, as for reflectance and shading.

size    rate (%)   error rate (%)
Unsup   0          32.25 (±12.97)
500     1          6.42 (±2.15)
500     10         4.21 (±1.29)
1000    1          4.72 (±1.60)
1000    10         2.98 (±0.93)

Figure 8: Generative (l) and recognition (m) model with digit d, style n, canvas c, and count K.

4.3 MULTI-MNIST

Finally, we run an experiment to test the ability of our framework to handle models that induce latent representations of variable dimension. We extend the simple model from the MNIST experiment by composing it with a stochastic sequence generator, to test its ability to count the number of digits in a given input image, given its ability to encode and reconstruct the digits in isolation. The graphical models employed are depicted in Figure 8.

We observe that we are indeed able to reliably learn to count, at least within the limits of up to 3 digits in the Multi-MNIST dataset. The dataset was generated directly from the MNIST dataset by manipulating the scale and positioning of the standard digits into a combined canvas, evenly balanced across the counts and digits. The results across different supervised-set sizes and supervision rates are shown in the table in Figure 8.

5 DISCUSSION AND CONCLUSION

In this paper, we introduce a general framework for semi-supervised learning in the VAE setting that allows the incorporation of graphical models to specify a wide variety of structural constraints on the recognition network. We demonstrate its flexibility by applying it to a variety of different tasks in the visual domain, and evaluate its efficacy at learning disentangled representations in a semi-supervised manner, showing strong performance.

This framework ensures that the recognition network learns to make predictions in an interpretable and disentangled space, constrained by the structure provided by the graphical model. The structured form of the recognition network is also typically a better fit for vision models, as it helps better capture complexities in the likelihood (usually the renderer). Given that we encode graphical models in the recognition network, and Johnson et al. (2016) encode them in the generative model in concert with VAEs, a natural extension would be the exploration of the ability to learn effectively when specifying structure in both by means of graphical models. This is a direction of future work we are interested in, particularly in the context of semi-supervised learning.

The framework is implemented as a Torch library (Collobert et al., 2011), enabling the construction of stochastic computation graphs which encode the requisite structure and computation. This provides another direction to explore in the future: the extension of the stochastic computation graph framework to probabilistic programming (Goodman et al., 2008; Wingate et al., 2011; Wood et al., 2014). Probabilistic programs go beyond the presented framework to include stochastic inference and the ability to specify arbitrary models of computation. The combination of such frameworks with neural networks has recently been studied in Ritchie et al. (2016); Le et al.
(2016), and indicates a promising avenue for further exploration.

REFERENCES

Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Neural module networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.

Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.

Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. CoRR, abs/1606.03657, 2016.

Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A Matlab-like environment for machine learning. In BigLearn, NIPS Workshop, 2011.

S. M. Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, Koray Kavukcuoglu, and Geoffrey E. Hinton. Attend, infer, repeat: Fast scene understanding with generative models. arXiv preprint arXiv:1603.08575, 2016.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

N. D. Goodman, V. K. Mansinghka, D. Roy, K. Bonawitz, and J. B. Tenenbaum. Church: A language for generative models. In Uncertainty in Artificial Intelligence, pp. 220-229, 2008.

Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In Advances in Neural Information Processing Systems, pp. 2017-2025, 2015.

Varun Jampani, S. M. Ali Eslami, Daniel Tarlow, Pushmeet Kohli, and John Winn. Consensus message passing for layered graphical models. In International Conference on Artificial Intelligence and Statistics, pp. 425-433, 2015.

Matthew J. Johnson, David K. Duvenaud, Alex B. Wiltschko, Sandeep R. Datta, and Ryan P. Adams. Composing graphical models with neural networks for structured representations and fast inference. In Advances in Neural Information Processing Systems, 2016.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980.

Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In Proceedings of the 2nd International Conference on Learning Representations, 2014.

Diederik P. Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pp. 3581-3589, 2014.

Daphne Koller and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.

Tejas D. Kulkarni, Pushmeet Kohli, Joshua B. Tenenbaum, and Vikash Mansinghka. Picture: A probabilistic programming language for scene perception. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4390-4399, 2015a.

Tejas D. Kulkarni, William F. Whitney, Pushmeet Kohli, and Josh Tenenbaum. Deep convolutional inverse graphics network. In Advances in Neural Information Processing Systems, pp. 2530-2538, 2015b.

Tuan Anh Le, Atilim Gunes Baydin, and Frank Wood. Inference compilation and universal probabilistic programming. arXiv preprint arXiv:1610.09900, 2016.

Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pp. 1791-1799, 2014.

Kevin P. Murphy.
Machine Learning: A Probabilistic Perspective. MIT Press, 2012.

Rajesh Ranganath, Dustin Tran, and David M. Blei. Hierarchical variational models. arXiv preprint arXiv:1511.02386, 2015.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of The 31st International Conference on Machine Learning, pp. 1278-1286, 2014.

Daniel Ritchie, Paul Horsfall, and Noah D. Goodman. Deep amortized inference for probabilistic programs. arXiv preprint arXiv:1610.05735, 2016.

John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient estimation using stochastic computation graphs. In Advances in Neural Information Processing Systems, pp. 3510-3522, 2015.

Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems, pp. 3465-3473, 2015.

The Stan Development Team. Stan modeling language user's guide and reference manual. http://mc-stan.org/, 2013.

Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.

David Wingate, Andreas Stuhlmueller, and Noah D. Goodman. Lightweight implementations of probabilistic programming languages via transformational compilation. In International Conference on Artificial Intelligence and Statistics, pp. 770-778, 2011.

Frank Wood, Jan Willem van de Meent, and Vikash Mansinghka. A new approach to probabilistic programming inference. In Artificial Intelligence and Statistics, pp. 1024-1032, 2014.

APPENDIX

FORMULATION

Gradients of the Variational Objective: We consider the gradients of the form in Equation (4) with respect to $\theta$, $\phi_z$, and $\phi_y$. In particular, note that for both $\theta$ and $\phi_z$ the gradient is the same as the gradient with respect to the conditional ELBO $\mathcal{L}^{x|y}$, up to a per-datapoint scaling factor $q_{\phi_y}(y_i \mid x_i)$. For continuous latent variables, as well as for many discrete random variables, the expectations over $z$ can be reparameterized into a form where the gradients can be approximated with a single sampled value. Evaluating Equation (4) at this point yields estimators for the ELBO $\hat{\mathcal{L}}$ and the conditional ELBO $\hat{\mathcal{L}}^{x|y}$, as well as corresponding single-sample gradient estimates $\hat{\nabla}\mathcal{L}$ and $\hat{\nabla}\mathcal{L}^{x|y}$ for each set of parameters.

Gradient estimates for $\theta$ and $\phi_z$ are proportional to the gradients of the conditional ELBO, as

$\hat{\nabla}_\theta \mathcal{L}(\theta, \phi; x_i)\big|_{y=y_i} = q_{\phi_y}(y_i \mid x_i)\, \hat{\nabla}_\theta \mathcal{L}^{x|y}, \qquad \hat{\nabla}_{\phi_z} \mathcal{L}(\theta, \phi; x_i)\big|_{y=y_i} = q_{\phi_y}(y_i \mid x_i)\, \hat{\nabla}_{\phi_z} \mathcal{L}^{x|y},$

while the gradient with respect to the "classifier" parameters $\phi_y$ takes a different form. Applying the product rule to Equation (4), we have

$\hat{\nabla}_{\phi_y} \mathcal{L}(\theta, \phi; x_i)\big|_{y=y_i} = \big[\hat{\mathcal{L}}^{x|y} + \log p(y_i) - \log q_{\phi_y}(y_i \mid x_i)\big]\, \nabla_{\phi_y} q_{\phi_y}(y_i \mid x_i) - q_{\phi_y}(y_i \mid x_i)\, \nabla_{\phi_y} \log q_{\phi_y}(y_i \mid x_i)$
$\qquad = \big[\hat{\mathcal{L}}^{x|y} + \log p(y_i) - \log q_{\phi_y}(y_i \mid x_i) - 1\big]\, \nabla_{\phi_y} q_{\phi_y}(y_i \mid x_i)$
$\qquad = q_{\phi_y}(y_i \mid x_i)\, \big[\hat{\mathcal{L}} - 1\big]\, \nabla_{\phi_y} \log q_{\phi_y}(y_i \mid x_i).$
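The following is a minimal sketch of these single-sample estimators, assuming elbo_xy is a differentiable estimate of the conditional ELBO and log_qy the differentiable log-probability $\log q_{\phi_y}(y_i \mid x_i)$; the function name and the split of parameters into generative and classifier groups are illustrative assumptions, not taken from the released library.

import torch

def supervised_grad_estimates(elbo_xy, log_py, log_qy, gen_params, cls_params):
    # elbo_xy: differentiable estimate of the conditional ELBO L^{x|y_i};
    # log_py: log p(y_i) (a constant); log_qy: differentiable log q_{phi_y}(y_i|x_i).
    qy = log_qy.exp().detach()
    # theta / phi_z: q(y_i|x_i) times the conditional-ELBO gradient
    g_gen = torch.autograd.grad(qy * elbo_xy, gen_params, retain_graph=True)
    # phi_y: q(y_i|x_i) * (L_hat - 1) * grad log q(y_i|x_i)
    elbo_hat = (elbo_xy + log_py - log_qy).detach()
    g_cls = torch.autograd.grad(qy * (elbo_hat - 1.0) * log_qy, cls_params)
    return g_gen, g_cls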
MODEL AND NETWORK PARAMETERS

We note that all the experiments, save the one involving Street-View House Numbers (SVHN), were run using a 2-3 layer MLP with 512 nodes and a Bernoulli loss function. For SVHN, we additionally employed a two-stage convolutional and a two-stage deconvolutional network to effectively extract features for the standard MLP model, for the recognition network and the generative model respectively, training the entire network end-to-end. For learning, we used Adam (Kingma & Ba, 2014) with a learning rate of 0.001 (0.0003 for SVHN) and momentum-correction terms set to their default values. Minibatch sizes varied from 80 to 500, depending on the dataset being used and its size.

MODELS

The syntax of our computation-graph construction is such that the first call instantiates the computation, and the second instantiates the node and its connections. For specified random variables, the first set of parameters defines the prior, and the second set the parameters for the proposal distributions. In all our models, we extract the common, feature-extraction portions of the recognition model $q_\phi$ into a simple pre-encoder. Parameters and structure for this are specified above.

The class-conditional model for MNIST and SVHN:

local ndim = 50
local program = {}
function program:getNetwork()
  local input = nn.Identity()() -- required to make nngraph play nice
  -- the actual program
  local d = pp.DiscreteR({torch.Tensor(1,10):fill(1/10)})({input})
  local mu = nn.Sequential():add(nn.JoinTable(2)):add(nn.FluidLinear(ndim)):add(nn.SoftPlus())
  local sig = nn.Sequential():add(nn.JoinTable(2)):add(nn.FluidLinear(ndim)):add(nn.SoftPlus())
  local n = pp.Gaussian({torch.zeros(1,ndim), torch.zeros(1,ndim)})({mu({d, input}), sig({d, input})})
  -- end program
  nngraph.annotateNodes() -- necessary to annotate nodes with local varnames
  return pp.gModule({input}, {d, n})
end
return program

The model used for the faces dataset:

local program = {}
function program:getNetwork()
  local input = nn.Identity()() -- required to make nngraph play nice
  -- the actual program
  local id = pp.DiscreteR({torch.Tensor(1,38):fill(1/38)})({input})
  local light = pp.Gaussian({torch.zeros(1,3), torch.zeros(1,3)})({pp.r(input), pp.r(input)})
  local factorQ = nn.Sequential():add(nn.JoinTable(2)):add(nn.FluidLinear(20)):add(nn.SoftPlus())
  local shading = pp.Gaussian({torch.zeros(1,20), torch.zeros(1,20)})({pp.r(factorQ({id, light})), pp.r(factorQ({id, light}))})
  local reflectance = pp.Gaussian({torch.zeros(1,20), torch.zeros(1,20)})({pp.r(input), pp.r(input)})
  -- end program
  nngraph.annotateNodes() -- necessary to annotate nodes with local varnames
  return pp.gModule({input}, {shading, reflectance})
end
return program

The model used for the Multi-MNIST dataset:

local program = {}
local function mnist()
  local input = nn.Identity()() -- required to make nngraph play nice
  local d = pp.DiscreteR({torch.Tensor(1,10):fill(0.1)})({input})
  local n = pp.Gaussian({torch.zeros(1,50), torch.zeros(1,50)})({pp.r(input), pp.r(input)})
  -- end program
  nngraph.annotateNodes() -- necessary to annotate nodes with local varnames
  return pp.gModule({input}, {d, n})
end
function program:getNetwork()
  local input = nn.Identity()() -- required to make nngraph play nice
  -- the actual program
  local c = pp.Discrete({torch.Tensor(1,5):fill(0.2)})({input})
  -- needswork: have to handle number of inputs and inter-repeat state
  local ds = pp.Repeat(mnist())({input, c})
  -- end program
  nngraph.annotateNodes() -- necessary to annotate nodes with local varnames
  return pp.gModule({input}, {ds})
end
return program
SKIP-GRAPH: LEARNING GRAPH EMBEDDINGS WITH AN ENCODER-DECODER MODEL

John Boaz Lee & Xiangnan Kong
Department of Computer Science
Worcester Polytechnic Institute
Worcester, MA 01609, USA
{jtlee, xkong}@wpi.edu

ABSTRACT

In this work, we study the problem of feature representation learning for graph-structured data. Much of the existing work in the area is task-specific and based on supervised techniques. We study a method for obtaining a generic feature representation for a graph using an unsupervised approach. The neural encoder-decoder model is a method that has been used in the natural language processing domain to learn feature representations of sentences. In our proposed approach, we train the encoder-decoder model to predict the random walk sequences of neighboring regions in a graph, given a random walk along a particular region. The goal is to map subgraphs (as represented by their random walks) that are structurally and functionally similar to nearby locations in feature space. We evaluate the learned graph vectors on the graph classification task using several real-world datasets. The proposed model achieves good results against state-of-the-art techniques.

1 INTRODUCTION

The skip-gram model (Mikolov et al., 2013) was originally introduced in the natural language processing (NLP) domain as a model for learning vector representations of words. Recently, it has been adapted successfully to solve the problem of learning node representations for graph-structured data (Grover & Leskovec, 2016; Perozzi et al., 2014). The learned vectors can then be used directly in problems such as link prediction (Miller et al., 2009) or clustering of nodes in a graph (Vinayak et al., 2014). However, in many real-world applications we need to learn a feature representation for the entire graph instead of representations for just the nodes in the graph. In this paper, we study the graph representation learning problem, where the task is to learn a feature representation for any graph object. We propose a novel solution based upon the encoder-decoder model.

Graph-structured data can be found in many different domains, including biology, chemistry, and the study of social networks. For instance, in chemistry, chemical compounds can be represented as molecular graphs (Duvenaud et al., 2015). In social network analysis, the interaction among different entities of a community can be captured using a social graph (Yanardag & Vishwanathan, 2015). A natural question that arises in these scenarios is what the structure of a graph tells us about the properties of the graph (e.g., what does the molecular graph tell us about the compound's aqueous solubility, or its anti-cancer activity?). In other words, we are often interested in performing machine learning tasks on graph-structured data. Many techniques have been proposed to solve this problem; these include learning graph kernels (Vishwanathan et al., 2010), identifying discriminative subgraphs (Kong et al., 2011), using specially designed neural network models such as the graph neural network (Scarselli et al., 2009), and learning the graph fingerprint (Duvenaud et al., 2015). Most of the approaches for learning graph features are supervised and task-specific. Our approach, on the other hand, is unsupervised and general-purpose.
The learned features can be used directly with off-the-shelf machine learning methods on different tasks, such as classification or clustering. Perhaps the work that most closely resembles ours is that of Yanardag & Vishwanathan (2015). We argue, however, that our approach is different, and this is good motivation to pursue the study, as little work has been published in the area. For one, we use the skip-thought model (Kiros et al., 2015), and we are not just interested in structurally similar subgraphs but also functionally similar ones.

Figure 1: A random walk over a graph is split into three subsequences (s1, s2, s3). The middle sequence is input into the encoder, and the decoders attempt to reconstruct the previous and next sub-sequence. The unattached arrows are connected to the encoder output to condition the decoder.

Our approach is based on the encoder-decoder model (Kalchbrenner & Blunsom, 2013; Cho et al., 2014); in particular, we are interested in the skip-thought model. In Kiros et al. (2015), tuples composed of three consecutive sentences from word documents are fed into an RNN model, and the model attempts to reconstruct the previous and next statements given the middle sentence. After training on a large text corpus, the hidden vector values for an input sentence can be used as that input sequence's feature representation. It has been shown that the model learns a function that maps semantically and syntactically similar sentences close to one another in feature space. In this work, the idea is to take instead a sequence generated by a random walk along a labeled graph and to divide it into three parts, feeding these into the encoder-decoder model. Since the structure of the graph determines the random walk sequences that can be generated, we can treat each sub-sequence as a representation of a particular subgraph in the graph. We argue that by training an encoder-decoder model on a large number of random walk sequences, we can learn a feature representation that groups structurally and functionally similar subgraphs together. Figure 1 shows an example of how we can train the model using a random walk over a graph. A simple example that illustrates how the model may learn to identify functionally similar subgraphs is shown in Figure 2.

After the model is trained on a large sample of random walks generated from a dataset of labeled graphs, we can freeze the model and use the encoder as a feature extractor. In particular, we obtain a feature representation of a graph by sampling multiple short random walks and aggregating the information encoded in the feature representations of these short walks. We borrow an analogy from the NLP domain to highlight the idea. In order to obtain a good feature representation for a text document, short of sampling all the words in the document, one may sample a set of sentences from the document and use these to construct the features for the document. Similarly, to obtain a feature representation for a graph, we sample a set of subgraphs (as represented by the short walks) and use the aggregate subgraph features to construct the final graph feature vector. Since we use the trained encoder as our feature extractor, graphs that share structural and functional properties will tend to have more similar feature vectors.

2 PROPOSED METHOD

2.1 SKIP-THOUGHT

Since our proposed approach is based on the encoder-decoder model of Kiros et al. (2015), we begin by briefly introducing the model.
The encoder-decoder model uses an RNN with GRU (Chung et al., 2014) activation as the encoder and an RNN with a conditional GRU as the decoder. The model is trained using the Adam stochastic optimization algorithm (Kingma & Ba, 2015).

Figure 2: Two structurally dissimilar subgraphs can be considered functionally similar if they always appear in the same neighborhood. For instance, subgraphs "C-C-C" and "G-H-G" are structurally different, since they are composed of different types of nodes, but they seem to be serving the same function of connecting the same kind of regions together (possible random walk sequences: "B-B-A-B-B-A-C-C-C-D-F-D-F", "B-B-A-B-B-A-G-H-G-D-F-D-F"). If these patterns appear frequently in the dataset, the encoder-decoder model will learn very similar representations for the random walk sequences corresponding to the two subgraphs.

The input to the model is a tuple of sentences $(s_{i-1}, s_i, s_{i+1})$, with $x_i^t$ being the word embedding for the $t$-th word, $w_i^t$, of sentence $s_i$. The word embeddings for the middle sentence, $s_i$, are fed sequentially as input to the encoder. The encoder generates a hidden vector $h_i^t$ at each time step $t$; this is the information the model has retained after processing the sequence $x_i^1, \ldots, x_i^t$ and can be thought of as the sequence representation. The hidden state $h_i^N$ can thus be considered the sentence representation, given that $s_i$ is of length $N$. Given a sequence to encode, the encoder iterates through the following equations, as given in Kiros et al. (2015); here the subscripts $i$ are dropped for simplicity:

$r^t = \sigma(W_r x^t + U_r h^{t-1})$  (1)
$z^t = \sigma(W_z x^t + U_z h^{t-1})$  (2)
$\bar{h}^t = \tanh(W x^t + U (r^t \odot h^{t-1}))$  (3)
$h^t = (1 - z^t) \odot h^{t-1} + z^t \odot \bar{h}^t$  (4)

where $r^t$ is the forget gate, $z^t$ is the update gate, $\bar{h}^t$ is the proposed hidden state, and $\odot$ is the component-wise product. Here $r^t$ decides what information to discard from the previous state, $z^t$ decides what new information to encode, and the new hidden vector $h^t$ is calculated accordingly. Values in $r^t$ and $z^t$ are in the range $[0, 1]$.

Two decoders with separate parameters are used to reconstruct the previous statement $s_{i-1}$ and the next statement $s_{i+1}$. The computation for the decoder is similar to that of the encoder, except that the models are also conditioned on the encoder output $h_i$. Decoding involves iterating through the following equations, where again the subscript $i+1$ (similarly, $i-1$) is dropped:

$r^t = \sigma(W_r^d x^{t-1} + U_r^d h^{t-1} + C_r h_i)$  (5)
$z^t = \sigma(W_z^d x^{t-1} + U_z^d h^{t-1} + C_z h_i)$  (6)
$\bar{h}^t = \tanh(W^d x^{t-1} + U^d (r^t \odot h^{t-1}) + C h_i)$  (7)
$h_{i+1}^t = (1 - z^t) \odot h^{t-1} + z^t \odot \bar{h}^t$  (8)

where the $C$ matrices are used to bias the computation by the sentence vector produced by the encoder. Also, note that the word embeddings are from the previous and next statements, since these are what is given to the decoders. The probability of word $w_{i+1}^t$ can be calculated by

$P(w_{i+1}^t \mid w_{i+1}^{<t}, h_i) \propto \exp(v_{w_{i+1}^t} h_{i+1}^t)$  (9)

where $v_{w_{i+1}^t}$ is the row vector in the vocabulary matrix $V$ corresponding to the word $w_{i+1}^t$. The vocabulary matrix, $V$, is a weight matrix shared by both decoders, connecting the decoders' hidden states for computing a distribution over words.

Finally, given a sentence tuple, the training objective is given by

$\sum_t \log P(w_{i+1}^t \mid w_{i+1}^{<t}, h_i) + \sum_t \log P(w_{i-1}^t \mid w_{i-1}^{<t}, h_i)$  (10)

which is the sum of log-probabilities for the words in the previous and next statements, $s_{i-1}$ and $s_{i+1}$, conditioned on the sentence representation for $s_i$. The total objective is then the above summed over all tuples in the training data.
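The encoder recursion in Eqs. (1)-(4) is compact enough to state directly in code. Below is a minimal NumPy sketch: the parameter names (Wr, Ur, etc.) mirror the equations, and this is an illustration rather than the authors' implementation, which builds on the skip-thoughts codebase.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_t, h_prev, P):
    # One GRU encoder step, Eqs. (1)-(4). P holds matrices Wr, Ur, Wz, Uz, W, U.
    r = sigmoid(P["Wr"] @ x_t + P["Ur"] @ h_prev)            # forget gate, Eq. (1)
    z = sigmoid(P["Wz"] @ x_t + P["Uz"] @ h_prev)            # update gate, Eq. (2)
    h_bar = np.tanh(P["W"] @ x_t + P["U"] @ (r * h_prev))    # proposed state, Eq. (3)
    return (1.0 - z) * h_prev + z * h_bar                    # new state, Eq. (4)

def encode(walk_embeddings, P, hidden_dim):
    # Encode a sequence of embeddings; the final hidden state is its representation.
    h = np.zeros(hidden_dim)
    for x_t in walk_embeddings:
        h = gru_step(x_t, h, P)
    return h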
2.2 SKIP-GRAPH

In this work, we are interested in graph-structured data in particular. In our setting, we are given a set of labeled graphs $D = \{G_1, G_2, \ldots, G_n\}$, with each graph associated with a class label. A graph $G = (V, E, \ell_v)$ is comprised of a vertex set $V$, an edge set $E \subseteq V \times V$, and a node labeling function $\ell_v : V \to L_V$ which assigns each node a label in $L_V$. Additionally, the edges may also be labeled, in which case we also have an edge labeling function $\ell_e : E \to L_E$. Nodes and edges can also have associated feature vectors, $f_v \in \mathbb{R}^{D_v}$ and $f_e \in \mathbb{R}^{D_e}$, respectively.

2.2.1 UNLABELED GRAPHS

Although we will be working primarily with labeled graphs, our method can easily be extended to support unlabeled graphs by including an additional pre-processing step. Algorithms like the Weisfeiler-Lehman algorithm (Weisfeiler & Lehman, 1968; Shervashidze et al., 2011) or the Morgan algorithm (Rogers & Hahn, 2010) for calculating molecular fingerprints are iterative algorithms that work by repeatedly calculating the attribute of a node via hashing of the attributes of its neighboring nodes. The final node attributes capture the local structure or topology of the graph. For unlabeled graphs, all node attributes can be initialized to a constant value and, after the algorithm is run, we can treat the node attributes as the labels for the nodes in the graph.

2.2.2 TRAINING SET GENERATION

Given a set of graphs $D$, a sample size $K$, a minimum random walk length $l_{min}$, and a maximum random walk length $l_{max}$, we take each graph $G \in D$ and generate $K$ random walk sequences. Specifically, for a graph $G$, $K$ sequences of the form

$\ell_v(v_1), \ldots, \ell_v(v_k),\; \ell_v(v_{k+1}), \ldots, \ell_v(v_{k+k'}),\; \ell_v(v_{k+k'+1}), \ldots, \ell_v(v_{k+k'+k''})$

are generated. Here, $v_1 \in V$ is a randomly selected start node, $(v_i, v_{i+1}) \in E$ for $i$ from $1, \ldots, k+k'+k''-1$, and $l_{min} \leq k, k', k'' \leq l_{max}$. We can split each sequence into three sub-sequences with $s_1 = \ell_v(v_1), \ldots, \ell_v(v_k)$, $s_2 = \ell_v(v_{k+1}), \ldots, \ell_v(v_{k+k'})$, and $s_3 = \ell_v(v_{k+k'+1}), \ldots, \ell_v(v_{k+k'+k''})$. For each sequence, $k$, $k'$, and $k''$ are randomly drawn to be between the constraints. Since the lengths of the sub-sequences do not need to be fixed and can instead lie anywhere between $l_{min}$ and $l_{max}$, regions of varying sizes can easily be considered.

In the above formulation, we assume that only the vertices in the graph are labeled, and node and edge features are not given. When nodes, or edges, are labeled and feature vectors are provided, we can use a one-hot embedding to represent each unique combination of labels and features. This treats each distinct combination as a unique "word" and does not capture the relationship between nodes or edges that share labels or certain features. A better approach is to simply use a one-of-$|L|$ vector to encode the label and concatenate this with the feature vector; this allows the node or edge embedding to capture shared features and labels.

Once all the tuples of random walk sequences have been generated, they can be used to train the encoder-decoder¹ in an unsupervised fashion (a minimal sketch of the generation step is given below).

¹We use the implementation in https://github.com/ryankiros/skip-thoughts.
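The following is a small Python sketch of the tuple-generation procedure of Section 2.2.2, under the simplifying assumptions of an unweighted adjacency list, node labels only, and no dead-end nodes; the function name is ours, not from the paper's code.

import random

def random_walk_tuple(adj, labels, l_min, l_max):
    # Sample one (s1, s2, s3) training tuple from a labeled graph.
    # adj: dict node -> list of neighbors; labels: dict node -> label.
    k, k2, k3 = (random.randint(l_min, l_max) for _ in range(3))
    v = random.choice(list(adj))           # randomly selected start node v_1
    walk = [v]
    for _ in range(k + k2 + k3 - 1):
        v = random.choice(adj[v])          # uniform random walk step along an edge
        walk.append(v)
    seq = [labels[u] for u in walk]        # the label sequence of the walk
    return seq[:k], seq[k:k + k2], seq[k + k2:]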
2.2.3 OBTAINING FINAL GRAPH REPRESENTATION

After the encoder-decoder has been trained, we can freeze the model and use the encoder to generate representations, $h_i$, for any arbitrary random walk sequence. Ultimately, however, we are interested in obtaining a representation for entire graphs, so we try several strategies for aggregating the encoder representations obtained from a set of independent random walks sampled from a given graph (a sketch of these aggregators follows Algorithm 1 below):

1. Single walk: In this approach we do not use several encoder representations. Instead, we train the model on relatively long (relative to the size of the graphs in the dataset) random walk sequences and use a single long walk over the graph to obtain its representation.
2. Average: We compute the component-wise average of the encoder representations of the sampled random walk sequences. This is then used as the graph representation.
3. Max: As in Kiela & Bottou (2014), we take the component-wise absolute maximum of all encoder representations.
4. Cluster: The encoder representations are first fed into a clustering technique like K-means (Hamerly & Elkan, 2003), and we use the cluster information to create a bag-of-clusters vector that serves as the graph's representation.

The procedure for obtaining the graph embeddings is summarized in Algorithm 1. The calculated graph embeddings can then be used with any off-the-shelf machine learning method.

Algorithm 1: Calculate graph embedding
Input: Training set D, sample size K, walk lengths l_min and l_max, aggregate sample size K', and aggregation method agg
Output: Graph embeddings
1 Generate a set of K|D| random walk tuples, S;
2 Train the encoder-decoder model using S;
3 foreach G in D do
4   Randomly select K' random walks;
5   Obtain encoder representations h_1, ..., h_{K'} from the random walks;
6   Compute the graph embedding with agg(h_1, ..., h_{K'});
7 end
8 Return the final graph embeddings;
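As a reference for the aggregation step of Algorithm 1, here is a minimal sketch of the three multi-walk strategies; the interpretation of "absolute maximum" as the component-wise maximum of absolute values, and the use of scikit-learn's KMeans, are our assumptions.

import numpy as np
from sklearn.cluster import KMeans

def aggregate(encodings, method="average", n_clusters=10):
    # Combine encoder outputs h_1 .. h_K' into one graph embedding (Sec. 2.2.3).
    H = np.asarray(encodings)                       # shape (K', d)
    if method == "average":
        return H.mean(axis=0)                       # component-wise mean
    if method == "max":
        return np.abs(H).max(axis=0)                # component-wise absolute maximum
    if method == "cluster":                         # bag-of-clusters histogram
        ids = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(H)
        return np.bincount(ids, minlength=n_clusters).astype(float)
    raise ValueError(method)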
3 EXPERIMENTS

3.1 DATASET

We evaluate our proposed method on the binary classification task using four chemical compound datasets (Kong et al., 2011). The datasets contain chemical compounds encoded in the SMILES format (Weininger, 1988); class labels indicate the anti-cancer properties (active or inactive) of each compound. We use the RDKit² package to obtain the molecular graphs from the SMILES data. We also use RDKit to obtain the labels for the nodes (atom type) and edges (bond type). Additionally, we used the number of attached hydrogens as a node feature and bond conjugation as an edge feature. Since the edges in the datasets we evaluate on are also labeled, the generated random walk sequences include edges. The datasets are all highly skewed, with far more negative samples than positive ones; we tested the methods on balanced datasets by selecting a random set of negative samples equal in size to the positive ones. Table 1 shows a summary of the datasets used. The average size of the molecular graphs in each of the four datasets is around 30.

Table 1: Summary of experimental datasets. "# pos" stands for the number of positive samples.

dataset   # graphs   # pos   details
NCI81     40700      1396    Colon Cancer
NCI83     27992      2276    Breast Cancer
NCI123    40152      3112    Leukemia
HIV       7781       266     HIV Anti-virus

²http://www.rdkit.org/

3.2 COMPARED METHODS

We compared our proposed approach with several state-of-the-art techniques. Since the method is a task-irrelevant way to obtain graph representations, the goal of the paper isn't necessarily to come up with a method that achieves the absolute best performance on the tested datasets, so we do not test against an exhaustive list of methods. Our primary objective is to see whether the method can potentially be used to learn useful graph embeddings, as a starting point for future investigation in the area. Since we are testing the method using molecular graph datasets, we chose to compare against techniques that have achieved state-of-the-art performance on this type of graph. We also compare against a method that learns node embeddings instead of an entire graph embedding. The tested methods are:

ECFP (Rogers & Hahn, 2010): Extended-connectivity circular fingerprints, which are a refinement of the Morgan algorithm (Morgan, 1965), use an iterative approach to encode information about substructures in a molecular graph in a fingerprint vector. In this method, a hash function is used to map the concatenated features from a neighborhood to an index in the fingerprint vector.

NeuralFPS (Duvenaud et al., 2015): Neural fingerprints replace the function that is used to compute a fingerprint vector with a differentiable neural network. This allows the method to learn from the data, prioritizing useful or discriminative features.

DeepWalk (Perozzi et al., 2014): The DeepWalk model learns representations for nodes in a single graph. However, we can also train the model using random walks from multiple graphs if the various graphs share the same kind of nodes. The model will then learn to generate similar representations for nodes that co-occur frequently across all the graphs. To generate the final embedding for a graph, we can simply apply average pooling to the vectors of all the nodes in the graph, which is a reasonable strategy to capture the overall profile of the graph.

Skip-graph: Our proposed method. We train an encoder-decoder model using random walks generated from the graphs and use the encoder's random walk representations to calculate the graph embedding.

To test ECFP and NeuralFPS, we used the library³ provided by Duvenaud et al. (2015). The size of the graph embedding was restricted to 164 for all methods, and a grid search was done to optimize the parameters of the various methods. For ECFP and NeuralFPS, we tested different values for the following parameters: fingerprint radius, $\ell_2$ regularization penalty, step size for the optimization, hidden layer dimension, and convolution layer dimension (only for NeuralFPS). All results reported are the average over 5-fold cross-validation. Since a neural network with a single hidden layer was used as the classifier in Duvenaud et al. (2015), we chose to use the same classifier for our model, and the grid search was performed over the same set of values for classifier-related parameters. In particular, for the neural network, we tested settings with the hidden layer size selected from {70, 100, 140} and the $\ell_2$ regularization chosen from {0.0001, 0.001, 0.01, 0.1}.

³https://github.com/HIPS/neural-fingerprint

3.3 CLASSIFICATION RESULTS

We show the classification accuracy of the different methods in Table 2. The proposed method achieves top performance on three of the four datasets we tested. It is a little surprising, however, to find that NeuralFPS performs slightly worse than ECFP.
This seems to suggest that it is overfitting the data, as NeuralFPS is a generalization of ECFP and should, in theory, be at least as good as ECFP. Also, we find that averaging the DeepWalk embeddings trained on random walks generated from the entire training set can be a simple yet effective way to generate a graph representation.

Table 2: Summary of experimental results.

method       HIV      NCI81    NCI83    NCI123
ECFP         68.30%   68.90%   62.06%   60.17%
NeuralFPS    67.48%   65.24%   59.91%   60.00%
DeepWalk     69.90%   68.00%   63.89%   64.43%
Skip-graph   72.77%   69.98%   63.80%   62.60%

3.4 PARAMETER STUDY

Figure 3: The performance of our proposed method under various settings. (a) Performance of the various aggregation methods. (b) Accuracy versus number of training epochs. (c) Accuracy versus number of samples used for aggregation.

We tested the performance of the method using the various aggregation methods. The performance was extremely poor when we trained the encoder-decoder model on long random walks and used a single long walk to generate the graph representation. The other three aggregation strategies yielded better results. Figure 3(a) shows the performance of these methods. Averaging the hidden vector representations seems to yield the best performance; calculating the component-wise maximum yielded the second-best results, while the method with the additional clustering pre-processing step performed slightly worse.

We plot the accuracy of the method over the number of training epochs in Figure 3(b). With the exception of the HIV dataset, which has relatively few samples, the results show a gradual increase in classification accuracy as the number of training epochs is increased. This is consistent with results in other work showing that, given a large amount of training data, recurrent neural models generally achieve better results when trained longer.

Figure 3(c) shows the accuracy on the classification task over different sample sizes $K'$, the number of samples aggregated to obtain the final graph representation. It is clear from the results that a better graph representation is obtained if we use more samples to calculate the final graph representation. This is quite intuitive, as a limited sample may not be representative and may fail to capture the properties of the graph well enough.

We tested several different values for $l_{min}$ and $l_{max}$, and the setting that performed best in our case was $l_{min} = 7$ and $l_{max} = 12$. This is a reasonable constraint on the random walk length, given that the average size of the molecular graphs was around 30. We used $K = 100$ when generating the set of random walks to train the encoder-decoder.

Figure 4: The learned embeddings for graphs in the HIV dataset. The 2-d representations were calculated using Kernel PCA (Mika et al., 1998).

3.5 VISUALIZATION OF GRAPH EMBEDDINGS

We show a scatterplot of the HIV graph embeddings learned by our model in Figure 4. In particular, we highlight two pairs of graphs that had very similar embeddings. We note that the first pair of graphs (the one on the right) are structurally similar, that is, they have a large sub-structure in common.
The graphs in the second pair each contain two similar substructures that are joined by segments that appear to be "functionally" similar.

3.6 USING AN ENSEMBLE OF CLASSIFIERS

Since it is possible to generate many different sets of random walks to train the encoder-decoder model, we tried training five encoders on five separate sets of random walks. An ensemble (Opitz & Maclin, 1999) of five classifiers is then created, with each classifier trained on the graph representations obtained from one of the five encoders. We compare the predictive accuracy of the ensemble against the single classifier when all other settings are fixed. We observed a slight improvement (around 1-3%) in the accuracy of the model. All the results reported above are for the single-classifier case.

4 CONCLUSION

We introduced an unsupervised method, based on the encoder-decoder model, for generating feature representations for graph-structured data. The model was evaluated on the binary classification task on several real-world datasets. The method outperformed several state-of-the-art algorithms on the tested datasets.

There are several interesting directions for future work. For instance, we can try training multiple encoders on random walks generated using very different neighborhood selection strategies. This may allow the different encoders to capture different properties of the graphs. We would also like to test the approach using different neural network architectures. Finally, it would be interesting to test the method on other types of heterogeneous information networks.

REFERENCES

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of EMNLP, pp. 1724-1734, 2014.

Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS Deep Learning Workshop, 2014.

David K. Duvenaud, Dougal Maclaurin, Jorge Aguilera-Iparraguirre, Rafael Gomez-Bombarelli, Timothy Hirzel, Alan Aspuru-Guzik, and Ryan P. Adams. Convolutional networks on graphs for learning molecular fingerprints. In Proceedings of NIPS, pp. 2224-2232, 2015.

Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of KDD, pp. 855-864, 2016.

Greg Hamerly and Charles Elkan. Learning the k in k-means. In Proceedings of NIPS, pp. 281-288, 2003.

Nal Kalchbrenner and Phil Blunsom. Recurrent continuous translation models. In Proceedings of EMNLP, pp. 1700-1709, 2013.

Douwe Kiela and Leon Bottou. Learning image embeddings using convolutional neural networks for improved multi-modal semantics. In Proceedings of EMNLP, pp. 36-45, 2014.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of ICLR, 2015.

Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Skip-thought vectors. In Proceedings of NIPS, pp. 3294-3302, 2015.

Xiangnan Kong, Wei Fan, and Philip S. Yu. Dual active feature and sample selection for graph classification. In Proceedings of KDD, pp. 654-662, 2011.

Sebastian Mika, Bernhard Scholkopf, Alex Smola, Klaus-Robert Muller, Matthias Scholz, and Gunnar Ratsch. Kernel PCA and de-noising in feature spaces. In Proceedings of NIPS, 1998.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean.
Efficient estimation of word representations in vector space. In Proceedings of ICLR, 2013.

Kurt T. Miller, Thomas L. Griffiths, and Michael I. Jordan. Nonparametric latent feature models for link prediction. In Proceedings of NIPS, pp. 1276-1284, 2009.

H. L. Morgan. The generation of a unique machine description for chemical structure. Journal of Chemical Documentation, 5:107-113, 1965.

David Opitz and Richard Maclin. Popular ensemble methods: An empirical study. Journal of Artificial Intelligence Research, 11:169-198, 1999.

Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. DeepWalk: Online learning of social representations. In Proceedings of KDD, pp. 701-710, 2014.

David Rogers and Mathew Hahn. Extended-connectivity fingerprints. Journal of Chemical Information and Modeling, 50:742-754, 2010.

Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. Computational capabilities of graph neural networks. IEEE Transactions on Neural Networks, 20:1938-1949, 2009.

Nino Shervashidze, Pascal Schweitzer, Erik Jan van Leeuwen, Kurt Mehlhorn, and Karsten Borgwardt. Weisfeiler-Lehman graph kernels. Journal of Machine Learning Research, 12:2539-2561, 2011.

Ramya K. Vinayak, Samet Oymak, and Babak Hassibi. Graph clustering with missing data: Convex algorithms and analysis. In Proceedings of NIPS, pp. 2996-3004, 2014.

S. V. N. Vishwanathan, Nicol N. Schraudolph, Risi Kondor, and Karsten M. Borgwardt. Graph kernels. JMLR, 11:1201-1242, 2010.

David Weininger. SMILES, a chemical language and information system. Journal of Chemical Information and Modeling, 28:31-36, 1988.

Boris Weisfeiler and A. Lehman. A reduction of a graph to a canonical form and an algebra arising during this reduction. Nauchno-Technicheskaya Informatsiya, 2:12-16, 1968.

Pinar Yanardag and S. V. N. Vishwanathan. Deep graph kernels. In Proceedings of KDD, pp. 1365-1374, 2015.
LEARNING TO DRAW SAMPLES: WITH APPLICATION TO AMORTIZED MLE FOR GENERATIVE ADVERSARIAL LEARNING

Dilin Wang, Qiang Liu
Department of Computer Science, Dartmouth College
{dilin.wang.gr, qiang.liu}@dartmouth.edu

ABSTRACT

We propose a simple algorithm to train stochastic neural networks to draw samples from given target distributions for probabilistic inference. Our method is based on iteratively adjusting the neural network parameters so that the output changes along a Stein variational gradient (Liu & Wang, 2016) that maximally decreases the KL divergence with the target distribution. Our method works for any target distribution specified by its unnormalized density function, and can train any black-box architecture that is differentiable in terms of the parameters we want to adapt. As an application of our method, we propose an amortized MLE algorithm for training deep energy models, where a neural sampler is adaptively trained to approximate the likelihood function. Our method mimics an adversarial game between the deep energy model and the neural sampler, and obtains realistic-looking images competitive with the state-of-the-art results.

1 INTRODUCTION

Modern machine learning increasingly relies on highly complex probabilistic models to reason about uncertainty. A key computational challenge is to develop efficient inference techniques to approximate, or draw samples from, complex distributions. Currently, most inference methods, including MCMC and variational inference, are hand-designed by researchers or domain experts. This makes it difficult to fully optimize the choice of different methods and their parameters, and to exploit the structures in the problems of interest in an automatic way. A hand-designed algorithm can also be inefficient when it is required to make fast inference repeatedly on a large number of different distributions with similar structures. This happens, for example, when we need to reason about a number of observed datasets in settings like online learning, or need fast inference as an inner loop for other algorithms such as maximum likelihood training. Therefore, it is highly desirable to develop more intelligent probabilistic inference systems that can adaptively improve their own performance to fully optimize computational efficiency, and generalize to new tasks with similar structures.

Specifically, denote by $p(x)$ a probability density of interest specified up to the normalization constant, which we want to draw samples from, or marginalize to estimate its normalization constant. We want to study the following problem:

Problem 1. Given a distribution with density $p(x)$ and a function $f(\eta; \xi)$ with parameter $\eta$ and random input $\xi$, for which we only have access to draws of the random input $\xi$ (without knowing its true distribution $q_0$), and the output values of $f(\eta; \xi)$ and its derivative $\partial_\eta f(\eta; \xi)$ given $\eta$ and $\xi$.
We want to find an optimal parameter $\eta$ so that the density of the random output variable $x = f(\eta; \xi)$ with $\xi \sim q_0$ closely matches the target density $p(x)$.

Because we have no assumption on the structure of $f(\eta; \xi)$ and the distribution of the random input, we cannot directly calculate the actual distribution of the output random variable $x = f(\eta; \xi)$; this makes it difficult to solve Problem 1 using traditional variational inference (VI) methods. Recall that traditional VI approximates $p(x)$ using simple proposal distributions $q_\eta(x)$ indexed by a parameter $\eta$, and finds the optimal $\eta$ by minimizing the KL divergence $\mathrm{KL}(q_\eta \| p) = \mathbb{E}_{q_\eta}[\log(q_\eta / p)]$, which requires calculating the density $q_\eta(x)$ or its derivative; by our assumption, this is not computable (even when Monte Carlo gradient estimation and the reparametrization trick (Kingma & Welling, 2013) are applied).

In fact, it is this requirement of calculating $q_\eta(x)$ that has been the major constraint in the design of state-of-the-art variational inference methods with rich approximation families; the recent successful algorithms (e.g., Rezende & Mohamed, 2015b; Tran et al., 2015; Ranganath et al., 2015, to name only a few) have to handcraft special variational families to ensure the computational tractability of $q_\eta(x)$ while simultaneously obtaining high approximation accuracy, which requires substantial mathematical insight and research effort. Methods that do not require explicitly calculating $q_\eta(x)$ can significantly simplify the design and application of VI methods, allowing practical users to focus more on choosing proposals that work best with their specific tasks. We will use the term wild variational inference to refer to new variants of variational methods that require no tractability of $q_\eta(x)$, to distinguish them from black-box variational inference (Ranganath et al., 2014), which refers to methods that work for generic target distributions $p(x)$ without significant model-by-model consideration (but still require calculating the proposal density $q_\eta(x)$).

A similar problem also appears in importance sampling (IS), where one must calculate the IS proposal density $q(x)$ in order to compute the importance weight $w(x) = p(x)/q(x)$. However, there exist methods that use no explicit information about $q(x)$, which, seemingly counter-intuitively, give better asymptotic variance or convergence rates than the typical IS that uses the proposal information (e.g., Liu & Lee, 2016; Briol et al., 2015; Henmi et al., 2007; Delyon & Portier, 2014). Discussion of this phenomenon dates back to O'Hagan (1987), who argued that "Monte Carlo (that uses the proposal information) is fundamentally unsound" for violating the Likelihood Principle, and developed Bayesian Monte Carlo (O'Hagan, 1991) as an example that uses no information on $q(x)$, yet gives a better convergence rate than the typical Monte Carlo $O(n^{-1/2})$ rate (Briol et al., 2015). Despite the substantial difference between IS and VI, these results intuitively suggest the possibility of developing efficient variational inference without calculating $q(x)$ explicitly.

In this work, we propose a simple algorithm for Problem 1 that iteratively adjusts the network parameter $\eta$ to make its output random variable change along a Stein variational gradient direction (SVGD) (Liu & Wang, 2016) that optimally decreases its KL divergence with the target distribution.
Critically, the SVGD gradient includes a repulsive term to ensure that the generated samples have the right amount of variability to match $p(x)$. In this way, we "amortize SVGD" using a neural network, which makes it possible for our method to adaptively improve its own efficiency by leveraging past experience, especially in cases when it needs to perform fast inference repeatedly on a large number of similar tasks. As an application, we use our method to amortize the MLE training of deep energy models, where a neural sampler is adaptively trained to approximate the likelihood function. Our method, which we call SteinGAN, mimics an adversarial game between the energy model and the neural sampler, and obtains realistic-looking images competitive with the state-of-the-art results produced by generative adversarial networks (GAN) (Goodfellow et al., 2014; Radford et al., 2015).

Related Work. The idea of amortized inference (Gershman & Goodman, 2014) has recently been applied in various domains of probabilistic reasoning, including both amortized variational inference (e.g., Kingma & Welling, 2013; Rezende & Mohamed, 2015a) and data-driven proposals for (sequential) Monte Carlo methods (e.g., Paige & Wood, 2016), to name only a few. Most of these methods, however, require explicitly calculating $q(x)$ (or its gradient). One exception is a very recent paper (Ranganath et al., 2016) that avoids calculating $q(x)$ using an idea related to Stein discrepancy (Gorham & Mackey, 2015; Liu et al., 2016; Oates et al., 2014; Chwialkowski et al., 2016). There has also been rising interest recently in the similar problem of "learning to optimize" (e.g., Andrychowicz et al., 2016; Daniel et al., 2016; Li & Malik, 2016), which is technically easier than the more general problem of "learning to sample". In fact, we show that our algorithm reduces to "learning to optimize" when only one particle is used in SVGD.

Generative adversarial networks (GAN) and their variants have recently achieved remarkable success at generating realistic-looking images (Goodfellow et al., 2014; Salimans et al., 2016; Radford et al., 2015; Li et al., 2015; Dziugaite et al., 2015; Nowozin et al., 2016). All these methods are set up to train latent variable models (the generator) with the assistance of the discriminator. Our SteinGAN instead performs traditional MLE training for a deep energy model, with the help of a neural sampler that learns to draw samples from the energy model to approximate the likelihood function; this admits an adversarial interpretation: we can view the neural sampler as a generator that attempts to fool the deep energy model, which in turn serves as a discriminator that distinguishes the real samples from the simulated samples given by the neural sampler. This idea of training MLE with neural samplers was first discussed by Kim & Bengio (2016); one of the key differences is that the neural sampler in Kim & Bengio (2016) is trained with the help of a heuristic diversity regularizer based on batch normalization, while SVGD enforces the diversity in a more principled way. Another method by Zhao et al. (2016) also trains an energy score to distinguish real and simulated samples, but within a non-probabilistic framework (see Section 5 for more discussion).
Other, more traditional approaches for training energy-based models (e.g., Ngiam et al., 2011; Xie et al., 2016) are often based on variants of MCMC-MLE or contrastive divergence (Geyer, 1991; Hinton, 2002; Tieleman, 2008), and have difficulty generating realistic-looking images from scratch.

2 STEIN VARIATIONAL GRADIENT DESCENT (SVGD)

Stein variational gradient descent (SVGD) (Liu & Wang, 2016) is a general-purpose Bayesian inference algorithm motivated by Stein's method (Stein, 1972; Barbour & Chen, 2005) and kernelized Stein discrepancy (Liu et al., 2016; Chwialkowski et al., 2016; Oates et al., 2014). It uses an efficient deterministic gradient-based update to iteratively evolve a set of particles $\{x_i\}_{i=1}^n$ to minimize the KL divergence with the target distribution. SVGD has a simple form that reduces to the typical gradient ascent for maximizing $\log p$ when using only one particle ($n = 1$), and hence can be easily combined with the successful tricks for gradient optimization, including stochastic gradients, adaptive learning rates (such as Adagrad), and momentum.

To give a quick overview of the main idea of SVGD, let $p(x)$ be a positive density function on $\mathbb{R}^d$ which we want to approximate with a set of particles $\{x_i\}_{i=1}^n$. SVGD initializes the particles by sampling from some simple distribution $q_0$, and updates the particles iteratively by

$x_i \leftarrow x_i + \epsilon\, \phi(x_i), \quad \forall i = 1, \ldots, n,$  (1)

where $\epsilon$ is a step size, and $\phi(x)$ is a "particle gradient direction" chosen to maximally decrease the KL divergence between the distribution of the particles and the target distribution, in the sense that

$\phi^* = \arg\max_{\phi \in \mathcal{F}} \left\{ -\frac{d}{d\epsilon} \mathrm{KL}(q_{[\epsilon\phi]} \,\|\, p) \Big|_{\epsilon=0} \right\},$  (2)

where $q_{[\epsilon\phi]}$ denotes the density of the updated particle $x' = x + \epsilon\, \phi(x)$ when the density of the original particle $x$ is $q$, and $\mathcal{F}$ is the set of perturbation directions that we optimize over. We choose $\mathcal{F}$ to be the unit ball of a vector-valued reproducing kernel Hilbert space (RKHS) $\mathcal{H}^d = \mathcal{H} \times \cdots \times \mathcal{H}$, with each $\mathcal{H}$ associated with a positive definite kernel $k(x, x')$; note that $\mathcal{H}$ is dense in the space of continuous functions for universal kernels such as the Gaussian RBF kernel.

Critically, the gradient of the KL divergence in (2) equals a simple linear functional of $\phi$, allowing us to obtain a closed-form solution for the optimal $\phi$. Liu & Wang (2016) showed that

$-\frac{d}{d\epsilon} \mathrm{KL}(q_{[\epsilon\phi]} \,\|\, p) \Big|_{\epsilon=0} = \mathbb{E}_{x \sim q}[\mathcal{T}_p \phi(x)],$  (3)

with

$\mathcal{T}_p \phi(x) = \nabla_x \log p(x)^\top \phi(x) + \nabla_x \cdot \phi(x),$  (4)

where $\mathcal{T}_p$ is considered a linear operator acting on the function $\phi$ and is called the Stein operator, in connection with Stein's identity, which shows that the RHS of (3) equals zero if $p = q$:

$\mathbb{E}_p[\mathcal{T}_p \phi] = \mathbb{E}_p[\nabla_x \log p^\top \phi + \nabla_x \cdot \phi] = 0.$  (5)

This is a result of integration by parts, assuming the value of $p(x)\phi(x)$ vanishes on the boundary of the integration domain.

Therefore, the optimization in (2) reduces to

$\mathbb{D}(q \,\|\, p) \overset{\mathrm{def}}{=} \max_{\phi \in \mathcal{H}^d} \left\{ \mathbb{E}_{x \sim q}[\mathcal{T}_p \phi(x)] \;\; \text{s.t.} \;\; \|\phi\|_{\mathcal{H}^d} \leq 1 \right\},$  (6)

where $\mathbb{D}(q \,\|\, p)$ is the kernelized Stein discrepancy defined in Liu et al. (2016), which equals zero if and only if $p = q$ under mild regularity conditions. Importantly, the optimal solution of (6) yields a closed form

$\phi^*(x') \propto \mathbb{E}_{x \sim q}[\nabla_x \log p(x)\, k(x, x') + \nabla_x k(x, x')].$

Algorithm 1: Amortized SVGD for Problem 1
Set batch size m, step-size scheme $\{\epsilon_t\}$ and kernel $k(x, x')$. Initialize $\eta^0$.
for iteration t do
  Draw random $\{\xi_i\}_{i=1}^m$, calculate $x_i = f(\eta^t; \xi_i)$, and the Stein variational gradient $\Delta x_i$ in (7).
  Update the parameter $\eta$ using (8), (9), or (10).
end for

By approximating the expectation under $q$ with the empirical average of the current particles $\{x_i\}_{i=1}^n$, SVGD admits a simple form of update:

$x_i \leftarrow x_i + \epsilon\, \Delta x_i, \quad \forall i = 1, \ldots, n, \quad \text{where} \quad \Delta x_i = \hat{\mathbb{E}}_{x \in \{x_j\}_{j=1}^n}[\nabla_x \log p(x)\, k(x, x_i) + \nabla_x k(x, x_i)],$  (7)

and $\hat{\mathbb{E}}_{x \sim \{x_j\}_{j=1}^n}[f(x)] = \sum_j f(x_j)/n$.
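As a concrete reference, here is a minimal NumPy sketch of the particle update (7) with an RBF kernel; the fixed bandwidth h and the function names are illustrative choices and not taken from the authors' code (a common practical choice is the median-distance heuristic for h).

import numpy as np

def rbf_kernel(X, h):
    # K[i, j] = exp(-||x_i - x_j||^2 / h); gradK[i, j] = d/dx_i k(x_i, x_j).
    diffs = X[:, None, :] - X[None, :, :]            # shape (n, n, d)
    sq = (diffs ** 2).sum(-1)
    K = np.exp(-sq / h)
    gradK = -2.0 / h * diffs * K[:, :, None]
    return K, gradK

def svgd_step(X, grad_logp, eps, h=1.0):
    # One SVGD update of Eq. (7): X is (n, d), grad_logp maps (n, d) -> (n, d).
    K, gradK = rbf_kernel(X, h)
    n = X.shape[0]
    # driving term: sum_j k(x_j, x_i) grad log p(x_j);
    # repulsive term: sum_j d/dx_j k(x_j, x_i), which pushes particles apart
    phi = (K.T @ grad_logp(X) + gradK.sum(axis=0)) / n
    return X + eps * phi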
The two terms in xiplay two different roles: the termwith the gradient rxlogp(x)drives the particles toward the high probability regions of p(x),while the term with rxk(x;xi)serves as a repulsive force to encourage diversity; to see this, con-sider a stationary kernel k(x;x0) =k(xx0), then the second term reduces to ^Exrxk(x;xi) =^Exrxik(x;xi), which can be treated as the negative gradient for minimizing the average similar-ity^Exk(x;xi)in terms ofxi. Overall, this particle update produces diverse points for distributionalapproximation and uncertainty assessment, and also has an interesting “momentum” effect in whichthe particles move collaboratively to escape the local optima.It is easy to see from (7) that xireduces to the typical gradient rxlogp(xi)when there is only asingle particle ( n= 1) andrxk(x;xi)whenx=xi, in which case SVGD reduces to the standardgradient ascent for maximizing logp(x)(i.e., maximum a posteriori (MAP)).3 A MORTIZED SVGD: T OWARDS AN AUTOMATIC NEURAL SAMPLERSVGD and other particle-based methods become inefficient when we need to repeatedly infer a largenumber different target distributions for multiple tasks, including online learning or inner loops ofother algorithms, because they can not improve based on the experience from the past tasks, and mayrequire a large memory to restore a large number of particles. We propose to “amortize SVGD” bytraining a neural network f(;)to mimic the SVGD dynamics, yielding a solution for Problem 1.One straightforward way to achieve this is to run SVGD to convergence and train f(;)to fit theSVGD results. This, however, requires to run many epochs of fully converged SVGD and can beslow in practice. We instead propose an incremental approach in whichis iteratively adjusted sothat the network outputs x=f(;)changes along the Stein variational gradient direction in (7) inorder to decrease the KL divergence between the target and approximation distribution.To be specific, denote by tthe estimated parameter at the t-th iteration of our method; each iterationof our method draws a batch of random inputs figmi=1and calculate their corresponding outputxi=f(;i)based ont; heremis a mini-batch size (e.g., m= 100 ). The Stein variationalgradient xiin (7) would then ensure that x0i=xi+xiforms a better approximation of thetarget distribution p. Therefore, we should adjust to make its output matches fx0ig, that is, wewant to update byt+1 arg minmXi=1jjf(;i)x0ijj22; wherex0i=xi+xi: (8)See Algorithm 1 for the summary of this procedure. If we assume is very small, then (8) reducesto a least square optimization. To see this, note that f(;i)f(t;i) +@f(t;i)(t)byTaylor expansion. Since xi=f(t;i), we havejjf(;i)x0ijj22jj@f(t;i)(t)xijj22:As a result, (8) reduces to the following least square optimization:t+1 t+t;where t= arg minmXi=1jj@f(t;i)xijj22: (9)4Under review as a conference paper at ICLR 2017Update (9) can still be computationally expensive because of the matrix inversion. We can derive afurther approximation by performing only one step of gradient descent of (8) (or (9)), which givest+1 t+mXi=1@f(t;i)xi: (10)Although update (10) is derived as an approximation of (8)-(9), it is computationally faster and wefind it works very effectively in practice; this is because when is small, one step of gradient updatecan be sufficiently close to the optimum.Update (10) also has a simple and intuitive form: (10) can be thought as a “chain rule” that back-propagates the Stein variational gradient to the network parameter . 
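Update (10) is straightforward to implement with automatic differentiation: treat the Stein variational gradients Δx_i as fixed targets and backpropagate them through f. A minimal PyTorch sketch (assuming `sampler` implements f(η; ·) and `stein_grad` computes Eq. (7); names are illustrative):

```python
import torch

def amortized_svgd_step(sampler, optimizer, xi, stein_grad):
    """One iteration of update (10) on the sampler parameters eta."""
    x = sampler(xi)                       # x_i = f(eta; xi_i) for a batch of random inputs
    dx = stein_grad(x.detach()).detach()  # Stein variational gradient of Eq. (7), held fixed
    # Surrogate loss whose gradient w.r.t. eta is -sum_i (df/d eta)^T dx_i, so a plain
    # SGD step with learning rate eps implements eta <- eta + eps * sum_i (df/d eta)^T dx_i.
    loss = -(x * dx).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Replacing `stein_grad` by the plain score ∇_x log p recovers the "learning to optimize" special case discussed next in the text.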
This can be justified byconsidering the special case when we use only a single particle (n= 1) in which case xiin(7) reduces to the typical gradient rxlogp(xi)oflogp(x), and update (10) reduces to the typicalgradient ascent for maximizingE[logp(f(;))];in which case f(;)is trained to maximize logp(x)(that is, learning to optimize ), instead oflearning to draw samples from pfor which it is crucial to use Stein variational gradient xitodiversify the network outputs.Update (10) also has a close connection with the typical variational inference with the reparameter-ization trick (Kingma & Welling, 2013). Let q(x)be the density function of x=f(;),q0.Using the reparameterization trick, the gradient of KL(qjjp)w.r.t.can be shown to berKL(qjjp) =Eq0[@f(;)(rxlogp(x)rxlogq(x))]:Withfigi.i.d. drawn from q0andxi=f(;i);8i, the standard stochastic gradient descent forminimizing the KL divergence ist+1 t+Xi@f(t;i)~xi;where ~xi=rxlogp(xi)rxlogq(xi): (11)This is similar with (10), but replaces the Stein gradient xidefined in (7) with ~xi. The advantageof using xiis that it does not require to explicitly calculate q, and hence admits a solution to Prob-lem 1 in which qis not computable for complex network f(;)and unknown input distributionq0. Further insights can be obtained by noting thatxiExq[rxlogp(x)k(x;xi) +rxk(x;xi)]=Exq[(rxlogp(x)rxlogq(x))k(x;xi)] (12)=Exq[(~x)k(x;xi)];where (12) is obtained by using Stein’s identity (5). Therefore, xican be treated as a kernelsmoothed version of ~xi.4 A MORTIZED MLE FOR GENERATIVE ADVERSARIAL TRAININGOur method allows us to design efficient approximate sampling methods adaptively and automat-ically, and enables a host of novel applications. In this paper, we apply it in an amortized MLEmethod for training deep generative models.Maximum likelihood estimator (MLE) provides a fundamental approach for learning probabilisticmodels from data, but can be computationally prohibitive on distributions for which drawing sam-ples or computing likelihood is intractable due to the normalization constant. Traditional methodssuch as MCMC-MLE use hand-designed methods (e.g., MCMC) to approximate the intractable like-lihood function but do not work efficiently in practice. We propose to adaptively train a generativeneural network to draw samples from the distribution during MLE training, which not only providescomputational advantage, and also allows us to generate realistic-looking images competitive with,or better than the state-of-the-art generative adversarial networks (GAN) (Goodfellow et al., 2014;Radford et al., 2015) (see Figure 1-5).5Under review as a conference paper at ICLR 2017Algorithm 2 Amortized MLE as Generative Adversarial LearningGoal: MLE training for energy model p(xj) = exp((x;)()).Initializeand.foriterationtdoUpdating:Drawiq0,xi=f(;i); updateusing (8), (9) or (10) with p(x) =p(xj).Repeat several times when needed.Updating:Draw a mini-batch of observed data fxi;obsg, and simulated data xi=f(;i),updateby (13).end forTo be specific, denote by fxi;obsga set of observed data. We consider the maximum likelihoodtraining of energy-based models of formp(xj) = exp((x;)());() = logZexp((x;))dx;where(x;)is an energy function for xindexed by parameter and()is the log-normalizationconstant. The log-likelihood function of isL() =1nnXi=1logp(xi;obsj);whose gradient isrL() =^Eobs[@(x;)] +E[@(x;)];where ^Eobs[]andE[]denote the empirical average on the observed data fxi;obsgand the expecta-tion under model p(xj), respectively. 
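Written with the signs made explicit (using p(x|θ) = exp(φ(x; θ) − Φ(θ)) and the identity ∇_θ Φ(θ) = E_θ[∂_θ φ(x; θ)]), the log-likelihood gradient is the familiar contrast of a data term and a model term:

$$
\nabla_\theta L(\theta) \;=\; \hat{\mathbb{E}}_{\mathrm{obs}}\big[\partial_\theta \phi(x;\theta)\big] \;-\; \mathbb{E}_{\theta}\big[\partial_\theta \phi(x;\theta)\big],
$$

that is, the update raises φ on observed examples while lowering it on samples drawn from the current model.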
The key computational difficulty is to approximate the modelexpectation E[]. To address this problem, we use a generative neural network x=f(;)trainedby Algorithm 1 to approximately sample from p(xj), yielding a gradient update for of form +^rL(); ^rL() =^Eobs[@(x;)] +^E[@(x;)]; (13)where ^Edenotes the empirical average on fxigwherexi=f(;i),figq0. Asis updatedby gradient ascent, is successively updated via Algorithm 1 to followp(xj). See Algorithm 2.We call our method SteinGAN , because it can be intuitively interpreted as an adversarial game be-tween the generative network f(;)and the energy model p(xj)which serves as a discriminator:The MLE gradient update of p(xj)effectively decreases the energy of the training data and in-creases the energy of the simulated data from f(;), while the SVGD update of f(;)decreasesthe energy of the simulated data to fit better with p(xj). Compared with the traditional methodsbased on MCMC-MLE or contrastive divergence, we amortize the sampler as we train , which givesmuch faster speed and simultaneously provides a high quality generative neural network that cangenerate realistic-looking images; see Kim & Bengio (2016) for a similar idea and discussions.5 E MPIRICAL RESULTSWe evaluated our SteinGAN on four datasets, MNIST, CIFAR-10, CelebA (Liu et al., 2015), andLarge-scale Scene Understanding (LSUN) (Yu et al., 2015), on which we find our method tends togenerate realistic-looking images competitive with, sometimes better than DCGAN (Radford et al.,2015) (see Figure 2 - Figure 3). Our code is available at https://github.com/DartML/SteinGAN .Model Setup In order to generate realistic-looking images, we define our energy model based onan autoencoder:p(xj)/exp(jjxD(E(x;);)jj); (14)wherexdenotes the image. This choice is motivated by Energy-based GAN (Zhao et al., 2016) inwhich the autoencoder loss is used as a discriminator but without a probabilistic interpretation. We6Under review as a conference paper at ICLR 2017assumef(;)to be a neural network whose input is a100-dimensional random vector drawn byUniform([1;1]). The positive definite kernel in SVGD is defined by the RBF kernel on the hiddenrepresentation obtained by the autoencoder in (14), that is,k(x;x0) = exp(1h2jjE(x;)E(x0;)jj2):As it is discussed in Section 3, the kernel provides a repulsive force to produce an amount of variabil-ity required for generating samples from p(x). This is similar to the heuristic repelling regularizerin Zhao et al. (2016) and the batch normalization based regularizer in Kim & Bengio (2016), but isderived in a more principled way. We take the bandwidth to be h= 0:5med, where med is themedian of the pairwise distances between E(x)on the image simulated by f(;). This makes thekernel change adaptively based on both (through E(x;)) and(through bandwidth h).Some datasets include both images xand their associated discrete labels y. In these cases, we traina joint energy model on (x;y)to capture both the inner structure of the images and its predictiverelation with the label, allowing us to simulate images with a control on which category it belongsto. Our joint energy model is defined to bep(x;yj)/expjjxD(E(x;);)jjmax[m; (y;E(x;))]; (15)where(;)is the cross entropy loss function of a fully connected output layer. 
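Before returning to the conditional case, here is a minimal PyTorch sketch of three unconditional ingredients defined above: the autoencoder energy (14), the adaptive encoder-feature kernel, and the energy-model side of update (13). `enc` and `dec` are placeholder encoder/decoder networks, and the sampler side reuses the amortized SVGD step sketched in Section 3; this is an illustrative reconstruction, not the released code:

```python
import torch

def phi(x, enc, dec):
    # Eq. (14): phi(x; theta) = -||x - D(E(x; theta); theta)||, so p(x|theta) ∝ exp(phi).
    return -(x - dec(enc(x))).flatten(1).norm(dim=1)

def encoder_rbf(x, y, enc):
    # k(x, x') = exp(-||E(x) - E(x')||^2 / h^2), adaptive bandwidth h = 0.5 * med,
    # where med is the median pairwise distance between encoder features.
    fx, fy = enc(x).flatten(1), enc(y).flatten(1)
    d = torch.cdist(fx, fy)
    h = 0.5 * d.detach().median() + 1e-8
    return torch.exp(-d.pow(2) / h.pow(2))

def energy_mle_step(enc, dec, opt_theta, x_obs, x_sim):
    """Energy-model side of Algorithm 2: ascend the approximate gradient (13),
    raising phi on observed data and lowering it on (detached) simulated data."""
    loss = -phi(x_obs, enc, dec).mean() + phi(x_sim.detach(), enc, dec).mean()
    opt_theta.zero_grad()
    loss.backward()
    opt_theta.step()
```

Because the bandwidth depends on the encoder features, the kernel changes adaptively with θ exactly as described above.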
In this case, ourneural sampler first draws a label yrandomly according to the empirical counts in the dataset, andthen passesyinto a neural network together with a 1001random vector to generate image x.This allows us to generate images for particular categories by controlling the value of input y.Stabilization In practice, we find it is useful to modify (13) to be ^Eobs[r(x;)] +(1)^E[r(x;)]: (16)whereis a discount factor (which we take to be = 0:7). This is equivalent to maximizing aregularized likelihood:maxflogp(xj) +()gwhere ()is the log-partition function; note that exp(())is a conjugate prior of p(xj).We initialize the weights of both the generator and discriminator from Gaussian distributionN(0;0:02), and train them using Adam (Kingma & Ba, 2014) with a learning rate of 0:001forthe generator and 0:0001 for the energy model (the discriminator). In order to keep the generatorand discriminator approximately aligned during training, we speed up the MLE update (16) of thediscriminator (by increasing its learning rate to 0:0005 ) when the energy of the real data batch islarger than the energy of the simulated images, while slow down it (by freezing the MLE updateofin (16)) if the magnitude of the energy difference between the real images and the simulatedimages goes above a threshold of 0.5. We used the bag of architecture guidelines for stable trainingsuggested in DCGAN (Radford et al., 2015).Discussion The MNIST dataset has a training set of 60;000examples. Both DCGAN and ourmodel produce high quality images, both visually indistinguishable from real images; see figure 1.CIFAR-10 is very diverse, and with only 50,000 training examples. Figure 2 shows examples ofsimulated images by DCGAN and SteinGAN generated conditional on each category, which lookequally well visually. We also provide quantitively evaluation using a recently proposed inceptionscore (Salimans et al., 2016), as well as the classification accuracy when training ResNet using50;000simulated images as train sets, evaluated on a separate held-out testing set never seen by theGAN models. Besides DCGAN and SteinGAN, we also evaluate another simple baseline obtainedby subsampling 500 real images from the training set and duplicating them 100 times. We observethat these scores capture rather different perspectives of image generation: The inception scorefavors images that look realistic individually and have uniformly distributed labels; as a result, theinception score of the duplicated 500 images is almost as high as the real training set. We find thatthe inception score of SteinGAN is comparable, or slightly lower than that of DCGAN. On the otherhand, the classification accuracy measures the amount information captured in the simulated imagesets; we find that SteinGAN achieves the highest classification accuracy, suggesting that it capturesmore information in the training set.Figure 3 and 4 visualize the results on CelebA (with more than 200k face images) and LSUN (withnearly 3M bedroom images), respectively. We cropped and resized both dataset images into 6464.7Under review as a conference paper at ICLR 2017DCGAN SteinGANFigure 1: MNIST images generated by DCGAN and our SteinGAN. We use the joint model in (15)to allow us to generate images for each digit. 
We set m = 0.2.

Inception Score:

  Inception model trained on | Real Training Set | 500 Duplicate | DCGAN | SteinGAN
  ImageNet                   | 11.237            | 11.100        | 6.581 | 6.351
  CIFAR-10                   |  9.848            |  9.807        | 7.368 | 7.428

Testing Accuracy:

  Real Training Set | 500 Duplicate | DCGAN  | SteinGAN
  92.58%            | 44.96%        | 44.78% | 63.81%

Figure 2: Results on CIFAR-10. "500 Duplicate" denotes 500 images randomly subsampled from the training set, each duplicated 100 times. Upper: images simulated by DCGAN and SteinGAN (based on joint model (15)) conditional on each category (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck). Middle: inception scores for samples generated by various methods (all with 50,000 images) on inception models trained on ImageNet and CIFAR-10, respectively. Lower: testing accuracy on the real testing set when using 50,000 simulated images to train ResNets for classification. SteinGAN achieves higher testing accuracy than DCGAN. We set m = 1 and γ = 0.8.

6 CONCLUSION

We propose a new method to train neural samplers for given distributions, together with a new SteinGAN method for generative adversarial training. Future directions involve more applications and theoretical understandings for training neural samplers.

Figure 3: Results on CelebA. Upper: images generated by DCGAN and our SteinGAN. Lower: images generated by SteinGAN when performing a random walk ξ ← ξ + 0.01 · Uniform([−1, 1]) on the random input ξ; we can see that a man with glasses and black hair gradually changes to a woman with blonde hair. See Figure 5 for more examples.

Figure 4: Images generated by DCGAN and our SteinGAN on LSUN.

REFERENCES

Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474, 2016.

Andrew D Barbour and Louis Hsiao Yun Chen. An introduction to Stein's method, volume 4. World Scientific, 2005.

François-Xavier Briol, Chris Oates, Mark Girolami, Michael A Osborne, Dino Sejdinovic, et al. Probabilistic integration: A role for statisticians in numerical analysis? arXiv preprint arXiv:1512.00933, 2015.

Kacper Chwialkowski, Heiko Strathmann, and Arthur Gretton. A kernel test of goodness of fit. In Proceedings of the International Conference on Machine Learning (ICML), 2016.

Christian Daniel, Jonathan Taylor, and Sebastian Nowozin. Learning step size controllers for robust neural network training. In Thirtieth AAAI Conference on Artificial Intelligence, 2016.

Bernard Delyon and François Portier. Integral approximation by kernel smoothing. arXiv preprint arXiv:1409.0733, 2014.

Gintare Karolina Dziugaite, Daniel M. Roy, and Zoubin Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. In Conference on Uncertainty in Artificial Intelligence (UAI), 2015.

Samuel J Gershman and Noah D Goodman. Amortized inference in probabilistic reasoning. In Proceedings of the 36th Annual Conference of the Cognitive Science Society, 2014.

Charles J. Geyer. Markov chain Monte Carlo maximum likelihood. In Computing Science and Statistics: Proc. 23rd Symp. Interface, pp. 156–163, 1991.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.

Jack Gorham and Lester Mackey. Measuring sample quality with Stein's method.
In Advances in NeuralInformation Processing Systems (NIPS) , pp. 226–234, 2015.Masayuki Henmi, Ryo Yoshida, and Shinto Eguchi. Importance sampling via the estimated sampler.Biometrika , 94(4):985–991, 2007.Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural computation ,14(8):1771–1800, 2002.Taesup Kim and Yoshua Bengio. Deep directed generative models with energy-based probability estimation.arXiv preprint arXiv:1606.03439 , 2016.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprintarXiv:1412.6980 , 2014.Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. In Proceedings of the InternationalConference on Learning Representations (ICLR) , 2013.Ke Li and Jitendra Malik. Learning to optimize. arXiv preprint arXiv:1606.01885 , 2016.Yujia Li, Kevin Swersky, and Rich Zemel. Generative moment matching networks. In Proceedings of theInternational Conference on Machine Learning (ICML) , 2015.Qiang Liu and Jason D. Lee. Black-box importance sampling. https://arxiv.org/abs/1610.05247 , 2016.Qiang Liu and Dilin Wang. Stein variational gradient descent: A general purpose bayesian inference algorithm.arXiv preprint arXiv:1608.04471 , 2016.Qiang Liu, Jason D Lee, and Michael I Jordan. A kernelized Stein discrepancy for goodness-of-fit tests. InProceedings of the International Conference on Machine Learning (ICML) , 2016.Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceed-ings of International Conference on Computer Vision (ICCV) , 2015.10Under review as a conference paper at ICLR 2017Jiquan Ngiam, Zhenghao Chen, Pang W Koh, and Andrew Y Ng. Learning deep energy models. In Proceedingsof the International Conference on Machine Learning (ICML) , pp. 1105–1112, 2011.Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural samplers usingvariational divergence minimization. arXiv preprint arXiv:1606.00709 , 2016.Chris J Oates, Mark Girolami, and Nicolas Chopin. Control functionals for Monte Carlo integration. Journalof the Royal Statistical Society, Series B , 2014.Anthony O’Hagan. Monte Carlo is fundamentally unsound. Journal of the Royal Statistical Society. Series D(The Statistician) , 36(2/3):247–249, 1987.Anthony O’Hagan. Bayes–hermite quadrature. Journal of statistical planning and inference , 29(3):245–260,1991.Brooks Paige and Frank Wood. Inference networks for sequential monte carlo in graphical models. arXivpreprint arXiv:1602.06701 , 2016.Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutionalgenerative adversarial networks. arXiv preprint arXiv:1511.06434 , 2015.R. Ranganath, J. Altosaar, D. Tran, and D.M. Blei. Operator variational inference. 2016.Rajesh Ranganath, Sean Gerrish, and David M Blei. Black box variational inference. In Proceedings of theInternational Conference on Artificial Intelligence and Statistics (AISTATS) , 2014.Rajesh Ranganath, Dustin Tran, and David M Blei. Hierarchical variational models. arXiv preprintarXiv:1511.02386 , 2015.Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. In Proceedingsof the International Conference on Machine Learning (ICML) , 2015a.Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXiv preprintarXiv:1505.05770 , 2015b.Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improvedtechniques for training gans. 
arXiv preprint arXiv:1606.03498 , 2016.Charles Stein. A bound for the error in the normal approximation to the distribution of a sum of dependent ran-dom variables. In Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability,Volume 2: Probability Theory , pp. 583–602, 1972.Tijmen Tieleman. Training restricted boltzmann machines using approximations to the likelihood gradient. InProceedings of the 25th international conference on Machine learning , pp. 1064–1071. ACM, 2008.Dustin Tran, Rajesh Ranganath, and David M Blei. Variational gaussian process. arXiv preprintarXiv:1511.06499 , 2015.Jianwen Xie, Yang Lu, Song-Chun Zhu, and Ying Nian Wu. A theory of generative convnet. arXiv preprintarXiv:1602.03264 , 2016.Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and Jianxiong Xiao. Lsun: Constructionof a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 ,2015.Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprintarXiv:1609.03126 , 2016.11Under review as a conference paper at ICLR 2017Figure 5: More images generated by SteinGAN on CelebA.12
Under review as a conference paper at ICLR 2017SHIFT AGGREGATE EXTRACT NETWORKSFrancesco Orsini12, Daniele Baracchi2and Paolo Frasconi21Department of Computer Science2Department of Information EngineeringKatholieke Universiteit Leuven Universit `a degli Studi di FirenzeCelestijnenlaan 200A Via di Santa Marta 33001 Heverlee, Belgium I-50139 Firenze, Italyfrancesco.orsini@kuleuven.be daniele.baracchi@unifi.itpaolo.frasconi@unifi.itABSTRACTThe Shift Aggregate Extract Network ( SAEN ) is an architecture for learning repre-sentations on social network data. SAEN decomposes input graphs into hierarchiesmade of multiple strata of objects. Vector representations of each object are learntby applying shift,aggregate andextract operations on the vector representationsof its parts. We propose an algorithm for domain compression which takes ad-vantage of symmetries in hierarchical decompositions to reduce the memory us-age and obtain significant speedups. Our method is empirically evaluated on realworld social network datasets, outperforming the current state of the art.1 I NTRODUCTIONMany different problems in various fields of science require the classification of structured data ,i.e. collections of objects bond together by some kind of relation. A natural way to represent suchstructures is through graphs, which are able to encode both the individual objects composing thecollection (as vertices) and the relationships between them (as edges). A number of approaches tothe graph classification problem has been studied in graph kernel and neural network literature.Graph kernels decompose input graphs in substructures such as shortest paths (Borgwardt & Kriegel,2005), graphlets (Shervashidze et al., 2009) or neighborhood subgraph pairs (Costa & De Grave,2010). The similarity between two graphs is then computed by comparing the respective sets ofparts. Methods based on recursive neural networks unfold a neural network over input graphs andlearn vector representations of their nodes employing backpropagation though structure (Goller &Kuchler, 1996). Recursive neural networks have been successfully applied to domains such as nat-ural language (Socher et al., 2011) and biology (Vullo & Frasconi, 2004; Baldi & Pollastri, 2003).An advantage of recursive neural networks over graph kernels, is that the vector representations ofthe input graphs are learnt rather than handcrafted.Learning on social network data can be considerably hard due to their peculiar structure: as opposedto chemical compounds and parse trees, the structure of social network graphs is highly irregular.Indeed in social networks it is common to have nodes in the same graph whose degree differs byorders of magnitude. This poses a significant challenge for the substructure matching approach usedby some graph kernels as the variability in connectivity generates a large number of unique patternsleading to diagonally dominant kernel matrices.We propose Shift Aggregate Extract Networks ( SAEN ), a neural network architecture for learningrepresentations of input graphs. SAEN decomposes input graphs into H-hierarchies made of multiplestrata of objects. 
Objects in each stratum are connected by “part-of” relations to the objects to thestratum above.In case we wish to classify graphs we can use an H-hierarchical decomposition in which the topstratum contains the graph Gthat we want to classify, while the intermediate strata contain subgraphsofG, subgraphs of subgraphs of Gand so on, until we reach the bottom stratum which contains theverticesvofG.1Under review as a conference paper at ICLR 2017UnlikeR-convolution relations in kernel methods (which decompose objects into the set of theirparts),H-hierarchical decompositions are deep as they can represent the parts of the parts of anobject.Recursive neural networks associate to the vertices of the input graphs vector representations impos-ing that they have identical dimensions. Moreover, the propagation follows the edge connectivityand weights are shared over the whole input graph. If we consider that vector representations ofnodes (whose number of parents can differ by orders of magnitude) must share the same weights,learning on social network data with recursive neural networks might be nontrivial.SAEN compensates the limitations of recursive neural networks by adding the following degrees offlexibility:1. the SAEN computation schema unfolds a neural network over H-decompositions instead of theinput graph,2.SAEN imposes weight sharing and fixed size of the learnt vector representations on a per stratumbasis instead of globally.Indeed SAEN allows to use vector representations of different sizes for different strata of objects(e.g. graphs, subgraphs, subgraphs of subgraphs, edges, vertices etc.) The SAEN schema computesthe vector representation of each object by applying shift,aggregate andextract operations on thevector representations of its parts.Another contribution of this paper is the introduction of a domain compression algorithm, that weuse in our experiments to reduce memory usage and runtime. Domain compression collapses objectsin the same stratum of an H-hierarchical decomposition into a compressed one whenever theseobjects are indistinguishable for the SAEN computation schema. In particular objects made of thesame sets of parts are indistinguishable. In order obtain a lossless compression an H-hierarchicaldecomposition we store counts on symmetries adopting some mathematical results from lifted linearprogramming (Mladenov et al., 2012). The domain compression algorithm is also reminiscent of thework of Sperduti & Starita (1997) in which common substructures of recursive neural networks arecollapsed in order to reduce the computational cost.2 S HIFT -AGGREGATE -EXTRACT NEURAL NETWORKSWe propose a neural network architecture that takes as input an undirected attributed graph G=(V,E,X )whereVis the vertex set, E⊆V×Vis the edge set, and X={xv∈Rp}v∈Vis aset ofp-dimensional vertex attributes. When vertices do not have associated attributes (for examplethis happens in some of the social network datasets of §4.1), we can set xvto some vertex invariantsuch as node centrality or betweenness.2.1H-HIERARCHICAL DECOMPOSITIONSMost graph kernels decompose graphs into parts by using an R-convolution relation (Haussler,1999). We extend this approach by decomposing graphs into a hierarchy ofπ-parametrized “partof” relations. Formally, an H-hierarchical decomposition is a pair ({Sl}Ll=0,{Rl,π}Ll=1)where:•{Sl}Ll=0are disjoint sets of objects Slcalled strata, or levels of the hierarchy. The bottom stratumS0contains non-decomposable objects (e.g. 
individual vertices), while the other strata S_l, l = 1, ..., L contain composite objects o_i ∈ S_l, whose parts o_j ∈ S_{l−1} belong to the preceding stratum, S_{l−1}.

• {R_{l,π}}_{l=1}^{L} is a set of (l, π)-parametrized R_{l,π}-convolution relations. A pair (o_i, o_j) ∈ S_l × S_{l−1} belongs to R_{l,π} iff "o_j is part of o_i with membership type π". For notational convenience, the parts of o_i are denoted as R^{−1}_{l,π}(o_i) = {o_j | (o_j, o_i) ∈ R_{l,π}}.

The membership type π is used to represent the roles of the parts of an object. For example, we could decompose a graph as a multiset of π-neighborhood subgraphs¹ in which π is the radius of the neighborhoods (see Figure 1 on the left). Another possible use of the π membership type is to distinguish the root from the other vertices in a rooted neighborhood subgraph (see Figure 1 on the right).

¹ The r-neighborhood subgraph (or ego graph) of a vertex v in a graph G is the induced subgraph of G consisting of all vertices whose shortest-path distance from v is at most r.

Figure 1: Image of an H-hierarchical decomposition (in particular the EGNN explained in §4.2). On the left we decompose a graph into rooted ego graphs of radius 0 and 1, while on the right we decompose an ego graph into the set of its vertices. The directed arrows represent "part of" relations labeled with their membership type π. The membership type π represents the radius π = 0, 1 of the ego graphs (decomposition on the left) and the role (i.e. π = ROOT, ELEM) of a vertex in the ego graph (decomposition on the right) respectively.

An H-hierarchical decomposition is a multilevel generalization of R-convolution relations, and it reduces to an R-convolution relation for L = 1.

2.2 SHIFT AGGREGATE EXTRACT SCHEMA FOR LEARNING REPRESENTATIONS

We propose Shift Aggregate Extract Network (SAEN) to learn vector representations for all the objects of all the strata {S_l}_{l=0}^{L} in an H-hierarchical decomposition. SAEN unfolds a neural network architecture over an H-hierarchical decomposition by using the Shift Aggregate Extract (SAE) schema.

According to the SAE schema, the vector representation of each object in the H-hierarchical decomposition is either computed by applying a neural network on the vertex attributes (for the objects in the bottom stratum) or defined in terms of the vector representations of its parts (for the other objects). More formally, the SAE schema associates a d_l-dimensional representation h_i ∈ R^{d_l} to each object o_i ∈ S_l of the H-hierarchical decomposition according to the following formula:

$$
h_i =
\begin{cases}
f_0(x_{v_i};\,\Theta_0) & \text{if } o_i \in S_0,\\[6pt]
f_l\Bigg(\;\underbrace{\sum_{\pi \in \Pi_l}\;\sum_{o_j \in R^{-1}_{l,\pi}(o_i)} \overbrace{(z_\pi \otimes h_j)}^{\text{Shift}}}_{\text{Aggregate}}\;;\;\Theta_l\Bigg) & \text{otherwise (Extract),}
\end{cases}
\tag{1}
$$

where f_l(·; Θ_l), l = 0, ..., L are multilayer neural networks with parameters Θ_l.
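To make Eq. (1) concrete, here is a minimal NumPy sketch of one SAE step for a single composite object o_i; `extract` stands for the trained network f_l, and all names are illustrative:

```python
import numpy as np

def sae_step(parts, types, n_types, extract):
    """Compute h_i for one composite object o_i via Eq. (1).

    parts:   list of part representations h_j, each of shape (d,)
    types:   membership type pi of each part, an int in {0, ..., n_types - 1}
    extract: the trained MLP f_l mapping R^{n_types * d} -> R^{d'}
    """
    d = parts[0].shape[0]
    agg = np.zeros(n_types * d)
    for h_j, pi in zip(parts, types):
        z = np.zeros(n_types)
        z[pi] = 1.0
        agg += np.kron(z, h_j)   # Shift into slot pi, then Aggregate by summation.
    return extract(agg)          # Extract back to a d'-dimensional representation.
```

The Kronecker product with the indicator vector z_π simply places h_j into the π-th of |Π_l| slots, so parts sharing a membership type are summed into the same slot before extraction.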
With respect to the base case (first branch of Eq. 1), each object o_i in the bottom stratum S_0 is in one-to-one correspondence with a vertex v_i ∈ V of the graph that we are decomposing. Indeed, the vector representations h_i are computed by evaluating f_0(·; Θ_0) on the vertex attributes x_{v_i} ∈ X.

The recursion step (second branch of Eq. 1) follows the Shift Aggregate Extract (SAE) schema:

• Shift: each part representation h_j ∈ R^{d_{l−1}} is remapped into a space R^{|Π_l| d_{l−1}} made of |Π_l| slots, where each slot has dimension d_{l−1}. This transformation shifts part representations h_j by using the Kronecker product ⊗ between an indicator vector z_π ∈ R^{|Π_l|} and the vector representation h_j of part o_j ∈ S_{l−1}. The indicator vector z_π ∈ R^{|Π_l|} is defined as (z_π)_i = 1 if i = π and 0 otherwise, and it is used to make sure that the vector representations h_j of object parts fall in the same slot if and only if they have the same membership type π.

• Aggregate: the shifted representations (z_π ⊗ h_j) of the parts o_j are then aggregated with a sum.

• Extract: the aggregated representation is compressed to a d_l-dimensional space by a Θ_l-parametrized nonlinear map f_l(·, Θ_l) : R^{|Π_l| d_{l−1}} → R^{d_l} implemented with a multilayer neural network.

The shift and aggregate steps that we have seen so far are identical to those used in kernel design when computing the explicit features of a kernel k(x, z) derived from a sum Σ_{π∈Π} k_π(x, z) of base kernels k_π(x, z), π ∈ Π. In principle, it would indeed be possible to turn SAEN into a kernel method by removing the extraction step E from the SAE schema. However, such an approach would increase the dimensionality of the feature space by a multiplicative factor |Π_l| for each level l of the H-hierarchical decomposition, thus leading to an exponential number of features. When using SAEN, the feature space growth is prevented by exploiting a distributed representation (via a multilayered neural network) during the E step of the SAE schema. As a result, SAEN can easily cope with H-hierarchical decompositions consisting of multiple strata.

2.3 EXPLOITING SYMMETRIES FOR DOMAIN COMPRESSION

In this section we propose a technique, called domain compression, which allows us to save memory and speed up the SAEN computation. Domain compression exploits symmetries in H-hierarchical decompositions by collapsing equivalent objects in each stratum. The greater the number of collapsed objects, the higher the compression ratio.

Two objects a, b in a stratum S_l are collapsable, written a ∼ b, if they share the same representation (i.e. h_a = h_b) for all the possible values of Θ_l. A compressed stratum S^comp_l is the quotient set S_l/∼ of stratum S_l w.r.t. the collapsibility relation ∼. We assume that the attributes of the elements in the bottom stratum S_0 are categorical, so that the same vector representation can be shared by multiple elements with non-zero probability.² While objects in the bottom stratum S_0 are collapsable when their attributes are identical, for all the other strata S_l, l = 1, ..., L, objects are collapsable if they are made of the same sets of parts for all the membership types π.

Figure 2: Pictorial representation of the H-hierarchical decomposition of a graph taken from the IMDB-BINARY dataset (see §4.1) together with its compressed version.

In Figure 2 we provide a pictorial representation of the domain compression of an H-hierarchical decomposition (EGNN, described in §4.2).
On the left we show the H-hierarchical decomposition of a graph taken from the IMDB-BINARY dataset (see §4.1), together with its compressed version on the right.

2.3.1 DOMAIN COMPRESSION ALGORITHM

In order to compress H-hierarchical decompositions we adapt the lifted linear programming technique proposed by Mladenov et al. (2012) to the SAEN architecture. If a matrix M ∈ R^{n×p} has m ≤ n distinct rows, it can be decomposed as the product D M_comp, where M_comp is a compressed version of M in which the distinct rows of M appear exactly once. The Boolean decompression matrix D encodes the collapsibility relation among the rows of M, so that D_ij = 1 iff the i-th row of M falls in the equivalence class j of ∼. A pseudo-inverse C of D can be computed by dividing the rows of D^⊤ by their sum (where D^⊤ is the transpose of D).

² Vectors of real-valued attributes could be discretized using clustering techniques. However, we leave discretization in SAEN to future works.

Example 1 If we look at matrix M in Eq. 2 we notice that rows 1 and 4 share the encoding [0, 0, 0] and rows 3 and 5 share the encoding [1, 1, 0], while the encoding [1, 0, 1] appears only once, at row 2. Matrix M_comp is the compressed version of M.

$$
M = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \\ 0 & 0 & 0 \\ 1 & 1 & 0 \end{bmatrix}
\qquad
M_{\mathrm{comp}} = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix}
\qquad
D = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\qquad
C = \begin{bmatrix} 1/2 & 0 & 0 & 1/2 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1/2 & 0 & 1/2 \end{bmatrix}
\tag{2}
$$

Matrix M can be expressed as the matrix product between the decompression matrix D and the compressed matrix M_comp (i.e. M = D M_comp), while the matrix multiplication between the compression matrix C and M leads to the compressed matrix M_comp (i.e. M_comp = C M).

To apply domain compression we rewrite Eq. 1 in matrix form as follows:

$$
H_l =
\begin{cases}
\underbrace{f_0(X;\,\Theta_0)}_{|S_0| \times d_0} & \text{if } l = 0,\\[10pt]
\underbrace{f_l\Bigg(
\underbrace{\big[\,R_{l,1},\,\ldots,\,R_{l,\pi},\,\ldots,\,R_{l,|\Pi_l|}\,\big]}_{|S_l| \times |\Pi_l||S_{l-1}|}\;
\underbrace{\begin{bmatrix} H_{l-1} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & H_{l-1} \end{bmatrix}}_{|\Pi_l||S_{l-1}| \times |\Pi_l| d_{l-1}}
;\;\Theta_l\Bigg)}_{|S_l| \times d_l} & \text{otherwise,}
\end{cases}
\tag{3}
$$

where:
• H_l ∈ R^{|S_l|×d_l} is the matrix that represents the d_l-dimensional encodings of the objects in S_l. The rows of H_l are the vector representations h_i in Eq. 1, while the rows of H_{l−1} are the vector representations h_j in Eq. 1;
• X ∈ R^{|S_0|×p} is the matrix that represents the p-dimensional encodings of the vertex attributes in V (i.e. the rows of X are the x_{v_i} of Eq. 1);
• f_l(·; Θ_l) is unchanged w.r.t. Eq. 1 and is applied to its input matrices row-wise;
• R_{l,π} ∈ R^{|S_l|×|S_{l−1}|} ∀π ∈ Π_l are the matrix representations of the R_{l,π}-convolution relations of Eq. 1, whose elements are (R_{l,π})_{ij} = 1 if (o_j, o_i) ∈ R_{l,π} and 0 otherwise.

Domain compression on Eq. 3 is performed by the DOMAIN-COMPRESSION procedure (see Algorithm 3) that takes as input the attribute matrix X and the part-of matrices R_{l,π} and returns their compressed versions X_comp and R^comp_{l,π} respectively. The algorithm starts by invoking (line 1) the procedure COMPUTE-CD on X to obtain the compression and decompression matrices C_0 and D_0 respectively. The compression matrix C_0 is used to compress X (line 2); then we start iterating over the levels l = 1, ..., L of the H-hierarchical decomposition (line 4) and compress the R_{l,π} matrices. The compression of the R_{l,π} matrices is done by right-multiplying them by the decompression matrix D_{l−1} of the previous level l−1 (line 5). In this way we collapse the parts of relation R_{l,π} (i.e.
the columns of Rl,π) as these were identified in stratum Sl−1as identical objects (i.e.those objects corresponding to the rows of XorRl−1,πcollapsed during the previous step). Theresult is a list Rcolcomp= [Rl,πDl−1,∀π= 1,...,|Πl|]of column compressed Rl,π−matrices.We proceed collapsing equivalent objects in stratum Sl, i.e. those made of identical sets of parts:we find symmetries in Rcolcompby invoking COMPUTE -CD(line 6) and obtain a new pair Cl,Dlof compression, and decompression matrices respectively. Finally the compression matrix Clis ap-plied to the column-compressed matrices in Rcolcompin order to obtain the Πlcompressed matrices5Under review as a conference paper at ICLR 2017DOMAIN -COMPRESSION (X,R)1C0,D0=COMPUTE -CD(X)2Xcomp=C0X/ /Compress the Xmatrix.3Rcomp={}/ /Initialize an empty container for compressed matrices.4forl= 1toL5Rcolcomp= [Rl,πDl−1,∀π= 1,...,|Πl|]/ /column compression6Cl,Dl=COMPUTE -CD(Rcolcomp)7 forπ= 1to|Πl|8 Rcompl,π=ClRcolcompπ / /row compression9returnXcomp,RcompFigure 3: DOMAIN -COMPRESSIONof stratumSl(line 8). Algorithm 3 allows us to compute the domain compressed version of Eq. 3which can be obtained by replacing: XwithXcomp=C0X,Rl,πwithRcompl,π=ClRl,πDl−1andHlwithHcompl. Willing to recover the original encodings Hlwe just need to employ the decom-pression matrix Dlon the compressed encodings Hcompl, indeedHl=DlHcompl.As we can see by substituting SlwithScompl, the more are the symmetries (i.e. when |Scompl|/lessmuch|Sl|) the greater the domain compression will be.3 R ELATED WORKSWhen learning with graph inputs two fundamental design aspects that must be taken into account are:the choice of the pattern generator and the choice of the matching operator. The former decomposesthe graph input in substructures while the latter allows to compare the substructures.Among the patterns considered from the graph kernel literature we have paths, shortest paths,walks (Kashima et al., 2003), subtrees (Ramon & G ̈artner, 2003; Shervashidze et al., 2011) andneighborhood subgraphs (Costa & De Grave, 2010). The similarity between graphs GandG/primeiscomputed by counting the number of matches between their common the substructures (i.e. a kernelon the sets of the substructures). The match between two substructures can be defined by usinggraph isomorphism or some other weaker graph invariant.When the number of substructures to enumerate is infinite or exponential with the size of the graph(perhaps this is the case for random walks and shortest paths respectively) the kernel between thetwo graphs is computed without generating an explicit feature map. Learning with an implicit fea-ture map is not scalable as it has a space complexity quadratic in the number of training examples(because we need to store in memory the gram matrix).Other graph kernels such as the Weisfeiler-Lehman Subtree Kernel ( WLST ) (Shervashidze et al.,2011) and the Neighborhood Subgraph Pairwise Distance Kernel ( NSPDK ) (Costa & De Grave,2010) deliberately choose a pattern generator that scales polynomially and produces an explicitfeature map. However the vector representations produced by WLST and NSPDK are handcraftedand not learned.A recent work by Yanardag & Vishwanathan (2015) proposes to uses pattern generators such asgraphlets, shortest paths and WLST subtrees to transform input graphs into documents. The gener-ated substructures are then treated as words and embedded in the Euclidean space with a CBOWor a Skip-gram model. 
The deep upgrade of existing graph kernels is performed by reweighing thecounts of the substructures by the square root of their word-vector self similarity.Another recent work by Niepert et al. (2016) upgrades the convolutional neural networks CNNs forimages to graphs. While the receptive field of a CNN is usually a square window (Niepert et al.,2016) employ neighborhood subgraphs as receptive fields. As nodes in graphs do not have a specifictemporal or spatial order, (Niepert et al., 2016) employ vertex invariants to impose an order on thenodes of the subgraphs/receptive fields.6Under review as a conference paper at ICLR 20174 E XPERIMENTAL EVALUATIONWe answer to the following experimental questions:Q1How does SAEN compare to the state of the art?Q2Can SAEN exploit symmetries in social networks to reduce the memory usage and the runtime?4.1 D ATASETSIn order to answer the experimental questions we tested our method on six publicly available datasetsfirst proposed by Yanardag & Vishwanathan (2015).•COLLAB is a dataset where each graph represent the ego-network of a researcher, and the task isto determine the field of study of the researcher between High Energy Physics ,Condensed MatterPhysics andAstro Physics .•IMDB -BINARY ,IMDB -MULTI are datasets derived from IMDB where in each graph the ver-tices represent actors/actresses and the edges connect people which have performed in the samemovie. Collaboration graphs are generated from movies belonging to genres Action andRomanceforIMDB -BINARY andComedy ,Romance andSci-Fi forIMDB -MULTI , and for each actor/actress inthose genres an ego-graph is extracted. The task is to identify the genre from which the ego-graphhas been generated.•REDDIT -BINARY ,REDDIT -MULTI 5K,REDDIT -MULTI 12Kare datasets where each graph is de-rived from a discussion thread from Reddit. In those datasets each vertex represent a distinct userand two users are connected by an edge if one of them has responded to a post of the other inthat discussion. The task in REDDIT -BINARY is to discriminate between threads originating froma discussion-based subreddit ( TrollXChromosomes ,atheism ) or from a question/answers-basedsubreddit ( IAmA ,AskReddit ). The task in REDDIT -MULTI 5Kand REDDIT -MULTI 12Kis a multi-class classification problem where each graph is labeled with the subreddit where it has originated(worldnews, videos, AdviceAnimals, aww, mildlyinteresting forREDDIT -MULTI 5KandAskReddit,AdviceAnimals, atheism, aww, IAmA, mildlyinteresting, Showerthoughts, videos, todayilearned,worldnews, TrollXChromosomes forREDDIT -MULTI 12K).4.2 E XPERIMENTSIn our experiments we chose an H-hierarchical decomposition called Ego Graph Neural Network(EGNN ), that mimics the graph kernel NSPDK with the distance parameter set to 0.Before applying EGNN we turn unattributed graphs (V,E)into attributed graphs (V,E,X )by an-notating their vertices v∈Vwith attributes xv∈X. We label vertices vofGwith their degree andencode this information into the attributes xvby employing the 1-hot encoding.EGNN decomposes attributed graphs G= (V,E,X )into a 3levelH-hierarchical decompositionwith the following strata (see Figure 1 for a pictorial representation of EGNN ):•stratumS0contains objects ovthat are in one-to-one correspondence with the vertices v∈V.•stratumS1containsvroot-rootedr-neighborhood subgraphs (i.e. ego graphs) e= (vroot,Ve,Ee)of radiusr= 0,1,...,R and has part-of alphabet Π1={ROOT,ELEM}. 
Objectsov∈S0are“ELEM -part-of” ego graph eifv∈Ve\{vroot}, while the are “ ROOT -part-of” ego graph eifv=vroot.•stratumS2contains the graph Gthat we want to classify and has part-of alphabet Π2={0,1}which correspond to the radius of the ego graphs e∈S1of whichGis made of.E1We experimented with SAEN applying the EGNNH-decomposition on all the datasets. For eachdataset, we manually chose the parameters of SAEN , i.e. the number of hidden layers for eachstratum, the size of each layer and the maximum radius R. We used the Leaky ReLU (Maas et al.)activation function on all the units. We report the chosen parameters in Table A1 of the appendix.In all our experiments we trained the neural networks by using the Adam algorithm to minimize across entropy loss.The classification accuracy of SAEN was measured with 10-times 10-fold cross-validation. We man-ually chose the number of layers and units for each level of the part-of decomposition; the numberof epochs was chosen manually for each dataset and we kept the same value for all the 100runs ofthe10-times 10-fold cross-validation.7Under review as a conference paper at ICLR 2017Figure 4: Comparison of accuracy results.DATASET DGK PSCN SAEN(Yanardag et al. 2015) (Niepert et al., 2016) (our method)COLLAB 73.09±0.25 72.60±2.16 75.63±0.31IMDB -BINARY 66.96±0.56 71.00±2.29 71.26±0.74IMDB -MULTI 44.55±0.52 45.23±2.84 49.11±0.64REDDIT -BINARY 78.04±0.39 86.30±1.58 86.08±0.53REDDIT -MULTI 5K 41.27±0.18 49.10±0.70 52.24±0.38REDDIT -MULTI 12K 32.22±0.10 41.32±0.42 46.72±0.23Figure 5: Comparison of accuracy on bio-informatics datasets.DATASET PSCN (k= 10E) SAEN(Niepert et al., 2016) (our method)MUTAG 92.63±4.21 84.99±1.82PTC 60.00±4.82 57.04±1.30NCI1 78.59±1.89 77.80±0.42PROTEINS 75.89±2.76 75.31±0.70D&D 77.12±2.41 77.69±0.96The mean accuracies and their standard deviations obtained by our method are reported in Ta-ble 4, where we compare these results with those obtained by Yanardag & Vishwanathan (2015)and by Niepert et al. (2016).Although our method was conceived for social network data, it can also handle other types of graphs.For the sake of completeness in Table 5 we report the mean accuracies obtained with SAEN on themolecule and protein datasets studied in previous works (e.g. Niepert et al. (2016)).Table 1: Comparison of sizes and runtimes of the datasets before and after the compression.DATASETSIZE (MB) RUNTIMEORIGINAL COMP . RATIO ORIGINAL COMP . SPEEDUPCOLLAB 1190 448 0.38 43’ 18” 8’ 20” 5.2IMDB -BINARY 68 34 0.50 3’ 9” 0’ 30” 6.3IMDB -MULTI 74 40 0.54 7’ 41” 1’ 54” 4.0REDDIT -BINARY 326 56 0.17 TO 2’ 35”≥100.0REDDIT -MULTI 5K 952 162 0.17 OOM 9’ 51” –REDDIT -MULTI 12K 1788 347 0.19 OOM 29’ 55” –E2In Table 1 we show the file sizes of the preprocessed datasets before and after the compressiontogether with the data compression ratio.3We also estimate the benefit of the relational compressionfrom a computational time point of view and report the measurement of the runtime for 1run withand without compression together with the speedup factor.For the purpose of this experiment, all tests were run on a computer with two 8-cores Intel XeonE5-2665 processors and 94 GB RAM . Uncompressed datasets which exhausted our server’s memoryduring the test are marked as “ OOM ” (out of memory) in the table, while those who exceeded thetime limit of 100times the time needed for the uncompressed version are marked as “ TO” (timeout).4.3 D ISCUSSIONA1As shown in Table 4, EGNN performs consistently better than the other two methods on all thesocial network datasets. 
This confirms that the chosen H-hierarchical decomposition is effective onthis kind of problems. Also the results for molecule and protein datasets (see Table 5) are in linewith the current state of the art.A2The compression algorithm has proven to be effective in improving the computational cost of ourmethod. Most of the datasets improved their runtimes by a factor of at least 4while maintaining the3The size of the uncompressed files are shown for the sole purpose of computing the data compression ratio.Indeed the last version of our code compresses the files on the fly.8Under review as a conference paper at ICLR 2017same expressive power. Moreover, experiments on REDDIT -MULTI 5Kand REDDIT -MULTI 12Khaveonly been possible thanks to the size reduction operated by the algorithm as the script exhausted thememory while executing the training step on the uncompressed files.5 C ONCLUSIONSWe proposed SAEN , a novel architecture for learning vector representations of H-decompositionsof input graphs. We applied SAEN for graph classification on 6real world social network datasets,outperforming the current state of the art on 4of them and obtaining state-of-the-art classificationaccuracy on the others. Another important contribution of this paper is the domain compressionalgorithm which greatly reduces memory usage and allowed us to speedup the training time of afactor of at least 4.REFERENCESP Baldi and G Pollastri. The principled design of large-scale recursive neural network architectures–dag-rnns and the protein structure prediction problem. J Mach Learn Res , 4(Sep):575–602, 2003.K M Borgwardt and H-P Kriegel. Shortest-path kernels on graphs. In Proc. of the ICDM-05 , pp.8–pp. IEEE, 2005.F Costa and K De Grave. Fast neighborhood subgraph pairwise distance kernel. In Proc. of theICML-10 , pp. 255–262. Omnipress, 2010.C Goller and A Kuchler. Learning task-dependent distributed representations by backpropagationthrough structure. In Neural Networks, 1996., IEEE International Conference on , volume 1, pp.347–352. IEEE, 1996.D Haussler. Convolution kernels on discrete structures. Technical report, Citeseer, 1999.H Kashima, K Tsuda, and A Inokuchi. Marginalized kernels between labeled graphs. In ICML-03 ,volume 3, pp. 321–328, 2003.A L Maas, A Y Hannun, and A Y Ng. Rectifier nonlinearities improve neural network acousticmodels. In Proc. of the ICML-13 .M Mladenov, B Ahmadi, and K Kersting. Lifted linear programming. In AISTATS-12 , pp. 788–797,2012.M Niepert, M Ahmed, and K Kutzkov. Learning convolutional neural networks for graphs. arXivpreprint arXiv:1605.05273 , 2016.J Ramon and T G ̈artner. Expressivity versus efficiency of graph kernels. In First InternationalWorkshop on Mining Graphs, Trees and Sequences , pp. 65–74. Citeseer, 2003.N Shervashidze, SVN Vishwanathan, T Petri, K Mehlhorn, and K M Borgwardt. Efficient graphletkernels for large graph comparison. In AISTATS-09 , volume 5, pp. 488–495, 2009.N Shervashidze, P Schweitzer, E J van Leeuwen, K Mehlhorn, and K M Borgwardt. Weisfeiler-lehman graph kernels. J Mach Learn Res , 12(Sep):2539–2561, 2011.R Socher, C C Lin, C Manning, and A Y Ng. Parsing natural scenes and natural language withrecursive neural networks. In Proc. of the ICML-11 , pp. 129–136, 2011.A Sperduti and A Starita. Supervised neural networks for the classification of structures. IEEETransactions on Neural Networks , 8(3):714–735, 1997.A Vullo and P Frasconi. Disulfide connectivity prediction using recursive neural networks andevolutionary information. 
Bioinformatics , 20(5):653–659, 2004.P Yanardag and SVN Vishwanathan. Deep graph kernels. In Proc. of KDD-15 , pp. 1365–1374,2015.9Under review as a conference paper at ICLR 2017APPENDIX : SHIFT AGGREGATE EXTRACT NETWORKSFrancesco Orsini12, Daniele Baracchi2and Paolo Frasconi21Department of Computer Science2Department of Information EngineeringKatholieke Universiteit Leuven Universit `a degli Studi di FirenzeCelestijnenlaan 200A Via di Santa Marta 33001 Heverlee, Belgium I-50139 Firenze, Italyfrancesco.orsini@kuleuven.be daniele.baracchi@unifi.itpaolo.frasconi@unifi.itA P ARAMETERS USED IN THE EXPERIMENTS WITH EGNNIn Table A1 we report for each dataset: the radiuses rof the neighborhood subgraphs used in theEGNN decomposition and the number of units in the hidden layers for each stratum.Figure A1: Parameters for the neural networks used in the experiments.DATASET RADIUSES HIDDEN UNITSr S0 S1 S2COLLAB 0,1 15−5 5−2 5 −3IMDB -BINARY 0,1,2 2 5 −2 5 −3−1IMDB -MULTI 0,1,2 2 5 −2 5 −3REDDIT -BINARY 0,1 10−5 5−2 5 −3−1REDDIT -MULTI 5K 0,1 10 10 6 −5REDDIT -MULTI 12K0,1 10 10 20 −11MUTAG 0,1,2,3 10 5 −5 5 −5−1PTC 0,1 15 15 15 −1NCI1 0,1,2,3 15 15 15 −10−1PROTEINS 0,1,2,3 3−2 6 −5−4 6−3−1D&D 0,1,2,3 10 5 −2 5 −3−11
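As a companion to the DOMAIN-COMPRESSION procedure of §2.3.1, here is a minimal NumPy sketch of the COMPUTE-CD step it relies on, i.e. building the compression and decompression matrices C and D of Example 1 from the distinct rows of a matrix; this is an illustrative reconstruction under the paper's definitions, not the authors' code:

```python
import numpy as np

def compute_cd(M):
    """Return (C, D) with M == D @ M_comp and M_comp == C @ M (cf. Example 1)."""
    # inv[i] is the equivalence class of row i; identical rows share a class.
    _, inv = np.unique(M, axis=0, return_inverse=True)
    n, m = M.shape[0], inv.max() + 1
    D = np.zeros((n, m))
    D[np.arange(n), inv] = 1.0           # D_ij = 1 iff row i falls in class j.
    C = D.T / D.sum(axis=0)[:, None]     # rows of D^T divided by their sums.
    return C, D
```

Applied to the matrix M of Example 1 this recovers C and D up to a reordering of the equivalence classes (np.unique sorts the distinct rows); the identity M == D @ (C @ M) holds either way.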
Published as a conference paper at ICLR 2017MODE REGULARIZED GENERATIVE ADVERSARIALNETWORKSyTong Che,zYanran Li,y;xAthul Paul Jacob,yYoshua Bengio,zWenjie LiyMontreal Institute for Learning Algorithms, Universit ́e de Montr ́eal, Montr ́eal, QC H3T 1J4, CanadazDepartment of Computing, The Hong Kong Polytechnic University, Hong KongxDavid R. Cheriton School of Computer Science, University Of Waterloo, Waterloo, ON N2L 3G1, Canadaftong.che,ap.jacob,yoshua.bengio g@umontreal.cafcsyli,cswjlig@comp.polyu.edu.hkABSTRACTAlthough Generative Adversarial Networks achieve state-of-the-art results on avariety of generative tasks, they are regarded as highly unstable and prone to missmodes. We argue that these bad behaviors of GANs are due to the very particularfunctional shape of the trained discriminators in high dimensional spaces, whichcan easily make training stuck or push probability mass in the wrong direction,towards that of higher concentration than that of the data generating distribution.We introduce several ways of regularizing the objective, which can dramaticallystabilize the training of GAN models. We also show that our regularizers canhelp the fair distribution of probability mass across the modes of the data gener-ating distribution, during the early phases of training and thus providing a unifiedsolution to the missing modes problem.1 I NTRODUCTIONGenerative adversarial networks (GAN) (Goodfellow et al., 2014) have demonstrated their potentialon various tasks, such as image generation, image super-resolution, 3D object generation, and videoprediction (Radford et al., 2015; Ledig et al., 2016; Sønderby et al., 2016; Nguyen et al., 2016; Wuet al., 2016; Mathieu et al., 2015). The objective is to train a parametrized function (the generator)which maps noise samples (e.g., uniform or Gaussian) to samples whose distribution is close to thatof the data generating distribution. The basic scheme of the GAN training procedure is to traina discriminator which assigns higher probabilities to real data samples and lower probabilities togenerated data samples, while simultaneously trying to move the generated samples towards the realdata manifold using the gradient information provided by the discriminator. In a typical setting, thegenerator and the discriminator are represented by deep neural networks.Despite their success, GANs are generally considered as very hard to train due to training instabilityand sensitivity to hyper-parameters. On the other hand, a common failure pattern observed whiletraining GANs is the collapsing of large volumes of probability mass onto a few modes. Namely,although the generators produce meaningful samples, these samples are often from just a few modes(small regions of high probability under the data distribution). Behind this phenomenon is the miss-ing modes problem, which is widely conceived as a major problem for training GANs: many modesof the data generating distribution are not at all represented in the generated samples, yielding amuch lower entropy distribution, with less variety than the data generating distribution.This issue has been the subject of several recent papers proposing several tricks and new archi-tectures to stabilize GAN’s training and encourage its samples’ diversity. However, we argue that ageneral cause behind these problems is the lack of control on the discriminator during GAN training.We would like to encourage the manifold of the samples produced by the generator to move towardsthat of real data, using the discriminator as a metric. 
However, even if we train the discriminatorto distinguish between these two manifolds, we have no control over the shape of the discriminatorfunction in between these manifolds. In fact, the shape of the discriminator function in the dataAuthors contributed equally.1Published as a conference paper at ICLR 2017space can be very non-linear with bad plateaus and wrong maxima and this can therefore hurt thetraining of GANs (Figure 1).Figure 1: Samples with very high discrim-ination values (D=1.0) in DCGAN modeltrained on CelebA dataset.To remedy this problem, we propose a novel regu-larizer for the GAN training target. The basic ideais simple yet powerful: in addition to the gradientinformation provided by the discriminator, we wantthe generator to take advantage of other similaritymetrics with much more predictable behavior, suchas theL2norm. Differentiating these similarity met-rics will provide us with more stable gradients totrain our generator. Combining this idea with an ap-proach meant to penalize the missing modes, we pro-pose a family of additional regularizers for the GAN objective. We then design a set of metrics toevaluate the generated samples in terms of both the diversity of modes and the distribution fairnessof the probability mass. These metrics are shown to be more robust in judging complex generativemodels, including those which are well-trained and collapsed ones.Regularizers usually bring a trade-off between model variance and bias. Our results have shownthat, when correctly applied, our regularizers can dramatically reduce model variance, stabilize thetraining, and fix the missing mode problem all at once, with positive or at the least no negative effectson the generated samples. We also discuss a variant of the regularized GAN algorithm, which caneven improve sample quality as compared to the DCGAN baseline.2 R ELATED WORKThe GAN approach was initially proposed by Goodfellow et al. (2014) where both the generator andthe discriminator are defined by deep neural networks.In Goodfellow et al. (2014), the GAN is able to generate interesting local structure but globallyincoherent images on various datasets. Mirza & Osindero (2014) enlarges GAN’s representationcapacity by introducing an extra vector to allow the generator to produce samples conditioned onother beneficial information. Motivated from this, several conditional variants of GAN has beenapplied to a wide range of tasks, including image prediction from a normal map Wang & Gupta(2016), image synthesis from text Reed et al. (2016) and edge map Isola et al. (2016), real-timeimage manipulation Zhu et al. (2016), temporal image generation Zhou & Berg (2016); Saito &Matsumoto (2016); V ondrick et al. (2016), texture synthesis, style transfer, and video stylization Li& Wand (2016).Researchers also aim at stretching GAN’s limit to generate higher-resolution, photo-realistic images.Denton et al. (2015) initially apply a Laplacian pyramid framework on GAN to generate images ofhigh resolution. At each level of their LAPGAN, both the generator and the discriminator are convo-lutional networks. As an alternative to LAPGAN, Radford et al. (2015) successfully designs a classof deep convolutional generative adversarial networks which has led to significant improvements onunsupervised image representation learning. Another line of work aimed at improving GANs arethrough feature learning, including features from the latent space and image space. 
The motivation isthat features from different spaces are complementary for generating perceptual and natural-lookingimages. With this perspective, some researchers use distances between learned features as losses fortraining objectives for generative models. Larsen et al. (2015) combine a variational autoencoderobjective with a GAN and utilize the learned features from the discriminator in the GANs for betterimage similarity metrics. It is shown that the learned distance from the discriminator is of greathelp for the sample visual fidelity. Recent literature have also shown impressive results on imagesuper-resolution to infer photo-realistic natural images for 4x upscaling factors Ledig et al. (2016);Sønderby et al. (2016); Nguyen et al. (2016).Despite these promising successes, GANs are notably hard to train. Although Radford et al. (2015)provide a class of empirical architectural choices that are critical to stabilize GAN’s training, itwould be even better to train GANs more robustly and systematically. Salimans et al. (2016) pro-pose feature matching technique to stabilize GAN’s training. The generator is required to match thestatistics of intermediate features of the discriminator. Similar idea is adopted by Zhao et al. (2016).2Published as a conference paper at ICLR 2017In addition to feature distances, Dosovitskiy & Brox (2016) found that the counterpart loss in imagespace further improves GAN’s training stability. Furthermore, some researchers make use of infor-mation in both spaces in a unified learning procedure (Dumoulin et al., 2016; Donahue et al., 2016).In Dumoulin et al. (2016), one trains not just a generator but also an encoder, and the discriminatoris trained to distinguish between two joint distributions over image and latent spaces produced eitherby the application of the encoder on the training data or by the application of the generator (decoder)to the latent prior. This is in contrast with the regular GAN training, in which the discriminator onlyattempts to separate the distributions in the image space. Parallelly, Metz et al. (2016) stabilizeGANs by unrolling the optimization of discriminator, which can be considered as an orthogonalwork with ours.Our work is related to V AEGAN (Larsen et al., 2015) in terms of training an autoencoder or V AEjointly with the GAN model. However, the variational autoencoder (V AE) in V AEGAN is used togenerate samples whereas our autoencoder based losses serves as a regularizer to penalize missingmodes and thus improving GAN’s training stability and sample qualities. We demonstrate detaileddifferences from various aspects in Appendix D.3 M ODE REGULARIZERS FOR GAN SThe GAN training procedure can be viewed as a non-cooperative two player game, in which thediscriminator Dtries to distinguish real and generated examples, while the generator Gtries to foolthe discriminator by pushing the generated samples towards the direction of higher discriminationvalues. Training the discriminator Dcan be viewed as training an evaluation metric on the samplespace. Then the generator Ghas to take advantage of the local gradient rlogD(G)provided by thediscriminator to improve itself, namely to move towards the data manifold.We now take a closer look at the root cause of the instabilities while training GANs. The discrim-inator is trained on both generated and real examples. As pointed out by Goodfellow et al. (2014);Denton et al. (2015); Radford et al. 
(2015), when the data manifold and the generation manifold aredisjoint (which is true in almost all practical situations), it is equivalent to training a characteristicfunction to be very close to 1 on the data manifold, and 0 on the generation manifold. In order topass good gradient information to the generator, it is important that the trained discriminator pro-duces stable and smooth gradients. However, since the discriminator objective does not directlydepend on the behavior of the discriminator in other parts of the space, training can easily fail if theshape of the discriminator function is not as expected. As an example,Denton et al. (2015) noteda common failure pattern for training GANs which is the vanishing gradient problem, in which thediscriminator Dperfectly classifies real and fake examples, such that around the fake examples, Dis nearly zero. In such cases, the generator will receive no gradient to improve itself.1Another important problem while training GANs is mode missing. In theory, if the generated dataand the real data come from the same low dimensional manifold, the discriminator can help thegenerator distribute its probability mass, because the missing modes will not have near-0 probabilityunder the generator and so the samples in these areas can be appropriately concentrated towardsregions where Dis closer to 1. However, in practice since the two manifolds are disjoint, Dtendsto be near 1 on all the real data samples, so large modes usually have a much higher chance ofattracting the gradient of discriminator. For a typical GAN model, since all modes have similar Dvalues, there is no reason why the generator cannot collapse to just a few major modes. In otherwords, since the discriminator’s output is nearly 0 and 1 on fake and real data respectively, thegenerator is not penalized for missing modes.3.1 G EOMETRIC METRICS REGULARIZERCompared with the objective for the GAN generator, the optimization targets for supervised learningare more stable from an optimization point of view. The difference is clear: the optimization targetfor the GAN generator is a learned discriminator. While in supervised models, the optimizationtargets are distance functions with nice geometric properties. The latter usually provides mucheasier training gradients than the former, especially at the early stages of training.1This problem exists even when we use logD(G(z))as target for the generator, as noted by Denton et al.(2015) and our experiments.3Published as a conference paper at ICLR 2017Inspired by this observation, we propose to incorporate a supervised training signal as a regularizeron top of the discriminator target. Assume the generator G(z) :Z!Xgenerates samples by sam-pling first from a fixed prior distribution in space Zfollowed by a deterministic trainable transforma-tionGinto the sample space X. Together with G, we also jointly train an encoder E(x) :X!Z.Assumedis some similarity metric in the data space, we add Expd[d(x;GE(x))]as a regularizer,wherepdis the data generating distribution. The encoder itself is trained by minimizing the samereconstruction error.In practice, there are many options for the distance measure d. For instance, the pixel-wise L2distance, or the distance of learned features by the discriminator (Dumoulin et al., 2016) or by othernetworks, such as a VGG classifier. (Ledig et al., 2016)The geometric intuition for this regularizer is straight-forward. We are trying to move the generatedmanifold to the real data manifold using gradient descent. 
In addition to the gradient provided by the discriminator, we can also try to match the two manifolds with other geometric distances, say, the L2 metric. The idea of adding an encoder is equivalent to first training a point-to-point mapping G(E(x)) between the two manifolds and then trying to minimize the expected distance between the points on these two manifolds.

3.2 MODE REGULARIZER

In addition to the metric regularizer, we propose a mode regularizer to further penalize missing modes. In traditional GANs, the generator is updated with the empirical sum of gradients Σ_i ∇ log D(G(z_i)). The missing modes problem is caused by the conjunction of two facts: (1) the areas near missing modes are rarely visited by the generator, by definition, thus providing very few examples to improve the generator around those areas, and (2) both missing modes and non-missing modes tend to correspond to a high value of D, because the generator is not perfect, so the discriminator can take strong decisions locally and obtain a high value of D even near non-missing modes.

Figure 2: Illustration of the missing modes problem.

As an example, consider the situation in Figure 2. For most z, the generator gradient ∇ log D(G(z)) pushes the generator towards the major mode M1. Only when G(z) is very close to the mode M2 can the generator get gradients that push it towards the minor mode M2. However, it is possible that such z has low or zero probability under the prior distribution p0.

Given this observation, consider a regularized GAN model with the metric regularizer. Assume M0 is a minor mode of the data generating distribution. For x ∈ M0, we know that if G∘E is a good autoencoder, G(E(x)) will be located very close to mode M0. Since there are sufficient training examples of mode M0 in the training data, we add the mode regularizer E_{x∼p_d}[log D(G∘E(x))] to the optimization target for the generator, to encourage G(E(x)) to move towards a nearby mode of the data generating distribution. In this way, we can achieve a fair probability mass distribution across different modes.

In short, the regularized optimization targets for the generator and the encoder become (both are trained by gradient ascent; see Appendix A):

    T_G = E_z[log D(G(z))] + E_{x∼p_d}[−λ1 d(x, G∘E(x)) + λ2 log D(G∘E(x))]    (1)
    T_E = E_{x∼p_d}[−λ1 d(x, G∘E(x)) + λ2 log D(G∘E(x))]    (2)
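To make Eq. (1)-(2) concrete, here is a minimal PyTorch sketch of one loss computation. Since Eq. (1)-(2) are written as targets to be maximized, the sketch returns their negations as losses for a standard gradient-descent optimizer. The module names G, E, and D are placeholders for the generator, encoder, and discriminator (the paper does not prescribe an implementation), D is assumed to output probabilities in (0, 1), and d is instantiated as the pixel-wise L2 distance, one of the choices mentioned in Section 3.1:

    import torch
    import torch.nn.functional as F

    def regularized_losses(G, E, D, x_real, z, lam1=0.2, lam2=0.4, eps=1e-8):
        """Return -T_G and -T_E from Eq. (1)-(2) as minimization losses."""
        x_rec = G(E(x_real))                       # G(E(x)): reconstruction of real data
        d_fake = D(G(z))                           # D on free samples G(z)
        d_rec = D(x_rec)                           # D on reconstructions G(E(x))
        recon = F.mse_loss(x_rec, x_real)          # d(x, G(E(x))) as pixel-wise L2
        gan_term = torch.log(d_fake + eps).mean()  # log D(G(z))
        mode_term = torch.log(d_rec + eps).mean()  # log D(G(E(x))), mode regularizer
        loss_e = lam1 * recon - lam2 * mode_term   # -T_E
        loss_g = -gan_term + loss_e                # -T_G
        return loss_g, loss_e

The default weights mirror the randomly selected λ1 = 0.2 and λ2 = 0.4 used for the MNIST grid search in Section 4.1.1.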
3.3 MANIFOLD-DIFFUSION TRAINING FOR REGULARIZED GANS

On some large-scale datasets, CelebA for example, the regularizers we have discussed do improve the diversity of generated samples, but the quality of the samples may not be as good without carefully tuning the hyperparameters. Here we propose a new algorithm for training metric-regularized GANs which is very stable and much easier to tune for producing good samples.

The proposed algorithm divides the training procedure of GANs into two steps: a manifold step and a diffusion step. In the manifold step, we try to match the generation manifold and the real data manifold with the help of an encoder and the geometric metric loss. In the diffusion step, we try to distribute the probability mass on the generation manifold fairly according to the real data distribution.

An example of manifold-diffusion training of a GAN (MDGAN for short) is as follows: we train a discriminator D1 which separates the samples x and G∘E(x), for x from the data, and we optimize G with respect to the regularized GAN loss E[−log D1(G∘E(x)) + d(x, G∘E(x))] in order to match the two manifolds. In the diffusion step we train a discriminator D2 between the distributions G(z) and G∘E(x), and we train G to maximize log D2(G(z)). Since these two distributions now lie nearly on the same low-dimensional manifold, the discriminator D2 provides much smoother and more stable gradients. The detailed training procedure is given in Appendix A. See Figure 6 for the quality of generated samples.

3.4 EVALUATION METRICS FOR MODE MISSING

In order to estimate both the missing modes and the sample quality in our experiments, we used several different metrics for different experiments instead of human annotators.

The Inception score (Salimans et al., 2016) was considered a good assessment of sample quality on a labelled dataset:

    exp(E_x KL(p(y|x) ‖ p(y)))    (3)

where x denotes one sample, p(y|x) is the softmax output of a trained classifier over the labels, and p(y) is the overall label distribution of the generated samples. The intuition behind this score is that a strong classifier usually has high confidence for good samples. However, the Inception score is sometimes not a good metric for our purpose: a generative model that collapses to a very bad image can still have a perfect Inception score, because p(y|x) can have a high entropy and p(y) can have a low entropy. So instead, for labelled datasets, we propose another assessment of both the visual quality and the variety of samples, the MODE score:

    exp(E_x KL(p(y|x) ‖ p*(y)) − KL(p*(y) ‖ p(y)))    (4)

where p*(y) is the distribution of labels in the training data. According to our human evaluation experience, the MODE score successfully measures two important aspects of generative models, i.e., variety and visual quality, in one metric.

However, on datasets without labels (LSUN) or where the labels are not sufficient to characterize every data mode (CelebA), the above metric does not work well. We instead train a third-party discriminator between the real data and the generated data from the model. It is similar to the GAN discriminator but is not used to train the generator. We can view the output of the discriminator as an estimator of the quantity (see (Goodfellow et al., 2014) for a proof):

    D(s) ≈ p_d(s) / (p_g(s) + p_d(s))    (5)

where p_g is the probability density of the generator and p_d is the density of the data distribution. To prevent D from learning a perfect 0-1 separation of p_g and p_d, we inject zero-mean Gaussian noise into the inputs when training D. After training, we test D on the test set T of the real dataset. If for a test sample t ∈ T the discrimination value D(t) is close to 1, we can conclude that the mode corresponding to t is missing. In this way, although we cannot measure exactly the number of modes that are missing, we have a good estimator of the total probability mass of all the missing modes.
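As a concrete illustration of Eq. (3)-(4), the following sketch computes the MODE score from the softmax outputs of a pre-trained classifier. The function name and array layout are our own choices, not the paper's; note that passing the marginal of the generated samples themselves as p_star makes the second KL term vanish and reduces the routine to the Inception score of Eq. (3):

    import numpy as np

    def mode_score(p_y_given_x, p_star, eps=1e-12):
        """p_y_given_x: (N, K) classifier softmax outputs on N generated samples;
        p_star: (K,) label distribution of the training data. Implements Eq. (4)."""
        p_y = p_y_given_x.mean(axis=0)  # label marginal of the generated samples
        # E_x KL(p(y|x) || p*(y))
        kl_cond = (p_y_given_x * (np.log(p_y_given_x + eps)
                                  - np.log(p_star + eps))).sum(axis=1).mean()
        # KL(p*(y) || p(y))
        kl_marg = (p_star * (np.log(p_star + eps) - np.log(p_y + eps))).sum()
        return float(np.exp(kl_cond - kl_marg))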
4 EXPERIMENTS

4.1 MNIST

We perform two classes of experiments on MNIST. For the MNIST dataset, we can assume that the data generating distribution is approximated by ten dominant modes, if we define the term "mode" here as a connected component of the data manifold.

4.1.1 GRID SEARCH FOR MNIST GAN MODELS

In order to systematically explore the effect of our proposed regularizers on GAN models, in terms of improving stability and sample quality, we run a large-scale grid search over GAN hyper-parameters on the MNIST dataset. The grid search is based on a pair of randomly selected loss weights: λ1 = 0.2 and λ2 = 0.4. We use the same hyper-parameter settings for both GAN and Regularized GAN, and list the search ranges in Table 1. Our grid search is similar to the one proposed in Zhao et al. (2016); please refer to it for detailed explanations of these hyper-parameters.

Table 1: Grid search ranges for the hyperparameters.

    nLayerG   [2, 3, 4]
    nLayerD   [2, 3, 4]
    sizeG     [400, 800, 1600, 3200]
    sizeD     [256, 512, 1024]
    dropoutD  [True, False]
    optimG    [SGD, Adam]
    optimD    [SGD, Adam]
    lr        [1e-2, 1e-3, 1e-4]

For evaluation, we first train a 4-layer CNN classifier on the MNIST digits and then apply it to compute the MODE scores for the generated samples from all these models. The resulting distribution of MODE scores is shown in Figure 3. Clearly, our proposed regularizer significantly improves the MODE scores, demonstrating its benefits for stabilizing GANs and improving sample quality.

Figure 3: The distributions of MODE scores for GAN and Regularized GAN.

To illustrate the effect of the regularizers with different coefficients, we randomly pick an architecture and train it with different values of λ1 = λ2. The results are shown in Figure 4.

Figure 4: (Left 1-5) Different hyperparameters for MNIST generation; the values of λ1 and λ2 in our Regularized GAN are listed below the corresponding samples. (Right 6-7) Best samples found through grid search for GAN and Regularized GAN.

4.1.2 COMPOSITIONAL MNIST DATA WITH 1000 MODES

In order to quantitatively study the effect of our regularizers on missing modes, we concatenate three MNIST digits into a number in [0, 999] in a single 64x64 image, and then train DCGAN as a baseline model on this 1000-mode dataset. The digits in each image are sampled with different probabilities, in order to test the model's capability to preserve small modes in generation. We again use a pre-trained MNIST classifier, instead of a human, to evaluate the models.

Performance on the compositional experiment is measured by two metrics. #Miss is the classifier-reported number of missing modes, i.e., the size of the set of numbers that the model never generates. KL is the KL divergence between the classifier-reported distribution of generated numbers and the distribution of numbers in the training data (as in the Inception score). The results are shown in Table 2. With the help of our proposed regularizer, both the number of missing modes and the KL divergence drop dramatically on all four sets of the compositional MNIST dataset, which again demonstrates the effectiveness of our regularizer in preventing the missing modes problem.

Table 2: Results for compositional MNIST with 1000 modes. The proposed regularization (Reg-DCGAN) substantially reduces both the number of missed modes and the KL divergence that measures the plausibility of the generated samples (as in the Inception score).

                   Set 1          Set 2          Set 3          Set 4
                #Miss   KL     #Miss   KL     #Miss   KL     #Miss   KL
    DCGAN       204.7   77.9   204.3   60.2   103.4   75.9    89.3   77.8
    Reg-DCGAN    32.1   62.3    71.5   58.9    42.7   68.4    31.6   67.8
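The two metrics in Table 2 are straightforward to compute once a classifier has assigned a mode (a number in [0, 999]) to each generated image. A minimal sketch with hypothetical argument names; the KL direction follows the text above (generated distribution against training distribution):

    import numpy as np

    def missing_modes_and_kl(pred_modes, train_mode_dist, n_modes=1000, eps=1e-12):
        """pred_modes: (N,) classifier-predicted mode ids for generated samples;
        train_mode_dist: (n_modes,) empirical mode distribution of the training set."""
        counts = np.bincount(pred_modes, minlength=n_modes).astype(float)
        n_miss = int((counts == 0).sum())   # modes the model never generates
        p_gen = counts / counts.sum()
        kl = float((p_gen * (np.log(p_gen + eps)
                             - np.log(train_mode_dist + eps))).sum())
        return n_miss, kl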
4.2 CELEBA

To test the effectiveness of our proposal on harder problems, we implement an encoder for the DCGAN algorithm and train our models with different hyper-parameters, together with the DCGAN baseline, on the CelebA dataset. We provide the detailed architecture of our regularized DCGAN in Appendix B.

4.2.1 MISSING MODES ESTIMATION ON CELEBA

We again employ a third-party discriminator trained with injected noise as a metric for missing-mode estimation. To implement this, we add noise to the input layer of the discriminator network. For each GAN model to be estimated, we independently train this noisy discriminator, as a mode estimator, with the same architecture and hyper-parameters, on the generated data and the training data. We then apply the mode estimator to the test data. The images which receive high mode-estimator outputs can be viewed as lying on missing modes.

The comparison results are shown in Table 3. Both our proposed Regularized-GAN and MDGAN outperform the baseline DCGAN models in all settings. In particular, MDGAN surpasses all the other models, showing its superiority at mode preservation. We also find that, although it shares the same architecture, the DCGAN with 200-dimensional input noise performs considerably worse than the one with 100-dimensional noise. Our regularized GAN, by contrast, performs consistently across both settings.

Table 3: Number of images on missing modes on CelebA, estimated by a third-party discriminator. The numbers in brackets indicate the dimension of the prior z. σ denotes the standard deviation of the Gaussian noise added at the input of the discriminator to regularize it. MDGAN achieves a very large reduction in the number of missing modes, in comparison to the other methods.

     σ     DCGAN (100)   DCGAN (200)   Reg-GAN (100)   Reg-GAN (200)   MDGAN (200)
    3.5       5463          17089           754             3644            74
    4.0        590          15832            42              391            13

To better understand when and where these models miss modes, it is instructive to visualize the test images associated with the missed modes. In Figure 5, the left three images are missed by all models. The cap in the second image and the type of background in the third are rarely seen in the training data, so they can be viewed as small modes in this setting; these three images should be considered the hardest test data for a GAN to learn. Nonetheless, our best model, MDGAN, still captures certain small modes. The seven images on the right in Figure 5 are missed only by DCGAN. Side faces, pale faces, dark tones, and berets are the distinctive attributes among these images, and our proposed MDGAN performs well on all of them.

Figure 5: Test set images that lie on missing modes. Left: missed by both MDGAN and DCGAN. Right: missed only by DCGAN.

4.2.2 QUALITATIVE EVALUATION OF GENERATED SAMPLES

After the quantitative evaluation, we manually examine the samples generated by our regularized GAN to check whether the proposed regularizer has side effects on sample quality. We compare our model with ALI (Dumoulin et al., 2016), VAEGAN (Larsen et al., 2015), and DCGAN (Radford et al., 2015) in terms of sample visual quality and mode diversity. Samples generated from these models are shown in Figure 6.

Figure 6: Samples generated from different generative models. For each compared model, we directly take ten decent samples reported in their corresponding papers and code repositories. Note how the MDGAN samples are both globally more coherent and locally sharper in texture.

Both MDGAN and Regularized-GAN generate clear and natural-looking face images. Although ALI's samples are plausible, they are slightly deformed in comparison with those from MDGAN. The samples from VAEGAN and DCGAN seem globally less coherent and locally less sharp.

As to sample quality, it is worth noting that the samples from MDGAN suffer fewer distortions: with the four other models, the majority of generated samples exhibit some sort of distortion, whereas for the samples generated by MDGAN the level of distortion is lower. We attribute this to the autoencoder-based regularizer altering the generation manifold. In this way, the generator is able to learn fine-grained details such as face edges.
As a result, MDGAN is able to reduce distortions.2For fair comparison, we also recommend readers to refer to the original papers Dumoulin et al. (2016);Larsen et al. (2015); Radford et al. (2015) for the reported samples of the compared. The ALI sam-ples are from https://github.com/IshmaelBelghazi/ALI/blob/master/paper/celeba_samples.png and we reverted them to the original 64x64 size. The DCGAN samples are from https://github.com/Newmu/dcgan_code/8Published as a conference paper at ICLR 2017Figure 7: Sideface samples generated by Regularized-GAN and MDGAN.In terms of missing modes problem, we instructed five individuals to conduct human evaluation onthe generated samples. They achieve consensus that MDGAN wins in terms of mode diversities.Two people pointed out that MDGAN generates a larger amount of samples with side faces thanother models. We select several of these side face samples in Figure 7. Clearly, our samples maintainacceptable visual fidelity meanwhile share diverse modes. Combined with the above quantitativeresults, it is convincing that our regularizers bring benefits for both training stability and modevariety without the loss of sample quality.5 C ONCLUSIONSAlthough GANs achieve state-of-the-art results on a large variety of unsupervised learning tasks,training them is considered highly unstable, very difficult and sensitive to hyper-parameters, all thewhile, missing modes from the data distribution or even collapsing large amounts of probabilitymass on some modes. Successful GAN training usually requires large amounts of human and com-puting efforts to fine tune the hyper-parameters, in order to stabilize training and avoid collapsing.Researchers usually rely on their own experience and published tricks and hyper-parameters insteadof systematic methods for training GANs.We provide systematic ways to measure and avoid the missing modes problem and stabilize trainingwith the proposed autoencoder-based regularizers. The key idea is that some geometric metrics canprovide more stable gradients than trained discriminators, and when combined with the encoder,they can be used as regularizers for training. These regularizers can also penalize missing modesand encourage a fair distribution of probability mass on the generation manifold.ACKNOWLEDGEMENTSWe thank Naiyan Wang, Jianbo Ye, Yuchen Ding, Saboya Yang for their GPU support. We also wantto thank Huiling Zhen for helpful discussions, Junbo Zhao for providing the details of grid searchexperiments on the EBGAN model, as well as Anders Boesen Lindbo Larsen for kindly helping uson running V AEGAN experiments. We appreciate for the valuable suggestions and comments fromthe anonymous reviewers. The work described in this paper was partially supported by NSERC,Calcul Quebec, Compute Canada, the Canada Research Chairs, CIFAR, National Natural ScienceFoundation of China (61672445 and 61272291), Research Grants Council of Hong Kong (PolyU152094/14E), and The Hong Kong Polytechnic University (G-YBP6).REFERENCESEmily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using alaplacian pyramid of adversarial networks. In Advances in neural information processing systems ,pp. 1486–1494, 2015.Jeff Donahue, Philipp Kr ̈ahenb ̈uhl, and Trevor Darrell. Adversarial feature learning. arXiv preprintarXiv:1605.09782 , 2016.Alexey Dosovitskiy and Thomas Brox. Generating images with perceptual similarity metrics basedon deep networks. 
arXiv preprint arXiv:1602.02644 , 2016.Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropi-etro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704 ,2016.9Published as a conference paper at ICLR 2017Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair,Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Infor-mation Processing Systems , pp. 2672–2680, 2014.Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation withconditional adversarial networks. arxiv , 2016.Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, Hugo Larochelle, and Ole Winther. Autoen-coding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300 , 2015.Christian Ledig, Lucas Theis, Ferenc Husz ́ar, Jose Caballero, Andrew Aitken, Alykhan Tejani, Jo-hannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic single image super-resolution using agenerative adversarial network. arXiv preprint arXiv:1609.04802 , 2016.Chuan Li and Michael Wand. Precomputed real-time texture synthesis with markovian generativeadversarial networks. arXiv preprint arXiv:1604.04382 , 2016.Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyondmean square error. arXiv preprint arXiv:1511.05440 , 2015.Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarialnetworks. arXiv preprint arXiv:1611.02163 , 2016.Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprintarXiv:1411.1784 , 2014.Anh Nguyen, Jason Yosinski, Yoshua Bengio, Alexey Dosovitskiy, and Jeff Clune. Plug & playgenerative networks: Conditional iterative generation of images in latent space. arXiv preprintarXiv:1612.00005 , 2016.Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deepconvolutional generative adversarial networks. arXiv preprint arXiv:1511.06434 , 2015.Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee.Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396 , 2016.Masaki Saito and Eiichi Matsumoto. Temporal generative adversarial nets. arXiv preprintarXiv:1611.06624 , 2016.Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.Improved techniques for training gans. arXiv preprint arXiv:1606.03498 , 2016.Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Husz ́ar. Amortisedmap inference for image super-resolution. arXiv preprint arXiv:1610.04490 , 2016.Carl V ondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics.InAdvances In Neural Information Processing Systems , pp. 613–621, 2016.Xiaolong Wang and Abhinav Gupta. Generative image modeling using style and structure adversar-ial networks. In ECCV , 2016.Jiajun Wu, Chengkai Zhang, Tianfan Xue, William T Freeman, and Joshua B Tenenbaum. Learninga probabilistic latent space of object shapes via 3d generative-adversarial modeling. In NeuralInformation Processing Systems (NIPS) , 2016.Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network.arXiv preprint arXiv:1609.03126 , 2016.Yipin Zhou and Tamara L Berg. Learning temporal transformations from time-lapse videos. InEuropean Conference on Computer Vision , pp. 262–277. Springer, 2016.Jun-Yan Zhu, Philipp Kr ̈ahenb ̈uhl, Eli Shechtman, and Alexei A. Efros. 
Generative visual manipulation on the natural image manifold. In Proceedings of the European Conference on Computer Vision (ECCV), 2016.

A APPENDIX: PSEUDO-CODE FOR MDGAN

In this appendix, we give the detailed training procedure of the MDGAN example discussed in Section 3.3.

Manifold step:
1. Sample {x1, x2, ..., xm} from the data generating distribution p_data(x).
2. Update discriminator D1 using SGD with gradient ascent:
       ∇_{θd1} (1/m) Σ_{i=1..m} [log D1(xi) + log(1 − D1(G(E(xi))))]
3. Update generator G using SGD with gradient ascent:
       ∇_{θg} (1/m) Σ_{i=1..m} [log D1(G(E(xi))) − ‖xi − G(E(xi))‖²]

Diffusion step:
4. Sample {x1, x2, ..., xm} from the data generating distribution p_data(x).
5. Sample {z1, z2, ..., zm} from the prior distribution p(z).
6. Update discriminator D2 using SGD with gradient ascent:
       ∇_{θd2} (1/m) Σ_{i=1..m} [log D2(G(E(xi))) + log(1 − D2(G(zi)))]
7. Update generator G using SGD with gradient ascent:
       ∇_{θg} (1/m) Σ_{i=1..m} [log D2(G(zi))]

Figure 8: The detailed training procedure of an MDGAN example.

B APPENDIX: ARCHITECTURES FOR THE EXPERIMENTS

We use similar architectures for the compositional MNIST and CelebA experiments, based on the architecture of DCGAN (Radford et al., 2015). The discriminator and generator are the same as in DCGAN; in addition, we add an encoder which is the "inverse" of the generator, obtained by reversing the order of its layers and replacing the de-convolutional layers with convolutional layers.

One has to pay particular attention to the batch normalization layers. In DCGAN, there are batch normalization layers in both the generator and the discriminator. However, two classes of data go through the batch normalization layers in the generator: one comes from the sampled noise z, the other from the encoder. In our implementation, we keep separate batch statistics for these two classes of data in the generator, while sharing the parameters of the BN layers. In this way, the batch statistics of the two kinds of batches cannot interfere with each other.

C APPENDIX: ADDITIONAL EXPERIMENTS ON SYNTHESIZED DATA

To demonstrate the effectiveness of the mode-regularized GANs proposed in this paper, we train a very simple GAN architecture on a synthesized 2D dataset, following Metz et al. (2016).

The data is sampled from a mixture of 6 Gaussians with standard deviation 0.1, whose means are placed on a circle of radius 5. The generator network has two ReLU hidden layers with 128 neurons each, and maps 3D uniform noise in [0, 1] to 2D output samples. The discriminator consists of a single fully connected layer of ReLU neurons, mapping the 2D input to a real number. Both networks are optimized with the Adam optimizer with a learning rate of 1e-4.

In the regularized version, we choose λ1 = λ2 = 0.005. The comparison between the generator distributions of the standard GAN and our proposed regularized GAN is shown in Figure 9.

Figure 9: Comparison results on a toy 2D mixture-of-Gaussians dataset. The columns on the left show heatmaps of the generator distributions as the number of training epochs increases, whereas the rightmost column presents the target, i.e., the original data distribution. The top row shows the standard GAN result: the generator has a hard time oscillating among the modes of the data distribution and is only able to "recover" a single data mode at a time. In contrast, the bottom row shows results of our regularized GAN: its generator quickly captures the underlying multiple modes and fits the target distribution.
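For readers who wish to reproduce the toy setting of this appendix, here is a minimal sampler for the target distribution described above (six Gaussians with standard deviation 0.1, means on a circle of radius 5); the function name and seeding are our own:

    import numpy as np

    def sample_mog_ring(n, n_modes=6, radius=5.0, std=0.1, seed=0):
        """Draw n points from the 2D ring of Gaussians used in Appendix C."""
        rng = np.random.default_rng(seed)
        k = rng.integers(0, n_modes, size=n)   # uniformly chosen mode index
        theta = 2.0 * np.pi * k / n_modes      # mode centers on the circle
        centers = radius * np.stack([np.cos(theta), np.sin(theta)], axis=1)
        return centers + std * rng.standard_normal((n, 2))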
D APPENDIX: COMPARISON WITH VAEGAN

In this appendix, we demonstrate the effectiveness and uniqueness of the mode-regularized GANs proposed in this paper, as compared to Larsen et al. (2015), in terms of the theoretical difference, sample quality, and the number of missing modes.

With regard to the theoretical difference: the optimization of VAEGAN relies on the probabilistic variational bound, namely

    log p(x) ≥ E_{q(z|x)}[log p(x|z)] − KL(q(z|x) ‖ p(z)).

This variational bound, together with a GAN loss, is optimized under several assumptions imposed in VAEGAN:

1. In general, the VAE is based on the assumption that the true posterior p(z|x) can be well approximated by a factorized Gaussian distribution q.
2. As to VAEGAN, it is also assumed that the maximum likelihood objective does not conflict with the GAN objective in terms of the probabilistic framework.

The first assumption does not necessarily hold for GANs. We have found that in some trained DCGAN models, the real posterior p(z|x) is not even guaranteed to have only one mode, let alone to be anything close to a factorized Gaussian. We believe that this difference in probabilistic framework is an essential obstacle when one tries to use the VAEGAN objective as a regularizer.

In our algorithm, by contrast, we use a plain autoencoder instead of a VAE as the objective. Plain autoencoders work better than VAEs for our purposes because, as long as the model G(z) is able to generate the training samples, there always exists a function E*(x) such that G(E*(x)) = x. Our encoder can therefore be viewed as being trained to approximate this real encoder E*. There is no conflict between a good GAN generator and our regularization objective; hence, our objectives can be used as regularizers that encode the prior knowledge that good models should be able to generate the training samples. This is why our work is essentially different from VAEGAN. We also believe that this is the reason why, in our experiments, VAEGAN generates worse samples than a carefully tuned regularized GAN.

In terms of sample quality and missing modes, we run the official code of VAEGAN³ with its default settings. We train VAEGAN for 30 epochs⁴ and our models for only 20 epochs. For fairness, their model was run 3 times, and the trained model with the best sample visual quality was taken for the comparison.

³ https://github.com/andersbll/autoencoding_beyond_pixels
⁴ Note that we also trained a 20-epoch version of VAEGAN; however, the samples seemed worse.

The generated samples are shown in Figure 10. The most obvious difference between our samples and VAEGAN's is the face distortion, which is consistent with our experimental results in Section 4.2.2. We conjecture that the distortions in VAEGAN's samples are due to the conflict between the two objectives, as discussed above. In other words, the way we introduce autoencoders as regularizers for GAN models differs from VAEGAN's in that the second assumption mentioned above is not required in our approach. In our framework, the autoencoder helps alter the generation manifold, leading to fewer distortions in the fine-grained details of our generated samples.

Figure 10: Samples generated by our models and VAEGAN. The third row shows samples generated by our self-trained VAEGAN model with default settings; the last row shows the generated samples reported in the original VAEGAN paper. We depict both of them here for a fair comparison.
In terms of the missing modes problem, we use the same method as described in Section 4.2.1 to compute the number of images on missing modes. The results are shown in Table 4.

Table 4: Number of images on missing modes on CelebA, estimated by a third-party discriminator. The numbers in brackets indicate the dimension of the prior z. σ denotes the standard deviation of the Gaussian noise added at the input of the discriminator to regularize it. MDGAN achieves a very large reduction in the number of missing modes, in comparison to VAEGAN.

     σ     VAEGAN (100)   Reg-GAN (100)   Reg-GAN (200)   MDGAN (200)
    3.5        9720            754             3644             74
    4.0        5862             42              391             13

We see that using our proposed regularizers results in a huge drop in the number of missing modes. We conjecture that the reason VAEGAN performs so poorly on our missing-modes metric is that its generated samples are of low quality, so the discriminator classifies them as "not on mode": the generated data is simply too far away from many real data modes. Essentially, if a model generates very bad samples, we can say that it misses all or most modes.

To conduct a fairer evaluation of VAEGAN against our methods, we also performed a blind human evaluation. Again we instructed five individuals to evaluate sample variability. Without being told which samples were generated by VAEGAN and which by our methods, four of them agreed that our method wins in terms of sample diversity; one judged the samples to be equally diverse.

In conclusion, our proposed mode-regularized GANs, i.e., Reg-GAN and MDGAN, differ from VAEGAN theoretically, as discussed above. These differences empirically result in better sample quality and mode-preserving ability, which are our main contributions.
Hkg4TI9xl
Published as a conference paper at ICLR 2017A B ASELINE FOR DETECTING MISCLASSIFIED ANDOUT-OF-DISTRIBUTION EXAMPLESINNEURAL NETWORKSDan HendrycksUniversity of California, Berkeleyhendrycks@berkeley.eduKevin GimpelToyota Technological Institute at Chicagokgimpel@ttic.eduABSTRACTWe consider the two related problems of detecting if an example is misclassified orout-of-distribution. We present a simple baseline that utilizes probabilities fromsoftmax distributions. Correctly classified examples tend to have greater maxi-mum softmax probabilities than erroneously classified and out-of-distribution ex-amples, allowing for their detection. We assess performance by defining sev-eral tasks in computer vision, natural language processing, and automatic speechrecognition, showing the effectiveness of this baseline across all. We then showthe baseline can sometimes be surpassed, demonstrating the room for future re-search on these underexplored detection tasks.1 I NTRODUCTIONWhen machine learning classifiers are employed in real-world tasks, they tend to fail when thetraining and test distributions differ. Worse, these classifiers often fail silently by providing high-confidence predictions while being woefully incorrect (Goodfellow et al., 2015; Amodei et al.,2016). Classifiers failing to indicate when they are likely mistaken can limit their adoption orcause serious accidents. For example, a medical diagnosis model may consistently classify withhigh confidence, even while it should flag difficult examples for human intervention. The resultingunflagged, erroneous diagnoses could blockade future machine learning technologies in medicine.More generally and importantly, estimating when a model is in error is of great concern to AI Safety(Amodei et al., 2016).These high-confidence predictions are frequently produced by softmaxes because softmax probabil-ities are computed with the fast-growing exponential function. Thus minor additions to the softmaxinputs, i.e. the logits, can lead to substantial changes in the output distribution. Since the soft-max function is a smooth approximation of an indicator function, it is uncommon to see a uniformdistribution outputted for out-of-distribution examples. Indeed, random Gaussian noise fed into anMNIST image classifier gives a “prediction confidence” or predicted class probability of 91%, as weshow later. Throughout our experiments we establish that the prediction probability from a softmaxdistribution has a poor direct correspondence to confidence. This is consistent with a great deal ofanecdotal evidence from researchers (Nguyen & O’Connor, 2015; Yu et al., 2010; Provost et al.,1998; Nguyen et al., 2015).However, in this work we also show the prediction probability of incorrect and out-of-distributionexamples tends to be lower than the prediction probability for correct examples. Therefore, cap-turing prediction probability statistics about correct or in-sample examples is often sufficient fordetecting whether an example is in error or abnormal, even though the prediction probability viewedin isolation can be misleading.These prediction probabilities form our detection baseline, and we demonstrate its efficacy throughvarious computer vision, natural language processing, and automatic speech recognition tasks.While these prediction probabilities create a consistently useful baseline, at times they are less ef-fective, revealing room for improvement. To give ideas for future detection research, we contributeWork done while the author was at TTIC. 
Code is available at github.com/hendrycks/error-detection1arXiv:1610.02136v3 [cs.NE] 3 Oct 2018Published as a conference paper at ICLR 2017one method which outperforms the baseline on some (but not all) tasks. This new method evaluatesthe quality of a neural network’s input reconstruction to determine if an example is abnormal.In addition to the baseline methods, another contribution of this work is the designation of standardtasks and evaluation metrics for assessing the automatic detection of errors and out-of-distributionexamples. We use a large number of well-studied tasks across three research areas, using standardneural network architectures that perform well on them. For out-of-distribution detection, we pro-vide ways to supply the out-of-distribution examples at test time like using images from differentdatasets and realistically distorting inputs. We hope that other researchers will pursue these tasks infuture work and surpass the performance of our baselines.In summary, while softmax classifier probabilities are not directly useful as confidence estimates,estimating model confidence is not as bleak as previously believed. Simple statistics derived fromsoftmax distributions provide a surprisingly effective way to determine whether an example is mis-classified or from a different distribution from the training data, as demonstrated by our experimentalresults spanning computer vision, natural language processing, and speech recognition tasks. Thiscreates a strong baseline for detecting errors and out-of-distribution examples which we hope futureresearch surpasses.2 P ROBLEM FORMULATION AND EVALUATIONIn this paper, we are interested in two related problems. The first is error and success prediction :can we predict whether a trained classifier will make an error on a particular held-out test example;can we predict if it will correctly classify said example? The second is in- and out-of-distributiondetection : can we predict whether a test example is from a different distribution from the trainingdata; can we predict if it is from within the same distribution?1Below we present a simple baselinefor solving these two problems. To evaluate our solution, we use two evaluation metrics.Before mentioning the two evaluation metrics, we first note that comparing detectors is not asstraightforward as using accuracy. For detection we have two classes, and the detector outputs ascore for both the positive and negative class. If the negative class is far more likely than the positiveclass, a model may always guess the negative class and obtain high accuracy, which can be mislead-ing (Provost et al., 1998). We must then specify a score threshold so that some positive examplesare classified correctly, but this depends upon the trade-off between false negatives (fn) and falsepositives (fp).Faced with this issue, we employ the Area Under the Receiver Operating Characteristic curve (AU-ROC) metric, which is a threshold-independent performance evaluation (Davis & Goadrich, 2006).The ROC curve is a graph showing the true positive rate (tpr =tp=(tp+fn)) and the false positiverate (fpr =fp=(fp+tn)) against each other. Moreover, the AUROC can be interpreted as the prob-ability that a positive example has a greater detector score/value than a negative example (Fawcett,2005). 
Consequently, a random positive example detector corresponds to a 50% AUROC, and a "perfect" classifier corresponds to 100%.²

The AUROC sidesteps the issue of threshold selection, as does the Area Under the Precision-Recall curve (AUPR), which is sometimes deemed more informative (Manning & Schütze, 1999). This is because the AUROC is not ideal when the positive class and the negative class have greatly differing base rates, and the AUPR adjusts for these different positive and negative base rates. For this reason, the AUPR is our second evaluation metric. The PR curve plots the precision (tp/(tp+fp)) and the recall (tp/(tp+fn)) against each other. The baseline detector has an AUPR approximately equal to the precision (Saito & Rehmsmeier, 2015), and a "perfect" classifier has an AUPR of 100%. Consequently, the base rate of the positive class greatly influences the AUPR, so for detection we must specify which class is positive. In view of this, we show the AUPRs when we treat the success/normal classes as positive, and then we show the areas when we treat the error/abnormal classes as positive. We can treat the error/abnormal classes as positive by multiplying the scores by −1 and labeling them positive. Note that treating the error/abnormal classes as positive does not change the AUROC, since if S is a score for a successfully classified value and E is the score for an erroneously classified value, AUROC = P(S > E) = P(−E > −S).

¹ We consider adversarial example detection techniques in a separate work (Hendrycks & Gimpel, 2016a).
² A debatable, imprecise interpretation of AUROC values may be as follows: 90%-100%: Excellent; 80%-90%: Good; 70%-80%: Fair; 60%-70%: Poor; 50%-60%: Fail.

We begin our experiments in Section 3, where we describe a simple baseline which uses the maximum probability from the softmax label distribution in neural network classifiers. Then in Section 4 we describe a method that uses an additional, auxiliary model component trained to reconstruct the input.

3 SOFTMAX PREDICTION PROBABILITY AS A BASELINE

In what follows we retrieve the maximum/predicted class probability from a softmax distribution and thereby detect whether an example is erroneously classified or out-of-distribution. Specifically, we separate correctly and incorrectly classified test set examples and, for each example, compute the softmax probability of the predicted class, i.e., the maximum softmax probability.³ From these two groups we obtain the areas under the PR and ROC curves. These areas summarize the performance of a binary classifier discriminating with values/scores (in this case, maximum probabilities from the softmaxes) across different thresholds. This description treats correctly classified examples as the positive class, denoted "Success" or "Succ" in our tables. For "Error" or "Err" we treat the incorrectly classified examples as the positive class; to do this we label incorrectly classified examples as positive and take the negatives of the softmax probabilities of the predicted classes as the scores.

For "In," we treat the in-distribution, correctly classified test set examples as positive and use the softmax probability of the predicted class as a score, while for "Out" we treat the out-of-distribution examples as positive and use the negative of the aforementioned probability. Since the AUPRs for the Success, Error, In, and Out classifiers depend on the rate of positive examples, we list the area a random detector would achieve under "Base" values.
Also in the upcoming results we list the mean predicted class probability of wrongly classified examples (Pred. Prob Wrong (mean)) to demonstrate that the softmax prediction probability is a misleading confidence proxy when viewed in isolation. The "Pred. Prob (mean)" columns show this same shortcoming, but for out-of-distribution examples.

Table labels aside, we begin experimentation with datasets from vision, then consider tasks in natural language processing and automatic speech recognition. In all of the following experiments, the AUROCs differ from the random baselines with high statistical significance according to the Wilcoxon rank-sum test.

3.1 COMPUTER VISION

In the following computer vision tasks, we use three datasets: MNIST, CIFAR-10, and CIFAR-100 (Krizhevsky, 2009). MNIST is a dataset of handwritten digits, consisting of 60000 training and 10000 testing examples. Meanwhile, CIFAR-10 has colored images belonging to 10 different classes, with 50000 training and 10000 testing examples. CIFAR-100 is more difficult, as it has 100 different classes with 50000 training and 10000 testing examples.

In Table 1, we see that correctly classified and incorrectly classified examples are sufficiently distinct to allow reliable discrimination. Note that the areas under the curves degrade with image-recognizer test error.

Next, let us consider using softmax distributions to determine whether an example is in- or out-of-distribution. We use all test set examples as the in-distribution (positive) examples. For out-of-distribution (negative) examples, we use realistic images and noise. For CIFAR-10 and CIFAR-100, we use realistic images from the Scene UNderstanding dataset (SUN), which consists of 397 different scenes (Xiao et al., 2010). For MNIST, we use grayscale realistic images from three sources. Omniglot (Lake et al., 2015) images are handwritten characters rather than the handwritten digits of MNIST. Next, notMNIST (Bulatov, 2011) consists of typeface characters. Last of the realistic images, CIFAR-10bw are black-and-white rescaled CIFAR-10 images. The synthetic "Gaussian" data is random normal noise, and the "Uniform" data is random uniform noise. Images are resized when necessary.

³ We also tried using the KL divergence of the softmax distribution from the uniform distribution for detection. With divergence values, detector AUROCs and AUPRs were highly correlated with the AUROCs and AUPRs from a detector using the maximum softmax probability. This divergence is similar to entropy (Steinhardt & Liang, 2016; Williams & Renals, 1997).

Table 1: The softmax predicted class probability allows for discrimination between correctly and incorrectly classified test set examples. "Pred. Prob Wrong (mean)" is the mean softmax probability for wrongly classified examples, showcasing its shortcoming as a direct measure of confidence. Succ/Err Base values are the AUROCs or AUPRs achieved by random classifiers. All entries are percentages.

    Dataset      AUROC/Base   AUPR Succ/Base   AUPR Err/Base   Pred. Prob Wrong (mean)   Test Set Error
    MNIST          97/50          100/98           48/1.7               86                    1.69
    CIFAR-10       93/50          100/95           43/5                 80                    4.96
    CIFAR-100      87/50           96/79           62/21                66                   20.7
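The error-detection numbers in Table 1 can be computed from a trained classifier's softmax outputs alone. A minimal sketch using scikit-learn (our choice of library, not the paper's; average precision is a standard estimator of the area under the PR curve, so values may differ slightly from direct curve integration):

    import numpy as np
    from sklearn.metrics import roc_auc_score, average_precision_score

    def msp_error_detection(softmax_probs, labels):
        """softmax_probs: (N, K) test-set softmax outputs; labels: (N,) ground truth."""
        preds = softmax_probs.argmax(axis=1)
        msp = softmax_probs.max(axis=1)      # maximum softmax probability
        correct = (preds == labels).astype(int)
        auroc = roc_auc_score(correct, msp)  # success class as positive
        aupr_succ = average_precision_score(correct, msp)
        # error class as positive: flip the labels and negate the scores
        aupr_err = average_precision_score(1 - correct, -msp)
        return auroc, aupr_succ, aupr_err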
Table 2: Distinguishing in- and out-of-distribution test set data for image classification. CIFAR-10/All is the same as CIFAR-10/(SUN, Gaussian). All values are percentages.

    In-Distribution / Out-of-Distribution   AUROC/Base   AUPR In/Base   AUPR Out/Base   Pred. Prob (mean)
    CIFAR-10/SUN                               95/50         89/33          97/67              72
    CIFAR-10/Gaussian                          97/50         98/49          95/51              77
    CIFAR-10/All                               96/50         88/24          98/76              74
    CIFAR-100/SUN                              91/50         83/27          96/73              56
    CIFAR-100/Gaussian                         88/50         92/43          80/57              77
    CIFAR-100/All                              90/50         81/21          96/79              63
    MNIST/Omniglot                             96/50         97/52          96/48              86
    MNIST/notMNIST                             85/50         86/50          88/50              92
    MNIST/CIFAR-10bw                           95/50         95/50          95/50              87
    MNIST/Gaussian                             90/50         90/50          91/50              91
    MNIST/Uniform                              99/50         99/50          98/50              83
    MNIST/All                                  91/50         76/20          98/80              89

The results are shown in Table 2. Notice that the mean predicted/maximum class probabilities (Pred. Prob (mean)) are above 75%, whereas, if the prediction probability alone translated into confidence, the softmax distribution should be far more uniform for CIFAR-100. This again shows that softmax probabilities should not be viewed as a direct representation of confidence. Fortunately, out-of-distribution examples differ sufficiently from in-distribution examples in their prediction probabilities, allowing for successful detection and generally high areas under the PR and ROC curves.

For reproducibility, let us specify the model architectures. The MNIST classifier is a three-layer, 256-neuron-wide, fully connected network trained for 30 epochs with Adam (Kingma & Ba, 2015). It uses a GELU nonlinearity (Hendrycks & Gimpel, 2016b), xΦ(x), where Φ(x) is the CDF of the standard normal distribution. We initialize the weights according to (Hendrycks & Gimpel, 2016c), as it is suited to arbitrary nonlinearities. For CIFAR-10 and CIFAR-100, we train a 40-4 wide residual network (Zagoruyko & Komodakis, 2016) for 50 epochs with stochastic gradient descent using restarts (Loshchilov & Hutter, 2016), the GELU nonlinearity, and standard mirroring and cropping data augmentation.

3.2 NATURAL LANGUAGE PROCESSING

Let us turn to a variety of tasks and architectures used in natural language processing.

3.2.1 SENTIMENT CLASSIFICATION

The first NLP task is binary sentiment classification using the IMDB dataset (Maas et al., 2011), a dataset of polarized movie reviews with 25000 training and 25000 test reviews. This task allows us to determine whether classifiers trained on a relatively small dataset still produce informative softmax distributions. For this task we use a linear classifier taking as input the average of trainable, randomly initialized word vectors of dimension 50 (Joulin et al., 2016; Iyyer et al., 2015). We train for 15 epochs with Adam and early stopping based upon 5000 held-out training reviews.
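The in- and out-of-distribution areas reported in Table 2 (and in the analogous tables below) follow the same recipe, applied to detector scores from the two example groups. A minimal sketch, mirroring the "In"/"Out" conventions of Section 3 (the helper name is ours):

    import numpy as np
    from sklearn.metrics import roc_auc_score, average_precision_score

    def msp_ood_detection(msp_in, msp_out):
        """msp_in / msp_out: maximum softmax probabilities on in-distribution
        test examples and on out-of-distribution examples, respectively."""
        scores = np.concatenate([msp_in, msp_out])
        is_in = np.concatenate([np.ones(len(msp_in)), np.zeros(len(msp_out))])
        auroc = roc_auc_score(is_in, scores)             # "In" as the positive class
        aupr_in = average_precision_score(is_in, scores)
        aupr_out = average_precision_score(1 - is_in, -scores)  # "Out" as positive
        return auroc, aupr_in, aupr_out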
Again, Table 3 shows that the softmax distributions differ between correctly and incorrectly classified examples, so prediction probabilities allow us to reliably detect which examples are classified right and wrong.

Now we use the Customer Review (Hu & Liu, 2004) and Movie Review (Pang et al., 2002) datasets as out-of-distribution examples. The Customer Review dataset has reviews of products rather than only movies, and the Movie Review dataset has snippets from professional movie reviewers rather than full-length amateur reviews. We leave all test set examples from IMDB as in-distribution examples, and out-of-distribution examples are the 500 or 1000 test reviews from the Customer Review and Movie Review datasets, respectively. Table 4 displays detection results, showing a similar story to Table 2.

3.2.2 TEXT CATEGORIZATION

We turn to text categorization tasks to determine whether softmax distributions are useful for detecting similar but out-of-distribution examples. In the following text categorization tasks, we train classifiers to predict the subject of the text they are processing. In the 20 Newsgroups dataset (Lang, 1995), there are 20 different newsgroup subjects with a total of 20000 documents for the whole dataset. The Reuters 8 (Lewis et al., 2004) dataset has eight different news subjects with nearly 8000 stories in total. The Reuters 52 dataset has 52 news subjects with slightly over 9000 news stories; this dataset can have as few as three stories for a single subject.

For the 20 Newsgroups dataset we train a linear classifier on 30-dimensional word vectors for 20 epochs. Meanwhile, Reuters 8 and Reuters 52 use one-layer neural networks with a bag-of-words input and a GELU nonlinearity, all optimized with Adam for 5 epochs. We train on a subset of subjects, leaving out 5 newsgroup subjects from 20 Newsgroups, 2 news subjects from Reuters 8, and 12 news subjects from Reuters 52, leaving the rest as out-of-distribution examples. Table 5 shows that with these datasets and architectures, we can detect errors dependably, and Table 6 informs us that the softmax prediction probabilities allow for detecting out-of-distribution subjects.

Dataset         AUROC/Base   AUPR Succ/Base   AUPR Err/Base   Pred. Prob Wrong (mean)   Test Set Error
15 Newsgroups   89/50        99/93            42/7.3          53                        7.31
Reuters 6       89/50        100/98           35/2.5          77                        2.53
Reuters 40      91/50        99/92            45/7.6          62                        7.55

Table 5: Detecting correct and incorrect classifications for text categorization.

In-Distribution / Out-of-Distribution   AUROC/Base   AUPR In/Base   AUPR Out/Base   Pred. Prob (mean)
15/5 Newsgroups                         75/50        92/84          45/16           65
Reuters6/Reuters2                       92/50        100/95         56/4.5          72
Reuters40/Reuters12                     95/50        100/93         60/7.2          47

Table 6: Distinguishing in- and out-of-distribution test set data for text categorization.

Dataset   AUROC/Base   AUPR Succ/Base   AUPR Err/Base   Pred. Prob Wrong (mean)   Test Set Error
WSJ       96/50        100/96           51/3.7          71                        3.68
Twitter   89/50        98/87            53/13           69                        12.59

Table 7: Detecting correct and incorrect classifications for part-of-speech tagging.

3.2.3 PART-OF-SPEECH TAGGING

Part-of-speech (POS) tagging of newswire and social media text is our next challenge. We use the Wall Street Journal portion of the Penn Treebank (Marcus et al., 1993), which contains 45 distinct POS tags. For social media, we use POS-annotated tweets (Gimpel et al., 2011; Owoputi et al., 2013), which contain 25 tags.
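Several of the classifiers in this and the previous section use the GELU nonlinearity xΦ(x). As a reference point, here is a minimal sketch using the widely cited tanh approximation of Φ; the approximation and the object name are ours, not necessarily what these experiments used:

```scala
object Gelu {
  // GELU(x) = x * Phi(x); Phi approximated with the common tanh formulation.
  def gelu(x: Double): Double = {
    val c = math.sqrt(2.0 / math.Pi)
    0.5 * x * (1.0 + math.tanh(c * (x + 0.044715 * math.pow(x, 3))))
  }

  def main(args: Array[String]): Unit =
    Seq(-2.0, -1.0, 0.0, 1.0, 2.0).foreach(x => println(f"gelu($x%.1f) = ${gelu(x)}%.4f"))
}
```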
For the WSJ tagger, we train a bidirectional long short-term memory recurrent neural network (Hochreiter & Schmidhuber, 1997) with three layers, 128 neurons per layer, and randomly initialized word vectors; it is trained on 90% of the corpus for 10 epochs with stochastic gradient descent and a batch size of 32. The tweet tagger is simpler: it is a two-layer neural network with a GELU nonlinearity, weight initialization according to (Hendrycks & Gimpel, 2016c), pretrained word vectors trained on a corpus of 56 million tweets (Owoputi et al., 2013), and a hidden layer size of 256, trained on 1000 tweets for 30 epochs with Adam and early stopping with 327 validation tweets. Error detection results are in Table 7.

For out-of-distribution detection, we use the WSJ tagger on the tweets as well as weblog data from the English Web Treebank (Bies et al., 2012). The results are shown in Table 8. Since the weblog data is closer in style to newswire than are the tweets, it is harder to detect whether a weblog sentence is out-of-distribution than a tweet. Indeed, since POS tagging is done at the word level, we are detecting whether each word is out-of-distribution given the word and contextual features. With this in mind, we see that it is easier to detect words as out-of-distribution if they are from tweets than from blogs.

In-Distribution / Out-of-Distribution   AUROC/Base   AUPR In/Base   AUPR Out/Base   Pred. Prob (mean)
WSJ/Twitter                             80/50        98/92          41/7.7          81
WSJ/Weblog*                             61/50        88/86          30/14           93

Table 8: Detecting out-of-distribution tweets and blog articles for part-of-speech tagging. All values are percentages. *These examples are atypically close to the training distribution.

3.3 AUTOMATIC SPEECH RECOGNITION

Now we consider a task which uses softmax values to construct entire sequences rather than determine an input's class. Our sequence prediction system uses a bidirectional LSTM with two layers and a clipped GELU nonlinearity, optimized for 60 epochs with RMSProp and trained on 80% of the TIMIT corpus (Garofolo et al., 1993). The LSTM is trained with connectionist temporal classification (CTC) (Graves et al., 2006) to predict sequences of phones given MFCCs, energy, and first and second deltas of a 25ms frame. When trained with CTC, the LSTM learns to have its phone label probabilities spike momentarily while mostly predicting blank symbols otherwise. In this way, the softmax is used differently from typical classification problems, providing a unique test for our detection methods.

In-Distribution / Out-of-Distribution   AUROC/Base   AUPR In/Base   AUPR Out/Base   Pred. Prob (mean)
TIMIT/TIMIT+Airport                     99/50        99/50          99/50           59
TIMIT/TIMIT+Babble                      100/50       100/50         100/50          55
TIMIT/TIMIT+Car                         98/50        98/50          98/50           59
TIMIT/TIMIT+Exhibition                  100/50       100/50         100/50          57
TIMIT/TIMIT+Restaurant                  98/50        98/50          98/50           60
TIMIT/TIMIT+Street                      100/50       100/50         100/50          52
TIMIT/TIMIT+Subway                      100/50       100/50         100/50          56
TIMIT/TIMIT+Train                       100/50       100/50         100/50          58
TIMIT/Chinese                           85/50        80/34          90/66           64
TIMIT/All                               97/50        79/10          100/90          58

Table 9: Detecting out-of-distribution distorted speech. All values are percentages.

We do not show how the system performs on correctness/incorrectness detection because errors are not binary and instead lie along a range of edit distances. However, we can perform out-of-distribution detection. Mixing the TIMIT audio with realistic noises from the Aurora-2 dataset (Hirsch & Pearce, 2000), we keep the TIMIT audio volume at 100% and the noise volume at 30%, giving a mean SNR of approximately 5.
Speakers are still clearly audible to the human ear but confuse the phone recognizer because the prediction edit distance more than doubles. For more out-of-distribution examples, we use the test examples from the THCHS-30 dataset (Wang & Zhang, 2015), a Chinese speech corpus. Table 9 shows the results. Crucially, when performing detection, we compute the softmax probabilities while ignoring the blank symbol's logit. With the blank symbol's presence, the softmax distributions at most time steps predict a blank symbol with high confidence, but without the blank symbol we can better differentiate between normal and abnormal distributions. With this modification, the softmax prediction probabilities allow us to detect whether an example is out-of-distribution.

4 ABNORMALITY DETECTION WITH AUXILIARY DECODERS

Having seen that softmax prediction probabilities enable abnormality detection, we now show there is other information that is sometimes more useful for detection. To demonstrate this, we exploit the learned internal representations of neural networks. We start by training a normal classifier and append an auxiliary decoder which reconstructs the input, as shown in Figure 1. Auxiliary decoders are sometimes known to increase classification performance (Zhang et al., 2016). The decoder and scorer are trained jointly on in-distribution examples. Thereafter, the blue layers in Figure 1 are frozen. Then we train the red layers on clean and noised training examples, and the sigmoid output of the red layers scores how normal the input is. Consequently, noised examples are in the abnormal class, clean examples are in the normal class, and the sigmoid is trained to output to which class an input belongs. After training we consequently have a normal classifier, an auxiliary decoder, and what we call an abnormality module. The gains from the abnormality module demonstrate that there are possible research avenues for outperforming the baseline.

4.1 TIMIT

We test the abnormality module by revisiting the TIMIT task with a different architecture and show how these auxiliary components can greatly improve detection. The system is a three-layer, 1024-neuron wide classifier with an auxiliary decoder and abnormality module. This network takes as input 11 frames, with 26 features per frame, and must predict the phone of the center frame. Weights are initialized according to (Hendrycks & Gimpel, 2016c). This network trains for 20 epochs, and the abnormality module trains for two. The abnormality module sees clean examples and, as negative examples, TIMIT examples distorted with either white noise, brown noise (noise with its spectral density proportional to 1/f²), or pink noise (noise with its spectral density proportional to 1/f) at various volumes.

We note that the abnormality module is not trained on the same type of noise added to the test examples. Nonetheless, Table 10 shows that simple noised examples translate to effective detection of realistically distorted audio. We detect abnormal examples by comparing the typical abnormality module outputs for clean examples with the outputs for the distorted examples. The noises are from Aurora-2 and are added to TIMIT examples at 30% volume. We also use the THCHS-30 dataset for Chinese speech. Unlike before, we use the THCHS-30 training examples rather than test set examples because fully connected networks can evaluate the whole training set sufficiently quickly.

In-Distribution /    AUROC/Base   AUROC/Base   AUPR In/Base   AUPR In/Base   AUPR Out/Base   AUPR Out/Base
Out-of-Distribution  (Softmax)    (AbMod)      (Softmax)      (AbMod)        (Softmax)       (AbMod)
TIMIT/+Airport       75/50        100/50       77/41          100/41         73/59           100/59
TIMIT/+Babble        94/50        100/50       95/41          100/41         91/59           100/59
TIMIT/+Car           70/50        98/50        69/41          98/41          70/59           98/59
TIMIT/+Exhib.        91/50        98/50        92/41          98/41          91/59           98/59
TIMIT/+Rest.         68/50        95/50        70/41          96/41          67/59           95/59
TIMIT/+Subway        76/50        96/50        77/41          96/41          74/59           96/59
TIMIT/+Street        89/50        98/50        91/41          99/41          85/59           98/59
TIMIT/+Train         80/50        100/50       82/41          100/41         77/59           100/59
TIMIT/Chinese        79/50        90/50        41/12          66/12          96/88           98/88
Average              80           97           77             95             80              98

Table 10: Abnormality modules can generalize to novel distortions and detect out-of-distribution examples even when they do not severely degrade accuracy. All values are percentages.

It is worth mentioning that fully connected deep neural networks are noise robust (Seltzer et al., 2013), yet the abnormality module can still detect whether an example is out-of-distribution. To see why this is remarkable, note that the network's frame classification error is 29.69% on the entire test (not core) dataset, and the average classification error for distorted examples is 30.43%; this is unlike the bidirectional LSTM, which had a more pronounced performance decline. Because the classification degradation was only slight, the softmax statistics alone did not provide useful out-of-distribution detection. In contrast, the abnormality module provided scores which allowed the detection of different-but-similar examples. In practice, it may be important to determine whether an example is out-of-distribution even if it does not greatly confuse the network, and the abnormality module facilitates this.

4.2 MNIST

Finally, much like in a previous experiment, we train an MNIST classifier with three layers of width 256. This time, we also use an auxiliary decoder and abnormality module rather than relying on only softmax statistics. For abnormal examples we blur, rotate, or add Gaussian noise to training images. Gains from the abnormality module are shown in Table 11, and there is a consistent out-of-sample detection improvement compared to softmax prediction probabilities. Even for highly dissimilar examples the abnormality module can further improve detection.

In-Distribution /    AUROC/Base   AUROC/Base   AUPR In/Base   AUPR In/Base   AUPR Out/Base   AUPR Out/Base
Out-of-Distribution  (Softmax)    (AbMod)      (Softmax)      (AbMod)        (Softmax)       (AbMod)
MNIST/Omniglot       95/50        100/50       95/52          100/52         95/48           100/48
MNIST/notMNIST       87/50        100/50       88/50          100/50         90/50           100/50
MNIST/CIFAR-10bw     98/50        100/50       98/50          100/50         98/50           100/50
MNIST/Gaussian       88/50        100/50       88/50          100/50         90/50           100/50
MNIST/Uniform        99/50        100/50       99/50          100/50         99/50           100/50
Average              93           100          94             100            94              100

Table 11: Improved detection using the abnormality module. All values are percentages.

5 DISCUSSION AND FUTURE WORK

The abnormality module demonstrates that in some cases the baseline can be beaten by exploiting the representations of a network, suggesting myriad research directions. Some promising future avenues may utilize intra-class variance: if the distance from an example to another of the same predicted class is abnormally high, it may be out-of-distribution (Giryes et al., 2015).
Another path is to feed a vector summarizing each layer's activations into an RNN, one vector per layer. The RNN may determine that the activation patterns are abnormal for out-of-distribution examples. Others could make the detections fine-grained: is the out-of-distribution example a known-unknown or an unknown-unknown? A different avenue is not just to detect correct classifications but to output the probability of a correct detection. These are but a few ideas for improving error and out-of-distribution detection.

We hope that any new detection methods are tested on a variety of tasks and architectures of the researcher's choice. A basic demonstration could include the following datasets: MNIST, CIFAR, IMDB, and tweets, because vision-only demonstrations may not transfer well to other architectures and datasets. Reporting the AUPR and AUROC values is important, and so is the underlying classifier's accuracy, since an always-wrong classifier gets a maximum AUPR for error detection if error is the positive class. Also, future research need not use the exact values from this paper for comparisons. Machine learning systems evolve, so tethering the evaluations to the exact architectures and datasets in this paper is needless. Instead, one could simply choose a variety of datasets and architectures, possibly like those above, and compare their detection method with a detector based on the softmax prediction probabilities from their classifiers. These are our basic recommendations for others who try to surpass the baseline on this underexplored challenge.

6 CONCLUSION

We demonstrated a softmax prediction probability baseline for error and out-of-distribution detection across several architectures and numerous datasets. We then presented the abnormality module, which provided superior scores for discriminating between normal and abnormal examples on tested cases. The abnormality module demonstrates that the baseline can be beaten in some cases, and this implies there is room for future research. Our hope is that other researchers investigate architectures which make predictions in view of abnormality estimates, and that others pursue more reliable methods for detecting errors and out-of-distribution inputs, because knowing when a machine learning system fails strikes us as highly important.

ACKNOWLEDGMENTS

We would like to thank John Wieting, Hao Tang, Karen Livescu, Greg Shakhnarovich, and our reviewers for their suggestions. We would also like to thank NVIDIA Corporation for donating several TITAN X GPUs used in this research.

REFERENCES

Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety. arXiv, 2016.

Ann Bies, Justin Mott, Colin Warner, and Seth Kulick. English Web Treebank, 2012.

Yaroslav Bulatov. notMNIST dataset. 2011.

Jesse Davis and Mark Goadrich. The relationship between precision-recall and ROC curves. In International Conference on Machine Learning (ICML), 2006.

Tom Fawcett. An introduction to ROC analysis. Pattern Recognition Letters, 2005.

John Garofolo, Lori Lamel, William Fisher, Jonathan Fiscus, David Pallett, Nancy Dahlgren, and Victor Zue. TIMIT Acoustic-Phonetic Continuous Speech Corpus. Linguistic Data Consortium, 1993.

Kevin Gimpel, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. Part-of-speech tagging for Twitter: Annotation, features, and experiments. Association for Computational Linguistics (ACL), 2011.

Raja Giryes, Guillermo Sapiro, and Alex M. Bronstein. Deep neural networks with random Gaussian weights: A universal classification strategy? arXiv, 2015.

Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR), 2015.

Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. Connectionist temporal classification: Labeling unsegmented sequence data with recurrent neural networks. In International Conference on Machine Learning (ICML), 2006.

Dan Hendrycks and Kevin Gimpel. Methods for detecting adversarial images and a colorful saliency map. arXiv, 2016a.

Dan Hendrycks and Kevin Gimpel. Bridging nonlinearities and stochastic regularizers with Gaussian error linear units. arXiv, 2016b.

Dan Hendrycks and Kevin Gimpel. Adjusting for dropout variance in batch normalization and weight initialization. arXiv, 2016c.

Hans-Günter Hirsch and David Pearce. The Aurora experimental framework for the performance evaluation of speech recognition systems under noisy conditions. ISCA ITRW ASR2000, 2000.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 1997.

Minqing Hu and Bing Liu. Mining and summarizing customer reviews. Knowledge Discovery and Data Mining (KDD), 2004.

Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daumé III. Deep unordered composition rivals syntactic methods for text classification. Association for Computational Linguistics (ACL), 2015.

Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for efficient text classification. arXiv, 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations (ICLR), 2015.

Alex Krizhevsky. Learning multiple layers of features from tiny images, 2009.

Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 2015.

Ken Lang. Newsweeder: Learning to filter netnews. In International Conference on Machine Learning (ICML), 1995.

David D. Lewis, Yiming Yang, Tony G. Rose, and Fan Li. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research (JMLR), 2004.

Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with restarts. arXiv, 2016.

Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Association for Computational Linguistics (ACL), 2011.

Chris Manning and Hinrich Schütze. Foundations of Statistical Natural Language Processing. MIT Press, 1999.

Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 1993.

Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Computer Vision and Pattern Recognition (CVPR), 2015.

Khanh Nguyen and Brendan O'Connor. Posterior calibration and exploratory analysis for natural language processing models. In Empirical Methods in Natural Language Processing (EMNLP), 2015.

Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A. Smith. Improved part-of-speech tagging for online conversational text with word clusters. In North American Chapter of the Association for Computational Linguistics (NAACL), 2013.

Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. Thumbs up? Sentiment classification using machine learning techniques. In Empirical Methods in Natural Language Processing (EMNLP), 2002.

Foster Provost, Tom Fawcett, and Ron Kohavi. The case against accuracy estimation for comparing induction algorithms. In International Conference on Machine Learning (ICML), 1998.

Takaya Saito and Marc Rehmsmeier. The precision-recall plot is more informative than the ROC plot when evaluating binary classifiers on imbalanced datasets. In PLoS ONE, 2015.

Michael L. Seltzer, Dong Yu, and Yongqiang Wang. Investigation of deep neural networks for noise robust speech recognition. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2013.

Jacob Steinhardt and Percy Liang. Unsupervised risk estimation using only conditional independence structure. In Neural Information Processing Systems (NIPS), 2016.

Dong Wang and Xuewei Zhang. THCHS-30: A free Chinese speech corpus. Technical report, 2015.

Gethin Williams and Steve Renals. Confidence measures for hybrid HMM/ANN speech recognition. In Proceedings of EuroSpeech, 1997.

Jianxiong Xiao, James Hays, Krista A. Ehinger, Aude Oliva, and Antonio Torralba. SUN database: Large-scale scene recognition from abbey to zoo. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010.

Dong Yu, Jinyu Li, and Li Deng. Calibration of confidence measures in speech recognition. In IEEE Transactions on Audio, Speech, and Language, 2010.

Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. British Machine Vision Conference, 2016.

Yuting Zhang, Kibok Lee, and Honglak Lee. Augmenting supervised neural networks with unsupervised objectives for large-scale image classification. In International Conference on Machine Learning (ICML), 2016.

A ABNORMALITY MODULE EXAMPLE

Figure 1: A neural network classifying a diamond image with an auxiliary decoder and an abnormality module. Circles are neurons, either having a GELU or sigmoid activation. The blurred diamond reconstruction precedes subtraction and elementwise squaring. The probability vector is the softmax probability vector. Blue layers train on in-distribution data, and red layers train on both in- and out-of-distribution examples.
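As a complement to Figure 1, a drastically simplified sketch of the abnormality module's final stage: a logistic (sigmoid) scorer trained to output 1 for clean and 0 for noised examples. In the paper the scorer sits on top of frozen classifier and decoder layers; here a synthetic one-dimensional feature (think reconstruction error) and all names are illustrative stand-ins, not the paper's implementation:

```scala
import scala.util.Random

object AbnormalityModuleSketch {
  def sigmoid(z: Double): Double = 1.0 / (1.0 + math.exp(-z))

  // One SGD pass of logistic regression; returns updated (weights, bias).
  def fit(data: Seq[(Array[Double], Double)], w: Array[Double], b: Double,
          lr: Double): (Array[Double], Double) = {
    var bias = b
    val weights = w.clone()
    for ((x, y) <- data) {
      val err = sigmoid(weights.zip(x).map { case (wi, xi) => wi * xi }.sum + bias) - y
      for (i <- weights.indices) weights(i) -= lr * err * x(i)
      bias -= lr * err
    }
    (weights, bias)
  }

  def main(args: Array[String]): Unit = {
    val rng = new Random(0)
    // Clean examples have low "reconstruction error"; noised ones have high.
    val clean  = Seq.fill(200)(Array(rng.nextGaussian() * 0.1 + 0.2) -> 1.0)
    val noised = Seq.fill(200)(Array(rng.nextGaussian() * 0.1 + 0.8) -> 0.0)
    var (w, b) = (Array(0.0), 0.0)
    for (_ <- 1 to 200) {
      val r = fit(rng.shuffle(clean ++ noised), w, b, 0.5); w = r._1; b = r._2
    }
    println(f"normality score, clean-like input:  ${sigmoid(w(0) * 0.2 + b)}%.2f") // near 1
    println(f"normality score, noised-like input: ${sigmoid(w(0) * 0.8 + b)}%.2f") // near 0
  }
}
```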
Published as a conference paper at ICLR 2017

DEEPDSL: A COMPILATION-BASED DOMAIN-SPECIFIC LANGUAGE FOR DEEP LEARNING

Tian Zhao & Xiao Bing Huang
Department of Computer Science
University of Wisconsin – Milwaukee
Milwaukee, WI, USA
{tzhao,xiaobing}@uwm.edu

Yu Cao
Department of Computational Neuroscience
The University of Massachusetts, Lowell
Lowell, MA, USA
ycao@cs.uml.edu

ABSTRACT

In recent years, Deep Learning (DL) has found great success in domains such as multimedia understanding. However, the complex nature of multimedia data makes it difficult to develop DL-based software. The state-of-the-art tools, such as Caffe, TensorFlow, Torch7, and CNTK, while successful in their applicable domains, are programming libraries with fixed user interfaces, internal representations, and execution environments. This makes it difficult to implement portable and customized DL applications.

In this paper, we present DeepDSL, a domain specific language (DSL) embedded in Scala, that compiles deep networks written in DeepDSL to Java source code. DeepDSL provides (1) intuitive constructs to support compact encoding of deep networks; (2) symbolic gradient derivation of the networks; (3) static analysis for memory consumption and error detection; and (4) DSL-level optimization to improve memory and runtime efficiency.

DeepDSL programs are compiled into compact, efficient, customizable, and portable Java source code, which operates the CUDA and CUDNN interfaces running on NVIDIA GPUs via a Java Native Interface (JNI) library. We evaluated DeepDSL with a number of popular DL networks. Our experiments show that the compiled programs have very competitive runtime performance and memory efficiency compared to the existing libraries.

1 INTRODUCTION

Multimedia is increasingly becoming the "biggest big data", as the most important and valuable source for insights and information (Chen et al., 2015a). Recently, a new set of machine learning algorithms named "Deep Learning" (DL) (LeCun et al., 2015), which aims at learning multiple levels of representation and abstraction that help infer knowledge from multimedia data (e.g. text, image, audio, and video), is making astonishing gains in machine vision, speech recognition, multimedia analysis, and drug design.

However, current tools, such as Theano (Bergstra et al., 2010), Torch7 (Collobert et al., 2011), Caffe (Jia et al., 2014), Computational Network Toolkit (CNTK) (Agarwal et al., 2014), and TensorFlow (Abadi et al., 2016), while efficient in their applicable domains, are essentially application libraries with some inherent limitations.

As with all programming libraries, the DL libraries have fixed bindings for key data structures such as tensors and tensor-related computations. Users have to adhere to these data structures, which limits their ability to apply application-specific optimizations or port them to target runtime platforms. The internal representation of their control-flow logic is opaque to users. For example, TensorFlow and CNTK use directed acyclic graphs to represent the DL network computation and generate runtime binaries from the graphs. However, these graphs are not designed for user-level access, which limits the runtime platforms of the DL applications to what the libraries provide.

In general, the current libraries have to be built against the specific platforms that they are designed for, which can be difficult for platforms such as Windows.
Also, changing the implementation of a specific type of layer or data structure is very challenging without a thorough understanding of the underlying implementation. This limits the portability and reusability of these libraries.

To address these limitations, we present DeepDSL, a domain specific language embedded in Scala, for developing DL applications. DeepDSL allows users to define DL networks as tensor functions. Unlike the existing DL libraries, DSL tensors are not built-in entities. Instead, they are defined as indexed scalar expressions. This exposes tensor-related computation at the DSL level. As a result, the symbolic gradient derivation of the DL network is fully abstract and the resulting DSL program allows compiler-based optimizations such as code motion and common sub-expression elimination.

The DeepDSL compiler translates the optimized DSL program into a Java source program that is compact, efficient, customizable, and portable. The generated Java source only requires a small Java library, JCuda¹, that calls the NVIDIA CUDA interface using JNI. Since the JVM is supported on all major operating systems, the generated Java source can run on any CUDA-enabled platform. Also, since the generated Java source is compact and human readable, users can customize it easily through an editor or an IDE such as eclipse². The generated Java source automatically saves the learned parameters into files after a training period is over. When the user starts the program again (perhaps after adjusting some parameters such as momentum and learning rate), it automatically loads the saved parameters and continues the training from where it stopped at the previous execution. The code also supports loading parameters trained with different data for fine-tuning purposes.

DeepDSL supports static analysis of the DSL program to detect network design errors, such as mismatching tensor dimensions, before compiling the DSL program into Java source. It statically analyzes the memory consumption at each step of the computation and produces a table detailing the memory usage that would occur at runtime, which includes the memory for feature maps, gradient maps, parameter weights, and convolution workspace. It also uses the static information to reschedule computation so that tensor memory can be freed as early as possible to reduce memory consumption at runtime. Such processing has demonstrated great benefit. For example, DeepDSL continues to run well under the GPU memory limit on the testing server with a single GPU when the batch size of ResNet is increased from 32 to 64, while both Caffe and Tensorflow fail due to an out-of-memory exception.

DeepDSL is available at https://github.com/deepdsl/deepdsl.

The rest of the paper is organized as follows. We give an overview of DeepDSL in Section 2 and explain the DSL syntax using examples in Section 3. We discuss the intermediate representation in Section 4 and code generation in Section 5. We present details of performance evaluation using DeepDSL in Section 6 and related work in Section 7. We conclude the paper in Section 8.

¹http://www.jcuda.org
²http://www.eclipse.org

2 OVERVIEW

DeepDSL directly encodes the mathematical representation of DL networks, where each layer is represented as a tensor function. The entire network is then represented as a composition of these functions. DeepDSL symbolically derives the partial derivatives of the tensor functions with respect to tensor variables so that the backward gradients of network parameters are generated automatically.

A high-level overview of DeepDSL is shown in Figure 1. A DeepDSL program is compiled in several stages. At the first stage, the backward gradients of deep networks are derived symbolically to become the intermediate representation (IR). The IR expressions are in turn passed through a series of simplification and optimization steps at the second stage. At the third stage, the DeepDSL compiler performs an SSA (Static Single Assignment) transformation of the optimized IR to break down complex expressions. Redundant computation is eliminated at this stage and the resulting expressions are reordered to optimize memory usage. Memory deallocation and in-place computation are also scheduled at this stage. Lastly, the finalized IR expressions are translated to Java source code.

Figure 1: Basic workflow of DeepDSL.

DeepDSL supports two modes of computation: memory efficient or runtime efficient. In the memory efficient mode, tensor memory in GPU is dynamically allocated and deallocated, which might decrease runtime performance. In the runtime efficient mode, tensor memory in GPU is reused and not deallocated until the end of the training. In this mode, more memory may be used but with greater runtime performance. To make the switch, the user only needs to flip a flag in the generated Java source. The memory efficient mode can be used for machines with limited GPU memory. Further memory reduction can be achieved by placing a limit on the (convolution) workspace memory.

3 SYNTAX

Figure 2 shows the complete implementation for compiling a program to train and test Lenet (LeCun et al., 1998).

1 val K = 10 // # of classes
2 val N = 500; val C = 1; val N1 = 28; val N2 = 28 // batch size, channel, and x/y size
3
4 // Specifying training (and test) dataSet
5 val y = Vec._new(Mnist, "label", "Y", N) // labels
6 val x = Vec._new(Mnist, "image", "X", N, C, N1, N2) // images
7
8 val cv1 = CudaLayer.convolv("cv1", 5, 20) // kernel size (5,5), output channel 20
9 val cv2 = CudaLayer.convolv("cv2", 5, 50)
10 val mp = CudaLayer.max_pool(2) // max pooling, kernel 2 stride 2
11 val flat = Layer.flatten(4, 1) // flatten a 4-D tensor from axis 1 to 3
12 val f = Layer.full("fc1", 500) // fully connected layer, output 500
13 val f2 = Layer.full("fc2", K)
14 val relu = CudaLayer.relu(2) // 2-D ReLU activation
15 val softmax = CudaLayer.softmax // softmax
16
17 // o is a left-associative operator for function composition
18 val network = f2 o relu o f o flat o mp o cv2 o mp o cv1
19
20 val x1 = x.asCuda // load x to GPU
21 val y1 = y.asIndicator(K).asCuda // turn each label into an indicator vector
22 val c = (Layer.log_loss(y1) o softmax o network) (x1) // training loss
23 val p = (Layer.precision(y1) o network) (x1) // test accuracy
24
25 val param = c.freeVar.toList // parameters to be trained
26
27 // output file, train and test iteration, learn rate, momentum, decay, gradient cropping (0 means none)
28 val solver = Train("lenet", 1000, 10, 0.01f, 0.9f, 0.0005f, 0)
29
30 val loop = Loop(c, p, (x, y), param, solver) // training and testing loop
31 cudnn_gen.print(loop) // generate Java source program

Figure 2: DeepDSL code for training and testing Lenet.
Since DeepDSL is embedded in Scala, the program is in Scala syntax and it can becompiled and executed with a programming tool such as eclipse. This program consists of variabledeclarations of the form val x = e , where val starts a declaration for the variable xand assignsit with the value of e.Line 5 and 6 declare the tensors that represent labels and images for the training data. We also usethe same variables for testing since the DSL compiles the same variables into different code fortraining and testing.3Published as a conference paper at ICLR 2017Line 8–15 declare the tensor functions that represent the layers in the network. Most of the layersare self-explanatory except val flat = Layer.flatten(4, 1) , which is used to convertthe 4-D tensor returned by the last pooling layer into a 2-D layer for the next fully connected layer.Line 18 constructs the network as function compositions using the operator o, which is left asso-ciative. For example, f2 o relu o f should be read as (f2 o relu) o f . A composedfunction such as network is still a function.Line 22 defines the expression that represents the loss of the network when applied to the trainingdata. Line 23 defines the testing accuracy of the trained network.Line 25 extracts the parameters such as weights and biases from the loss expression. Line 28–31defines the solver object, passes it to the loop object for training and testing, and then generates theJava source code.Layer reuse Since each layer is a tensor function, for the layers such as ReLU and pooling thatdo not contain parameters, we can simply reuse them in a network. For example, in the followingdefinition for Alexnet ,relu2 (2 dimensional), relu (4 dimensional), pool (max pooling), drop(drop out), and lrn (local response normalization) are reused.1 val network = full8 o2 drop o relu2 o full7 o3 drop o relu2 o full6 o flat o4 pool o relu o cv5 o5 relu o cv4 o6 relu o cv3 o7 pool o lrn o relu o cv2 o8 pool o lrn o relu o cv1Layer function reuse simplifies the definitions of deep networks. For Alexnet , only 5 convolutionlayers and 3 fully connected layers need to be defined separately. Note that the above definition canbe written in just one line and the line breaks are only for clarity.Network reuse For complex network such as Googlenet , we can define reusable subnet to achievecompact definitions. 
For example, the Scala method inception below returns a tensor functionthat represents an inception subnet in Googlenet .1 val w = Param.xavier // Xavier initialization for weight2 val b0 = Param.const(0, 2, 0) // constant 0 for bias, learn rate/decay multiplier 2 and 03 val b02 = Param.const(0.2f, 2, 0) // constant 0.2 for bias4 val ipool = CudaLayer.max_pool(3, 1, 1) // max pooling kernel size, stride, and padding56 def inception(n: Int) = {7 // convolution name, kernel size, channel, stride, padding, weight and bias configuration8 val icv1 = CudaLayer.convolv(s"cv${n}1", 1, 64, 1, 0, w, b02)9 val icv2 = CudaLayer.convolv(s"cv${n}2", 1, 96, 1, 0, w, b02)10 val icv3 = CudaLayer.convolv(s"cv${n}3", 3, 128, 1, 1, w, b02)11 val icv4 = CudaLayer.convolv(s"cv${n}4", 1, 16, 1, 0, w, b02)12 val icv5 = CudaLayer.convolv(s"cv${n}5", 5, 32, 1, 2, w, b02)13 val icv6 = CudaLayer.convolv(s"cv${n}6", 1, 32, 1, 0, w, b02)1415 val p = Vec._new(4) // a 4-dimensional tensor variable1617 // a tensor function with parameter p18 VecFun(p, CudaLayer.concat( (relu o icv1)(p), // concatenation of 4 subnets connected to p19 (relu o icv3 o relu o icv2)(p),20 (relu o icv5 o relu o icv4)(p),21 (relu o icv6 o ipool)(p) )22 )23 }Using the inception method, we can define three subnets that are used to define the test accuracyp(line 6 below) of the main branch of Googlenet .1 val network3 = full7 o flat o drop o pool7 o inception(9) o inception(8) o pool o inception(7)2 val network2 = inception(6) o inception(5) o inception(4)3 val network1 = inception(3) o pool o inception(2) o inception(1) o4 pool o lrn o relu o cv3 o relu o cv2 o lrn o pool o relu o cv156 val p = Layer.precision(y1)(network3(network2(network1(x1)))) // accuracy at main branch4Published as a conference paper at ICLR 2017The three subnets are also used to define the training loss c(line 16 below) that adds up the lossesof the three branches of Googlenet .1 def branch(n: Int) = { // a subnet reused in the two side branches of Googlenet2 val cv = CudaLayer.convolv(s"b${n}cv", 1, 128, 1, 0, w, b02)3 val f1 = Layer.full(s"b${n}fc1", 1024, w, b02)4 val f2 = Layer.full(s"b${n}fc2", K, w, b0)5 f2 o drop2 o relu2 o f1 o flat o relu o cv o bpool6 }7 val stage2 = { // Vec2ScalarFun defines a function from tensor to scalar8 val p = Vec._new(4)9 Vec2ScalarFun(p, softmax_loss(network3(p)) + softmax_loss(branch(2)(p)) *Real(0.3f, "loss2"))10 }11 val stage1 = { // Real(0.3f, "loss1") is a named constant of value 0.312 val p = Vec._new(4)13 Vec2ScalarFun(p, stage2(network2(p)) + softmax_loss(branch(1)(p)) *Real(0.3f, "loss1"))14 }1516 val c = (stage1 o network1)(x1) // training loss of the three branchesOther than some definitions of shared layers such as activation, pooling, normalization, drop out,and softmax loss, this is the complete definition of Googlenet .This compact style of definition is similar to that of Theano, Tensorflow, Torch, and Mxnet. Inthe example, we used two types of functions VecFun andVec2ScalarFun , which model com-putation that takes a tensor as input and returns a tensor or scalar respectively. These functionscan be composed or applied to arguments. When applied, they are similar to functions in Theano,Tensorflow, and Mxnet. 
When composed, they are similar to the sequential container of Torch.

4 INTERMEDIATE REPRESENTATION

The unique advantage of DeepDSL is that it is entirely high-level, so that it permits static analysis of deep networks for error checking, memory analysis, optimization, and code generation.

While the DeepDSL compiler is implemented in Scala, it has no runtime dependency on Scala code at all. The whole purpose of using Scala as the host language for DeepDSL is that Scala is a strongly typed language with flexible syntax. As a result, the syntax of DeepDSL can resemble that of a standalone DSL without requiring a parser. After taking symbolic gradients, a DeepDSL program is immediately evaluated to an intermediate representation (IR), which is essentially an abstract syntax tree (AST). The DeepDSL compiler analyzes its IR expressions by performing a series of optimization and simplification steps. During this process, DeepDSL checks the compatibility of the layers, infers concrete dimensions for variables, removes duplicated computation, and optimizes IR expressions for code generation.

The IR expressions of DeepDSL are also abstract and human readable. For example, Figure 3 shows a portion of the IR expressions for Lenet, where the first column shows an IR expression that represents a single-step computation, the second column shows the dimensions of the tensor being computed if applicable, the third column shows the memory usage of that tensor, the fourth column shows the current memory consumption if memory is dynamically allocated and deallocated, and the last column shows the memory consumption if memory is reused instead of deallocated.

IR expressions such as the one on line 14 are for GPU memory deallocation. The DeepDSL compiler analyzes the dependencies of the IR expressions, reorders them, and determines the earliest point where a tensor can be freed. For example, the last use of the tensor X18 is at line 13, so it can be freed immediately afterwards. The tensor X8 cannot be freed until much later, since it is used at line 26.

If we compile IR expressions such as line 14 to actual memory deallocation, then the maximum dynamic memory consumption peaks at line 27, which is about 59 MB. However, frequent memory allocation and deallocation on NVIDIA GPUs reduces runtime performance. Therefore, the DeepDSL runtime library (implemented in Java) supports memory reuse instead of deallocation. The DeepDSL runtime maintains a pool of allocated memory blocks: when a tensor is freed, its memory is returned to the pool, and when a tensor is allocated, the runtime first tries to find a suitable block in the pool (a minimal sketch of such a reuse pool follows below). With memory reuse, the memory consumption always peaks at the last line, which is about 77 MB.
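As referenced above, here is a toy sketch of such a reuse pool; CPU arrays stand in for GPU buffers, and all names are illustrative rather than DeepDSL's actual runtime API:

```scala
import scala.collection.mutable

// Freed tensors return their buffers to a size-keyed pool; allocation first
// tries the pool, and only falls back to a fresh allocation on a miss.
class TensorPool {
  private val free = mutable.Map.empty[Int, mutable.Stack[Array[Float]]]

  def allocate(size: Int): Array[Float] =
    free.get(size).filter(_.nonEmpty).map(_.pop())  // reuse a suitable block
        .getOrElse(new Array[Float](size))          // or allocate a new one

  def release(buf: Array[Float]): Unit =            // "dealloc" = return to pool
    free.getOrElseUpdate(buf.length, mutable.Stack.empty).push(buf)
}

object TensorPoolDemo extends App {
  val pool = new TensorPool
  val a = pool.allocate(1024)
  pool.release(a)
  val b = pool.allocate(1024)
  println(a eq b) // true: the buffer was reused, not reallocated
}
```

The trade-off mirrors the two modes described earlier: pooled buffers are never returned to the device until training ends, so peak consumption is higher, but the per-iteration allocation cost disappears.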
Note that the above memory figure is for storing intermediate results such as gradients; the static memory allocated for parameters and convolution workspace is calculated separately.

1  IR expression                                   Dimensions     Current mem  Total w/o dealloc
2  ------------------------------------------------------------------------------------------
3  val X7 = Cuda(X)                                500 1 28 28    1.568000     1.568000    1.568000
4  val X8 = Convolv(1,0)(X7,cv1_W,cv1_B)           500 20 24 24   23.040001    24.608000   24.608000
5  val X9 = Pooling(2,2,0,true)(X8)                500 20 12 12   5.760000     30.368000   30.368000
6  val X10 = Convolv(1,0)(X9,cv2_W,cv2_B)          500 50 8 8     6.400000     36.768002   36.768002
7  val X11 = Pooling(2,2,0,true)(X10)              500 50 4 4     1.600000     38.368000   38.368000
8  val X12 = (X11[1><3])(i | @) * (fc1_W)(j | @)   500 500        1.000000     39.368000   39.368000
9  val X14 = (X12 + (i) => fc1_B)                  500 500        0.000000     39.368000   39.368000
10 val X15 = ReLU()(X14)                           500 500        0.000000     39.368000   39.368000
11 val X16 = (X15)(i | @) * (fc2_W)(j | @)         500 10         0.020000     39.388000   39.388000
12 val X18 = (X16 + (i) => fc2_B)                  500 10         0.000000     39.388000   39.388000
13 val X19 = Softmax()(X18)                        500 10         0.020000     39.408001   39.408001
14 Dealloc(X18)                                                   -0.020000    39.388000   39.408001
15 val X20 = Cuda(Indicator(Y, 10))                500 10         0.020000     39.408001   39.408001
16 val X21 = Log X19.copy                          500 10         0.020000     39.428001   39.428001
17 val X52 = 1/(X19.copy)                          500 10         0.020000     39.448002   39.448002
18 Print(((0 - (X20 . X21)) / |500|))                             0.000000     39.448002   39.448002
19
20 ................... 30 lines omitted ...................
21
22 cv2_B < X71 * d_Convolv(1,0)()/d_cv2_B                         0.000000     36.768002   48.448002
23 val X72 = X71 * d_Convolv(1,0)(cv2_W)/d_X9      500 20 12 12   5.760000     42.528000   54.208000
24 cv2_W < X71 * d_Convolv(1,0)(X9)/d_cv2_W                       0.000000     42.528000   54.208000
25 Dealloc(X71)                                                   -6.400000    36.127998   54.208000
26 val X74 = X72 * d_Pooling(2,2,0,true)(X9,X8)/d_X8
27                                                 500 20 24 24   23.040001    59.167999   77.248001
28 Dealloc(X72)                                                   -5.760000    53.408001   77.248001
29 Dealloc(X9)                                                    -5.760000    47.647999   77.248001
30 Dealloc(X8)                                                    -23.040001   24.608000   77.248001
31 cv1_B < X74 * d_Convolv(1,0)()/d_cv1_B                         0.000000     24.608000   77.248001
32 cv1_W < X74 * d_Convolv(1,0)(X7)/d_cv1_W                       0.000000     24.608000   77.248001
33 Dealloc(X74)                                                   -23.040001   1.568000    77.248001
34 Dealloc(X7)                                                    -1.568000    0.000000    77.248001

Figure 3: A portion of the IR expressions and memory information compiled from Lenet.

The DeepDSL compiler generates Java source code for each of the IR expressions. For example, line 3 loads a batch of images into GPU memory. Lines 4 and 5 perform the forward convolution and pooling computation respectively. Line 18 prints out the training loss. Line 22 updates the bias of the second convolution layer with its gradient.

Some computation (e.g. Log) is always in-place. Therefore, we make a copy of a tensor if it is passed to such a computation (e.g. Log X19.copy). Gradient updates such as cv1_W < X74 * d_Convolv(1,0)(X7)/d_cv1_W may be implemented as in-place computation as well, by directly updating the tensor cv1_W when computing the backward filter gradient of the convolution layer cv1.

5 COMPILATION

A DeepDSL program compiles to a Java source program, which uses a small Java library, JCuda, to call CUDA and CuDNN via a JNI wrapper. The compiled Java code does not depend on the DeepDSL compiler or Scala, which makes it more portable and easier to integrate with other applications. Most of the current tools use platform-dependent programming languages such as C, Python, and Lua, which compile to specific binaries for each installation.
Since our compiled program is Java, it runs directly on any platform that supports the JVM. Compilation of Java is trivial for most computing platforms. For example, the Java source generated by DeepDSL on a Windows laptop can run on a Linux server without any modifications. While it takes effort to install tools like Tensorflow, Caffe, or Torch on machines with different system architectures, running the Java code generated from DeepDSL requires very little effort.

Gradient Derivation and Optimization The gradient derivation and optimization are implemented by the Loop class called in the code below:

1 val loop = Loop(loss, accuracy, (x, y), param, solver)

To derive the gradient of a scalar expression loss with respect to a tensor variable p, we can write val grad = loss.grad(p), which evaluates to a tensor expression. The gradient updates are formed by expressions of the form Update(p, grad, α), which represents the computation p = p + α · grad.

The gradient updates of all parameters together with the loss expression are then passed to optimization functions to obtain a list of IR expressions ready for code generation. The optimization functions implement simplification, loop merging, code motion, vectorization, SSA transformation, common sub-expression elimination, inlining, tensor deallocation, and code scheduling.

Generated code The compiled Java code includes just one class. The class fields include the objects that handle computations such as convolution and activation and the objects that store tensors such as parameters and gradients. The class includes one method for the training loop and one method for testing.

The generated code includes the corresponding IR expressions in comments to improve readability. For example, the code below shows the Java statements generated for the forward inference of max pooling. Note that the variable names in the comments have no relation to the variable names in the code, as they are independently generated.

1 // val X9 = Pooling(2,2,0,true)(X8)
2 JCudaTensor x16;
3 JCudaTensor x17;
4 x17 = x9;
5 x16 = x18.forward(x17);

It is easy to perform some customization of the generated code, such as changing the number of training iterations or reducing the learning rate at a specified interval. Users can also use the generated code as a component of another application.

Persistency The compiled Java source includes code to save the trained parameters into files after the training is complete. When the same program, or another program compiled from the same network, starts, it can load the same parameters to resume training or for forward testing.

Workspace The convolution layers in the compiled Java source share the same workspace. Thus, users can place a limit on the total workspace by making one change. By reducing the workspace and using the memory efficient mode, users may reduce memory consumption to fit into a particular GPU.

6 PERFORMANCE

The primary compilation target of DeepDSL is a Java program that runs on NVIDIA GPUs through the CUDA/CuDNN library³. DeepDSL can encode well-known networks such as Alexnet, Overfeat, GoogleNet, Vgg, and Deep Residual Networks (ResNet). In this section, we evaluate the performance of DeepDSL against Caffe and Tensorflow using these networks. To be consistent, the DeepDSL, Caffe, and Tensorflow tests all follow the same Caffe prototxt definitions.

³DeepDSL has limited support for CPU, with features sufficient to implement Lenet.
Specifically, for Alexnet and GoogleNet, we followed the prototxt from Caffe's website⁴; for Vgg (Vgg-16), we followed the prototxt from this link⁵; for Overfeat, we followed the prototxt from IntelLabs⁶; and for Deep Residual Network (ResNet-50), we followed the prototxt from the author's website⁷. The Tensorflow implementations of these networks are either modified from versions of convnet-benchmarks⁸ or created from scratch. Note there are a couple of differences between the tests of Tensorflow and those of DeepDSL and Caffe. The training data in the Tensorflow tests is generated from random data in memory, while the DeepDSL and Caffe tests load real images from the Lmdb database. Also, the GoogleNet test of Tensorflow only includes the main branch of GoogleNet, while DeepDSL and Caffe train with the full network. All our tests are trained with ImageNet images that have been resized to 224 by 224 (though DeepDSL does support random cropping of images when their sizes are larger than the specified dimensions).

⁴github.com/BVLC/caffe/tree/master/models
⁵github.com/ruimashita/caffe-train/blob/master/vgg.train_val.prototxt
⁶github.com/IntelLabs/Latte.jl/blob/master/benchmarks/overfeat/overfeat.prototxt
⁷github.com/KaimingHe/deep-residual-networks/blob/master/prototxt/ResNet-50-deploy.prototxt
⁸github.com/soumith/convnet-benchmarks

Figure 4: Runtime performance of DeepDSL, Tensorflow, and Caffe (1 forward/backward iteration). DeepDSL and DeepDSL* are performance in runtime-efficient and memory-efficient mode respectively. The names of the networks are followed by the batch size. Caffe failed to run GoogleNet (batch 256) and ResNet (batch 64) and Tensorflow failed to run ResNet (batch 64) due to exhaustion of GPU memory.

The tests are run on a server with a single NVIDIA Tesla K40C GPU equipped with 12 gigabytes of memory. The server runs the CentOS 7 Linux distribution. DeepDSL uses the JCuda 0.8.0RC binding that runs against CUDA 8.0.27⁹. The DeepDSL programs are publicly available¹⁰.

The runtime performance of DeepDSL, Tensorflow, and Caffe is compared in Figure 4, where DeepDSL has a significant advantage over Caffe in Alexnet, Overfeat, and Googlenet, while being only marginally slower than Caffe in Vgg and ResNet (Deep Residual Network). DeepDSL is also faster than Tensorflow in Alexnet, Googlenet, and ResNet, while slightly slower in Overfeat and Vgg.

The memory consumption of DeepDSL, Tensorflow, and Caffe is compared in Figure 5, where DeepDSL uses less memory in Alexnet, Googlenet, and ResNet while Caffe uses less memory in Overfeat and Vgg. DeepDSL uses significantly less memory for Googlenet and ResNet, where Caffe runs out of memory for Googlenet at batch size 256 and ResNet at batch size 64. DeepDSL uses less memory than Tensorflow in all tests except Vgg. Tensorflow also ran out of memory for ResNet at batch size 64. It is unclear why Tensorflow uses a similar amount of memory for Overfeat with batch sizes 128 and 256.

In the tests, DeepDSL programs are run in the runtime efficient mode, which caches tensor objects, and in the memory efficient mode (denoted by DeepDSL*), which deallocates tensor objects as soon as possible.
DeepDSL* uses 10 to 30% less memory with a similar percentage of runtime overhead, except for Vgg and Googlenet, where the runtime overhead is relatively smaller than the memory saving. DeepDSL also lets CUDNN pick the convolution algorithms with maximum performance. In Overfeat (batch size 128), out of the 4290 megabytes of GPU memory consumed, more than 2700 megabytes are for convolution workspace. While Caffe uses less memory in this test, it also runs much slower.

⁹Note previous CUDA versions such as 6.5 or 7.x can also be used with minor modifications.
¹⁰github.com/deepdsl/deepdsl

Figure 5: Peak GPU memory use of DeepDSL, Tensorflow, and Caffe during training. DeepDSL and DeepDSL* are performance in runtime-efficient and memory-efficient mode respectively. Caffe ran out of GPU memory for Googlenet (batch 256) and ResNet (batch 64). Tensorflow ran out of memory for ResNet (batch 64).

Among all tests, DeepDSL either outperforms Caffe by a large margin or uses significantly less memory, with Vgg being the only exception, where Caffe uses slightly less time and memory. DeepDSL also has competitive runtime performance when compared with Tensorflow.

As a side note, while running DeepDSL requires little setup, installing libraries such as Caffe and Tensorflow requires a list of dependencies and long compilation sessions. Consequently, we skipped testing with Torch7 due to time limitations.

7 RELATED WORK

In this section, we review some popular tools: Torch7, Theano, Caffe, TensorFlow, and CNTK, and newer ones such as Chainer (Tokui et al., 2015) and MXNet (Chen et al., 2015b).

Torch7 (Collobert et al., 2011) uses the Lua language for integration with C programs and achieves C-like performance. It has a large set of optimized routines to support CPU, GPU, mobile, and FPGA backends. Theano (Bergstra et al., 2010), hosted in Python, allows users to define symbolic variables and functions (using NumPy (van der Walt et al., 2011)) to encode DL networks and compiles the symbolic expressions to C. Theano performs optimizations such as normalizing mathematical expressions, numerical stabilization, and code specialization during compilation, and the target code can run on CPU or GPU devices. Caffe (Jia et al., 2014) constructs a graph for a DL network by connecting the layers with the 4D arrays that store tensors. Caffe separates its DL network model representation (using Protocol Buffers (Google)) from the actual model parameter calculations. With its layered structure, Caffe computes the memory needed for each layer and reserves memory accordingly.

TensorFlow (Abadi et al., 2016) shares largely common design paradigms with Caffe. Its core is written in C++ and its computation is described with a graph where tensors and layers are alternately arranged. Unlike Caffe's tensor, TensorFlow's tensor is a typed multi-dimensional array and is persistent and mutable. Like TensorFlow and Caffe, CNTK describes a network with a configuration file. CNTK can encode an arbitrary computational network, and it can map computation onto multiple GPUs across multiple machines by assigning each computation node to a particular CPU/GPU device.

Compared to the "define-and-run" paradigm (adopted by Torch7, Theano, and Caffe), Chainer (Tokui et al., 2015) follows a "define-by-run" pattern, which essentially allows modifying the control flow during the execution of a computational graph. MXNet (Chen et al., 2015b) provides both declarative and imperative programming styles and support for multiple languages by embedding into multiple host languages and unifying the execution with one backend engine.

The major difference between DeepDSL and the above tools is that DeepDSL is fully abstract until code generation. This means that DeepDSL's intermediate representation can be compiled to different languages or to run on different platforms. While the current compilation target of DeepDSL is Java, targeting a different language mainly involves building an interface library to call CUDA routines, while the optimization components of DeepDSL remain the same. This separation between optimization and code generation also means that we can apply generic optimization techniques at the IR level without worrying about the underlying data structures, such as the representation of tensors or how the layers are connected. In fact, the optimization of DeepDSL involves nothing specific to deep neural networks, since it mostly consists of compilation techniques.

Note that while Theano and DeepDSL are similar in the way that DSL expressions are optimized and transformed, there are two important differences that make DeepDSL more efficient and flexible. The first is that while Theano expressions are treated as graphs during optimization, DeepDSL expressions are optimized in two phases. The first phase is at the expression level, where the training loss and the parameter gradients go through the process of simplification, loop merging, code motion, and vectorization. In the second phase, DeepDSL expressions are reduced to static single assignment form for additional optimizations such as common subexpression elimination, code scheduling, inlining of in-place computation, and tensor deallocation.

The second is that DeepDSL generates target code using a single-pass generator (about 1200 lines of code) that prints Java source code as strings to a file. The input of the generator is DeepDSL expressions, which are completely independent from the generated code. The generated Java code is high-level and human readable, with a simple Java API that allows customization. This clean separation between DSL expressions and target code also allows independent evolution of DSL optimization and target-code generation. In contrast, the code generation of Theano is embedded in its functions for low-level computation and is tied to C code that is not readable to users.

8 CONCLUSION

We have developed a domain specific language, DeepDSL, that compiles to a Java source program for deep learning. The compiled DeepDSL programs are very easy to use and extend, as their primary dependencies are just the JCuda and CUDA libraries. DeepDSL programs are also efficient, and their runtime performance and memory consumption are significantly better than Caffe and Tensorflow in some DL networks. DeepDSL performs static analysis for early error detection and provides a readable intermediate representation and memory consumption analysis. DeepDSL allows compact encoding of complex networks, and since it is based on a strongly typed language, Scala, writing DeepDSL programs is less error-prone than writing in dynamic languages such as Python.

While the compiled DeepDSL programs are efficient, DeepDSL itself is not optimized. Though compiling simpler networks such as Alexnet takes a few seconds, the compilation of complex networks such as ResNet can take a few minutes.
As future work, we plan to optimize DeepDSL to improve its compilation efficiency. Also, while the memory-efficient mode of DeepDSL can reduce GPU memory consumption, it may not be enough for memory-intensive networks such as Vgg. We therefore plan to implement GPU memory virtualization by paging out tensors that are not immediately needed.

REFERENCES

M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mane, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viegas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. ArXiv e-prints, March 2016.

Amit Agarwal, Eldar Akchurin, Chris Basoglu, Guoguo Chen, Scott Cyphers, Jasha Droppo, Adam Eversole, Brian Guenter, Mark Hillebrand, Ryan Hoens, Xuedong Huang, Zhiheng Huang, Vladimir Ivanov, Alexey Kamenev, Philipp Kranen, Oleksii Kuchaiev, Wolfgang Manousek, Avner May, Bhaskar Mitra, Olivier Nano, Gaizka Navarro, Alexey Orlov, Marko Padmilac, Hari Parthasarathi, Baolin Peng, Alexey Reznichenko, Frank Seide, Michael L. Seltzer, Malcolm Slaney, Andreas Stolcke, Yongqiang Wang, Huaming Wang, Kaisheng Yao, Dong Yu, Yu Zhang, and Geoffrey Zweig. An introduction to computational networks and the computational network toolkit. Technical Report MSR-TR-2014-112, August 2014. URL http://research.microsoft.com/apps/pubs/default.aspx?id=226641.

James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010. Oral presentation.

Shu-Ching Chen, Ramesh Jain, Yonghong Tian, and Haohong Wang. Guest editorial multimedia: The biggest big data. IEEE Transactions on Multimedia, 17(9):1401-1403, 2015a.

Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. In Neural Information Processing Systems, Workshop on Machine Learning Systems, 2015b.

R. Collobert, K. Kavukcuoglu, and C. Farabet. Torch7: A Matlab-like environment for machine learning. In BigLearn, NIPS Workshop, 2011.

Google. Protocol buffers. http://code.google.com/apis/protocolbuffers/.

Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444, 2015.

Seiya Tokui, Kenta Oono, Shohei Hido, and Justin Clayton. Chainer: a next-generation open source framework for deep learning. In Proceedings of the Workshop on Machine Learning Systems (LearningSys) at the Twenty-ninth Annual Conference on Neural Information Processing Systems (NIPS), 2015. URL http://learningsys.org/papers/LearningSys_2015_paper_33.pdf.

Stéfan van der Walt, S. Chris Colbert, and Gaël Varoquaux. The NumPy array: a structure for efficient numerical computation. CoRR, abs/1102.1523, 2011. URL http://arxiv.org/abs/1102.1523.
r1kQkVFgl
Under review as a conference paper at ICLR 2017

LEARNING PYTHON CODE SUGGESTION WITH A SPARSE POINTER NETWORK

Avishkar Bhoopchand, Tim Rocktäschel, Earl Barr & Sebastian Riedel
Department of Computer Science, University College London
avishkar.bhoopchand.15@ucl.ac.uk, {t.rocktaschel,e.barr,s.riedel}@cs.ucl.ac.uk

ABSTRACT

To enhance developer productivity, all modern integrated development environments (IDEs) include code suggestion functionality that proposes likely next tokens at the cursor. While current IDEs work well for statically-typed languages, their reliance on type annotations means that they do not provide the same level of support for dynamic programming languages. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code. Recent work has shown that language models can improve code suggestion systems by learning from software repositories. This paper introduces a neural language model with a sparse pointer network aimed at capturing very long-range dependencies. We release a large-scale code suggestion corpus of 41M lines of Python code crawled from GitHub. On this corpus, we found standard neural language models to perform well at suggesting local phenomena, but to struggle to refer to identifiers that were introduced many tokens in the past. By augmenting a neural language model with a pointer network specialized in referring to predefined classes of identifiers, we obtain a much lower perplexity and a 5 percentage point increase in accuracy for code suggestion compared to an LSTM baseline. In fact, this increase in code suggestion accuracy is due to a 13 times more accurate prediction of identifiers. Furthermore, a qualitative analysis shows that this model indeed captures interesting long-range dependencies, like referring to a class member defined over 60 tokens in the past.

1 INTRODUCTION

Integrated development environments (IDEs) are essential tools for programmers. Especially when a developer is new to a codebase, one of their most useful features is code suggestion: given a piece of code as context, suggest a likely sequence of next tokens. Typically, the IDE suggests an identifier or a function call, including API calls. While extensive support exists for statically-typed languages such as Java, code suggestion for dynamic languages like Python is harder and less well supported because of the lack of type annotations. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code.

Recently, methods from statistical natural language processing (NLP) have been used to train code suggestion systems from code usage in large code repositories (Hindle et al., 2012; Allamanis & Sutton, 2013; Tu et al., 2014). To this end, usually an n-gram language model is trained to score possible completions. Neural language models for code suggestion (White et al., 2015; Das & Shah, 2015) have extended this line of work to capture more long-range dependencies. Yet, these standard neural language models are limited by the so-called hidden state bottleneck, i.e., all context information has to be stored in a fixed-dimensional internal vector representation. This limitation restricts such models to local phenomena and does not capture very long-range semantic relationships, like suggesting a call to a function that has been defined many tokens before.

To address these issues, we create a large corpus of 41M lines of Python code by using a heuristic for crawling high-quality code repositories from GitHub.
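For concreteness, the crawling heuristic (detailed in Section 3: Python projects with more than 100 stars, ranked by forks) could be implemented roughly as below. This is our own illustration, not the authors' released tooling: it assumes the public GitHub repository-search API and the third-party `requests` package, and it omits cloning, authentication, and rate-limit handling.

```python
# Sketch: collect candidate repository URLs for a code corpus using the
# stars/forks popularity heuristic (illustrative; the paper does not
# publish its crawler).
import requests

def candidate_repos(max_repos=1000):
    repos, page = [], 1
    while len(repos) < max_repos:
        resp = requests.get(
            "https://api.github.com/search/repositories",
            params={
                "q": "language:python stars:>100",  # popularity filter
                "sort": "forks", "order": "desc",   # rank by forks, descending
                "per_page": 100, "page": page,
            },
        )
        resp.raise_for_status()
        items = resp.json().get("items", [])
        if not items:  # no more results
            break
        repos.extend(item["clone_url"] for item in items)
        page += 1
    return repos[:max_repos]
```

A subsequent filtering step (keeping only projects that compile under Python 3, as described in Section 3) would then reduce the candidate set.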
We investigate, for the first time, the use of attention (Bahdanau et al., 2014) for code suggestion and find that, despite a substantial improvement in accuracy, it still makes avoidable mistakes. Hence, we introduce a model that leverages long-range Python dependencies by selectively attending over the introduction of identifiers, as determined by examining the abstract syntax tree. The model is a form of pointer network (Vinyals et al., 2015a), and learns to dynamically choose, based on the current context, between syntax-aware pointing for modeling long-range dependencies and free-form generation to deal with local phenomena.

Our contributions are threefold: (i) we release a code suggestion corpus of 41M lines of Python code crawled from GitHub, (ii) we introduce a sparse attention mechanism that efficiently captures very long-range dependencies for code suggestion in this dynamic programming language, and (iii) we provide a qualitative analysis demonstrating that this model is indeed able to learn such long-range dependencies.

2 METHODS

We first revisit neural language models, before briefly describing how to extend such a language model with an attention mechanism. Then we introduce a sparse attention mechanism for a pointer network that can exploit the Python abstract syntax tree of the current context for code suggestion.

2.1 NEURAL LANGUAGE MODEL

Code suggestion can be approached by a language model that measures the probability of observing a sequence of tokens in a Python program. For example, for the sequence $S = a_1, \ldots, a_N$, the joint probability of $S$ factorizes according to

$P(S) = P(a_1) \prod_{t=2}^{N} P(a_t \mid a_{t-1}, \ldots, a_1)$   (1)

where the parameters are estimated from a training corpus. Given a sequence of Python tokens, we seek to predict the next $M$ tokens $a_{t+1}, \ldots, a_{t+M}$ that maximize Equation 1:

$\arg\max_{a_{t+1}, \ldots, a_{t+M}} P(a_1, \ldots, a_t, a_{t+1}, \ldots, a_{t+M})$.   (2)

In this work, we build upon neural language models using Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM; Hochreiter & Schmidhuber, 1997). This neural language model estimates the probabilities in Equation 1 using the output vector of an LSTM at time step $t$ (denoted $h_t$ here) according to

$P(a_t = \tau \mid a_{t-1}, \ldots, a_1) = \frac{\exp(v_\tau^\top h_t + b_\tau)}{\sum_{\tau'} \exp(v_{\tau'}^\top h_t + b_{\tau'})}$   (3)

where $v_\tau$ is a parameter vector associated with token $\tau$ in the vocabulary.

Neural language models can, in theory, capture long-term dependencies in token sequences through their internal memory. However, as this internal memory has fixed dimension and can be updated at every time step, such models often only capture local phenomena. In contrast, we are interested in very long-range dependencies, like referring to a function identifier introduced many tokens in the past. For example, a function identifier may be introduced at the top of a file and only used near the bottom. In the following, we investigate various external memory architectures for neural code suggestion.

2.2 ATTENTION

A straightforward approach to capturing long-range dependencies is to use a neural attention mechanism (Bahdanau et al., 2014) on the previous $K$ output vectors of the language model. Attention mechanisms have been successfully applied to sequence-to-sequence tasks such as machine translation (Bahdanau et al., 2014), question answering (Hermann et al., 2015), and syntactic parsing (Vinyals et al., 2015b), as well as to dual-sequence modeling like recognizing textual entailment (Rocktäschel et al., 2016). The idea is to overcome the hidden-state bottleneck by allowing referral back to previous output vectors.
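Before moving on, Equation 3 can be made concrete with plain numpy. The sketch below is our illustration: the names (the matrix V stacking the token vectors v_tau, the bias vector b, and the toy sizes) are stand-ins for the trainable parameters, initialised randomly here only so the snippet runs.

```python
import numpy as np

def next_token_distribution(h_t, V, b):
    """Equation 3: softmax over the vocabulary given LSTM output h_t.

    h_t: (k,)      output vector of the LSTM at time step t
    V:   (|V|, k)  rows are the per-token parameter vectors v_tau
    b:   (|V|,)    per-token biases b_tau
    """
    logits = V @ h_t + b
    logits -= logits.max()      # stabilise the exponentials
    p = np.exp(logits)
    return p / p.sum()

# Toy usage: greedily pick the most probable next token (a greedy
# approximation to the search in Equation 2).
k, vocab = 200, 1000
rng = np.random.default_rng(0)
h_t = rng.normal(size=k)
V, b = rng.normal(size=(vocab, k)), np.zeros(vocab)
print(int(np.argmax(next_token_distribution(h_t, V, b))))
```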
Recently, these mechanisms were applied to language modelling by Cheng et al. (2016) and Tran et al. (2016).

Formally, an attention mechanism with a fixed memory $M_t \in \mathbb{R}^{k \times K}$ of $K$ vectors $m_i \in \mathbb{R}^k$ for $i \in [1, K]$ produces an attention distribution $\alpha_t \in \mathbb{R}^{1 \times K}$ and a context vector $c_t \in \mathbb{R}^k$ at each time step $t$, according to Equations 4 to 7. Here, $W^M, W^h \in \mathbb{R}^{k \times k}$ and $w \in \mathbb{R}^k$ are trainable parameters, and $\mathbf{1}_K$ represents a $K$-dimensional vector of ones.

$M_t = [m_1 \ldots m_K] \in \mathbb{R}^{k \times K}$   (4)
$G_t = \tanh(W^M M_t + (W^h h_t)\,\mathbf{1}_K^\top) \in \mathbb{R}^{k \times K}$   (5)
$\alpha_t = \mathrm{softmax}(w^\top G_t) \in \mathbb{R}^{1 \times K}$   (6)
$c_t = M_t \alpha_t^\top \in \mathbb{R}^k$   (7)

For language modeling, we populate $M_t$ with a fixed window of the previous $K$ LSTM output vectors. To obtain a distribution over the next token, we combine the context vector $c_t$ of the attention mechanism with the output vector $h_t$ of the LSTM using a trainable projection matrix $W^A \in \mathbb{R}^{k \times 2k}$. The resulting final output vector $n_t \in \mathbb{R}^k$ encodes the next-word distribution and is projected to the size of the vocabulary $|V|$. Subsequently, we apply a softmax to arrive at a probability distribution $y_t \in \mathbb{R}^{|V|}$ over the next token. This process is presented in Equations 8 and 9, where $W^V \in \mathbb{R}^{|V| \times k}$ and $b^V \in \mathbb{R}^{|V|}$ are trainable parameters.

$n_t = \tanh\!\left(W^A \begin{bmatrix} h_t \\ c_t \end{bmatrix}\right) \in \mathbb{R}^k$   (8)
$y_t = \mathrm{softmax}(W^V n_t + b^V) \in \mathbb{R}^{|V|}$   (9)

The problem with the attention mechanism above is that it quickly becomes computationally expensive for large $K$. Moreover, attending over many memories can make training hard, as a lot of noise is introduced in the early stages of optimization, when the LSTM outputs (and thus the memory $M_t$) are more or less random. To alleviate these problems we now turn to pointer networks and a simple heuristic for populating $M_t$ that permits the efficient retrieval of identifiers in a large history of Python code.

2.3 SPARSE POINTER NETWORK

We develop an attention mechanism that provides a filtered view of a large history of Python tokens. At any given time step, the memory consists of context representations of the previous $K$ identifiers introduced in the history. This allows us to model long-range dependencies found in identifier usage. For instance, a class identifier may be declared hundreds of lines of code before it is used. Given a history of Python tokens, we obtain a next-word distribution from a weighted average of the sparse pointer network for identifier reference and a standard neural language model. The weighting of the two is determined by a controller.

Formally, at time step $t$, the sparse pointer network operates on a memory $M_t \in \mathbb{R}^{k \times K}$ of only the $K$ previous identifier representations (e.g. function identifiers, class identifiers and so on). In addition, we maintain a vector $m_t = [id_1, \ldots, id_K] \in \mathbb{N}^K$ of symbol ids for these identifier representations (i.e. pointers into the large global vocabulary).

As before, we calculate a context vector $c_t$ using the attention mechanism (Equation 7), but on a memory $M_t$ containing only representations of identifiers that were declared in the history. Next, we obtain a pseudo-sparse distribution over the global vocabulary from

$s_t[i] = \begin{cases} \alpha_t[j] & \text{if } m_t[j] = i \\ C & \text{otherwise} \end{cases}$   (10)
$i_t = \mathrm{softmax}(s_t) \in \mathbb{R}^{|V|}$   (11)

where $C$ is a large negative constant (e.g. $-1000$).
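The following numpy sketch implements Equations 4-7 and the pseudo-sparse scatter of Equations 10-11. It is our own illustration under assumed toy shapes; the parameter names mirror the text (W^M, W^h, w) and are randomly initialised purely so the snippet runs.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

k, K, vocab_size, C = 8, 4, 50, -1000.0   # C: large negative constant

rng = np.random.default_rng(1)
M = rng.normal(size=(k, K))               # memory of K identifier vectors (Eq. 4)
h_t = rng.normal(size=k)                  # current LSTM output
W_M, W_h = rng.normal(size=(k, k)), rng.normal(size=(k, k))
w = rng.normal(size=k)

G = np.tanh(W_M @ M + (W_h @ h_t)[:, None])   # Eq. 5 (broadcast over K slots)
alpha = softmax(w @ G)                         # Eq. 6: attention over memory
c_t = M @ alpha                                # Eq. 7: context vector

# Sparse pointer scatter (Eqs. 10-11): map slot weights to vocabulary ids.
m_t = np.array([3, 17, 17, 42])           # vocab ids of the K identifiers
s = np.full(vocab_size, C)                # everything else gets C
for j, vocab_id in enumerate(m_t):
    s[vocab_id] = alpha[j]                # last matching slot wins on duplicates
i_t = softmax(s)                          # pseudo-sparse distribution over V
```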
In addition, we calculate a next-word distribution from a standard neural language model:

$y_t^{LM} = \mathrm{softmax}(W^V h_t + b^V) \in \mathbb{R}^{|V|}$   (12)

and we use a controller to calculate a distribution $\lambda_t \in \mathbb{R}^2$ over the language model and the pointer network for the final weighted next-word distribution $y_t$ via

$\bar{h}_t = \begin{bmatrix} h_t \\ x_t \\ c_t \end{bmatrix} \in \mathbb{R}^{3k}$   (13)
$\lambda_t = \mathrm{softmax}(W^\lambda \bar{h}_t + b^\lambda) \in \mathbb{R}^2$   (14)
$y_t = [\, y_t^{LM} \;\; i_t \,]\, \lambda_t \in \mathbb{R}^{|V|}$   (15)

Here, $x_t$ is the representation of the input token, and $W^\lambda \in \mathbb{R}^{2 \times 3k}$ and $b^\lambda \in \mathbb{R}^2$ are a trainable weight matrix and bias respectively. This controller is conditioned on the input, output, and context representations. This means that, for deciding whether to refer to an identifier or to generate from the global vocabulary, the controller has access to information from the encoded next-word distribution $h_t$ of the standard neural language model, as well as from the attention-weighted identifier representations $c_t$ of the current history.

Figure 1 (omitted here) overviews this process: a sparse pointer network for code suggestion on a Python code snippet, showing the next-word distributions of the language model and identifier attention and their weighted combination through $\lambda$. In the figure, the identifier base_path appears twice, once as an argument to a function and once as a member of a class (denoted by *). Each appearance has a different id in the vocabulary and obtains a different probability from the model. In the example, the model correctly chooses to refer to the member of the class instead of the out-of-scope function argument, although, from a user's point of view, the suggestion would be the same in both cases.

3 LARGE-SCALE PYTHON CORPUS

Previous work on code suggestion either focused on statically-typed languages (particularly Java) or trained on very small corpora. Thus, we decided to collect a new large-scale corpus of the dynamic programming language Python. According to the programming language popularity website PYPL (Carbonnelle, 2016), Python is the second most popular language after Java. It is also the third most common language in terms of number of repositories on the open-source code repository GitHub, after JavaScript and Java (Zapponi, 2016).

We collected a corpus of 41M lines of Python code from GitHub projects. Ideally, we would like this corpus to only contain high-quality Python code, as our language model learns to suggest code from how users write code. However, it is difficult to automatically assess what constitutes high-quality code. Thus, we resort to the heuristic that popular code projects tend to be of good quality. There are two metrics on GitHub that we can use for this purpose, namely stars (similar to bookmarks) and forks (copies of a repository that allow users to freely experiment with changes without affecting the original repository). Similar to Allamanis & Sutton (2013) and Allamanis et al. (2014), we select Python projects with more than 100 stars, sort them by the number of forks in descending order, and take the top 1000 projects. We then removed projects that did not compile with Python 3, leaving us with 949 projects. We split the corpus on the project level into train, dev, and test. Table 1 presents the corpus statistics.

Table 1: Python corpus statistics (the vocabulary size is reported for the training set).

Dataset   #Projects   #Files    #Lines       #Tokens        Vocabulary Size
Train     489         118,298   26,868,583   88,935,698     2,323,819
Dev       179         26,466    5,804,826    18,147,341     -
Test      281         43,062    8,398,100    30,178,356     -
Total     949         187,826   41,071,509   137,261,395    -

Figure 2 (omitted here): Example of the Python code normalization. Original file on the left and normalized version on the right.

3.1 NORMALIZATION OF IDENTIFIERS

Unsurprisingly, the long tail of words in the vocabulary consists of rare identifiers. To improve generalization, we normalize identifiers before feeding the resulting token stream to our models. That is, we replace every identifier name with an anonymous identifier indicating the identifier group (class, variable, argument, attribute or function), concatenated with a random number that makes the identifier unique in its scope. Note that we only replace novel identifiers defined within a file; identifier references to external APIs and libraries are left untouched. Consistent with previous corpus creation for code suggestion (e.g. Khanh Dam et al., 2016; White et al., 2015), we replace numerical constant tokens with $NUM$, remove comments, reformat the code, and replace tokens appearing fewer than five times with an $OOV$ (out of vocabulary) token.

4 EXPERIMENTS

Although previous work by White et al. (2015) already established that a simple neural language model outperforms an n-gram model for code suggestion, we include a number of n-gram baselines to confirm this observation. Specifically, we use n-gram models for $n \in \{3, 4, 5, 6\}$ with Modified Kneser-Ney smoothing (Kneser & Ney, 1995) from the Kyoto Language Modelling Toolkit (Neubig, 2012).

We train the sparse pointer network using mini-batch SGD with a batch size of 30 and truncated backpropagation through time (Werbos, 1990), with a history of 20 identifier representations. We use an initial learning rate of 0.7 and decay it by 0.9 after every epoch. As additional baselines, we test a neural language model with LSTM units, with and without attention. For the attention language models, we experiment with a fixed-window attention memory of the previous 20 and 50 tokens respectively, and a batch size of 75. We found during testing that the baseline models performed worse with the same batch size as the sparse pointer network (30). We therefore chose to report the stronger results obtained with a batch size of 75.

All neural language models were developed in TensorFlow (Abadi et al., 2016) and trained using a cross-entropy loss. While processing a Python source code file, the last recurrent state of the RNN is fed as the initial state of the subsequent sequence of the same file and reset between files. All models use an input and hidden size of 200, an LSTM forget gate bias of 1 (Jozefowicz et al., 2015), gradient norm clipping at 5 (Pascanu et al., 2013), and parameters randomly initialized in the interval (-0.05, 0.05). As a regularizer, we use dropout of 0.1 on the input representations. Furthermore, we use a sampled softmax (Jean et al., 2015) with a log-uniform sampling distribution and a sample size of 1000.

5 RESULTS

We evaluate all models using perplexity (PP), as well as accuracy of the top prediction (Acc) and of the top five predictions (Acc@5).
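As an aside on the reported metrics: the sketch below shows one standard way perplexity and top-k accuracy can be computed from per-step predicted distributions. It is a generic formulation for illustration, not code from the paper.

```python
import numpy as np

def perplexity(probs_of_targets):
    """exp of the mean negative log-likelihood of the true next tokens."""
    return float(np.exp(-np.mean(np.log(probs_of_targets))))

def accuracy_at_k(dists, targets, k=5):
    """Fraction of steps where the true token is among the k most probable."""
    top_k = np.argsort(dists, axis=1)[:, -k:]   # indices of the k largest probs
    hits = [t in row for t, row in zip(targets, top_k)]
    return float(np.mean(hits))

# Toy example: 3 time steps, vocabulary of 6 tokens.
dists = np.array([[.1, .5, .1, .1, .1, .1],
                  [.05, .05, .6, .1, .1, .1],
                  [.2, .2, .2, .2, .1, .1]])
targets = np.array([1, 2, 5])
print(perplexity(dists[np.arange(3), targets]),
      accuracy_at_k(dists, targets, k=2))
```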
The results are summarized in Table 2.

Table 2: Perplexity (PP), accuracy (Acc) and accuracy among the top 5 predictions (Acc@5), over all tokens (All), identifiers only (IDs), and non-identifiers (Other).

Model                    Train PP   Dev PP   Test PP   Acc All   Acc IDs   Acc Other   Acc@5 All   Acc@5 IDs   Acc@5 Other
3-gram                      12.90    24.19     26.90     13.19        -          -        50.81          -            -
4-gram                       7.60    21.07     23.85     13.68        -          -        51.26          -            -
5-gram                       4.52    19.33     21.22     13.90        -          -        51.49          -            -
6-gram                       3.37    18.73     20.17     14.51        -          -        51.76          -            -
LSTM                         9.29    13.08     14.01     57.91      2.1       62.8        76.30        4.5         82.6
LSTM w/ Attention 20         7.30    11.07     11.74     61.30     21.4       64.8        79.32       29.9         83.7
LSTM w/ Attention 50         7.09     9.83     10.05     63.21     30.2       65.3        81.69       41.3         84.1
Sparse Pointer Network       6.41     9.40      9.18     62.97     27.3       64.9        82.62       43.6         84.5

We can confirm that for code suggestion, neural models outperform n-gram language models by a large margin. Furthermore, adding attention improves the results substantially (2.3 lower perplexity and 3.4 percentage points increased accuracy). Interestingly, this increase can be attributed to a superior prediction of identifiers, whose accuracy increased from 2.1% to 21.4%. An LSTM with an attention window of 50 gives us the best accuracy for the top prediction. We achieve further improvements in perplexity and in accuracy of the top five predictions by using a sparse pointer network that uses a smaller memory of the past 20 identifier representations.

5.1 QUALITATIVE ANALYSIS

Figures 3a-d show a code suggestion example involving an identifier usage. While the LSTM baseline is uncertain about the next token, we get a sensible prediction by using attention or the sparse pointer network. The sparse pointer network provides more reasonable alternative suggestions beyond the correct top suggestion.

Figures 3e-h show the use case of referring to a class attribute declared 67 tokens in the past. Only the sparse pointer network makes a good suggestion. Furthermore, the attention weights in Figure 3i demonstrate that this model distinguishes attributes from other groups of identifiers. We give a full example of a token-by-token suggestion of the sparse pointer network in Figure 4 in the Appendix.

Figure 3 (omitted here): Code suggestion example involving a reference to a variable (panels a-d: code snippet, LSTM model, LSTM w/ Attention 50, Sparse Pointer Network), a long-range dependency to a class member (panels e-h, same models), and the Sparse Pointer Network's attention over its memory of identifier representations (panel i).

6 RELATED WORK

Previous code suggestion work using methods from statistical NLP has mostly focused on n-gram models. Much of this work is inspired by Hindle et al. (2012), who argued that real programs fall into a much smaller space than the flexibility of programming languages allows. They were able to capture the repetitiveness and predictable statistical properties of real programs using language models. Subsequently, Tu et al. (2014) improved upon Hindle et al.'s work by adding a cache mechanism that allowed them to exploit locality stemming from the specialisation and decoupling of program modules. Tu et al.'s idea of adding a cache mechanism to the language model is specifically designed to exploit the properties of source code, and thus follows the same aim as the sparse attention mechanism introduced in this paper.

While the majority of preceding work trained on small corpora, Allamanis & Sutton (2013) created a corpus of 352M lines of Java code, which they analysed with n-gram language models. The size of the corpus allowed them to train a single language model that was effective across multiple different project domains. White et al. (2015) later demonstrated that neural language models outperform n-gram models for code suggestion. They compared various n-gram models (up to nine grams), including Tu et al.'s cache model, with a basic RNN neural language model. Khanh Dam et al. (2016) compared White et al.'s basic RNN with LSTMs and found that the latter are better at code suggestion due to their improved ability to learn long-range dependencies found in source code.
Our paper extends this line of work by introducing a sparse attention model that captures even longer dependencies.

The combination of lagged attention mechanisms with language modelling is inspired by Cheng et al. (2016), who equipped LSTM cells with a fixed-length memory tape rather than a single memory cell. They achieved promising results on the standard Penn Treebank benchmark corpus (Marcus et al., 1993). Similarly, Tran et al. (2016) added a memory block to LSTMs for language modelling of English, German and Italian, and outperformed both n-gram and neural language models. Their memory encompasses representations of all possible words in the vocabulary, rather than providing a sparse view as we do. Attention mechanisms were previously applied to the study of source code by Allamanis et al. (2016), who used a convolutional neural network combined with an attention mechanism to generate method names from bodies.

An alternative to our purely lexical approach to code suggestion involves the use of probabilistic context-free grammars (PCFGs), which exploit the formal grammar specifications and well-defined, deterministic parsers available for source code. These were used by Allamanis & Sutton (2014) to extract idiomatic patterns from source code. A weakness of PCFGs is their inability to model context-dependent rules of programming languages, such as the rule that variables need to be declared before being used. Maddison & Tarlow (2014) added context-aware variables to their PCFG model in order to capture such rules.

Ling et al. (2016) recently used a pointer network to generate code from natural language descriptions. Our use of a controller for deciding whether to generate from a language model or to copy an identifier using a sparse pointer network is inspired by their latent code predictor. However, their inputs (textual descriptions) are short, whereas code suggestion requires capturing very long-range dependencies, which we addressed with a filtered view on the memory of previous identifier representations.

7 CONCLUSIONS AND FUTURE WORK

In this paper, we investigated neural language models for code suggestion of the dynamically-typed programming language Python. We released a corpus of 41M lines of Python crawled from GitHub and compared n-gram models, standard neural language models, and attention. By using attention, we observed an order of magnitude more accurate prediction of identifiers. Furthermore, we proposed a sparse pointer network that can efficiently capture long-range dependencies by only operating on a filtered view of a memory of previous identifier representations. This model achieves the lowest perplexity and the best accuracy among the top five predictions. The Python corpus and the code for our models are released at https://github.com/uclmr/pycodesuggest.

The presented methods were only tested for code suggestion within the same Python file. We are interested in scaling the approach to the level of entire code projects and collections thereof, as well as in integrating a trained code suggestion model into an existing IDE. Furthermore, we plan to work on code completion, i.e., models that provide a likely continuation of a partial token, using character language models (Graves, 2013).

ACKNOWLEDGMENTS

This work was supported by Microsoft Research through its PhD Scholarship Programme, an Allen Distinguished Investigator Award, and a Marie Curie Career Integration Award.

REFERENCES

Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek Gordon Murray, Benoit Steiner, Paul A. Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: A system for large-scale machine learning. CoRR, abs/1605.08695, 2016. URL http://arxiv.org/abs/1605.08695.

Miltiadis Allamanis and Charles Sutton. Mining idioms from source code. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, FSE 2014, pp. 472-483, New York, NY, USA, 2014. ACM. doi: 10.1145/2635868.2635901.

Miltiadis Allamanis and Charles A. Sutton. Mining source code repositories at massive scale using language modeling. In Thomas Zimmermann, Massimiliano Di Penta, and Sunghun Kim (eds.), MSR, pp. 207-216. IEEE Computer Society, 2013.

Miltiadis Allamanis, Earl T. Barr, Christian Bird, and Charles Sutton. Learning natural coding conventions. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, FSE 2014, pp. 281-293, New York, NY, USA, 2014. ACM. doi: 10.1145/2635868.2635883.

Miltiadis Allamanis, Hao Peng, and Charles A. Sutton. A convolutional attention network for extreme summarization of source code. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, pp. 2091-2100, 2016. URL http://jmlr.org/proceedings/papers/v48/allamanis16.html.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014. URL http://arxiv.org/abs/1409.0473.

Pierre Carbonnelle. PYPL popularity of programming language. http://pypl.github.io/PYPL.html, 2016. [Online; accessed 30-August-2016].

Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine reading. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 551-561. Association for Computational Linguistics, 2016. URL http://aclweb.org/anthology/D16-1053.

Subhasis Das and Chinmayee Shah. Contextual code completion using machine learning. 2015.

Alex Graves. Generating sequences with recurrent neural networks. CoRR, abs/1308.0850, 2013. URL http://arxiv.org/abs/1308.0850.

Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28 (NIPS 2015), pp. 1693-1701, 2015.

Abram Hindle, Earl T. Barr, Zhendong Su, Mark Gabel, and Premkumar Devanbu. On the naturalness of software. In Proceedings of the 34th International Conference on Software Engineering, ICSE '12, pp. 837-847, Piscataway, NJ, USA, 2012. IEEE Press.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, November 1997. doi: 10.1162/neco.1997.9.8.1735.

Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 1-10, Beijing, China, July 2015. Association for Computational Linguistics.

Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 2342-2350, 2015.

H. Khanh Dam, T. Tran, and T. Pham. A deep language model for software code. ArXiv e-prints, August 2016.

R. Kneser and H. Ney. Improved backing-off for m-gram language modeling. In 1995 International Conference on Acoustics, Speech, and Signal Processing (ICASSP-95), volume 1, pp. 181-184, May 1995. doi: 10.1109/ICASSP.1995.479394.

Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, Andrew Senior, Fumin Wang, and Phil Blunsom. Latent predictor networks for code generation. arXiv preprint arXiv:1603.06744, 2016.

Chris J. Maddison and Daniel Tarlow. Structured generative models of natural source code. In International Conference on Machine Learning, 2014.

Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330, 1993.

Graham Neubig. KyLM - the Kyoto language modeling toolkit. http://www.phontron.com/kylm/, 2012. [Online; accessed 23-July-2016].

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, pp. 1310-1318, 2013.

Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, and Phil Blunsom. Reasoning about entailment with neural attention. In ICLR, 2016.

Ke M. Tran, Arianna Bisazza, and Christof Monz. Recurrent memory networks for language modeling. In NAACL HLT 2016, pp. 321-331, 2016. URL http://aclweb.org/anthology/N/N16/N16-1036.pdf.

Zhaopeng Tu, Zhendong Su, and Premkumar Devanbu. On the localness of software. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, FSE 2014, pp. 269-280, New York, NY, USA, 2014. ACM. doi: 10.1145/2635868.2635875.

Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Advances in Neural Information Processing Systems, pp. 2692-2700, 2015a.

Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey E. Hinton. Grammar as a foreign language. In Advances in Neural Information Processing Systems 28 (NIPS 2015), pp. 2773-2781, 2015b.

Paul J. Werbos. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10):1550-1560, 1990.

Martin White, Christopher Vendome, Mario Linares-Vásquez, and Denys Poshyvanyk. Toward deep learning software repositories. In Proceedings of the 12th Working Conference on Mining Software Repositories, MSR '15, pp. 334-345, Piscataway, NJ, USA, 2015. IEEE Press.

Carlo Zapponi. GitHut - programming languages and GitHub. http://githut.info/, 2016. [Online; accessed 19-August-2016].

APPENDIX

Figure 4 (omitted here): Full example of code suggestion with a Sparse Pointer Network. Boldface tokens on the left show the first declaration of an identifier. The middle part visualizes the memory of representations of these identifiers. The right part visualizes the output of the controller, which is used for interpolating between the language model (LM) and the attention of the pointer network (Att).
BkJsCIcgl
Under review as a conference paper at ICLR 2017

THE PREDICTRON: END-TO-END LEARNING AND PLANNING

David Silver*, Hado van Hasselt*, Matteo Hessel*, Tom Schaul*, Arthur Guez*, Tim Harley, Gabriel Dulac-Arnold, David Reichert, Neil Rabinowitz, Andre Barreto, Thomas Degris
DeepMind, London
{davidsilver,hado,mtthss,schaul,aguez}@google.com
(*Primary contributors)

ABSTRACT

One of the key challenges of artificial intelligence is to learn models that are effective in the context of planning. In this document we introduce the predictron architecture. The predictron consists of a fully abstract model, represented by a Markov reward process, that can be rolled forward multiple "imagined" planning steps. Each forward pass of the predictron accumulates internal rewards and values over multiple planning depths. The predictron is trained end-to-end so as to make these accumulated values accurately approximate the true value function. We applied the predictron to procedurally generated random mazes and a simulator for the game of pool. The predictron yielded significantly more accurate predictions than conventional deep neural network architectures.

1 INTRODUCTION

The central idea of model-based reinforcement learning is to decompose the RL problem into two subproblems: learning a model of the environment, and then planning with this model. The model is typically represented by a Markov reward process (MRP) or a Markov decision process (MDP). The planning component uses this model to evaluate and select among possible strategies, typically by rolling the model forward to construct a value function that estimates cumulative reward. In prior work, the model is trained essentially independently of its use within the planner. As a result, the model is not well matched with the overall objective of the agent. Prior deep reinforcement learning methods have successfully constructed models that can unroll near pixel-perfect reconstructions (Oh et al., 2015; Chiappa et al., 2016), but these have yet to surpass state-of-the-art model-free methods in challenging RL domains with raw inputs (e.g., Mnih et al., 2015; 2016; Lillicrap et al., 2016).

In this paper we introduce a new architecture, which we call the predictron, that integrates learning and planning into one end-to-end training procedure. At every step, a model is applied to an internal state to produce a next state, reward, discount, and value estimate. This model is completely abstract, and its only goal is to facilitate accurate value prediction. For example, to plan effectively in a game, an agent must be able to predict the score. If our model makes accurate predictions, then an optimal plan with respect to our model will also be an optimal plan for the underlying game, even if that model uses a different state space (e.g., an abstract representation of enemy positions, ignoring their shapes and colours), action space (e.g., a high-level action to move away from an enemy), rewards (e.g., a single abstract step could have a higher value than any real reward), or even time step (e.g., a single abstract step could "jump" the agent to the end of a corridor). All we require is that trajectories through the abstract model produce scores that are consistent with trajectories through the real environment.
This is achieved by training the predictron end-to-end, so as to make its value estimates as accurate as possible.

An ideal model could generalise to many different prediction tasks, rather than overfitting to a single task, and could learn from a rich variety of feedback signals, not just a single extrinsic reward. We therefore train the predictron to predict a host of different value functions for a variety of pseudo-reward functions and discount factors. These pseudo-rewards can encode any event or aspect of the environment that the agent may care about, e.g., staying alive or reaching the next room.

We focus upon the prediction task: estimating value functions in MRP environments with uncontrolled dynamics. In this case, the predictron can be implemented as a deep neural network with an MRP as a recurrent core. The predictron unrolls this core multiple steps and accumulates rewards into an overall estimate of value.

We applied the predictron to procedurally generated random mazes, and to a simulated pool domain, directly from pixel inputs. In both cases, the predictron significantly outperformed model-free algorithms with conventional deep network architectures, and was much more robust to architectural choices such as depth.

2 BACKGROUND

We consider environments defined by an MRP with states $s \in S$. The MRP is defined by a function $s', r, \gamma = p(s, \xi)$, where $s'$ is the next state, $r$ is the reward, and $\gamma$ is the discount factor, which can for instance represent the non-termination probability for this transition. The process may be stochastic, given IID noise $\xi$.

The return of an MRP is the cumulative discounted reward over a single trajectory, $g_t = r_{t+1} + \gamma_{t+1} r_{t+2} + \gamma_{t+1}\gamma_{t+2} r_{t+3} + \ldots$, where $\gamma_t$ can vary per time step. We consider a generalisation of the MRP setting that includes vector-valued rewards $\mathbf{r}$, diagonal-matrix discounts $\boldsymbol{\gamma}$, and vector-valued returns $\mathbf{g}$; the definitions are otherwise identical to the above. We use this bold-font notation to closely match the more familiar scalar MRP case; the majority of the paper can be comfortably understood by reading all rewards as scalars and all discount factors as scalar and constant, i.e., $\gamma_t = \gamma$.

The value function of an MRP $p$ is the expected return from state $s$, $v_p(s) = E_p[\mathbf{g}_t \mid s_t = s]$. In the vector case, these are known as general value functions (Sutton et al., 2011). We will say that a (general) value function $v(\cdot)$ is consistent with environment $p$ if and only if $v = v_p$, which satisfies the following Bellman equation (Bellman, 1957):

$v_p(s) = E_p[\mathbf{r} + \boldsymbol{\gamma} v_p(s') \mid s]$.   (1)

In model-based reinforcement learning (Sutton and Barto, 1998), an approximation $m \approx p$ to the environment is learned. In the uncontrolled setting this model is normally an MRP $s', \mathbf{r}, \boldsymbol{\gamma} = m(s, \zeta)$ that maps from state $s$ to subsequent state $s'$ and additionally outputs rewards $\mathbf{r}$ and discounts $\boldsymbol{\gamma}$; the model may be stochastic given an IID source of noise $\zeta$. A (general) value function $v_m(\cdot)$ is consistent with model $m$ (or valid (Sutton, 1995)) if and only if it satisfies a Bellman equation $v_m(s) = E_m[\mathbf{r} + \boldsymbol{\gamma} v_m(s') \mid s]$ with respect to model $m$. Conventionally, model-based RL methods focus on finding a value function $v$ that is consistent with a separately learned model $m$.

3 PREDICTRON ARCHITECTURE

The predictron is composed of four main components. First, a state representation $\mathbf{s} = f(s)$ that encodes the raw input $s$ (this could be a history of observations in the partially observed setting, for example when $f$ is a recurrent network) into an internal (abstract, hidden) state $\mathbf{s}$. Second, a model $\mathbf{s}', \mathbf{r}, \boldsymbol{\gamma} = m(\mathbf{s}, \zeta)$ that maps from internal state $\mathbf{s}$ to subsequent internal state $\mathbf{s}'$, internal rewards $\mathbf{r}$, and internal discounts $\boldsymbol{\gamma}$. Third, a value function $v$ that outputs internal values $\mathbf{v} = v(\mathbf{s})$ representing the future internal return from internal state $\mathbf{s}$ onwards. The predictron is applied by unrolling its model $m$ multiple "planning" steps to produce internal rewards, discounts and values. We use superscripts $k$ to indicate internal steps of the model (which have no necessary connection to time steps $t$ of the environment). Finally, these internal rewards, discounts and values are combined together by an accumulator into an overall estimate of value $\mathbf{g}$. The whole predictron, from input state $s$ to output $\mathbf{g}$, may be viewed as a value function approximator for external targets (i.e., the returns in the real environment). We consider both $k$-step and $\lambda$-weighted accumulators.

The $k$-step predictron rolls its internal model forward $k$ steps. Specifically, the $k$-step predictron return $\mathbf{g}^k$ (henceforth abbreviated as preturn) is the internal return obtained by accumulating $k$ model steps, plus a final value $\mathbf{v}^k$ from the $k$th step:

$\mathbf{g}^k = \mathbf{r}^1 + \boldsymbol{\gamma}^1(\mathbf{r}^2 + \boldsymbol{\gamma}^2(\ldots(\mathbf{r}^{k-1} + \boldsymbol{\gamma}^{k-1}(\mathbf{r}^k + \boldsymbol{\gamma}^k \mathbf{v}^k))\ldots))$.   (2)

The 0-step preturn is simply the first value, $\mathbf{g}^0 = \mathbf{v}^0$. The 1-step preturn is $\mathbf{g}^1 = \mathbf{r}^1 + \boldsymbol{\gamma}^1 \mathbf{v}^1$, and so on (see Figure 1a).

The $\lambda$-predictron combines together many $k$-step preturns. Specifically, it computes a diagonal weight matrix $\boldsymbol{\lambda}^k$ from each internal state $\mathbf{s}^k$. The accumulator uses the weights $\boldsymbol{\lambda}^0, \ldots, \boldsymbol{\lambda}^K$ to aggregate over the $k$-step preturns $\mathbf{g}^0, \ldots, \mathbf{g}^K$ and outputs a combined value that we call the $\lambda$-preturn $\mathbf{g}^\lambda$:

$\mathbf{g}^\lambda = \sum_{k=0}^{K} \mathbf{w}^k \mathbf{g}^k, \quad \text{where } \mathbf{w}^k = \begin{cases} (\mathbf{1} - \boldsymbol{\lambda}^k)\prod_{j=0}^{k-1}\boldsymbol{\lambda}^j & \text{if } k < K \\ \prod_{j=0}^{K-1}\boldsymbol{\lambda}^j & \text{otherwise,} \end{cases}$   (3)

and $\mathbf{1}$ is the identity matrix. This $\lambda$-preturn is analogous to the $\lambda$-return in the forward-view TD($\lambda$) algorithm (Sutton, 1988; Sutton and Barto, 1998). It may also be computed by a backward accumulation through intermediate steps $\mathbf{g}^{k,\lambda}$:

$\mathbf{g}^{k,\lambda} = (\mathbf{1} - \boldsymbol{\lambda}^k)\mathbf{v}^k + \boldsymbol{\lambda}^k(\mathbf{r}^{k+1} + \boldsymbol{\gamma}^{k+1}\mathbf{g}^{k+1,\lambda})$,   (4)

where $\mathbf{g}^{K,\lambda} = \mathbf{v}^K$, and then using $\mathbf{g}^\lambda = \mathbf{g}^{0,\lambda}$. Computation in the $\lambda$-predictron operates in a sweep, iterating first through the model from $k = 0 \ldots K$ and then back through the accumulator from $k = K \ldots 0$, in a single "forward" pass of the network (see Figure 1b). Each $\boldsymbol{\lambda}^k$ weight acts as a gate on the computation of the $\lambda$-preturn: a value of $\boldsymbol{\lambda}^k = \mathbf{0}$ will truncate the $\lambda$-preturn at layer $k$, while a value of $\boldsymbol{\lambda}^k = \mathbf{1}$ will utilise deeper layers based on additional steps of the model $m$; the final weight is always $\boldsymbol{\lambda}^K = \mathbf{0}$. The individual $\boldsymbol{\lambda}^k$ weights may depend on the corresponding abstract state $\mathbf{s}^k$ and can differ per prediction. This enables the predictron to compute to an adaptive depth (Graves, 2016) depending on the internal state and learning dynamics of the network.

Figure 1 (architecture diagram omitted here): a) The $k$-step predictron architecture. The first three columns illustrate 0-, 1- and 2-step pathways through the predictron. The 0-step preturn reduces to standard model-free value function approximation; other preturns "imagine" additional steps with an internal model. Each pathway outputs a $k$-step preturn $\mathbf{g}^k$ that accumulates discounted rewards along with a final value estimate. In practice all $k$-step preturns are computed in a single forward pass. b) The $\lambda$-predictron architecture. The $\lambda$-parameters gate between the different preturns. The output is a $\lambda$-preturn $\mathbf{g}^\lambda$ that is a mixture over the $k$-step preturns. For example, if $\boldsymbol{\lambda}^0 = \mathbf{1}, \boldsymbol{\lambda}^1 = \mathbf{1}, \boldsymbol{\lambda}^2 = \mathbf{0}$, then we recover the 2-step preturn, $\mathbf{g}^\lambda = \mathbf{g}^2$. Discount factors $\boldsymbol{\gamma}^k$ and $\lambda$-parameters $\boldsymbol{\lambda}^k$ are dependent on state $\mathbf{s}^k$; this dependence is not shown in the figure.

4 PREDICTRON LEARNING UPDATES

We first consider updates that optimise the joint parameters $\theta$ of the state representation, model, and value function. We begin with the $k$-step predictron. We update the $k$-step preturn $\mathbf{g}^k$ towards a target outcome $\mathbf{g}$, such as the Monte-Carlo return from the real environment, by minimising a mean-squared error loss

$L^k = \frac{1}{2}\left\| E_p[\mathbf{g} \mid s] - E_m[\mathbf{g}^k \mid s] \right\|^2, \quad \frac{\partial l^k}{\partial \theta} = -\left(\mathbf{g} - \mathbf{g}^k\right)\frac{\partial \mathbf{g}^k}{\partial \theta}$,   (5)

where $l^k = \frac{1}{2}\|\mathbf{g} - \mathbf{g}^k\|^2$ is the sample loss. We can use the gradient of the sample loss to update the parameters, e.g. by stochastic gradient descent. For stochastic models, two independent samples are required for $\mathbf{g}^k$ and $\frac{\partial \mathbf{g}^k}{\partial \theta}$ to get unbiased samples of the gradient of $L^k$.

The $\lambda$-predictron combines together many $k$-step preturns. To update the joint parameters $\theta$, we can uniformly average the losses on the individual preturns $\mathbf{g}^k$:

$L^{0:K} = \frac{1}{2K}\sum_{k=0}^{K}\left\| E_p[\mathbf{g} \mid s] - E_m[\mathbf{g}^k \mid s] \right\|^2, \quad \frac{\partial l^{0:K}}{\partial \theta} = -\frac{1}{K}\sum_{k=0}^{K}\left(\mathbf{g} - \mathbf{g}^k\right)\frac{\partial \mathbf{g}^k}{\partial \theta}$.   (6)

Alternatively, we could weight each loss by the usage $\mathbf{w}^k$ of the corresponding preturn, such that the gradient is $-\sum_{k=0}^{K}\mathbf{w}^k\left(\mathbf{g} - \mathbf{g}^k\right)\frac{\partial \mathbf{g}^k}{\partial \theta}$.

The $\lambda$-predictron uses an accumulator with additional parameters $\eta$ that determine the relative weighting of the $k$-step preturns. These weights are also updated so as to minimise a mean-squared error loss $L^\lambda$:

$L^\lambda = \frac{1}{2}\left\| E_p[\mathbf{g} \mid s] - E_m[\mathbf{g}^\lambda \mid s] \right\|^2, \quad \frac{\partial l^\lambda}{\partial \eta} = -\left(\mathbf{g} - \mathbf{g}^\lambda\right)\frac{\partial \mathbf{g}^\lambda}{\partial \eta}$.   (7)

In summary, the joint parameters $\theta$ of the state representation $f$, the model $m$, and the value function $v$ are updated to make each of the $k$-step preturns $\mathbf{g}^k$ more similar to the target $\mathbf{g}$, and the parameters $\eta$ of the $\lambda$-accumulator are updated to make the aggregate $\lambda$-preturn $\mathbf{g}^\lambda$ more similar to the target $\mathbf{g}$.

4.1 CONSISTENCY (SEMI-SUPERVISED) LEARNING WITH THE $\lambda$-PREDICTRON

Ideally, the predictron $(f, m, v)$ learns preturns that are all equal in expectation to the true value function of the environment, $E_m[\mathbf{g}^k \mid s] = E_p[\mathbf{g}_t \mid s] = v_p(s)$, in which case the preturns must be equal in expectation: $E_m[\mathbf{g}^0 \mid s] = E_m[\mathbf{g}^1 \mid s] = \ldots = E_m[\mathbf{g}^K \mid s]$. In addition, each $k$-step preturn must then be equal in expectation to the $\lambda$-preturn, $E_m[\mathbf{g}^k \mid s] = E_m[\mathbf{g}^\lambda \mid s]$, for any $\lambda$ parameters. All these consistency relations between preturns give rise to additional constraints upon the predictron. Specifically, we may adjust the parameters of the predictron to lead to consistent preturns, even in the absence of labelled targets.

Concretely, we can adjust each preturn $\mathbf{g}^k$ towards the $\lambda$-preturn $\mathbf{g}^\lambda$; in other words, we can update each individual value estimate towards the best aggregated estimate by minimizing

$L = \frac{1}{2}\sum_{k=0}^{K}\left\| E_m[\mathbf{g}^\lambda \mid s] - E_m[\mathbf{g}^k \mid s] \right\|^2, \quad \frac{\partial l}{\partial \theta} = -\sum_{k=0}^{K}\left(\mathbf{g}^\lambda - \mathbf{g}^k\right)\frac{\partial \mathbf{g}^k}{\partial \theta}$.   (8)

Here $\mathbf{g}^\lambda$ is considered fixed; the parameters $\theta$ are only updated to make $\mathbf{g}^k$ more similar to $\mathbf{g}^\lambda$, not vice versa. This consistency update does not require any labels $\mathbf{g}$ or samples from the environment. As a result, it can be applied to (potentially hypothetical) states that have no associated "real" (e.g. Monte-Carlo) outcome: we update the value estimates to be self-consistent with each other. Note the similarity with the semi-supervised setting, where we may have unlabelled inputs.
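A scalar numpy sketch of Equations 2-4 may help fix the indexing; this is our own illustration with random values, not the authors' code. It computes every k-step preturn, the mixture weights w^k, and verifies that the backward sweep of Equation 4 reproduces the weighted mixture of Equation 3.

```python
import numpy as np

K = 4
rng = np.random.default_rng(0)
r = rng.normal(size=K + 1)        # r[1..K]: internal rewards (r[0] unused)
gamma = rng.uniform(size=K + 1)   # gamma[1..K]: internal discounts
v = rng.normal(size=K + 1)        # v[0..K]: internal values
lam = rng.uniform(size=K + 1)
lam[K] = 0.0                      # the final lambda weight is always zero

def preturn(k):
    """Equation 2: accumulate k model steps plus a bootstrap value v^k."""
    g = v[k]
    for j in range(k, 0, -1):
        g = r[j] + gamma[j] * g
    return g

# Equation 3: mixture weights over the k-step preturns (empty product = 1).
w = np.array([(1 - lam[k]) * np.prod(lam[:k]) if k < K else np.prod(lam[:K])
              for k in range(K + 1)])
g_lambda_mix = sum(w[k] * preturn(k) for k in range(K + 1))

# Equation 4: single backward sweep through the accumulator.
g = v[K]                                  # g^{K,lambda} = v^K
for k in range(K - 1, -1, -1):
    g = (1 - lam[k]) * v[k] + lam[k] * (r[k + 1] + gamma[k + 1] * g)

assert np.isclose(g, g_lambda_mix)        # both formulations agree
```

The consistency loss of Equation 8 would, in this notation, simply regress each `preturn(k)` towards the (detached) value of `g`.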
5 EXPERIMENTS

We conducted experiments on two domains. The first domain consists of randomly generated 20x20 mazes in which each location either is empty or contains a wall. Two locations in a maze are considered connected if they are both empty and we can reach one from the other by moving horizontally or vertically through adjacent empty cells. The goal is to predict, for each of the locations on the diagonal from top-left to bottom-right of the maze, whether the bottom-right corner is connected to that location, given the entire maze as an input image. Some of these predictions will be straightforward, for instance for locations on the diagonal that contain a wall themselves and for locations close to the bottom right. Many other predictive questions seem to require a simple algorithm, such as some form of flood fill or search; our hypothesis is that an internal model can learn to emulate such algorithms, where naive approximation may struggle. A few example mazes are shown in Figure 2.

Our second domain is a simulation of the game of pool, using four balls and four pockets. The simulator is implemented in the physics engine MuJoCo (Todorov et al., 2012). We generate sequences of RGB frames starting from a random arrangement of balls on the table. The goal is to simultaneously learn to predict future events for each of the four balls, given 5 RGB frames as input. These events include: collision with any other ball, collision with any boundary of the table, entering a quadrant (x4, one for each quadrant), being located in a quadrant (x4, one for each quadrant), and entering a pocket (x4, one for each pocket). Each of these 14x4 events provides a binary pseudo-reward that we combine with 5 different discount factors, {0, 0.5, 0.9, 0.98, 1}, and predict their cumulative discounted sum over various time spans. This yields a total of 280 general value functions. An example trajectory is shown in Figure 2. In both domains, inputs are presented as minibatches of i.i.d. samples with their regression targets. Additional domain details are provided in Appendix E.

Figure 2 (omitted here): Left: two sample mazes from the random-maze domain. Light blue cells are empty, darker blue cells contain a wall. One maze is connected from top-left to bottom-right (indicated in black), the other is not. Right: an example trajectory in the pool domain (before downsampling). It was selected by maximising the prediction of pocketing balls, using the predictron.

5.1 EXPLORING THE PREDICTRON ARCHITECTURE

Our first set of experiments examines three binary dimensions that differentiate the predictron from standard deep networks. We compare eight predictron variants corresponding to the corners of the cube on the left in Figure 3.

Figure 3 (plots omitted here): Exploring predictron variants. Aggregated prediction errors over all predictions (20 for mazes, 280 for pool) for the eight predictron variants corresponding to the corners of a cube whose axes are (r,gamma), lambda, and usage weighting, for both random mazes (top; RMSE on log scale) and pool (bottom); x-axis: number of updates. Each line is the median of RMSE over five seeds; shaded regions encompass all seeds. The full (r, gamma, lambda)-predictron (red) consistently performed best.

The first dimension corresponds to whether or not the predictron architecture utilises the structure of an MRP model. In the MRP case, labelled r,gamma, internal rewards and discounts are both learned. In the non-r,gamma case, which corresponds to a vanilla hidden-to-hidden neural network module, internal rewards and discounts are ignored by fixing their values to $\mathbf{r}^k = \mathbf{0}$ and $\boldsymbol{\gamma}^k = \mathbf{1}$.

The second dimension is whether a $K$-step accumulator or a $\lambda$-accumulator is used to aggregate over preturns. When a $\lambda$-accumulator is used, a $\lambda$-preturn is computed as described in Section 3. Otherwise, intermediate preturns are ignored by fixing $\boldsymbol{\lambda}^k = \mathbf{1}$ for $k < K$. In this case, the overall output of the predictron is simply the maximum-depth preturn $\mathbf{g}^K$.

The third dimension, labelled usage weighting, defines the loss that is used to update the parameters $\theta$. On this dimension, we consider two options: the preturn losses can either be weighted uniformly (see Equation 6), or the update for each preturn $\mathbf{g}^k$ can be weighted according to the weight $\mathbf{w}^k$ that determines how much it is used in the $\lambda$-predictron's overall output. We call the latter loss "usage weighted". Note that for architectures without a $\lambda$-accumulator, $\mathbf{w}^k = \mathbf{0}$ for $k < K$ and $\mathbf{w}^K = \mathbf{1}$; usage weighting then implies backpropagating only the loss on the final preturn $\mathbf{g}^K$.

All variants utilise a convolutional core with 2 intermediate hidden layers (see Appendix A); parameters were updated by supervised learning (see Appendix B for more details). Root mean squared prediction errors for each architecture, aggregated over all predictions, are shown in Figure 3. The top row corresponds to the random mazes and the bottom row to the pool domain. The main conclusion is that learning an MRP model improved performance greatly. The inclusion of $\lambda$ weights helped as well, especially on pool. Usage weighting further improved performance.

5.2 COMPARING THE PREDICTRON TO OTHER DEEP NETWORKS

Our second set of experiments compares the predictron to feedforward and recurrent deep learning architectures, with and without skip connections. We compare the corners of a new cube, as depicted on the left in Figure 4, based on three different binary dimensions.

Figure 4 (plots omitted here): Comparing the predictron to baselines. Aggregated prediction errors on random mazes (top; RMSE on log scale) and pool (bottom) over all predictions for the eight architectures corresponding to a cube whose axes are (r,gamma,lambda), weight sharing, and skip connections; the baselines include ConvNets, recurrent ConvNets, ResNets, and recurrent ResNets. Each line is the median of RMSE over five seeds; shaded regions encompass all seeds. The full (r, gamma, lambda)-predictron (red) consistently outperformed conventional deep network architectures (black), with and without skips and with and without weight sharing.

The first dimension of this second cube is whether we use a predictron, or a (non-$\lambda$, non-r,gamma) deep network that does not have an internal model and does not output or learn from intermediate predictions. We use the most effective predictron from the previous section, i.e., the (r, gamma, lambda)-predictron with usage weighting.

The second dimension is whether weights are shared between all cores (as in a recurrent network), or whether each core uses separate weights (as in a feedforward network). We note that the non-$\lambda$, non-r,gamma variants of the predictron then correspond to standard (convolutional) feedforward and (unrolled) recurrent neural networks respectively.

The third dimension is whether we include skip connections.
This is equivalent to defining the model step to output a change to the current state, $\Delta\mathbf{s}$, and then defining $\mathbf{s}^{k+1} = h(\mathbf{s}^k + \Delta\mathbf{s}^k)$, where $h$ is a non-linear function, in our case a ReLU, $h(x) = \max(0, x)$. The deep network with skip connections is a variant of ResNet (He et al., 2015).

Root mean squared prediction errors for each architecture are shown in Figure 4. All (r, gamma, lambda)-predictrons (red lines) outperformed the corresponding feedforward or recurrent neural network baselines (black lines), both in the random mazes and in pool. We also investigated the effect of changing the depth of the networks (see Appendix C). The predictron outperformed the corresponding feedforward or recurrent baselines for all depths, with and without skip connections.

5.3 SEMI-SUPERVISED LEARNING BY CONSISTENCY

We now consider how to use the predictron for semi-supervised learning, training the model on a combination of labelled and unlabelled random mazes. Semi-supervised learning is important because a common bottleneck in applying machine learning in the real world is the difficulty of collecting labelled data, whereas large quantities of unlabelled data often exist.

We trained a full (r, gamma, lambda)-predictron by alternating standard supervised updates with consistency updates, obtained by stochastically minimizing the consistency loss (8) on the unlabelled samples. For each supervised update we apply either 0, 1, or 9 consistency updates. Figure 5 shows that the performance improved monotonically with the number of consistency updates, measured as a function of the number of labelled samples consumed.

Figure 5 (plots omitted here): Semi-supervised learning. Prediction errors of the (r, gamma, lambda)-predictrons (shared core, no skips) using 0, 1, or 9 consistency updates for every update with labelled data, plotted as a function of the number of labels consumed (y-axis: RMSE on random mazes, log scale; panels: shared core and unshared cores). Learning performance improves with more consistency updates.

5.4 ANALYSIS OF ADAPTIVE DEPTH

In principle, the predictron can adapt its depth to "think more" about some predictions than others, perhaps depending on the complexity of the underlying target. We investigate this by looking at qualitatively different prediction types in pool: ball collisions, rail collisions, pocketing balls, and entering or staying in quadrants. For each prediction type we consider several different time spans (determined by the real-world discount factors associated with each pseudo-reward). Figure 6 shows distributions of depth for each type of prediction. The "depth" of a predictron is here defined as the effective number of model steps. If the predictron relies fully on the very first value (i.e., $\lambda^0 = 0$), this counts as 0 steps. If, instead, it learns to place equal weight on all rewards and on the final value, this counts as 16 steps. Concretely, the depth $d$ can be defined recursively as $d = d^0$, where $d^k = \lambda^k(1 + \gamma^k d^{k+1})$ and $d^K = 0$. Note that even for the same input state, each prediction has a separate depth.

The depth distributions exhibit three properties. First, different types of predictions used different depths. Second, depth was correlated with the real-world discount for the first four prediction types. Third, the distributions are not strongly peaked, which implies that the depth can differ per input even for a single real-world discount and prediction type.
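The depth recursion above is easy to compute directly; the short sketch below (our own illustration, scalar case) reproduces the two extremes mentioned in the text for a 16-step predictron.

```python
def thinking_depth(lam, gamma):
    """Effective number of model steps: d = d^0 with
    d^k = lam^k * (1 + gamma^k * d^{k+1}) and d^K = 0."""
    d = 0.0                                   # d^K = 0
    for l, g in zip(reversed(lam), reversed(gamma)):
        d = l * (1.0 + g * d)
    return d

K = 16
print(thinking_depth([0.0] * K, [1.0] * K))   # lambda^0 = 0: depth 0
print(thinking_depth([1.0] * K, [1.0] * K))   # equal weight on all steps: depth 16
```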
In a control experiment (not shown) we used a scalar λ shared among all predictions, which reduced performance in all scenarios, indicating that the heterogeneous depth is a valuable form of flexibility.
5.5 VISUALIZING THE PREDICTIONS IN THE POOL DOMAIN
We test the quality of the predictions in the pool domain to evaluate whether they are well-suited to making decisions. For each sampled pool position, we consider a set I of different initial conditions (different angles and velocities of the white ball), and ask which is more likely to lead to pocketing coloured balls. For each initial condition s ∈ I, we apply the (r, γ, λ)-predictron (shared cores, 16 model steps, no skip connections) to obtain predictions g^λ.
[Figure 6 (plots omitted): distributions of thinking depth (0 to 16) against real-world discounts (0, 0.5, 0.9, 0.98, 1) for five prediction types: collision, rails, enter, pocket, stay.]
Figure 6: Thinking depth. Distributions of thinking depth on pool for different types of predictions and for different real-world discounts.
We sum the predictions that correspond to pocketing any ball except the white ball, and to real-world discounts γ = 0.98 and γ = 1. We select the condition s that maximises this sum. We then roll forward the pool simulator from s and log the number of pocketing events. Figure 2 shows a sampled rollout, using the predictron to pick s. When providing the choice of 128 angles and two velocities for initial conditions (|I| = 256), this procedure resulted in pocketing 27 coloured balls in 50 episodes. Using the same procedure with an equally deep convolutional network only resulted in 10 pocketing events. These results suggest that the lower loss of the learned (r, γ, λ)-predictron translated into meaningful improvements when informing decisions. A video of the rollouts selected by the predictron is available here: https://youtu.be/BeaLdaN2C3Q
6 RELATED WORK
Lee et al. (2015) introduced a neural network architecture where classifications branch off intermediate hidden layers. An important difference with respect to the λ-predictron is that the weights are hand-tuned as hyper-parameters, whereas in the predictron the λ weights are learnt and, more importantly, conditional on the input. Another difference is that the loss on the auxiliary classifications is used to speed up learning, but the classifications themselves are not combined into an aggregate prediction; the output of the model itself is the deepest prediction.
Graves (2016) introduced an architecture with adaptive computation time (ACT), with a discrete (but differentiable) decision on when to halt, and aggregation over the outputs at each pondering step. This is related to our λ weights, but obtains depth in a different way; one notable difference is that the λ-predictron can choose different pondering depths for each of its predictions.
Value iteration networks (VINs) (Tamar et al., 2016) also learn value functions end-to-end using an internal model, similar to the (non-λ) predictron. However, VINs plan via convolutional operations over the full input state space, whereas the predictron plans via imagined trajectories through an abstract state space.
This may allow the predictron architecture to scale much more effectively in domains that do not have a natural two-dimensional encoding of the state space.
The notion of learning about many predictions of the future relates to work on predictive state representations (PSRs; Littman et al., 2001), general value functions (GVFs; Sutton et al., 2011), and nexting (Modayil et al., 2012). Such predictions have been shown to be useful as representations (Schaul and Ring, 2013) and for transfer (Schaul et al., 2015). So far, however, none of these have been considered for learning abstract models.
Schmidhuber (2015) discusses learning abstract models, but maintains separate losses for the model and a controller, and suggests training the model unsupervised to compactly encode the entire history of observations, through predictive coding. The predictron's abstract model is instead trained end-to-end to obtain accurate values.
7 CONCLUSION
The predictron is a single differentiable architecture that rolls forward an internal model to estimate external values. This internal model may be given both the structure and the semantics of traditional reinforcement learning models. But unlike most approaches to model-based reinforcement learning, the model is fully abstract: it need not correspond to the real environment in any human-understandable fashion, so long as its rolled-forward "plans" accurately predict outcomes in the true environment.
The predictron may be viewed as a novel network architecture that incorporates several separable ideas. First, the predictron outputs a value by accumulating rewards over a series of internal planning steps. Second, each forward pass of the predictron outputs values at multiple planning depths. Third, these values may be combined together, also within a single forward pass, to output an overall ensemble value. Finally, the different values output by the predictron may be encouraged to be self-consistent with each other, to provide an additional signal during learning. Our experiments demonstrate that these differences result in more accurate predictions of value, in reinforcement learning environments, than more conventional network architectures.
We have focused on value prediction tasks in uncontrolled environments. However, these ideas may transfer to the control setting, for example by using the predictron as a Q-network (Mnih et al., 2015). Even more intriguing is the possibility of learning an internal MDP with abstract internal actions, rather than the MRP considered in this paper. We aim to explore these ideas in future work.
REFERENCES
R. Bellman. Dynamic programming. Princeton University Press, 1957.
S. Chiappa, S. Racaniere, D. Wierstra, and S. Mohamed. Recurrent environment simulators. 2016.
X. Glorot, A. Bordes, and Y. Bengio. Deep sparse rectifier neural networks. In AISTATS, volume 15, page 275, 2011.
A. Graves. Adaptive computation time for recurrent neural networks. CoRR, abs/1603.08983, 2016. URL http://arxiv.org/abs/1603.08983.
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition.
Proceedings of the IEEE, 86(11):2278–2324, 1998.
C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply-supervised nets. In AISTATS, volume 2, page 6, 2015.
T. Lillicrap, J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. In ICLR, 2016.
M. L. Littman, R. S. Sutton, and S. P. Singh. Predictive representations of state. In NIPS, volume 14, pages 1555–1561, 2001.
V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, 2016.
J. Modayil, A. White, and R. S. Sutton. Multi-timescale nexting in a reinforcement learning robot. In International Conference on Simulation of Adaptive Behavior, pages 299–309. Springer, 2012.
J. Oh, X. Guo, H. Lee, R. L. Lewis, and S. Singh. Action-conditional video prediction using deep networks in Atari games. In Advances in Neural Information Processing Systems, pages 2863–2871, 2015.
T. Schaul and M. B. Ring. Better generalization with forecasts. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Beijing, China, 2013.
T. Schaul, D. Horgan, K. Gregor, and D. Silver. Universal value function approximators. In International Conference on Machine Learning (ICML), 2015.
J. Schmidhuber. On learning to think: Algorithmic information theory for novel combinations of reinforcement learning controllers and recurrent neural world models. arXiv preprint arXiv:1511.09249, 2015.
R. S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3:9–44, 1988.
R. S. Sutton. TD models: Modeling the world at a mixture of time scales. In Proceedings of the Twelfth International Conference on Machine Learning, pages 531–539, 1995.
R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. The MIT Press, Cambridge, MA, 1998.
R. S. Sutton, J. Modayil, M. Delp, T. Degris, P. M. Pilarski, A. White, and D. Precup. Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In The 10th International Conference on Autonomous Agents and Multiagent Systems, Volume 2, pages 761–768. International Foundation for Autonomous Agents and Multiagent Systems, 2011.
A. Tamar, Y. Wu, G. Thomas, S. Levine, and P. Abbeel. Value iteration networks. In Neural Information Processing Systems (NIPS), 2016.
E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026–5033. IEEE, 2012.
Figure 7: The predictron core used in our experiments.
A ARCHITECTURE
The state representation f is a two-layer convolutional neural network (LeCun et al., 1998). There is a core c, again based on convolutions, that combines both the MRP model and the λ-network into a single repeatable module, such that (s_{k+1}, r_{k+1}, γ_{k+1}, λ_k) = c(s_k). This core is deterministic, and is duplicated K times in the predictron with shared weights. (The predictron with unshared weights has K distinct cores.) Finally, the value network v is a fully connected neural network that computes v_k = v(s_k).
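To make the interface above concrete, here is a schematic sketch of the shared-core rollout; f, c and v stand for the representation, core and value networks as just defined, and are assumed to be supplied as callables (this is an illustration, not the released implementation):

```python
def predictron_rollout(x, f, c, v, K=16):
    # f: state representation, s_0 = f(x)
    # c: core, (s_{k+1}, r_{k+1}, gamma_{k+1}, lambda_k) = c(s_k)
    # v: value network, v_k = v(s_k)
    s = f(x)
    values, rewards, discounts, lambdas = [v(s)], [], [], []
    for _ in range(K):  # the same core is applied K times (shared weights)
        s, r, gamma, lam = c(s)
        rewards.append(r)
        discounts.append(gamma)
        lambdas.append(lam)
        values.append(v(s))
    # These per-step outputs are exactly what the lambda-accumulator
    # (Section 3) aggregates into preturns.
    return values, rewards, discounts, lambdas
```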
Concretely, the core (Figure 7) consists first of a convolutional layer that maps into an intermediate (hidden) layer. From this layer, another two convolutions compute the next abstract state of the predictron. Additionally, this same hidden layer is flattened and fed into three separate networks, with two fully connected layers each. The outputs of these three networks represent the internal rewards, discounts, and lambdas. A similar small network also hangs off the internal states, in addition to the core, and computes the values. All convolutions use 3×3 filters and a stride of one, and use padding to retain the size of the feature maps. All feature maps have 32 channels. The hidden layers within the MLPs have 32 hidden units.
In Figure 7 the convolutional layers are schematically drawn with three channels, flattening is represented by curly brackets, and the arrows represent the small multi-layer perceptrons which compute values, rewards, discounts and lambdas.
We allow up to 16 model steps in our experiments, resulting in 52-layer deep networks: two convolutional layers for the state representation, 3×16 = 48 convolutional layers for the core steps, and two fully-connected layers for the values on top of the final state. Between each two layers we apply batch normalization (Ioffe and Szegedy, 2015) followed by a ReLU non-linearity (Glorot et al., 2011). The value and reward networks end with a linear layer, whereas the discount and λ-networks additionally add a sigmoid non-linearity to ensure that these quantities lie in [0, 1].
B TRAINING
All experiments used the supervised (Monte-Carlo) update described in Section 4, except for the semi-supervised experiment, which used the consistency update described in Section 4.1. We update all parameters by applying the Adam optimiser (Kingma and Ba, 2015) to stochastic gradients of the corresponding loss functions. Each return is normalised by dividing it by its standard deviation (as measured, prior to the experiment, on a set of 20,000 episodes). In all experiments, the learning rate was 0.001, and the other parameters of the Adam optimiser were β1 = 0.9, β2 = 0.999, and ε = 10^-8. We used mini-batches of 100 samples.
C COMPARING ARCHITECTURES OF DIFFERENT DEPTHS
We investigated the effect of changing the depth of the networks, with and without skip connections. Figure 8 shows that skip connections (dashed lines) make the conventional architectures (black/grey lines) more robust to the depth (i.e., the black/grey dashed lines almost overlap, especially on pool), and that the predictron outperforms the corresponding feedforward or recurrent baselines for all depths, with and without skips.
[Figure 8 (plots omitted): RMSE on random mazes (log scale) and RMSE on pool against number of updates, for shared and unshared cores; lines for recurrent net and (r, γ, λ)-predictron, each with and without skip connections.]
Figure 8: Comparing depths. Comparing the (r, γ, λ)-predictron (red) against more conventional deep networks (black) for various depths (2, 4, 8, or 16 model steps, corresponding to 10, 16, 28, or 52 total layers of depth). Lighter colours correspond to shallower networks. Dashed lines correspond to networks with skip connections.
D CAPACITY COMPARISONS
In this section, we present some additional experiments comparing the predictron to more conventional deep networks.
The purposes of these experiments are 1) to show that the conclusions obtained above do not depend on the precise architecture used, and 2) to show that the structure of the network, i.e. whether we use a predictron or not, is more important than the raw number of parameters.
Specifically, we again consider the same 20 by 20 random mazes, and the pool task described in the main text. As described in Section A, for the results in the paper we used an encoder that preserved the size of the input planes, 20×20 for the mazes and 28×28 for pool. Each convolution had 32 channels, and therefore the abstract states were 20×20×32 for the mazes and 28×28×32 for pool.
We now consider a different architecture, where we no longer pad the convolutions used in the encoder. For the mazes, we still use two layers of 3×3 stride-1 convolutions, which means the planes reduce in size to 16×16. This means that the abstract states are about one third smaller. For pool, we use three 5×5 stride-1 convolutions, which bring us from 28×28 down to 16×16 as well. So the abstract states are now of equal size for both experiments. For pool, this is approximately a two-thirds reduction, which helps reduce the compute needed to run the model.
Most of the parameters in the predictron are in the fully connected layers. Previously, the first fully connected layer for each of the internal values, rewards, discounts, and λ-parameters would take a flattened abstract state, and then go into 32 hidden nodes. This means the number of parameters in this layer was 20×20×32×32 = 409,600 for the mazes and 28×28×32×32 = 802,816 for pool. The predictron with shared core would have four of these layers, one for each of the internal values, rewards, discounts, and λs, compared to one for the deep network, which only has values. We change this in two ways. First, we add a 1×1 convolution with a stride of 1 and 8 channels before the first fully connected layer for each of these outputs. This reduces the number of channels, and therefore the number of parameters in the subsequent fully-connected layer, to one quarter. Second, we tested three different numbers of hidden nodes: 32, 128, or 512.
Figure 9: Comparing depths. Comparing the (r, γ, λ)-predictron (red) against more conventional deep networks (blue) for different numbers of hidden nodes in the fully connected layers, and therefore different total numbers of parameters. The deep networks with 32, 128, and 512 nodes respectively have 381,416; 1,275,752; and 4,853,096 parameters in total. The predictrons with 32 and 128 nodes respectively have 1,275,752 and 4,853,096 parameters in total. Note that the numbers of parameters for the 32- and 128-node predictrons are exactly equal to the numbers of parameters for the 128- and 512-node deep networks.
The deep network with 128 hidden nodes for its values has the exact same number of parameters as the (r, γ, λ)-predictron with 32 hidden nodes for each of its outputs. Before, the deep network had fewer parameters, because we kept this number fixed at 32 across experiments. This opens the question of whether the improved performance of the predictron was not just an artifact of having more parameters. We tested this hypothesis, and the results are shown in Figure 9.
Figure 9 shows that in each setting, on the mazes and pool, and with or without shared cores, the predictrons always performed better than all the deep networks.
This includes the 32-node predictron (darkest red) compared to the 512-node deep network (lightest blue), even though the latter has approximately four times as many parameters (4.85M vs 1.27M). This means that the number of parameters mattered less than whether or not we use a predictron.
E ADDITIONAL DOMAIN DETAILS
Figure 10: Pool input frame. An example of a 28×28 RGB input frame in the pool domain.
We now provide some additional details of the domains.
E.1 POOL
To generate sequences in the pool domain, the initial locations of 4 balls of different colours are sampled at random. The white ball is the only one moving initially. Its velocity has a norm sampled uniformly between 7 and 14. The initial angle is sampled uniformly in the range (0, 2π). From the initial condition, the MuJoCo simulation is run forward until all balls have stopped moving; sequences that last more than 151 frames are rejected, and a new one is generated as replacement. Each frame is rendered by MuJoCo as a 280×280 RGB image, and subsequently downsampled through bilinear interpolation to a 28×28 RGB input (see Figure 10 for an example). Since the 280 signals described in Section 6.1 as targets for the pool experiments have very different levels of sparsity, resulting in values with very different scales, we have normalised the pseudo-returns. The normalisation procedure consisted in dividing all targets by their standard deviation, as empirically measured across an initial set of 20,000 sequences.
E.2 RANDOM MAZES
To generate mazes we first determine, with a stochastic line search, a number of walls such that the top-left corner is connected to the bottom-right corner (both always forced to be empty) in approximately 50% of the mazes. We then shuffle the walls uniformly at random. For 20 by 20 mazes this means 70% of locations are empty and 30% contain walls. More than a googol different such 20-by-20 mazes exist (as $\binom{398}{120} > 10^{100}$).
rJo9n9Feg
Under review as a conference paper at ICLR 2017
CHESS GAME CONCEPTS EMERGE UNDER WEAK SUPERVISION: A CASE STUDY OF TIC-TAC-TOE
Hao Zhao & Ming Lu
Department of Electronic Engineering, Tsinghua University, Beijing, China
{zhao-h13, lu-m13}@mails.tsinghua.edu.cn
Anbang Yao & Yurong Chen
Cognitive Computing Laboratory, Intel Labs China, Beijing, China
{anbang.yao, yurong.chen}@intel.com
Li Zhang
Department of Electronic Engineering, Tsinghua University, Beijing, China
chinazhangli@mail.tsinghua.edu.cn
ABSTRACT
This paper explores the possibility of learning chess game concepts under weak supervision with convolutional neural networks, a topic that, to the best of our knowledge, has not been visited before. We put this task in three different backgrounds: (1) deep reinforcement learning has shown an amazing capability to learn a mapping from visual inputs to the most rewarding actions, without knowing the concepts of a video game. But how could we confirm that the network understands these concepts, or that it does not? (2) Cross-modal supervision for visual representation learning has drawn much attention recently. Is this methodology still applicable when it comes to the domain of game concepts and actions? (3) Class activation mapping is widely recognized as a visualization technique that helps us understand what a network has learnt. Is it possible for it to activate at non-salient regions? With the simplest chess game, tic-tac-toe, we report interesting results as answers to the three questions mentioned above. All codes, pre-processed datasets and pre-trained models will be released.
1 INTRODUCTION
1.1 APPLICATION BACKGROUND
Deep reinforcement learning (DRL) has drawn quite much attention since the publication of the influential work of Mnih et al. (2015). A convolutional neural network (CNN) is used to bridge the gap between video game screen frames and the most rewarding actions. An amazing feature of this kind of system is that it does not need to know the concepts of these games (e.g., DRL learns to play Breakout without knowing there is a paddle or a ball, Fig 1a). However, how could we confirm that this network really understands these concepts, or whether it just learns a mapping from patterns in the visual inputs to the best actions? This is the first question we are trying to answer here.
Mnih et al. (2015) provides some unsupervised analysis results for visualization, showing that perceptually dissimilar frames may produce close rewards, yet this does not answer the question. We choose another visualization technique called class activation mapping, as described in Zhou et al. (2016), which can reveal where the CNN's attention is. However, directly applying it in tasks like Breakout still cannot answer the question. Imagine one modifies the network described in Mnih et al. (2015) into another version, as Zhou et al. (2016) does. The CNN's attention may be fixed on the ball, but this is still not enough to support the claim that the network understands the concept of a ball.
(This work was done when Hao Zhao was an intern at Intel Labs China, supervised by Anbang Yao, who is responsible for correspondence.)
Figure 1: We raise three questions from application, methodology and technique perspectives respectively and provide our answers with a case study of the simplest chess game, tic-tac-toe.
We propose to use a simple chess game called tic-tac-toe for a case study.
In order to answer the question, we propose the following protocol: place a piece where the CNN's attention is, and examine whether it is the right move. Of course, the training has to be done under weak supervision, or say, without telling the network what exactly a right move is. We think that if this experiment succeeds we can claim that the network figures out the concepts of: (1) a chess board grid; (2) the winning rule; (3) two sides. Detailed analysis of these three concepts is provided later.
1.2 METHODOLOGY BACKGROUND
There have been some works about representation learning with cross-modal supervision recently. Owens et al. (2016) clusters sound statistics into several categories, and uses them as labels to learn visual representation from images corresponding to these sounds. It quantitatively shows that visual representation learnt in this way is capable of handling challenging computer vision tasks, and qualitatively shows that visual and sound representations are consistent (e.g., babies' faces correspond to baby-cry sound samples). Castrejón et al. (2016) goes even further by learning representations across five modalities: RGB images, clip art pictures, sketches, texts and spatial texts. Gupta et al. (2016) learns depth image representation with mid-level features extracted from RGB images as supervision, and reports improved RGB-D object detection performance.
What is the common point among these works? They generate weak supervision from one modality and use it to learn representation from another (e.g., to learn what a train looks like from what a train sounds like, or to learn what a chair looks like in depth images from what a chair looks like in RGB images). During the training phase, no concepts about a train or a chair are explicitly modeled. Although there are many other modalities not visited by this methodology, we think the basic idea behind these works is the same: an abstract concept like a train can be observed in different modalities, and the different representations can be connected.
Here comes the question: is this methodology still applicable when it goes beyond the problem of learning representations from different observations of a same concept? Albanie & Vedaldi (2016) is an example, which tries to relate facial expressions with what happened in a TV show (e.g., if a character earns a lot of money, she will be very happy). Although in Albanie & Vedaldi (2016) what happened is explicitly defined, it can still be regarded as weak supervision for what this expression is.
Although with the same methodology, the problem studied in this paper addresses even higher semantics: to learn what to do under the weak supervision of what will happen (Fig 1b). This is substantially different from the cross-modal supervision works mentioned above, because there is no longer a certain abstract concept of an object or attribute observed in different modalities. Instead, figuring out the relationship between what to do and what will happen needs a higher level of intelligence.
1.3 TECHNIQUE BACKGROUND
The core technique used in this paper is class activation mapping (CAM) as described in Zhou et al. (2016). So, leaving out all the backgrounds about playing a chess game or cross-modal supervision, what do our experiments say beyond what its inventors showed? We think we show that CAM can also activate at non-salient regions. CAM helps us to understand which regions contribute the most to a classification result. As Fig 1c shows, the heatmap reveals that the face contributes the most to the result that the network claims it as a person.
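For reference, the CAM heatmap of Zhou et al. (2016) is simply a class-specific weighted sum of the last convolutional layer's feature maps; a minimal NumPy sketch (the array shapes and function name are our own convention):

```python
import numpy as np

def class_activation_map(conv_maps, fc_weights, class_idx):
    # conv_maps:  (C, H, W) activations of the last convolutional layer
    # fc_weights: (num_classes, C) weights of the fully connected layer
    #             that follows global average pooling
    w = fc_weights[class_idx]                      # (C,)
    cam = np.tensordot(w, conv_maps, axes=(0, 0))  # weighted sum over channels
    cam -= cam.min()                               # rescale to [0, 1] for display
    if cam.max() > 0:
        cam /= cam.max()
    return cam                                     # (H, W) heatmap
```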
As has already been shown by Krizhevsky et al. (2012), kernels of the lower layers of a CNN capture gradients in an image. Existing CAM experiments tend to activate at salient regions, and this is very reasonable, because such regions contain more gradients and therefore more information (e.g., the face in Fig 1c). Here comes the question: could CAM activate at non-salient regions like the empty spaces on a chess board? Our answer is positive, as the results (Fig 1d) show that in order to predict what will happen in the future, the CNN's attention is fixed upon texture-free regions.
Since we render chessboards as visual inputs without adding noise, those empty spaces are completely empty, meaning that: (1) if we take out the activated patch in Fig 1d, all pixels in this patch have exactly the same value; (2) if we evaluate this patch with a quantitative information metric like entropy, there is no information there. Thus the only reason why these regions are activated is that the network collects enough information from these regions' receptive fields. We argue that this experiment (CAM can activate at non-salient regions) testifies (again) to a CNN's ability to hierarchically collect information from visual inputs.
1.4 WHAT THIS PAPER IS ABOUT
After introducing those three backgrounds, we describe our work briefly as: to classify rendered tic-tac-toe chessboards with weak labels, and to visualize that the CNN's attention automatically reveals where the next piece should be placed. The learnt representations show that: (1) the network knows some concepts of the game that it is not told of; (2) this level of supervision for representation learning is possible; (3) the technique of class activation mapping can activate at non-salient regions.
2 RELATED WORKS
2.1 CONCEPT LEARNING
Concept learning has different meanings in different contexts, and how to confirm that a concept is learnt remains an open question. In Jia et al. (2013), a concept is learnt if a generative model is learnt from a small number of positive samples. In Lake et al. (2015), a concept is learnt if a model learnt from only one instance can generalize to various tasks. Higgins et al. (2016) claims a concept is learnt when a model can predict unseen objects' sizes and positions. To summarize, they evaluate whether a concept is learnt through a model's generalization ability. In even earlier works like Zhu et al. (2010); Yang et al. (2010), concept learning means an object/attribute classification task dealing with appearance variations, in which a concept is actually already pre-defined.
Unlike these works, we investigate the concepts of game rules instead of objects/attributes. Unlike Jia et al. (2013); Lake et al. (2015); Higgins et al. (2016), we claim a concept is learnt through a novel testing protocol instead of generalization ability. Why does generalization ability show that a concept is learnt? We think the reason is that a model understands a concept if it can use it in more cases. To this end, we argue that our protocol also shows that a concept is learnt, because the learnt representations in our experiments can be used to decide what to do even though no rule about what needs to be done is provided.
2.2 CROSS-MODAL SUPERVISION
The literature on cross-modal supervision and the differences between this paper and existing ones are already covered in the last section. Here we restate it briefly: Owens et al. (2016); Castrejón et al. (2016); Gupta et al.
(2016) learn representations across modalities because they are actually different observations of a same (object or attribute) concept. Whether this methodology is applicable to higher-level concepts like game rules remains an open question, and we provide positive answers to this question.
2.3 CLASS ACTIVATION MAPPING
Before the technique of class activation mapping was introduced by Zhou et al. (2016), pioneering works like Simonyan et al. (2014); Zhou et al. (2015) had already shown a CNN's ability to localize objects with image-level labels. Although with different techniques, the activation visualization results of Simonyan et al. (2014); Zhou et al. (2015) also focus on salient regions. Unlike these works, we show that class activation mapping can activate at non-salient regions, or, more specifically, completely texture-free regions. Since the activated patch itself provides no information, all discriminative information comes from its context. This is another strong piece of evidence of a CNN's capability to collect information from receptive fields, as a hierarchical visual model.
3 EXPERIMENT I: GAME ENDS IN NEXT MOVE
A tic-tac-toe chessboard is a 3×3 grid, and there are two players (black and white in our case). Due to duality, we generate all training samples assuming the black side takes the first move. The state space of tic-tac-toe is small, consisting of 3^9 = 19683 combinations in total. Among them, many combinations are illegal, such as the one in which all 9 pieces are black. We exhaustively search over the space according to a recursive simulation algorithm, in which: (1) the chessboard state is denoted by an integer smaller than 19683; (2) every state corresponds to a 9-d vector, in which each element can take a value from the set {0: illegal, 1: black win, 2: white win, 4: tie, 5: uncertain}. We call this 9-d vector a state transfer vector, denoting what will happen if the next legal piece placement happens at the corresponding location; (3) the generated transfer vectors can predict in advance the existence of a critical move that will finish the game. We will release this simulation code.
After pruning out illegal states, we collect 4486 possible states in total. Among these samples, we further take out 1029 states in which a certain side is going to win in the next move. We then transform these chessboard states into visual representations (gray-scale images at resolution (180, 180)). Each of these 1029 samples is assigned a label according to the state transfer vectors. There are in total 18 different labels, covering 2 (sides) × 9 (locations). As demonstrated by Fig 2, we randomly pick a sample for each label. As mentioned before, the black side takes the first move; thus if the numbers of black and white pieces are equal the next move will be the black side's, and if there is one more black piece the next move will be the white side's.
Figure 2: 18 different types of chessboard states and corresponding labels.
Although the concepts of two sides and nine locations are coded into the labels, this kind of supervision is still weak supervision, because what we are showing to the algorithm is just 18 abstract categories, as Fig 2 shows. Could an algorithm figure out what it needs to do by observing these visual inputs? We think even for a human baby it is difficult, because no concepts like this is a game or you need to find out how to win are provided. In the setting of deep reinforcement learning there is at least an objective of getting a higher score to pursue.
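To make the labelling procedure concrete, the following is a minimal sketch of how a state transfer vector can be computed for one legal board; it is our own illustration of the described procedure, not the authors' released simulation code (the board encoding is an assumption):

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def wins(board, side):
    return any(all(board[i] == side for i in line) for line in LINES)

def transfer_vector(board):
    # board: list of 9 cells, 0 = empty, 1 = black, 2 = white.
    # Returns the 9-d vector with the paper's codes:
    # 0 illegal, 1 black win, 2 white win, 4 tie, 5 uncertain.
    n_black, n_white = board.count(1), board.count(2)
    side = 1 if n_black == n_white else 2      # black moves first
    vec = []
    for loc in range(9):
        if board[loc] != 0:
            vec.append(0)                      # occupied: placement illegal
            continue
        nxt = list(board)
        nxt[loc] = side
        if wins(nxt, side):
            vec.append(side)                   # 1 or 2: that side wins here
        elif 0 not in nxt:
            vec.append(4)                      # board full, no winner: tie
        else:
            vec.append(5)                      # game continues: uncertain
    return vec
```

Under this reading, the 1029 Experiment I states are those legal boards whose transfer vector contains the winning code of the side to move.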
As mentioned before, the method we exploit is to train a classification network on this rendered dataset (Fig 2) and analyze the learnt representations with the technique of class activation mapping. As Zhou et al. (2016) suggests, we add one global average pooling layer after the last convolutional layer of a pre-trained AlexNet model. All fully connected layers of the AlexNet model are discarded, and a new fully connected layer is added after the global average pooling layer. After the new classification network is fine-tuned on our dataset, a CAM visualization is generated by weighting the outputs of the last convolutional layer with parameters from the added fully connected layer. Our CAM implementation is built upon Marvin, and it will be released.
Due to the simplicity of this classification task, the top-one classification accuracy is 100% (not surprisingly). Class activation mapping results are provided in Fig 3, and here we present the reasons why we claim concepts are learnt: (1) We provide 18 abstract categories, but in order to classify visual inputs into these 18 categories the network's attention is roughly fixed upon the chessboard grids. This means the concept of grid emerges in the learnt representation. (2) If we place a piece at the most activated location in Fig 3, that will be the right (and legal) move to finish the game. On one hand, this means the concept of the winning rule emerges in the learnt representation. On the other hand, this means this learnt concept can be used to deal with an un-taught task (analogous to Jia et al. (2013); Lake et al. (2015); Higgins et al. (2016), who use generalization ability to illustrate that concepts are learnt). (3) As Fig 3cehijnpq show, both sides could win in the next move if we violated the take-turns rule. However, the network pays attention to the right location that is consistent with the rule. For example, in Fig 3j, it seems that placing a black piece at the left-top location will also end the game. However, this move would violate the rule, because there are already more black pieces than white pieces, meaning that it is the white side's turn. This means that the concept of two sides emerges in the learnt representation.
Figure 3: Class activation mapping results on our dataset.
Beyond the learnt concepts, we analyze what this experiment provides for the remaining two questions. For the second question: the results in Fig 3 show that the methodology of generating labels from one modality (state transfer vectors in our case) to supervise another modality is still applicable. More importantly, we use images as inputs, yet the learnt visual representations contain not only visual saliency information but also untold chess game concepts. For the third question: as Fig 3 shows, the most activated regions are empty spaces on the chessboard.
4 EXPERIMENT II: ADDING GRID LINES
Figure 4: Class activation mapping results after grid lines are added.
Since we claim complicated concepts emerge in the learnt visual representations, a natural question is: if the chessboard's and pieces' appearances are changed, does this experiment still work? We thus design this experiment by adding grid lines to the chessboards when rendering the synthetic data (Fig 4). The intention behind this design is three-fold: (1) in this case, the chessboard's appearance is changed; (2) after these lines are added, the concept that there is a chessboard grid is actually implied.
Still, we do not think these lines directly provide the concept of a chessboard grid; thus we use the word imply. Whether the network can figure out what these lines mean still remains uncertain; (3) those locations that are completely empty in Experiment I are no longer empty from the perspective of information (though still empty from the perspective of the game rule).
We train the same network on the newly rendered dataset with grid lines and calculate the CAM results in the same way. The results are demonstrated by Fig 4. Generally speaking, the grid lines allow the network to better activate at the location of the right move, making it stand out more in the heatmap. What does this mean for the three intentions mentioned in the last paragraph? (1) Firstly, it shows that our experiment is robust to chessboard appearance variance. (2) Secondly, after implying the concept that there is a chessboard grid, the network performs better at paying attention to the location of the right move. Again we compare this phenomenon against how a human baby learns: although not supported by a psychological experiment, we think that with a chessboard grid a human baby would find it easier to figure out the game rule than without. (3) Thirdly, the heatmap changes in Fig 4 are not surprising, because after adding those lines, the empty (from the perspective of the game rule) regions contain more gradients for the lower layers of a CNN to collect. However, again it supports that activating at non-salient regions is NOT trivial.
5 EXPERIMENT III: PIECE APPEARANCE CHANGE
Figure 5: Class activation mapping results after piece appearance is changed.
In this experiment we change the appearance of the pieces by: (1) replacing black boxes with white circles; (2) replacing white boxes with black crosses. Note that in this case the white side moves first. Again we train the same network and visualize with CAM. The results comparison is provided in Fig 6. Further, we add grid lines to the cross/circle chessboard.
6 EXPERIMENT IV: MODEL BEHAVIOR OVER TIME
In order to further demonstrate the non-triviality of the model behaviors, we design this experiment. We train on the dataset in Experiment I for 1000 iterations and snapshot the parameters at the 500th iteration. The classification accuracy is 100% at the 1000th iteration and 53.13% at the 500th iteration. The CAM results are shown by Fig 5, in which all samples are true positives. We think it shows that there are two ways to achieve this classification task: (1) by paying attention to the visual patterns formed by the existing pieces; (2) by paying attention to where the next piece should be placed. This experiment shows that at an earlier stage of learning the model's behavior is consistent with the first hypothesis, and that after the training is completely done the network can finally fire at the correct location.
Figure 6: Class activation mapping results on true positive samples at 500 iterations (left, 53.13% accuracy) and 1000 iterations (right, 100% accuracy).
7 QUANTITATIVE EVALUATION
Figure 7: We propose two quantitative evaluation protocols: (a) by selecting the most activated patch, we calculate how frequently the representation fires at the correct location; (b) we correlate the representation with an ideal activation map.
We propose two different quantitative evaluation protocols. The first one is representation accuracy (RAC), for which we select the most activated patch and examine whether it is the correct location to end the game. The second one is representation consistency (RCO), which correlates the normalised representation with a normalised ideal activation map.
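The two protocols map onto a few lines of NumPy; the exact patch selection and normalisation are not spelled out in the text, so the details below (argmax patch selection for RAC, zero-mean unit-norm correlation for RCO) are our assumptions:

```python
import numpy as np

def rac(cams, targets, cell_of):
    # cams: list of (H, W) heatmaps; targets: correct cell indices (0-8);
    # cell_of maps a heatmap coordinate to one of the 9 board cells.
    hits = 0
    for cam, target in zip(cams, targets):
        r, c = np.unravel_index(np.argmax(cam), cam.shape)
        hits += int(cell_of(r, c) == target)
    return hits / len(cams)

def rco(cam, ideal):
    # Correlate the normalised heatmap with a normalised ideal
    # activation map (both flattened, zero-mean, unit-norm).
    a = (cam - cam.mean()).ravel()
    b = (ideal - ideal.mean()).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```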
The quantitative comparisons are shown in Table 1, in which NAC stands for network classification accuracy. These results quantitatively support that: (1) the learnt representation can be used to predict the right move at an accuracy above 70%; (2) adding grid lines (implying the concept of a chessboard) dramatically improves localization.

Experiment    | I (original) | II (grid) | III (piece) | III (piece+grid) | IV (500th)
NAC (%)       | 100.00       | 100.00    | 100.00      | 100.00           | 53.13
RAC (%)       | 71.82        | 97.25     | 83.77       | 99.00            | 27.87
RCO (×10⁻³)   | -8.096       | -5.115    | -7.751      | -4.9321          | -10.610
Table 1: Quantitative results.

8 CONCLUSION
The core experiment in this paper is to train a classification CNN on rendered chessboard images under weak labels. After class activation mapping visualization, we analyse and interpret the results in three different backgrounds. Although simple, we argue that our results are enough to show that: (1) a CNN can automatically figure out complicated game-rule concepts in this case; (2) cross-modal supervision for representation learning is still applicable in this case of higher-level semantics; (3) the technique of CAM can activate at non-salient regions, testifying to a CNN's capability to collect information from context in an extreme case (where only the context has information).
REFERENCES
Samuel Albanie and Andrea Vedaldi. Learning grimaces by watching TV. In BMVC, 2016.
Lluís Castrejón, Yusuf Aytar, Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Learning aligned cross-modal representations from weakly aligned data. In CVPR, 2016.
Saurabh Gupta, Judy Hoffman, and Jitendra Malik. Cross modal distillation for supervision transfer. In CVPR, 2016.
Irina Higgins, Loic Matthey, Xavier Glorot, Arka Pal, Benigno Uria, Charles Blundell, Shakir Mohamed, and Alexander Lerchner. Early visual concept learning with unsupervised deep learning. arXiv:1606.05579, 2016.
Yangqing Jia, Joshua T Abbott, Joseph Austerweil, Thomas Griffiths, and Trevor Darrell. Visual concept learning: Combining machine vision and bayesian generalization on concept hierarchies. In NIPS, 2013.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. In Science, 2015.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. In Nature, 2015.
Andrew Owens, Jiajun Wu, Josh H McDermott, William T Freeman, and Antonio Torralba. Ambient sound provides supervision for visual learning. In ECCV, 2016.
Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. 2014.
Jingjing Yang, Yuanning Li, Yonghong Tian, Ling-Yu Duan, and Wen Gao. Per-sample multiple kernel approach for visual concept learning. In Journal on Image and Video Processing, 2010.
Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Object detectors emerge in deep scene cnns. In ICLR, 2015.
Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization.
In CVPR, 2016.
Shiai Zhu, Gang Wang, Chong-Wah Ngo, and Yu-Gang Jiang. On the sampling of web images for learning visual concept classifiers. In Proceedings of the ACM International Conference on Image and Video Retrieval, 2010.
SkhU2fcll
Published as a conference paper at ICLR 2017
DEEP MULTI-TASK REPRESENTATION LEARNING: A TENSOR FACTORISATION APPROACH
Yongxin Yang, Timothy M. Hospedales
Queen Mary, University of London
{yongxin.yang, t.hospedales}@qmul.ac.uk
ABSTRACT
Most contemporary multi-task learning methods assume linear models. This setting is considered shallow in the era of deep learning. In this paper, we present a new deep multi-task representation learning framework that learns cross-task sharing structure at every layer in a deep network. Our approach is based on generalising the matrix factorisation techniques explicitly or implicitly used by many conventional MTL algorithms to tensor factorisation, to realise automatic learning of end-to-end knowledge sharing in deep networks. This is in contrast to existing deep learning approaches that need a user-defined multi-task sharing strategy. Our approach applies to both homogeneous and heterogeneous MTL. Experiments demonstrate the efficacy of our deep multi-task representation learning in terms of both higher accuracy and fewer design choices.
1 INTRODUCTION
The paradigm of multi-task learning is to learn multiple related tasks simultaneously so that knowledge obtained from each task can be re-used by the others. Early work in this area focused on neural network models (Caruana, 1997), while more recent methods have shifted focus to kernel methods, sparsity, and low-dimensional task representations of linear models (Evgeniou & Pontil, 2004; Argyriou et al., 2008; Kumar & Daumé III, 2012). Nevertheless, given the impressive practical efficacy of contemporary deep neural networks (DNNs) in many important applications, we are motivated to revisit MTL from a deep learning perspective.
While the machine learning community has recently focused on MTL for shallow linear models, applications have continued to exploit neural network MTL (Zhang et al., 2014; Liu et al., 2015). The typical design pattern dates back at least 20 years (Caruana, 1997): define a DNN with shared lower representation layers, which then forks into separate layers and losses for each task. The sharing structure is defined manually: full sharing up to the fork, and full separation after the fork. However, this complicates DNN architecture design, because the user must specify the sharing structure: How many task-specific layers? How many task-independent layers? How to structure sharing if there are many tasks of varying relatedness?
In this paper we present a method for end-to-end multi-task learning in DNNs. This contribution can be seen as generalising shallow MTL methods (Evgeniou & Pontil, 2004; Argyriou et al., 2008; Kumar & Daumé III, 2012) to learning how to share at every layer of a deep network; or as learning the sharing structure for deep MTL (Caruana, 1997; Zhang et al., 2014; Spieckermann et al., 2014; Liu et al., 2015), which currently must be defined manually on a problem-by-problem basis.
Before proceeding, it is worth explicitly distinguishing some different problem settings, which have all been loosely referred to as MTL in the literature. Homogeneous MTL: Each task corresponds to a single output. For example, MNIST digit recognition is commonly used to evaluate MTL algorithms by casting it as 10 binary classification tasks (Kumar & Daumé III, 2012). Heterogeneous MTL: Each task corresponds to a unique set of output(s) (Zhang et al., 2014).
For example, one may want to simultaneously predict a person's age (task one: multi-class classification or regression) as well as identify their gender (task two: binary classification) from a face image.
In this paper, we propose a multi-task learning method that works in all these settings. The key idea is to use tensor factorisation to divide each set of model parameters (i.e., both FC weight matrices and convolutional kernel tensors) into shared and task-specific parts. It is a natural generalisation of shallow MTL methods that explicitly or implicitly are based on matrix factorisation (Evgeniou & Pontil, 2004; Argyriou et al., 2008; Kumar & Daumé III, 2012; Daumé III, 2007). As linear methods, these typically require pre-engineered features. In contrast, as a deep network, our generalisation can learn directly from raw image data, determining the sharing structure in a layer-wise fashion. For the simplest NN architecture (no hidden layer, single output) our method reduces to matrix-based ones; therefore matrix-based methods including (Evgeniou & Pontil, 2004; Argyriou et al., 2008; Kumar & Daumé III, 2012; Daumé III, 2007) are special cases of ours.
2 RELATED WORK
Multi-Task Learning: Most contemporary MTL algorithms assume that the input and model are both D-dimensional vectors. The models of T tasks can then be stacked into a D×T sized matrix W. Despite different motivations and implementations, many matrix-based MTL methods work by placing constraints on W. For example, posing an $\ell_{2,1}$ norm on W encourages a low-rank W (Argyriou et al., 2008). Similarly, Kumar & Daumé III (2012) factorises W as W = LS, i.e., it assigns a lower rank as a hyper-parameter. An earlier work (Evgeniou & Pontil, 2004) proposes that the linear model for each task t can be written as $w_t = \hat{w}_t + \hat{w}_0$. This is the factorisation $L = [\hat{w}_0, \hat{w}_1, \dots, \hat{w}_T]$ and $S = [\mathbf{1}_{1\times T}; I_T]$. In fact, such matrix factorisation encompasses many MTL methods. E.g., Xue et al. (2007) assumes $S_{:,i}$ (the i-th column of S) is a unit vector generated by a Dirichlet Process, and Passos et al. (2012) models W using linear factor analysis with an Indian Buffet Process (Griffiths & Ghahramani, 2011) prior on S.
Tensor Factorisation: In deep learning, tensor factorisation has been used to exploit the fact that factorised tensors have fewer parameters than the original (e.g., 4-way convolutional kernel) tensor, and thus to compress and/or speed up the model, e.g., (Lebedev et al., 2015; Novikov et al., 2015). For shallow linear MTL, tensor factorisation has been used to address problems where tasks are described by multiple independent factors rather than merely indexed by a single factor (Yang & Hospedales, 2015). Here the D-dimensional linear models for all unique tasks stack into a tensor W, of size e.g. D×T₁×T₂ in the case of two task factors. Knowledge sharing is then achieved by imposing tensor norms on W (Romera-Paredes et al., 2013; Wimalawarne et al., 2014). Our framework factors tensors for a different reason: for DNN models, parameters include convolutional kernels (N-way tensors) or D₁×D₂ FC layer weight matrices (2-way tensors). Stacking up these parameters for many tasks results in D₁×⋯×D_N×T tensors, within which we share knowledge through factorisation.
Heterogeneous MTL and DNNs: Some studies consider heterogeneous MTL, where tasks may have different numbers of outputs (Caruana, 1997).
This differs from the previously discussed studies (Evgeniou & Pontil, 2004; Argyriou et al., 2008; Bonilla et al., 2007; Jacob et al., 2009; Kumar & Daumé III, 2012; Romera-Paredes et al., 2013; Wimalawarne et al., 2014), which implicitly assume that each task has a single output. Heterogeneous MTL typically uses neural networks with multiple sets of outputs and losses. E.g., Huang et al. (2013) proposes a shared-hidden-layer DNN model for multilingual speech processing, where each task corresponds to an individual language. Zhang et al. (2014) uses a DNN to find facial landmarks (regression) as well as recognise facial attributes (classification), while Liu et al. (2015) proposes a DNN for query classification and information retrieval (ranking for web search). A key commonality of these studies is that they all require a user-defined parameter-sharing strategy. A typical design pattern is to use shared layers (same parameters) for the lower layers of the DNN and then to split (independent parameters) for the top layers. However, there is no systematic way to make such design choices, so researchers usually rely on trial-and-error, further complicating the already somewhat dark art of DNN design. In contrast, our method learns where and how much to share representation parameters across the tasks, hence significantly reducing the space of DNN design choices.
Parametrised DNNs: Our MTL approach is a parameterised DNN (Sigaud et al., 2015), in that DNN weights are dynamically generated given some side information; in the case of MTL, given the task identity. In a related example of speaker-adaptive speech recognition (Tan et al., 2016), there may be several clusters in the data (e.g., gender, acoustic conditions), and each speaker's model could be a linear combination of these latent task/cluster models. They model each speaker i's weight matrix $W^{(i)}$ as a sum of K base models $\tilde{W}$, i.e., $W^{(i)} = \sum_{k=1}^{K} \lambda^{(i)}_k \tilde{W}^{(k)}$. The difference between speakers/tasks comes from λ, and the base models are shared. An advantage of this is that, when new data come, one can choose to re-train the λ parameters only and keep $\tilde{W}$ fixed. This significantly reduces the number of parameters to learn, and consequently the required training data. Beyond this, Yang & Hospedales (2015) show that it is possible to train another neural network to predict those λ values from some abstract metadata. Thus a model for an unseen task can be generated on-the-fly, with no training instances, given an abstract description of the task. The techniques developed here are compatible with both of these ideas of generating models with minimal or no effort.
3 METHODOLOGY
3.1 PRELIMINARIES
We first recap some tensor factorisation basics before explaining how to factorise DNN weight tensors for multi-task representation learning. An N-way tensor W with shape $D_1 \times D_2 \times \cdots \times D_N$ is an N-dimensional array containing $\prod_{n=1}^{N} D_n$ elements. Scalars, vectors, and matrices can be seen as 0-, 1-, and 2-way tensors respectively, although the term tensor is usually used for 3-way or higher.
A mode-n fibre of W is a $D_n$-dimensional vector obtained by fixing all but the n-th index. The mode-n flattening $W_{(n)}$ of W is the matrix of size $D_n \times \prod_{i \neq n} D_i$ constructed by concatenating all of the $\prod_{i \neq n} D_i$ mode-n fibres along the columns.
The dot product of two tensors is a natural extension of the matrix dot product: e.g., if we have a tensor A of size $M_1 \times M_2 \times P$ and a tensor B of size $P \times N_1 \times N_2 \times \cdots$, the tensor dot product $A \bullet B$ will be a tensor of size $M_1 \times M_2 \times N_1 \times N_2 \times \cdots$, obtained by the matrix dot product $A_{(-1)}^{T} B_{(1)}$ and reshaping¹. More generally, the tensor dot product can be performed along specified axes, $A \bullet_{(i,j)} B = A_{(i)}^{T} B_{(j)}$ followed by reshaping. Here the subscripts indicate the axes of A and B at which the dot product is performed. E.g., when A is of size $M_1 \times P \times M_3 \times \cdots \times M_I$ and B is of size $N_1 \times N_2 \times P \times \cdots \times N_J$, then $A \bullet_{(2,3)} B$ is a tensor of size $M_1 \times M_3 \times \cdots \times M_I \times N_1 \times N_2 \times \cdots \times N_J$.
Matrix-based Knowledge Sharing: Assume we have T linear models (tasks) parametrised by D-dimensional weight vectors, so the collection of all models forms a size D×T matrix W. One commonly used MTL approach (Kumar & Daumé III, 2012) is to place a structure constraint on W, e.g., W = LS, where L is a D×K matrix and S is a K×T matrix. This factorisation recovers a shared factor L and a task-specific factor S. One can see the columns of L as latent basis tasks, and the model $w^{(i)}$ for the i-th task is the linear combination of those latent basis tasks with task-specific information $S_{:,i}$:
$w^{(i)} := W_{:,i} = L S_{:,i} = \sum_{k=1}^{K} L_{:,k} S_{k,i}$  (1)
From Single to Multiple Outputs: Consider extending this matrix factorisation approach to the case of multiple outputs. The model for each task is then a $D_1 \times D_2$ matrix, for $D_1$ input and $D_2$ output dimensions. The collection of all those matrices constructs a $D_1 \times D_2 \times T$ tensor. A straightforward extension of Eq. 1 to this case is
$W^{(i)} := W_{:,:,i} = \sum_{k=1}^{K} L_{:,:,k} S_{k,i}$  (2)
This is equivalent to imposing the same structural constraint on $W_{(3)}^{T}$ (the transposed mode-3 flattening of W). It is important to note that this allows knowledge sharing across the tasks only. I.e., knowledge sharing is only across tasks, not across dimensions within a task. However, it may be that the knowledge learned in the mapping to one output dimension is useful to the others within one task. E.g., consider recognising photos of handwritten and print digits: it may be useful to share across handwritten-print, as well as across the different digits within each. In order to support general knowledge sharing across both tasks and outputs within tasks, we propose to use more general tensor factorisation techniques. Unlike for matrices, there are multiple definitions of tensor factorisation; we use the Tucker (Tucker, 1966) and Tensor Train (TT) (Oseledets, 2011) decompositions.
¹We slightly abuse '-1' as referring to the last axis of the tensor.
3.2 TENSOR FACTORISATION FOR KNOWLEDGE SHARING
Tucker Decomposition: Given an N-way tensor of size $D_1 \times D_2 \times \cdots \times D_N$, Tucker decomposition outputs a core tensor S of size $K_1 \times K_2 \times \cdots \times K_N$ and N matrices $U^{(n)}$ of size $D_n \times K_n$, such that
$W_{d_1, d_2, \dots, d_N} = \sum_{k_1=1}^{K_1} \sum_{k_2=1}^{K_2} \cdots \sum_{k_N=1}^{K_N} S_{k_1, k_2, \dots, k_N}\, U^{(1)}_{d_1, k_1} U^{(2)}_{d_2, k_2} \cdots U^{(N)}_{d_N, k_N}$  (3)
$W = S \bullet_{(1,2)} U^{(1)} \bullet_{(1,2)} U^{(2)} \cdots \bullet_{(1,2)} U^{(N)}$  (4)
Tucker decomposition is usually implemented by an alternating least squares (ALS) method (Kolda & Bader, 2009). However, Lathauwer et al. (2000) treat it as a higher-order singular value decomposition (HOSVD), which is more efficient to solve: $U^{(n)}$ is exactly the U matrix from the SVD of the mode-n flattening $W_{(n)}$ of W, and the core tensor S is obtained by
$S = W \bullet_{(1,1)} U^{(1)} \bullet_{(1,1)} U^{(2)} \cdots \bullet_{(1,1)} U^{(N)}$  (5)
Tensor Train Decomposition: Tensor Train (TT) decomposition outputs two matrices $U^{(1)}$ and $U^{(N)}$ of size $D_1 \times K_1$ and $K_{N-1} \times D_N$ respectively, and (N−2) 3-way tensors $U^{(n)}$ of size $K_{n-1} \times D_n \times K_n$. The elements of W can be computed by
$W_{d_1, d_2, \dots, d_N} = \sum_{k_1=1}^{K_1} \sum_{k_2=1}^{K_2} \cdots \sum_{k_{N-1}=1}^{K_{N-1}} U^{(1)}_{d_1, k_1} U^{(2)}_{k_1, d_2, k_2} U^{(3)}_{k_2, d_3, k_3} \cdots U^{(N)}_{k_{N-1}, d_N}$  (6)
$= U^{(1)}_{d_1, :}\, U^{(2)}_{:, d_2, :}\, U^{(3)}_{:, d_3, :} \cdots U^{(N)}_{:, d_N}$  (7)
$W = U^{(1)} \bullet U^{(2)} \cdots \bullet U^{(N)}$  (8)
where $U^{(n)}_{:, d_n, :}$ is a matrix of size $K_{n-1} \times K_n$ sliced from $U^{(n)}$ with the second axis fixed at $d_n$. The TT decomposition is typically realised with a recursive SVD-based solution (Oseledets, 2011).
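The mode-n flattening and the HOSVD-style solution above (Eq. 5) translate directly into a few lines of NumPy; a sketch under the stated definitions (the function names and rank-truncation interface are ours):

```python
import numpy as np

def mode_n_flatten(W, n):
    # Mode-n flattening: a (D_n, product of the remaining D_i) matrix.
    return np.moveaxis(W, n, 0).reshape(W.shape[n], -1)

def hosvd(W, ranks):
    # U^(n) is the U matrix of the SVD of the mode-n flattening,
    # truncated to K_n columns; the core S then follows by contracting
    # each D_n axis of W with the corresponding U^(n) (cf. Eq. 5).
    Us = []
    for n, k in enumerate(ranks):
        U, _, _ = np.linalg.svd(mode_n_flatten(W, n), full_matrices=False)
        Us.append(U[:, :k])                 # D_n x K_n
    S = W
    for U in Us:
        # contracting the leading axis each time cycles the axes,
        # so after N passes S has shape K_1 x ... x K_N
        S = np.tensordot(S, U, axes=(0, 0))
    return S, Us
```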
Tensor Train Decomposition Tensor Train (TT) decomposition outputs two matrices $U^{(1)}$ and $U^{(N)}$ of size $D_1 \times K_1$ and $K_{N-1} \times D_N$ respectively, and $(N-2)$ 3-way tensors $U^{(n)}$ of size $K_{n-1} \times D_n \times K_n$. The elements of $\mathcal{W}$ can be computed by

$$\mathcal{W}_{d_1,d_2,\dots,d_N} = \sum_{k_1=1}^{K_1} \sum_{k_2=1}^{K_2} \cdots \sum_{k_{N-1}=1}^{K_{N-1}} U^{(1)}_{d_1,k_1} U^{(2)}_{k_1,d_2,k_2} U^{(3)}_{k_2,d_3,k_3} \cdots U^{(N)}_{k_{N-1},d_N} \qquad (6)$$

$$\mathcal{W}_{d_1,d_2,\dots,d_N} = U^{(1)}_{d_1,:}\, U^{(2)}_{:,d_2,:}\, U^{(3)}_{:,d_3,:} \cdots U^{(N)}_{:,d_N} \qquad (7)$$

$$\mathcal{W} = U^{(1)} \bullet U^{(2)} \bullet \cdots \bullet U^{(N)} \qquad (8)$$

where $U^{(n)}_{:,d_n,:}$ is a matrix of size $K_{n-1} \times K_n$ sliced from $U^{(n)}$ with the second axis fixed at $d_n$. The TT decomposition is typically realised with a recursive SVD-based solution (Oseledets, 2011).

Knowledge Sharing If the final axis of the input tensor above indexes tasks, i.e. if $D_N = T$, then the last factor $U^{(N)}$ in both decompositions encodes a matrix of task-specific knowledge, and the other factors encode shared knowledge.

3.3 DEEP MULTI-TASK REPRESENTATION LEARNING

To realise deep multi-task representation learning (DMTRL), we learn one DNN per task, each with the same architecture(2). However, each corresponding layer's weights are generated with one of the knowledge sharing structures in Eq. 2, Eq. 4 or Eq. 8. It is important to note that we apply these 'right-to-left' in order to generate weight tensors with the specified sharing structure, rather than actually applying Tucker or TT to decompose an input tensor. In the forward pass, we synthesise weight tensors $\mathcal{W}$ and perform inference as usual, so the method can be thought of as tensor composition rather than decomposition.

Our weight generation (constructing tensors from smaller pieces) does not introduce non-differentiable terms, so our deep multi-task representation learner is trainable via standard backpropagation. Specifically, in the backward pass over FC layers, rather than directly learning the 3-way tensor $\mathcal{W}$, our methods learn either $\{\mathcal{S}, U_1, U_2, U_3\}$ (DMTRL-Tucker, Eq. 4), $\{U_1, U_2, U_3\}$ (DMTRL-TT, Eq. 8), or in the simplest case $\{\mathcal{L}, S\}$ (DMTRL-LAF(3), Eq. 2).

(2) Except heterogeneous MTL, where the output layer is necessarily unshared due to different dimensionality.
(3) LAF refers to Last Axis Flattening.

Figure 1: Illustrative example with two tasks corresponding to two neural networks in homogeneous (single output) and heterogeneous (different output dimension) cases. Weight layers grouped by solid rectangles are tied across networks. Weight layers grouped by dashed rectangles are softly shared across networks with our method. Ungrouped weights are independent. Homogeneous MTL, shallow: left is STL (two independent networks); right is MTL. In the case of vector input and no hidden layer, our method is equivalent to conventional matrix-based MTL methods. Homogeneous MTL, deep: STL (left) is independent networks. User-defined MTL (UD-MTL) selects layers to share/separate. Our DMTRL learns sharing at every layer. Heterogeneous MTL: UD-MTL selects layers to share/separate. DMTRL learns sharing at every shareable layer.

Besides FC layers, contemporary DNN designs often exploit convolutional layers. Those layers usually contain kernel filter parameters that are 3-way tensors of size $H \times W \times C$ (where $H$ is height, $W$ is width, and $C$ is the number of input channels) or 4-way tensors of size $H \times W \times C \times M$, where $M$ is the number of filters in this layer (i.e., the number of output channels). The proposed methods naturally extend to convolutional layers, as convolution just adds more axes on the left-hand side. E.g., the collection of parameters from a given convolutional layer of $T$ neural networks forms a tensor of shape $H \times W \times C \times M \times T$.
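Analogously, the sketch below composes a weight tensor in the TT form of Eq. 6, with the final axis indexing tasks so that the last factor carries the task-specific knowledge; slicing the composed tensor yields each task's weight matrix. Again a minimal numpy illustration with our own names, not the released code.

    import numpy as np

    rng = np.random.default_rng(0)
    D1, D2, T = 6, 5, 4       # two weight axes plus a task axis
    K1, K2 = 3, 2             # TT ranks

    U1 = rng.standard_normal((D1, K1))        # first TT factor
    U2 = rng.standard_normal((K1, D2, K2))    # middle 3-way TT core
    U3 = rng.standard_normal((K2, T))         # last factor: task-specific knowledge

    # W[d1,d2,t] = sum_{k1,k2} U1[d1,k1] U2[k1,d2,k2] U3[k2,t]   (Eq. 6)
    W = np.einsum('ia,ajb,bt->ijt', U1, U2, U3)

    # The weight matrix synthesised for task t is a slice of the composed tensor.
    W_task0 = W[:, :, 0]
    print(W.shape, W_task0.shape)             # (6, 5, 4) (6, 5)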
These knowledge sharing strategies provide a way to softly share parameters across the corresponding layers of each task's DNN: where, what, and how much to share are learned from data. This is in contrast to the conventional Deep-MTL approach of manually selecting a set of layers to undergo hard parameter sharing, by tying weights so each task uses exactly the same weight matrix/tensor for the corresponding layer (Zhang et al., 2014; Liu et al., 2015), and a set of layers to be completely separate, by using independent weight matrices/tensors. In contrast, our approach benefits from: (i) automatically learning this sharing structure from data rather than requiring user trial and error, and (ii) smoothly interpolating between fully shared and fully segregated layers, rather than hard switching between these states. An illustration of the proposed framework for different problem settings can be found in Fig. 1.

4 EXPERIMENTS

Implementation Details Our method is implemented with TensorFlow (Abadi et al., 2015). The code is released on GitHub(4). For DMTRL-Tucker, DMTRL-TT, and DMTRL-LAF, we need to assign the rank of each weight tensor. The DNN architecture itself may be complicated and so can benefit from different ranks at different layers, but grid search is impractical. However, since both the Tucker and TT decomposition methods have SVD-based solutions, and vanilla SVD is directly applicable to DMTRL-LAF, we can initialise the model and set the ranks as follows: first train the DNNs independently in single task learning mode; then pack the layer-wise parameters as the input for tensor decomposition. When SVD is applied, set a threshold for relative error so SVD will pick the appropriate rank. Thus our method needs only a single hyperparameter, the maximum reconstruction error (we set $\epsilon = 10\%$ throughout), that indirectly specifies the ranks of every layer; a sketch of this threshold-based rank selection is given below. Note that training from random initialisation also works, but the STL-based initialisation makes rank selection easy and transparent. Nevertheless, like Kumar & Daumé III (2012), the framework is not sensitive to the rank choice so long as the ranks are big enough. If random initialisation is desired to eliminate the pre-training requirement, good practice is to initialise parameter tensors by a suitable random weight distribution first, then do decomposition, and use the decomposed values for initialising the factors (the real learnable parameters in our framework). In this way, the resulting re-composed tensors will have approximately the intended distribution. Our sharing is applied to weight parameters only; bias terms are not shared. Apart from initialisation, decomposition is not used anywhere.

(4) https://github.com/wOOL/DMTRL
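The threshold-based rank selection referenced above can be realised as follows: keep the smallest number of singular values whose truncated SVD reconstructs the packed parameter matrix within relative error epsilon. This is our reading of the procedure (helper name ours), shown with epsilon = 0.1 as in the text.

    import numpy as np

    def rank_for_error(M, eps=0.1):
        # Smallest rank r such that the rank-r SVD truncation of M has
        # relative (Frobenius) reconstruction error at most eps.
        s = np.linalg.svd(M, compute_uv=False)         # descending singular values
        total = np.sum(s ** 2)
        tail = total - np.cumsum(s ** 2)               # squared error after keeping r values
        rel_err = np.sqrt(np.maximum(tail, 0.0) / total)
        return int(np.argmax(rel_err <= eps)) + 1      # first r meeting the threshold

    rng = np.random.default_rng(0)
    M = rng.standard_normal((64, 10)) @ rng.standard_normal((10, 32))  # rank-10 matrix
    M += 0.01 * rng.standard_normal(M.shape)                           # mild noise
    print(rank_for_error(M, eps=0.1))                                  # typically around 10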
4.1 HOMOGENEOUS MTL

Dataset, Settings and Baselines We use MNIST handwritten digits. The task is to recognise digit images zero to nine. When this dataset is used for the evaluation of MTL methods, ten 1-vs-all binary classification problems usually define ten tasks (Kumar & Daumé III, 2012). The dataset has a given train (60,000 images) and test (10,000 images) split. Each instance is a monochrome image of size $28 \times 28 \times 1$.

We use a modified LeNet (LeCun et al., 1998) as the CNN architecture. The first convolutional layer has 32 filters of size $5 \times 5$, followed by $2 \times 2$ max pooling. The second convolutional layer has 64 filters of size $4 \times 4$, and again a $2 \times 2$ max pooling. After these two convolutional layers, two fully connected layers with 512 and 1 output(s) are placed sequentially. The convolutional and first FC layers use the ReLU activation function $f(x) = \max(x, 0)$. We use the hinge loss $\ell(y) = \max(0, 1 - \hat{y}\,y)$, where $y \in \{-1, 1\}$ is the true label and $\hat{y}$ is the output of each task's neural network.

Conventional matrix-based MTL methods (Evgeniou & Pontil, 2004; Argyriou et al., 2008; Kumar & Daumé III, 2012; Romera-Paredes et al., 2013; Wimalawarne et al., 2014) are linear models taking vector input only, so they need a preprocessing step that flattens the image into a vector, and typically reduce dimension by PCA. As per our motivation for studying deep MTL, our methods decisively outperform such shallow linear baselines. Thus, to find a stronger MTL competitor, we instead search user-defined architectures for Deep-MTL parameter sharing (cf. Zhang et al., 2014; Liu et al., 2015; Caruana, 1997). In all of the four parametrised layers (pooling has no parameters), we set the first $N$ ($1 \leq N \leq 3$) to be hard shared(5). We then use cross-validation to select among the three user-defined MTL architectures, and the best option is $N = 3$, i.e., the first three layers are fully shared (we denote this model UD-MTL). For our methods, all four parametrised layers are softly shared with the different factorisation approaches. To evaluate the different MTL methods and a baseline of single task learning (STL), we take ten different fractions of the given 60K training split, train the model, and test on the 10K testing split. For each fraction, we repeat the experiment 5 times with randomly sampled training data. We report two performance metrics: (1) the mean error rate of the ten binary classification problems and (2) the error rate of recognising a digit by ranking each task's 1-vs-all output (multi-class classification error); both metrics are sketched in code below.

(5) This is not strictly all possible user-defined sharing options. For example, another possibility is that the first convolutional layer and the first FC layer could be fully shared, with the second convolutional layer being independent (task specific). However, this is against the intuition that lower/earlier layers are more task agnostic and later layers more task specific. Note that sharing the last layer is technically possible but not intuitive, and in any case not meaningful unless at least one early layer is unshared, as the tasks are different.

Results As we can see in Fig. 2, all MTL approaches outperform STL, and the advantage is more significant when the training data is small. The proposed methods DMTRL-TT and DMTRL-Tucker outperform the best user-defined MTL when the training data is very small, and their performance is comparable when the training data is large.

Figure 2: Homogeneous MTL: digit recognition on the MNIST dataset (binary and multi-class classification error rate vs. fraction of training data). Each digit provides a task.

Further Discussion For a slightly unfair comparison, in the case of binary classification with 1000 training data points, shallow matrix-based MTL methods with PCA features (Kang et al., 2011; Kumar & Daumé III, 2012) reported 14.0%/13.4% error rates. With the same amount of data, our methods have error rates below 6%. This shows the importance of our deep end-to-end multi-task representation learning contribution versus conventional shallow MTL.
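As referenced above, the hinge loss and the two reported metrics can be computed from the ten 1-vs-all outputs as in the sketch below; a numpy illustration with our own names, not the experiment code.

    import numpy as np

    def hinge_loss(y_true_pm1, y_score):
        # Mean hinge loss max(0, 1 - y * yhat) with labels in {-1, +1}.
        return np.mean(np.maximum(0.0, 1.0 - y_true_pm1 * y_score))

    def mtl_metrics(scores, labels):
        # scores: (n, 10) raw 1-vs-all outputs; labels: (n,) digit ids in 0..9.
        onehot_pm1 = np.where(np.arange(10)[None, :] == labels[:, None], 1.0, -1.0)
        binary_err = np.mean((scores > 0) != (onehot_pm1 > 0))    # mean over the ten tasks
        multi_err = np.mean(np.argmax(scores, axis=1) != labels)  # rank the 1-vs-all outputs
        return binary_err, multi_err

    rng = np.random.default_rng(0)
    labels = rng.integers(0, 10, size=100)
    scores = rng.standard_normal((100, 10))
    print(mtl_metrics(scores, labels))   # roughly (0.5, 0.9) for random scores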
Since the error rates in (Kang et al.,2011; Kumar & Daum ́e III, 2012) were produced on a private subset of MNIST dataset with PCArepresentations only, to ensure a direct comparison, we implement several classic MTL methods andcompare them in Appendix A.For readers interested in the connection to model capacity (number of parameters), we present fur-ther analysis in Appendix B.4.2 H ETEROGENEOUS MTL: F ACE ANALYSISDataset, Settings and Baselines The AdienceFaces (Eidinger et al., 2014) is a large-scale faceimages dataset with the labels of each person’s gender and age group. We use this dataset forthe evaluation of heterogeneous MTL with two tasks: (i) gender classification (two classes) and(ii) age group classification (eight classes). Two independent CNN models for this benchmark areintroduced in (Levi & Hassncer, 2015). The two CNNs have the same architecture except for thelast fully-connected layer, since the heterogeneous tasks have different number of outputs (two /eight). We take these CNNs from (Levi & Hassncer, 2015) as the STL baseline. We again searchfor the best possible user-defined MTL architecture as a strong competitor: the proposed CNN hassix layers – three convolutional and three fully-connected layers. The last fully-connected layer hasnon-shareable parameters because they are of different size. To search the MTL design-space, wetry setting the first N(1N5) layers to be hard shared between the tasks. Running 5-foldcross-validation on the train set to evaluate the architectures, we find the best choice is N= 5(i.e.,all layers fully shared before the final heterogeneous outputs). For our proposed methods, all thelayers before the last heterogeneous dimensionality FC layers are softly shared.We select increasing fractions of the AdienceFaces train split randomly, train the model, and evaluateon the same test set. For reference, there are 12245 images with gender labelled for training, 4007ones for testing, and 11823 images with age group labelled for training, and 4316 ones for testing.Results Fig. 3 shows the error rate for each task. For the gender recognition task, we find that:(i) User-defined MTL is not consistently better than STL, but (ii) our methods, esp., DMTRL-Tucker, consistently outperform both STL and the best user-defined MTL. For the harder age groupclassification task, our methods generally improve on STL. However UD-MTL does not consistentlyimprove on STL, and even reduces performance when the training set is bigger. This is the negativetransfer phenomenon (Rosenstein et al., 2005), where using a transfer learning algorithm is worsethan not using it. This difference in outcomes is attributed to sufficient data eventually providingsome effective task-specific representation. Our methods can discover and exploit this, but UD-MTL’s hard switch between sharing and not sharing can not represent or exploit such increasingtask-specificity of representation.7Published as a conference paper at ICLR 201710-210-1100Fraction of Training Data0.20.250.30.350.40.45Error RateGender Classification10-210-1100Fraction of Training Data0.50.550.60.650.70.75Error RateAge Group ClassificationSTLDMTRL-LAFDMTRL-TuckerDMTRL-TTUD-MTLFigure 3: Heterogeneous MTL: Age and Gender recognition in AdienceFace dataset.4.3 H ETEROGENEOUS MTL: M ULTI -ALPHABET RECOGNITIONDataset, Settings and Baselines We next consider the task of learning to recognise handwrittenletters in multiple languages using the Omniglot (Lake et al., 2015) dataset. 
Omniglot containshandwritten characters in 50 different alphabets (e.g., Cyrillic, Korean, Tengwar), each with its ownnumber of unique characters ( 1455). In total, there are 1623 unique characters, and each hasexactly 20 instances. Here each task corresponds to an alphabet, and the goal is to recognise itscharacters. MTL has a clear motivation here, as cross-alphabet knowledge sharing is likely to beuseful as one is unlikely to have extensive training data for a wide variety of less common alphabets.The images are monochrome of size 105105. We design a CNN with 3convolutional and 2FClayers. The first conv layer has 8filters of size 55; the second conv layer has 12filters of size33, and the third convolutional layer has 16filters of size 33. Each convolutional layer isfollowed by a 22max-pooling. The first FC layer has 64neurons, and the second FC layer hassize corresponding to the number of unique classes in the alphabet. The activation function is tanh .We use a similar strategy to find the best user-defined MTL model: the CNN has 5parametrisedlayers, of which 4layers are potentially shareable. So we tried hard-sharing the first N(1N4)layers. Evaluating these options by 5-fold cross-validation, the best option turned out to be N= 3,i.e., the first three layers are hard shared. For our methods, all four shareable layers are softly shared.Since there is no standard train/test split for this dataset, we use the following setting: We repeat-edly pick at random 5;:::90% of images per class for training. Note that 5%is the minimum,corresponding to one-shot learning. The remaining data are used for evaluation.Results Fig. 4 reports the average error rate across all 50tasks (alphabets). Our proposed MTLmethods surpass the STL baseline in all cases. User-defined MTL does not work well when thetraining data is very small, but does help when training fraction is larger than 50%.Measuring the Learned Sharing Compared to the conventional user-defined sharing architec-tures, our method learns how to share from data. We next try to quantify the amount of sharingestimated by our model on the Omniglot data. Returning to the key factorisation W=LS, wecan find that S-like matrix appears in all variants of proposed method. It is Sin DMTRL-LAF, thetransposedU(N)in DMTRL-Tucker, and U(N)in DMTRL-TT ( Nis the last axis ofW).Sis aKTsize matrix, where Tis the number of tasks, and Kis the number of latent tasks (Kumar& Daum ́e III, 2012) or the dimension of task coding (Yang & Hospedales, 2015). Each columnofSis a set of coefficients that produce the final weight matrix/tensor by linear combination. Ifwe put STL and user-defined MTL (for a certain shared layer) in this framework, we see that STLis to assign (rather than learn )Sto be an identity matrix IT. Similarly, user-defined MTL (for acertain shared layer) is to assign Sto be a matrix with all zeros but one particular row is all ones,e.g.,S= [11T;0]. Between these two extremes, our method learns the sharing structure in S. 
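The sketch below shows how the task-coefficient matrix S can be pulled out of each variant's learned factors, and computes the mean pairwise column similarity (after the absolute-value and column-softmax normalisation described next) that the following paragraph formalises as the sharing strength of Eq. 9. A minimal numpy sketch; the factor-dictionary keys and function names are our own.

    import numpy as np

    def task_matrix(factors, variant):
        # Recover the K x T task-coefficient matrix S from the learned factors.
        if variant == 'LAF':
            return factors['S']            # S itself
        if variant == 'Tucker':
            return factors['U_last'].T     # transpose of U^(N), which is T x K
        if variant == 'TT':
            return factors['U_last']       # U^(N) is already K_{N-1} x T
        raise ValueError(variant)

    def sharing_strength(S):
        # Mean pairwise cosine similarity of the normalised columns of S (Eq. 9).
        A = np.abs(S)                                          # magnitude of coefficients
        E = np.exp(A - A.max(axis=0, keepdims=True))
        A = E / E.sum(axis=0, keepdims=True)                   # column-wise softmax
        A = A / np.linalg.norm(A, axis=0, keepdims=True)       # unit columns for cosine
        T = A.shape[1]
        G = A.T @ A                                            # Gram matrix of cosines
        return (G.sum() - np.trace(G)) / (T * (T - 1))

    print(sharing_strength(np.eye(5)))   # identity-like S gives relatively low sharing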
We propose the following equation to measure the learned sharing strength:

$$\rho = \frac{1}{\binom{T}{2}} \sum_{i<j} \kappa(S_{:,i}, S_{:,j}) = \frac{2}{T(T-1)} \sum_{i<j} \kappa(S_{:,i}, S_{:,j}) \qquad (9)$$

Here $\kappa(a, b)$ is a similarity measure for two vectors $a$ and $b$, and we use cosine similarity. $\rho$ is the average over all combinations of column-wise similarity, so $\rho$ measures how much sharing is encoded by $S$, between $\rho = 0$ for STL (nothing to share) and $\rho = 1$ for user-defined MTL (completely shared). Since $S$ is a real-valued matrix in our scenario, we normalise it before applying Eq. 9: first we take absolute values, because a large value, whether positive or negative, suggests a significant coefficient; second we normalise each column of $S$ by applying a softmax function, so the sum of every column is 1. The motivation behind the second step is to match the range of our $S$ with $S = I_T$ or $S = [\mathbf{1}_{1 \times T}; \mathbf{0}]$, as for those two cases the sum of each column is 1 and the range is $[0, 1]$.

For the Omniglot experiment, we plot the measured sharing amount for training fraction 10%. Fig. 4 reveals that the three proposed methods tend to share more for the bottom layers ('Conv1', 'Conv2', and 'Conv3') and share less for the top layer ('FC1'). This is qualitatively similar to the best user-defined MTL, where the first three layers are fully shared ($\rho = 1$) and the 4th layer is completely unshared ($\rho = 0$). However, our methods: (i) learn this structure in a purely data-driven way, and (ii) benefit from the ability to smoothly interpolate between high and low degrees of sharing as depth increases. As an illustration, Fig. 4 also shows example text from the most and least similar language pairs as estimated at our multilingual character recogniser's FC1 layer (the result can vary across layers).

Figure 4: Results of multi-task learning of multilingual character recognition (Omniglot dataset): sharing strength at each layer (Conv1, Conv2, Conv3, FC1, FC2) for DMTRL-LAF, DMTRL-Tucker, DMTRL-TT and UD-MTL, and alphabet classification error rate as a function of the fraction of training data. Below: illustration of the language pairs estimated to be the most related (left: Georgian Mkhedruli and Inuktitut) and most unrelated (right: Balinese and ULOG) character recognition tasks.

5 CONCLUSION

In this paper, we propose a novel framework for end-to-end multi-task representation learning in contemporary deep neural networks. The key idea is to generalise matrix factorisation-based multi-task ideas to tensor factorisation, in order to flexibly share knowledge in fully connected and convolutional DNN layers. Our method provides consistently better performance than single task learning and comparable or better performance than the best results from exhaustive search of user-defined MTL architectures.
It reduces the design choices and architectural search space that must be ex-plored in the workflow of Deep MTL architecture design (Caruana, 1997; Zhang et al., 2014; Liuet al., 2015), relieving researchers of the need to decide how to structure layer sharing/segregation.Instead sharing structure is determined in a data-driven way on a layer-by-layer basis that moreoverallows a smooth interpolation between sharing and not sharing in progressively deeper layers.Acknowledgements This work was supported by EPSRC (EP/L023385/1), and the EuropeanUnion’s Horizon 2020 research and innovation program under grant agreement No 640891.9Published as a conference paper at ICLR 2017REFERENCESMart ́ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S.Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, AndrewHarp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, ManjunathKudlur, Josh Levenberg, Dan Man ́e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah,Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vin-cent Vanhoucke, Vijay Vasudevan, Fernanda Vi ́egas, Oriol Vinyals, Pete Warden, Martin Watten-berg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learningon heterogeneous systems, 2015. URL http://tensorflow.org/ . Software available fromtensorflow.org.Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. Convex multi-task feature learn-ing. Machine Learning , 2008.Edwin V Bonilla, Kian M Chai, and Christopher Williams. Multi-task gaussian process prediction.InNeural Information Processing Systems (NIPS) , 2007.Rich Caruana. Multitask learning. Machine Learning , 1997.Hal Daum ́e III. Frustratingly easy domain adaptation. In ACL, 2007.Eran Eidinger, Roee Enbar, and Tal Hassner. Age and gender estimation of unfiltered faces. IEEETransactions on Information Forensics and Security , 2014.Theodoros Evgeniou and Massimiliano Pontil. Regularized multi–task learning. In KnowledgeDiscovery and Data Mining (KDD) , 2004.Thomas L. Griffiths and Zoubin Ghahramani. The indian buffet process: An introduction and review.Journal of Machine Learning Research (JMLR) , 2011.Jui-Ting Huang, Jinyu Li, Dong Yu, Li Deng, and Yifan Gong. Cross-language knowledge transferusing multilingual deep neural network with shared hidden layers. In International Conferenceon Acoustics, Speech, and Signal Processing (ICASSP) , 2013.Laurent Jacob, Jean-philippe Vert, and Francis R Bach. Clustered multi-task learning: A convexformulation. In Neural Information Processing Systems (NIPS) , 2009.Zhuoliang Kang, Kristen Grauman, and Fei Sha. Learning with whom to share in multi-task featurelearning. In International Conference on Machine Learning (ICML) , 2011.Tamara G. Kolda and Brett W. Bader. Tensor decompositions and applications. SIAM Review , 2009.Abhishek Kumar and Hal Daum ́e III. Learning task grouping and overlap in multi-task learning. InInternational Conference on Machine Learning (ICML) , 2012.Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learningthrough probabilistic program induction. Science , 2015.Lieven De Lathauwer, Bart De Moor, and Joos Vandewalle. A multilinear singular value decompo-sition. SIAM Journal on Matrix Analysis and Applications , 2000.Vadim Lebedev, Yaroslav Ganin, Maksim Rakhuba, Ivan V . Oseledets, and Victor S. 
Lempitsky.Speeding-up convolutional neural networks using fine-tuned cp-decomposition. In InternationalConference on Learning Representations (ICLR) , 2015.Y . LeCun, L. Bottou, Y . Bengio, and P. Haffner. Gradient-based learning applied to documentrecognition. Proceedings of the IEEE , 1998.G. Levi and T. Hassncer. Age and gender classification using convolutional neural networks. InComputer Vision and Pattern Recognition Workshops (CVPRW) , 2015.Xiaodong Liu, Jianfeng Gao, Xiaodong He, Li Deng, Kevin Duh, and Ye-Yi Wang. Representa-tion learning using multi-task deep neural networks for semantic classification and informationretrieval. NAACL , 2015.10Published as a conference paper at ICLR 2017Alexander Novikov, Dmitry Podoprikhin, Anton Osokin, and Dmitry Vetrov. Tensorizing neuralnetworks. In Neural Information Processing Systems (NIPS) , 2015.I. V . Oseledets. Tensor-train decomposition. SIAM Journal on Scientific Computing , 2011.Alexandre Passos, Piyush Rai, Jacques Wainer, and Hal Daum ́e III. Flexible modeling of latent taskstructures in multitask learning. In International Conference on Machine Learning (ICML) , 2012.Bernardino Romera-paredes, Hane Aung, Nadia Bianchi-berthouze, and Massimiliano Pontil. Mul-tilinear multitask learning. In International Conference on Machine Learning (ICML) , 2013.Michael T. Rosenstein, Zvika Marx, Leslie Pack Kaelbling, and Thomas G. Dietterich. To transferor not to transfer. In In NIPS Workshop, Inductive Transfer: 10 Years Later , 2005.Olivier Sigaud, Clement Masson, David Filliat, and Freek Stulp. Gated networks: an inventory.arXiv , 2015.Sigurd Spieckermann, Steffen Udluft, and Thomas Runkler. Data-effiicient temporal regression withmultitask recurrent neural networks. In NIPS Workshop on Transfer and Multi-Task Learning ,2014.Tian Tan, Yanmin Qian, and Kai Yu. Cluster adaptive training for deep neural network based acousticmodel. IEEE/ACM Trans. Audio, Speech & Language Processing , 24(3):459–468, 2016.L. R. Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika , 1966.Kishan Wimalawarne, Masashi Sugiyama, and Ryota Tomioka. Multitask learning meets tensorfactorization: task imputation via convex optimization. In Neural Information Processing Systems(NIPS) , 2014.Ya Xue, Xuejun Liao, Lawrence Carin, and Balaji Krishnapuram. Multi-task learning for classifica-tion with dirichlet process priors. Journal of Machine Learning Research (JMLR) , 2007.Yongxin Yang and Timothy M. Hospedales. A unified perspective on multi-domain and multi-tasklearning. In International Conference on Learning Representations (ICLR) , 2015.Zhanpeng Zhang, Ping Luo, Chen Change Loy, and Xiaoou Tang. Facial landmark detection bydeep multi-task learning. In European Conference on Computer Vision (ECCV) , 2014.11Published as a conference paper at ICLR 2017A C OMPARISON WITH CLASSIC (SHALLOW ) MTL METHODSWe provide a comparison with classic (shallow, matrix-based) MTL methods for the first experiment(MNIST, binary one-vs-rest classification, 1%training data, mean of error rates for 10-fold CV). Asubtlety in making this comparison is what feature should the classic methods use? Conventionallythey use a PCA feature (obtained by flattening the image, then dimension reduction by PCA). How-ever for visual recognition tasks, performance is better with deep features – a key motivation for ourfocus on deep approaches to MTL. 
We therefore also compare the classic methods when using a feature extracted from the penultimate layer of the CNN network used in our experiment.

    Model                        PCA Feature   CNN Feature
    Single Task Learning         16.89         11.52
    Evgeniou & Pontil (2004)     15.27         10.32
    Argyriou et al. (2008)       15.64          9.56
    Kumar & Daumé III (2012)     14.08          9.41
    DMTRL-LAF                    -              8.25
    DMTRL-Tucker                 -              9.24
    DMTRL-TT                     -              7.31
    UD-MTL                       -              9.34

Table 1: Comparison with classic MTL methods. MNIST binary classification error rate (%).

As expected, the classic methods improve on STL, and they perform significantly better with CNN than PCA features. However, our DMTRL methods still outperform the best classic methods, even when they are enhanced by CNN features. This is due to soft (cf. hard) sharing of the feature extraction layers and the ability to train both the classifier and feature extractor end-to-end. Finally, we note that, more fundamentally, the classic methods are restricted to binary problems (due to their matrix-based nature) and so, unlike our tensor-based approach, they are unsuitable for multi-class problems like Omniglot and age-group classification.

B MODEL CAPACITY AND PERFORMANCE

We list the number of parameters for each model in the first experiment (MNIST, binary one-vs-rest classification) and the performance (1% training data, mean of error rate for 10-fold CV).

    Model           Error Rate (%)   Number of parameters   Ratio
    STL             11.52            4351K                  1.00
    DMTRL-LAF        8.25            1632K                  0.38
    DMTRL-Tucker     9.24            1740K                  0.40
    DMTRL-TT         7.31            2187K                  0.50
    UD-MTL           9.34             436K                  0.10
    UD-MTL-Large     9.39            1644K                  0.38

Table 2: Comparison of deep models: error rate and number of parameters.

The conventional hard-sharing method (UD-MTL) design is to share all layers except the top layer. Its number of parameters is roughly 10% of the single task learning method (STL), as most parameters are shared across the 10 tasks corresponding to the 10 digits. Our soft-sharing methods also significantly reduce the number of parameters compared to STL, but are larger than UD-MTL's hard sharing. To compare our method to UD-MTL while controlling for network capacity, we expanded UD-MTL by adding more hidden neurons so that its number of parameters is close to our methods' (denoted UD-MTL-Large). However, UD-MTL performance does not increase. This is evidence that our model's good performance is not simply due to greater capacity than UD-MTL.
BJKYvt5lg
Published as a conference paper at ICLR 2017PIXEL VAE: A L ATENT VARIABLE MODEL FORNATURAL IMAGESIshaan Gulrajani1, Kundan Kumar1,2, Faruk Ahmed1, Adrien Ali Taiga1,3,Francesco Visin1,5, David Vazquez1,4, Aaron Courville1,61Montreal Institute for Learning Algorithms, Universit ́e de Montr ́eal2Department of Computer Science and Engineering, IIT Kanpur3CentraleSup ́elec4Computer Vision Center & Universitat Autonoma de Barcelona5Politecnico di Milano6CIFAR FellowABSTRACTNatural image modeling is a landmark challenge of unsupervised learning. Varia-tional Autoencoders (V AEs) learn a useful latent representation and model globalstructure well but have difficulty capturing small details. PixelCNN models de-tails very well, but lacks a latent code and is difficult to scale for capturing largestructures. We present PixelV AE, a V AE model with an autoregressive decoderbased on PixelCNN. Our model requires very few expensive autoregressive lay-ers compared to PixelCNN and learns latent codes that are more compressed thana standard V AE while still capturing most non-trivial structure. Finally, we ex-tend our model to a hierarchy of latent variables at different scales. Our modelachieves state-of-the-art performance on binarized MNIST, competitive perfor-mance on 6464ImageNet, and high-quality samples on the LSUN bedroomsdataset.1 I NTRODUCTIONBuilding high-quality generative models of natural images has been a long standing challenge. Al-though recent work has made significant progress (Kingma & Welling, 2014; van den Oord et al.,2016a;b), we are still far from generating convincing, high-resolution natural images.Many recent approaches to this problem are based on an efficient method for performing amor-tized, approximate inference in continuous stochastic latent variables: the variational autoencoder(V AE) (Kingma & Welling, 2014) jointly trains a top-down decoder generative neural network witha bottom-up encoder inference network. V AEs for images typically use rigid decoders that modelthe output pixels as conditionally independent given the latent variables. The resulting model learnsa useful latent representation of the data and effectively models global structure in images, but hasdifficulty capturing small-scale features such as textures and sharp edges due to the conditional inde-pendence of the output pixels, which significantly hurts both log-likelihood and quality of generatedsamples compared to other models.PixelCNNs (van den Oord et al., 2016a;b) are another state-of-the-art image model. Unlike V AEs,PixelCNNs model image densities autoregressively, pixel-by-pixel. This allows it to capture finedetails in images, as features such as edges can be precisely aligned. By leveraging carefully con-structed masked convolutions (van den Oord et al., 2016b), PixelCNNs can be trained efficiently inparallel on GPUs. Nonetheless, PixelCNN models are still very computationally expensive. Unliketypical convolutional architectures they do not apply downsampling between layers, which meansthat each layer is computationally expensive and that the depth of a PixelCNN must grow linearlywith the size of the images in order for it to capture dependencies between far-away pixels. 
Pix-elCNNs also do not explicitly learn a latent representation of the data, which can be useful fordownstream tasks such as semi-supervised learning.Corresponding author; igul222@gmail.com1Published as a conference paper at ICLR 2017Figure 1: Samples from hierarchical PixelV AE on the LSUN bedrooms dataset.Our contributions are as follows:We present PixelV AE, a latent variable model which combines the largely complementaryadvantages of V AEs and PixelCNNs by using PixelCNN-based masked convolutions in theconditional output distribution of a V AE.We extend PixelV AE to a hierarchical model with multiple stochastic layers and autore-gressive decoders at each layer. This lets us autoregressively model not only the outputpixels but also higher-level latent feature maps.On MNIST, we show that PixelV AE: (1) establishes a new state-of-the-art likelihood, (2)performs comparably to PixelCNN using far fewer computationally expensive autoregres-sive layers, (3) learns more compressed latent codes than a standard V AE while still ac-counting for most non-trivial structure, and (4) learns a latent code which separates digitsbetter than a standard V AE.We evaluate hierarchical PixelV AE on two challenging natural image datasets ( 6464ImageNet and LSUN bedrooms). On 6464ImageNet, we report likelihood competitivewith the state of the art at significantly less computational cost. On LSUN bedrooms,we generate high-quality samples and show that hierarchical PixelV AE learns to modeldifferent properties of the scene with each of its multiple layers.2 R ELATED WORKThere have been many recent advancements in generative modeling of images. We briefly discusssome of these below, especially those that are related to our approach.The Variational Autoencoder (V AE) (Kingma & Welling, 2014) is a framework to train neural net-works for generation and approximate inference jointly by optimizing a variational bound on thedata log-likelihood. The use of normalizing flows (Rezende & Mohamed, 2015) improves the flex-ibility of the V AE approximate posterior. Based on this, Kingma et al. (2016) develop an efficientformulation of an autoregressive approximate posterior model using MADE (Germain et al., 2015).In our work, we avoid the need for such flexible inference models by using autoregressive priors.The idea of using autoregressive conditional likelihoods in V AEs has been explored in the context oflanguage modeling in (Bowman et al., 2016), however in that work the use of latent variables failsto improve likelihood over a purely autoregressive model.2Published as a conference paper at ICLR 2017. concatImageEncoderLatentVariablesDecoder PixelCNN layersReconstructionORSampleGeneration: Autoregressive samplingTraining: Teacher forcingORORFigure 2: Our proposed model, PixelV AE, makes use of PixelCNN to model an autoregressive de-coder for a V AE. V AEs, which assume (conditional) independence among pixels, are known to sufferfrom blurry samples, while PixelCNN, modeling the joint distribution, produces sharp samples, butlack a latent representation that might be more useful for downstream tasks. PixelV AE combines thebest of both worlds, providing a meaningful latent representation, while producing sharp samples.Simultaneously to our work, Chen et al. (2016) present a V AE model for images with an an autore-gressive output distribution. In constrast to Chen et al. 
(2016), who focus on models with a single layer of latent variables, we also investigate models with a hierarchy of latent variables (and corresponding autoregressive priors) and show that they enable us to scale our model to challenging natural image datasets.

Another promising recent approach is Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), which pit a generator network and a discriminator network against each other. Recent work has improved training stability (Radford et al., 2015; Salimans et al., 2016) and incorporated inference networks into the GAN framework (Dumoulin et al., 2016; Donahue et al., 2016). GANs generate compelling samples compared to our work, but still exhibit unstable training dynamics and are known to underfit by ignoring modes of the data distribution (Dumoulin et al., 2016). Further, it is difficult to accurately estimate the data likelihood in GANs.

3 PIXELVAE MODEL

Like a VAE, our model jointly trains an "encoder" inference network, which maps an image $x$ to a posterior distribution over latent variables $z$, and a "decoder" generative network, which models a distribution over $x$ conditioned on $z$. The encoder and decoder networks are composed of a series of convolutional layers, respectively with strided convolutions for downsampling in the encoder and transposed convolutions for upsampling in the decoder.

As opposed to most VAE decoders that model each dimension of the output independently (for example, by modeling the output as a Gaussian with diagonal covariance), we use a conditional PixelCNN in the decoder. Our decoder models $x$ as the product of each dimension $x_i$ conditioned on all previous dimensions and the latent variable $z$:

$$p(x|z) = \prod_i p(x_i \,|\, x_1, \dots, x_{i-1}, z)$$

We first transform $z$ through a series of convolutional layers into feature maps with the same spatial resolution as the output image and then concatenate the resulting feature maps with the image. The resulting concatenated feature maps are then further processed by several PixelCNN masked convolutional layers and a final PixelCNN 256-way softmax output.

Unlike typical PixelCNN implementations, we use very few PixelCNN layers in our decoder, relying on the latent variables to model the structure of the input at scales larger than the combined receptive field of our PixelCNN layers. As a result of this, our architecture captures global structure at a much lower computational cost than a standard PixelCNN implementation.

Figure 3: We generate top-down through a hierarchical latent space decomposition. The inference network generates latent variables by composing successive deterministic functions to compute parameters of the stochastic random variables. Dotted lines denote contributions to the cost.

3.1 HIERARCHICAL ARCHITECTURE

The performance of VAEs can be improved by stacking them to form a hierarchy of stochastic latent variables: in the simplest configuration, the VAE at each level models a distribution over the latent variables at the level below, with generation proceeding downward and inference upward through each level (i.e. as in Fig. 3). In convolutional architectures, the intermediate latent variables are typically organized into feature maps whose spatial resolution decreases toward higher levels.

Our model can be extended in the same way. At each level, the generator is a conditional PixelCNN over the latent features in the level below. This lets us autoregressively model not only the output distribution over pixels but also the prior over each set of latent feature maps. The higher-level PixelCNN decoders use diagonal Gaussian output layers instead of the 256-way softmax, and model the dimensions within each spatial location (i.e. across feature maps) independently. This is done for simplicity, but is not a limitation of our model.
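To clarify what the masked convolutions do, the snippet below constructs the standard raster-scan PixelCNN masks (type 'A' for the first layer, which hides the centre pixel; type 'B' for later layers) in the style of van den Oord et al. (2016b). This is a minimal single-channel numpy sketch, not the multi-channel implementation used here.

    import numpy as np

    def pixelcnn_mask(kh, kw, mask_type='B'):
        # Raster-scan mask for a kh x kw kernel. Type 'A' (first layer) hides
        # the centre pixel so x_i never sees itself; type 'B' (later layers)
        # allows the centre, which by then depends only on x_<i.
        mask = np.ones((kh, kw), dtype=np.float32)
        ch, cw = kh // 2, kw // 2
        mask[ch, cw + (1 if mask_type == 'B' else 0):] = 0.0  # right of (or incl.) centre
        mask[ch + 1:, :] = 0.0                                # all rows below centre
        return mask

    print(pixelcnn_mask(5, 5, 'A'))
    # The masked kernel is applied as weights * mask before the usual convolution,
    # so the output at pixel i depends only on pixels above/left of i (and on z,
    # which enters via the concatenated, upsampled latent feature maps).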
The output distributions over the latent variables for the generative and inference networks decompose as follows (see Fig. 3):

$$p(z_1, \dots, z_L) = p(z_L)\, p(z_{L-1} | z_L) \cdots p(z_1 | z_2)$$
$$q(z_1, \dots, z_L | x) = q(z_1 | x) \cdots q(z_L | x)$$

We optimize the negative of the evidence lower bound (the sum of the data negative log-likelihood and the KL-divergence of the posterior over latents with the prior):

$$\mathcal{L}(x, q, p) = -\mathbb{E}_{z_1 \sim q(z_1|x)} \log p(x|z_1) + D_{KL}\big(q(z_1, \dots, z_L|x)\,\|\,p(z_1, \dots, z_L)\big)$$
$$= -\mathbb{E}_{z_1 \sim q(z_1|x)} \log p(x|z_1) + \int_{z_1, \dots, z_L} \prod_{j=1}^{L} q(z_j|x) \sum_{i=1}^{L} \log \frac{q(z_i|x)}{p(z_i|z_{i+1})}\, dz_1 \dots dz_L$$
$$= -\mathbb{E}_{z_1 \sim q(z_1|x)} \log p(x|z_1) + \sum_{i=1}^{L} \int_{z_1, \dots, z_L} \prod_{j=1}^{L} q(z_j|x)\, \log \frac{q(z_i|x)}{p(z_i|z_{i+1})}\, dz_1 \dots dz_L$$
$$= -\mathbb{E}_{z_1 \sim q(z_1|x)} \log p(x|z_1) + \sum_{i=1}^{L} \int_{z_i, z_{i+1}} q(z_{i+1}|x)\, q(z_i|x)\, \log \frac{q(z_i|x)}{p(z_i|z_{i+1})}\, dz_i\, dz_{i+1}$$
$$= -\mathbb{E}_{z_1 \sim q(z_1|x)} \log p(x|z_1) + \sum_{i=1}^{L} \mathbb{E}_{z_{i+1} \sim q(z_{i+1}|x)}\, D_{KL}\big(q(z_i|x)\,\|\,p(z_i|z_{i+1})\big)$$

Note that when specifying an autoregressive prior over each latent level $z_i$, we can leverage masked convolutions (van den Oord et al., 2016b) and samples drawn independently from the approximate posterior $q(z_i|x)$ (i.e. from the inference network) to train efficiently in parallel on GPUs.

4 EXPERIMENTS

4.1 MNIST

    Model                                       NLL Test
    DRAW (Gregor et al., 2016)                  ≤ 80.97
    Discrete VAE (Rolfe, 2016)                  = 81.01
    IAF VAE (Kingma et al., 2016)               ≤ 79.88
    PixelCNN (van den Oord et al., 2016a)       = 81.30
    PixelRNN (van den Oord et al., 2016a)       = 79.20
    VLAE (Chen et al., 2016)                    = 79.03
    Convolutional VAE                           ≤ 87.41
    PixelVAE                                    ≤ 80.64
    Gated PixelCNN (our implementation)         = 80.10
    Gated PixelVAE                              ≤ 79.48 (80.02)
    Gated PixelVAE without upsampling           ≤ 78.96 (79.58)

Table 1: We compare performance of different models on binarized MNIST. "PixelCNN" is the model described in van den Oord et al. (2016a). Our corresponding latent variable model is "PixelVAE". "Gated PixelCNN" and "Gated PixelVAE" use the gated activation function in van den Oord et al. (2016b). In "Gated PixelVAE without upsampling", a linear transformation of the latent variable conditions the (gated) activation in every PixelCNN layer instead of using upsampling layers.

We evaluate our model on the binarized MNIST dataset (Salakhutdinov & Murray, 2008; Lecun et al., 1998) and report results in Table 1. We also experiment with a variant of our model in which each PixelCNN layer is directly conditioned on a linear transformation of the latent variable $z$ (rather than transforming $z$ first through several upsampling convolutional layers), as in van den Oord et al. (2016b), and find that this further improves performance, achieving an NLL upper bound comparable with the current state of the art. We estimate the marginal likelihood of our MNIST model using the importance sampling technique in Burda et al. (2015), which computes a lower bound on the likelihood whose tightness increases with the number of importance samples per datapoint. We use $N = 5000$ samples per datapoint (higher values don't appear to significantly affect the likelihood estimate) and achieve state-of-the-art likelihood.
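The importance-sampling estimator of Burda et al. (2015) used above can be sketched as follows for a single datapoint. The callables log_joint, log_q and sample_q are assumed interfaces of our own naming, and the toy Gaussian check at the end only shows the estimator converging; it is not the paper's evaluation code.

    import numpy as np
    from scipy.special import logsumexp

    def iwae_log_likelihood(log_joint, log_q, sample_q, x, n_samples=5000):
        # Importance-sampled lower bound on log p(x):
        #   log (1/N) sum_n  p(x, z_n) / q(z_n | x),   z_n ~ q(z | x).
        # Tightness increases with n_samples (Burda et al., 2015).
        log_w = np.array([
            log_joint(x, z) - log_q(z, x)
            for z in (sample_q(x) for _ in range(n_samples))
        ])
        return logsumexp(log_w) - np.log(n_samples)

    # Toy check with a 1-D Gaussian model: p(z)=N(0,1), p(x|z)=N(z,1), q=N(0,1),
    # so the true marginal is p(x) = N(x; 0, 2).
    rng = np.random.default_rng(0)
    lj = lambda x, z: -0.5 * (z**2 + (x - z)**2) - np.log(2 * np.pi)
    lq = lambda z, x: -0.5 * z**2 - 0.5 * np.log(2 * np.pi)
    sq = lambda x: rng.standard_normal()
    print(iwae_log_likelihood(lj, lq, sq, x=0.3))   # roughly -1.29 = log N(0.3; 0, 2)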
4.1.1 USING FEW PIXELCNN LAYERS

The masked convolutional layers in PixelCNN are computationally expensive because they operate at the full resolution of the image, and in order to cover the full receptive field of the image PixelCNN typically needs a large number of them. One advantage of our architecture is that we can achieve strong performance with very few PixelCNN layers, which makes training and sampling from our model significantly faster than PixelCNN. To demonstrate this, we compare the performance of our model to PixelCNN as a function of the number of PixelCNN layers (Fig. 4a). We find that with fewer than 10 autoregressive layers, our PixelVAE model performs much better than PixelCNN. This is expected since with few layers, the effective receptive field of the PixelCNN output units is too small to capture long-range dependencies in the data.

We also observe that adding even a single PixelCNN layer has a dramatic impact on the NLL bound of PixelVAE. This is not surprising since the PixelCNN layer helps model local characteristics which are complementary to the global characteristics which a VAE with a factorized output distribution models.

Figure 4: (a) Comparison of the negative log-likelihood upper bound of PixelVAE and the NLL of PixelCNN as a function of the number of PixelCNN layers used. (b) Breakdown of the cost into KL divergence and reconstruction cost.

4.1.2 LATENT VARIABLE INFORMATION CONTENT

Because the autoregressive conditional likelihood function of PixelVAE is expressive enough to model some properties of the image distribution, it isn't forced to account for those properties through its latent variables as a standard VAE is. As a result, we can expect PixelVAE to learn latent representations which are invariant to textures, precise positions, and other attributes which are more efficiently modeled by the autoregressive decoder. To empirically validate this, we train PixelVAE models with different numbers of autoregressive layers (and hence, different PixelCNN receptive field sizes) and plot the breakdown of the NLL bound for each of these models into the reconstruction term $\log p(x|z)$ and the KL divergence term $D_{KL}(q(z|x)\,\|\,p(z))$ (Fig. 4b). The KL divergence term can be interpreted as a measure of the information content in the posterior distribution $q(z|x)$ (in the sense that in expectation, samples from $q(z|x)$ require $KL(q\,\|\,p)$ fewer bits to code under a code optimized for $q$ than under one optimized for $p$ (Burnham & Anderson, 2003)), and hence models with smaller KL terms encode less information in their latent variables.

We observe a sharp drop in the KL divergence term when we use a single autoregressive layer compared to no autoregressive layers, indicating that the latent variables have been freed from having to encode small-scale details in the images. Since the addition of a single PixelCNN layer allows the decoder to model interactions between pixels which are at most 2 pixels away from each other (since our masked convolution filter size is $5 \times 5$), we can also say that most of the non-trivial (long-range) structure in the images is still encoded in the latent variables.

4.1.3 LATENT REPRESENTATIONS

On MNIST, given a sufficiently high-dimensional latent space, VAEs have already been shown to learn representations in which digits are well-separated (Sønderby et al., 2016). However, this task becomes more challenging as the capacity of the latent space is decreased.
PixelV AE’s flexibleoutput distribution should allow it to learn a latent representation which is invariant to small detailsand thus better models global factors of variation given limited capacity.To test this, we train a PixelV AE with a two-dimensional latent space, and an equivalent V AE.We visualize the distribution of test set images in latent space and observe that PixelV AE’s latentrepresentation separates digits significantly better than V AE (Figure 5). To quantify this difference,we train a K-nearest neighbors classifier in the latent space of each model and find that PixelV AE6Published as a conference paper at ICLR 2017(a) (b)Figure 5: Visualization of the MNIST test set in the latent space of (a) convolutional V AE and (b)PixelV AE with two latent dimensions. PixelV AE separates classes more completely than V AE.Figure 6: We visually inspect the variation in image features captured by the different levels ofstochasticity in our model. For the two-level latent variable model trained on 6464LSUN bed-rooms, we vary only the top-level sampling noise (top) while holding the other levels constant,vary only the middle-level noise (middle) , and vary only the bottom (pixel-level) noise (bottom) .It appears that the top-level latent variables learn to model room structure and overall geometry,the middle-level latents model color and texture features, and the pixel-level distribution modelslow-level image characteristics such as texture, alignment, shading.significantly outperforms V AE, achieving a test error of 7.2% compared to V AE’s 22.9%. We alsonote that unlike V AE, PixelV AE learns a representation in which digit identity is largely disentangledfrom other generative factors.4.2 LSUN B EDROOMSTo evaluate our model’s performance with more data and complicated image distributions, we per-form experiments on the LSUN bedrooms dataset (Yu et al., 2015). We use the same preprocessingas in Radford et al. (2015) to remove duplicate images in the dataset. For quantitative experimentswe use a 3232downsampled version of the dataset, and we present samples from a model trainedon the 6464version.We train a two-level PixelV AE with latent variables at 11and88spatial resolutions. We find thatthis outperforms both a two-level convolutional V AE with diagonal Gaussian output and a single-level PixelV AE in terms of log-likelihood and sample quality. We also try replacing the PixelCNNlayers at the higher level with a diagonal Gaussian decoder and find that this hurts log-likelihood,which suggests that multi-scale PixelV AE uses those layers effectively to autoregressively modellatent features.7Published as a conference paper at ICLR 2017Figure 7: Samples from hierarchical PixelV AE on the 64x64 ImageNet dataset.4.2.1 F EATURES MODELED AT EACH LAYERTo see which features are modeled by each of the multiple layers, we draw multiple samples whilevarying the sampling noise at only a specific layer (either at the pixel-wise output or one of thelatent layers) and visually inspect the resulting images (Fig. 6). When we vary only the pixel-level sampling (holding z1andz2fixed), samples are almost indistinguishable and differ only inprecise positioning and shading details, suggesting that the model uses the pixel-level autoregressivedistribution to model only these features. Samples where only the noise in the middle-level (8 8) latent variables is varied have different objects and colors, but appear to have similar basic roomgeometry and composition. 
Finally, samples with varied top-level latent variables have diverse roomgeometry.4.3 6464IMAGE NETThe6464ImageNet generative modeling task was introduced in (van den Oord et al., 2016a) andinvolves density estimation of a difficult, highly varied image distribution. We trained a heirarchicalPixelV AE model (with a similar architecture to the model in section 4.2) on 6464ImageNet andreport validation set likelihood in Table 2. Our model achieves a likelihood competitive with van denOord et al. (2016a;b), despite being substantially less computationally complex. A visual inspectionof ImageNet samples from our model (Fig. 7) also reveals them to be significantly more globallycoherent than samples from PixelRNN.Model NLL Validation (Train) FLOPsConvolutional DRAW (Gregor et al., 2016) 4.10 (4.04) —Real NVP (Dinh et al., 2016) =4.01 (3.93) —PixelRNN (van den Oord et al., 2016a) =3.63 (3.57) 154109Gated PixelCNN (van den Oord et al., 2016b) =3.57 (3.48) 134109Hierarchical PixelV AE 3.62 (3.55) 63109Table 2: Model performance on 6464ImageNet. We achieve competitive NLL at a fraction of thecomputational complexity of other leading models.8Published as a conference paper at ICLR 20175 C ONCLUSIONSIn this paper, we introduced a V AE model for natural images with an autoregressive decoder thatachieves strong performance across a number of datasets. We explored properties of our model,showing that it can generate more compressed latent representations than a standard V AE and that itcan use fewer autoregressive layers than PixelCNN. We established a new state-of-the-art on bina-rized MNIST dataset in terms of likelihood on 6464ImageNet and demonstrated that our modelgenerates high-quality samples on LSUN bedrooms.The ability of PixelV AE to learn compressed representations in its latent variables by ignoring thesmall-scale structure in images is potentially very useful for downstream tasks. It would be interest-ing to further explore our model’s capabilities for semi-supervised classification and representationlearning in future work.ACKNOWLEDGMENTSThe authors would like to thank the developers of Theano (Theano Development Team, 2016) andBlocks and Fuel (van Merri ̈enboer et al., 2015). We acknowledge the support of the followingagencies for research funding and computing support: Ubisoft, Nuance Foundation, NSERC, Cal-cul Quebec, Compute Canada, CIFAR, MEC Project TRA2014-57088-C2-1-R, SGR project 2014-SGR-1506 and TECNIOspring-FP7-ACCI grant.REFERENCESSamuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Ben-gio. Generating sentences from a continuous space. 2016.Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. arXivpreprint arXiv:1509.00519 , 2015.Kenneth P. Burnham and David R. Anderson. Model selection and multi-model inference, 2nd ed.A Practical information-theoretic approach. Springer-Verlag , pp. 78, 2003.Xi Chen, Diederik P Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, IlyaSutskever, and Pieter Abbeel. Variational Lossy Autoencoder. arXiv.org , November 2016.Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP.arXiv.org , May 2016.Jeff Donahue, Philipp Kr ̈ahenb ̈uhl, and Trevor Darrell. Adversarial feature learning. CoRR ,abs/1605.09782, 2016. URL http://arxiv.org/abs/1605.09782 .Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropi-etro, and Aaron Courville. Adversarially learned inference. 
CoRR , abs/1606.00704, 2016.Matthieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. Made: Masked autoencoderfor distribution estimation. CoRR , abs/1502.03509, 2015. URL https://arxiv.org/abs/1502.03509 .Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair,Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Infor-mation Processing Systems , pp. 2672–2680, 2014.Karol Gregor, Frederic Besse, Danilo Jimenez Rezende, Ivo Danihelka, and Daan Wierstra. TowardsConceptual Compression. arXiv.org , April 2016.Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. International Conferenceon Learning Representations (ICLR) , 2014.Diederik P. Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverseautoregressive flow. CoRR , abs/1606.04934, 2016.9Published as a conference paper at ICLR 2017Yann Lecun, Lon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied todocument recognition. In Proceedings of the IEEE , pp. 2278–2324, 1998.Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deepconvolutional generative adversarial networks. CoRR , abs/1511.06434, 2015.Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. InInternational Conference on Machine Learning (ICML) , 2015.Jason Tyler Rolfe. Discrete variational autoencoders. arXiv preprint arXiv:1609.02200 , 2016.Ruslan Salakhutdinov and Iain Murray. On the quantitative analysis of deep belief networks. In InProceedings of the 25th international conference on Machine learning , 2008.Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.Improved techniques for training gans. CoRR , abs/1606.03498, 2016.Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. LadderVariational Autoencoders. arXiv.org , February 2016.Theano Development Team. Theano: A Python framework for fast computation of mathematicalexpressions. arXiv e-prints , abs/1605.02688, May 2016. URL http://arxiv.org/abs/1605.02688 .A ̈aron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks.InInternational Conference on Machine Learning (ICML) , 2016a.A ̈aron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and KorayKavukcuoglu. Conditional image generation with pixelcnn decoders. CoRR , abs/1606.05328,2016b. URL http://arxiv.org/abs/1606.05328 .Bart van Merri ̈enboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio. Blocks and fuel: Frameworks for deep learning.arXiv preprint , abs/1506.00619, 2015. URL http://arxiv.org/abs/1506.00619 .Fisher Yu, Yinda Zhang, Shuran Song, Ari Seff, and Jianxiong Xiao. LSUN: construction of alarge-scale image dataset using deep learning with humans in the loop. CoRR , abs/1506.03365,2015.10Published as a conference paper at ICLR 2017A LSUN B EDROOMS AND 6464 I MAGE NET RECONSTRUCTIONS(a) (b)Figure 8: Reconstructions for (a) LSUN Bedrooms and (b) 64 64 ImageNet. Left-most columnsare images from the test set, and the following 5 columns are top-down generations from the highestlevel of latent variables. We see that the reconstructions capture high-level semantic properties ofthe original images while varying in most of the details. 
We also visualized similar reconstructionsby generations from the lower level of latent variables, and in this case the reconstructions werevisually indistinguishable from the original images.11Published as a conference paper at ICLR 2017B MNIST S AMPLESFigure 9: Samples from a PixelV AE with a receptive field of 7 pixels (left), a PixelCNN with an11-pixel receptive field (middle; roughly the same computational complexity as the PixelV AE), anda PixelCNN with a 7-pixel receptive field (right).C MNIST R ECONSTRUCTIONSFigure 10: Reconstructions from the MNIST test set. Alternate columns are original (left) andreconstructed images (right).12Published as a conference paper at ICLR 2017D M ORE SAMPLES FOR HIERARCHICAL LATENT SPACE VISUALIZATIONSFigure 11: More examples for visualizations of the variation in image features captured at differentlevels of stochasticity. Holding the other levels constant, we vary only the top-level sampling noise(top) , only the middle-level noise (middle) , and only the bottom (pixel-level) noise (bottom) .E M ODEL ARCHITECTUREE.1 MNISTFor our quantitative MNIST experiments, the architectures of our encoder and decoder are as fol-lows. Unless otherwise specified, all convolutional layers use ReLU nonlinearity. We also makean open-source implementation of this model available at https://github.com/igul222/PixelVAE .13Published as a conference paper at ICLR 2017Encoderx!(;)Kernel size Stride Output channelsConvolution 3x3 1 32Convolution 3x3 2 32Convolution 3x3 1 32Convolution 3x3 2 64Pad 77 feature maps to 8 8Convolution 3x3 1 64Convolution 3x3 2 64Convolution 3x3 1 64Convolution 3x3 1 64Convolution 3x3 1 64FlattenLinear - - 2latent dimensionalityDecoderz!xKernel size Stride Output channelsLinear - - 4464Reshape to (64, 4, 4)Convolution 3x3 1 64Convolution 3x3 1 64Transposed convolution 3x3 2 64Convolution 3x3 1 64Crop 88 feature maps to 7 7Transposed convolution 3x3 2 32Convolution 3x3 1 32Transposed convolution 3x3 2 32Convolution 3x3 1 32PixelCNN gated residual block 7x7 1 32PixelCNN gated residual block(s) [ 5x5 ]N 1 32PixelCNN gated convolution 1x1 1 32PixelCNN gated convolution 1x1 1 32Convolution 1x1 1 1E.2 LSUN B EDROOMS AND 6464 I MAGE NETThe LSUN and ImageNet models use the same architecture: all encoders and decoders are residualnetworks; we use pre-activation residual blocks with two 33convolutional layers each and ELUnonlinearity. Some residual blocks perform downsampling, using a 22stride in the second con-volutional layer, or upsampling, using subpixel convolution in the first convolutional layer. Weightnormalization is used in masked convolutional layers; in all other layers, batch normalization isused. We optimize using Adam with learning rate 1e-3. 
E.2 LSUN BEDROOMS AND 64×64 IMAGENET
The LSUN and ImageNet models use the same architecture: all encoders and decoders are residual networks. We use pre-activation residual blocks with two 3×3 convolutional layers each and an ELU nonlinearity (a sketch of this block follows the tables). Some residual blocks perform downsampling, using a stride of 2 in the second convolutional layer, or upsampling, using subpixel convolution in the first convolutional layer. Weight normalization is used in masked convolutional layers; in all other layers, batch normalization is used. We optimize using Adam with learning rate 1e-3. Training proceeds for 400K iterations with batch size 48. For further architectural details, please refer to our open-source implementation at https://github.com/igul222/PixelVAE .

Bottom-level Encoder x → h1
Layer            Kernel size  Resample  Output channels
Embedding        -            -         48
Convolution      1x1          -         192
Residual block   [3x3] × 2    -         192
Residual block   [3x3] × 2    Down 2×   256
Residual block   [3x3] × 2    -         256
Residual block   [3x3] × 2    Down 2×   512
Residual block   [3x3] × 2    -         512
Residual block   [3x3] × 2    -         512
Residual block   [3x3] × 2    -         512

Bottom-level Decoder z1 → x
Layer                          Kernel size  Resample  Output channels
Convolution                    1x1          -         512
Residual block                 [3x3] × 2    -         512
Residual block                 [3x3] × 2    -         512
Residual block                 [3x3] × 2    -         512
Residual block                 [3x3] × 2    Up 2×     256
Residual block                 [3x3] × 2    -         256
Residual block                 [3x3] × 2    Up 2×     192
Residual block                 [3x3] × 2    -         192
Embedding                      -            -         48
PixelCNN gated residual block  [3x3] × 2    -         384
PixelCNN gated residual block  [3x3] × 2    -         384
PixelCNN gated residual block  [3x3] × 2    -         384

Top-level Encoder h1 → h2
Layer            Kernel size  Resample  Output channels
Residual block   [3x3] × 2    -         512
Residual block   [3x3] × 2    -         512
Residual block   [3x3] × 2    Down 2×   512
Residual block   [3x3] × 2    -         512
Residual block   [3x3] × 2    -         512
Residual block   [3x3] × 2    Down 2×   512
Residual block   [3x3] × 2    -         512
Residual block   [3x3] × 2    -         512

Top-level Decoder z2 → z1
Layer                          Kernel size  Resample  Output channels
Linear                         -            -         4 × 4 × 512
Reshape to (512, 4, 4)
Residual block                 [3x3] × 2    -         512
Residual block                 [3x3] × 2    -         512
Residual block                 [3x3] × 2    Up 2×     512
Residual block                 [3x3] × 2    -         512
Residual block                 [3x3] × 2    -         512
Residual block                 [3x3] × 2    Up 2×     512
Residual block                 [3x3] × 2    -         512
Residual block                 [3x3] × 2    -         512
PixelCNN convolution           5x5          -         512
PixelCNN gated residual block  [3x3] × 2    -         512
PixelCNN gated residual block  [3x3] × 2    -         512
PixelCNN gated residual block  [3x3] × 2    -         512
Convolution                    1x1          -         256
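Since every "Residual block" row in the E.2 tables names the same module, here is a minimal PyTorch sketch of a pre-activation residual block with two 3×3 convolutions, ELU, and optional stride-2 downsampling in the second convolution, as described above; the 1×1 projection shortcut and the batch-norm placement are our assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PreActResBlock(nn.Module):
    # Pre-activation ordering: norm -> ELU -> conv, twice. Downsampling
    # (when requested) uses stride 2 in the second conv, per E.2.
    def __init__(self, in_ch, out_ch, downsample=False):
        super().__init__()
        stride = 2 if downsample else 1
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, stride=stride, padding=1)
        self.skip = None                      # assumed 1x1 projection shortcut
        if downsample or in_ch != out_ch:
            self.skip = nn.Conv2d(in_ch, out_ch, 1, stride=stride)

    def forward(self, x):
        h = self.conv1(F.elu(self.bn1(x)))
        h = self.conv2(F.elu(self.bn2(h)))
        shortcut = x if self.skip is None else self.skip(x)
        return h + shortcut
```

An upsampling variant would, per the text, replace the first convolution with a subpixel (pixel-shuffle) convolution instead.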
ryCcJaqgl
Under review as a conference paper at ICLR 2017
TRENET: HYBRID NEURAL NETWORKS FOR LEARNING THE LOCAL TREND IN TIME SERIES
Tao Lin*, Tian Guo* & Karl Aberer
School of Computer and Communication Sciences
École Polytechnique Fédérale de Lausanne
Lausanne, Switzerland
{tao.lin, tian.guo, karl.aberer}@epfl.ch

ABSTRACT
Local trends of time series characterize their intermediate upward and downward patterns. Learning and forecasting the local trend in time series data play an important role in many real applications, ranging from investing in the stock market and resource allocation in data centers to load scheduling in smart grids. Inspired by the recent successes of neural networks, in this paper we propose TreNet, a novel end-to-end hybrid neural network that predicts the local trend of time series based on local and global contextual features. TreNet leverages convolutional neural networks (CNNs) to extract salient features from the local raw data of the time series. Meanwhile, considering the long-range dependencies in the sequence of historical local trends, TreNet uses a long short-term memory recurrent neural network (LSTM) to capture such dependencies. Furthermore, a feature fusion layer is designed in TreNet to learn a joint representation from the features captured by the CNN and the LSTM for predicting the local trend. Our proposed TreNet demonstrates its effectiveness by outperforming conventional CNN, LSTM and HMM methods and various kernel-based baselines on real datasets.

1 INTRODUCTION
Time series, i.e., sequences of data points in time order, are generated in a wide spectrum of domains, such as daily fluctuations of the stock market, power consumption records of households, performance monitoring data of clusters in data centres, and so on. In many applications, users are interested in understanding and forecasting the evolving trend of a time series, since conventional prediction of specific data points delivers very little information about the semantics and dynamics of the underlying process generating the series. For instance, the time series in Figure 1 come from the household power consumption dataset¹. Figure 1(a) shows some raw data points. Though points A and B have approximately the same value, the underlying system is likely to be in two different states when it outputs A and B, because A lies in an upward trend while B lies in a downward trend (Wang et al., 2011; Matsubara et al., 2014). On the other hand, even when two points with similar values are both in an upward trend, e.g., points A and C, the different slopes and durations of the trends in which A and C are located could also indicate different states of the underlying process.
In particular, in this paper we are interested in the local trend of a time series, which measures its intermediate local behaviour, i.e., an upward or downward pattern characterized by its slope and duration (Wang et al., 2011). For instance, in Figure 1(b) the linear segments over the raw data points represent the local trends extracted from a real household power consumption time series. For ease of presentation, we use the terms trend and local trend interchangeably in the rest of the paper. Learning and forecasting local trends are useful in a wide range of applications. For instance, in the stock market, due to its high volatility and noisy environment, predicting stock price trends is in practice preferred over predicting absolute stock values (Atsalakis & Valavanis, 2009).
* These two authors contributed equally.
¹ https://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption
Predicting the local trend of stock price time series empowers traders to design profitable trading strategies (Chang et al., 2012b; Atsalakis & Valavanis, 2009). In the smart energy domain, knowing the predicted local trend of power consumption time series enables energy providers to schedule power supply and maximize energy utilization (Zhao & Magoulès, 2012).
Meanwhile, in recent years neural networks have shown dramatic success in a wide spectrum of domains, e.g., natural language processing, computer vision, speech recognition and time series analysis (Wang et al., 2016b; Sutskever et al., 2014; Yang et al., 2015; Lipton et al., 2015). For time series data, two mainstream architectures, the convolutional neural network (CNN) and the recurrent neural network (RNN), have been exploited in different time-series-related tasks, e.g., RNNs in time series classification (Lipton et al., 2015) and CNNs in activity recognition and snippet learning (Liu et al., 2015; Yang et al., 2015). RNNs are powerful at discovering dependencies in sequence data (Jain et al., 2014; Graves, 2012), and in particular the long short-term memory (LSTM) RNN works well on sequence data with long-term dependencies (Chung et al., 2014; Hochreiter & Schmidhuber, 1997) thanks to its internal memory mechanism. CNNs excel at extracting effective representations of local salience from raw time series data by enforcing local connectivity between neurons (Yang et al., 2015; Hammerla et al., 2016).
Figure 1: (a) Time series of household power consumption. (b) Local trends in time series. (c) Effect of local raw data on trend forecasting.
In this paper, we focus on learning and forecasting local trends in time series via neural networks. This involves learning different aspects of the data. On one hand, the sequence of historical local trends describes the long-term contextual information of the time series and thus naturally affects the evolution of the following local trend. On the other hand, the recent raw data points of the time series (Wang et al., 2011; Batal et al., 2012), which represent its local variation and behaviour, affect the evolution of the following trend as well, and have particular predictive power for abruptly changing local trends (Wang et al., 2011). For instance, in Figure 1(c), trends 1, 2 and 3 form a continuous upward pattern. When we aim to predict the subsequent trend at the end of the third local trend, the three previous successive upward trends suggest a probable further increase. However, the local data around the end of the third trend, e.g., the data points in the red circle, indicate that the time series could stabilize or even decrease, and the data points after the third trend indeed exhibit a decreasing trend, indicated by the red dotted segment. In this case, the subsequent trend depends more on the local data points. Therefore, it is highly desirable to develop a systematic way of modeling such hidden and complementary dependencies in time series for the local trend forecasting problem.
To this end, we propose an end-to-end hybrid neural network, referred to as TreNet.
In particular, TreNet consists of an LSTM recurrent neural network that captures the long-range dependencies in the historical local trends, a convolutional neural network that extracts local features from the local raw data of the time series, and a feature fusion layer that learns a joint representation exploiting both the CNN and LSTM features. This joint representation is used for local trend forecasting. Experimental analysis on real datasets demonstrates that TreNet outperforms individual recurrent neural networks, convolutional neural networks and a variety of baselines in terms of local trend prediction accuracy.
The rest of the paper is organized as follows. Section 2 presents related work, while Section 3 defines the problem to be solved and introduces the notation. In Section 4, we present the proposed TreNet. Section 5 demonstrates the performance of our method and the baselines on real datasets. Finally, the paper is concluded in Section 6. Refer to Sections 7 and 8 for more experimental results and discussion.

2 RELATED WORK
Traditional learning approaches over local trends of time series mainly make use of Hidden Markov Models (HMMs) (Wang et al., 2011; Matsubara et al., 2014). HMMs maintain only short-term state dependencies, i.e., the memoryless Markov property, and a predefined number of states, which requires significant task-specific knowledge. RNNs instead use high-dimensional, distributed hidden states that can take long-term dependencies in sequence data into account. Previous time series segmentation approaches (Keogh et al., 2001; Matsubara et al., 2014; Yuan, 2015) focus on achieving a meaningful segmentation and finding patterns, rather than modeling the relations between segments, and are therefore not suitable for forecasting local trends. Multi-step-ahead prediction is another way to realize local trend prediction, by fitting the predicted values to estimate the local trend; however, multi-step-ahead prediction is a non-trivial problem in itself (Chang et al., 2012a). In this paper, we concentrate on directly learning local trends through neural networks.
RNNs have recently shown promising results in a variety of applications, especially where sequential dependencies exist in the data (Lyu & Zhu, 2014; Chung et al., 2014; Sutskever et al., 2014). Long short-term memory (LSTM) networks (Hochreiter & Schmidhuber, 1997; Lyu & Zhu, 2014; Chung et al., 2014), a class of recurrent neural networks with sophisticated recurrent hidden and gated units, are particularly successful and popular due to their ability to learn long-term sequential dependencies. Lipton et al. (2015) use LSTMs to recognize patterns in multivariate time series, especially for multi-label classification of diagnoses. Chauhan & Vig (2015) and Malhotra et al. (2015) evaluate the ability of LSTMs to detect anomalies in ECG time series. Bidirectional LSTMs (Graves & Schmidhuber, 2005) are usually intended for speech processing rather than time series forecasting. Our paper focuses on using an LSTM to capture the dependencies in the sequence of historical local trends; the LSTM hidden states are further used to learn joint feature representations for local trend forecasting.
CNNs are often used to learn effective representations of local salience from raw data (Vinyals et al., 2015; Donahue et al., 2015; Karpathy et al., 2014). Hammerla et al. (2016), Yang et al. (2015) and Lea et al. (2016) make use of CNNs to extract features from raw time series data for activity/action recognition.
Liu et al. (2015) focus on the prediction of periodic time series values by using a CNN and embedding the time series together with its potential neighbors in the temporal domain. Our proposed TreNet combines the strengths of both LSTM and CNN in a novel, unified neural network architecture for local trend forecasting.
Hybrid neural networks, which combine the strengths of various neural networks, are receiving increasing interest in the computer vision domain, for tasks such as image captioning (Mao et al., 2014; Vinyals et al., 2015; Donahue et al., 2015), image classification (Wang et al., 2016a), protein structure prediction (Li & Yu, 2016) and action recognition (Ballas et al., 2015; Donahue et al., 2015). But efficient exploitation of such hybrid architectures has not been well studied for time series data, especially for the trend forecasting problem. Li & Yu (2016) and Ballas et al. (2015) place CNNs over images in cascade with RNNs in order to capture temporal features for classification. Bashivan et al. (2015) transform EEG data into a sequence of topology-preserving multi-spectral images and then train a cascaded convolutional-recurrent network over these images for EEG classification. Wang et al. (2016a) and Mao et al. (2014) propose CNN-RNN frameworks that learn a shared representation for image captioning and classification. In our proposed TreNet, the LSTM and the CNN first learn, respectively, from the trend evolution and from the local raw data of the time series; TreNet then fuses the features captured by the LSTM and the CNN to predict the trend.

3 PROBLEM FORMULATION
In this section, we formally define the trend learning and forecasting problem studied in this paper.
We define a time series as a sequence of data points X = {x_1, ..., x_T}, where each data point x_t is real-valued and the subscript t denotes the time instant. The corresponding local trend sequence of X is a series of piecewise linear representations of X, denoted by T = {⟨ℓ_k, s_k⟩}. Each element ⟨ℓ_k, s_k⟩ of T describes a linear function over a certain subsequence (or segment) of X and corresponds to one local trend of X. The local trends in T are extracted from X by segmenting the time series and fitting a linear function of time t over each segment (Keogh et al., 2001; Wang et al., 2011). ℓ_k and s_k respectively denote the duration and slope of trend k; ℓ_k is measured by the time range covered by trend k. The local trends in T are time-ordered and non-overlapping, and their durations satisfy Σ_k ℓ_k = T. In addition, the local trend sequence ending by time t is denoted by T(t) = {⟨ℓ_k, s_k⟩ | Σ_k ℓ_k ≤ t}.
Meanwhile, as discussed in Section 1, the local raw data of the time series affects the evolution of the trend as well, and thus we define the local data w.r.t. a certain time instant t as the sequence of data points in a window of size w, denoted by L(t) = {x_{t−w}, ..., x_t}.
At a certain time t, trend forecasting means predicting the duration and slope of the following trend based on the given sequence of historical trends T(t) and the local data L(t). The predicted duration and slope at time t are denoted by ℓ̂_t and ŝ_t. Our proposed TreNet can be trained to predict either ℓ̂_t or ŝ_t; for simplicity, we use ŷ_t to denote the predicted value of TreNet throughout the paper.
Therefore, given the training dataset D = X ∪ T, we aim to propose a neural-network-based approach that learns a function ŷ_t = f(T(t), L(t)) for trend forecasting. In this paper, we focus on univariate time series; the proposed method generalizes naturally to multivariate time series by augmenting the input to the neural network (see Section 8 for more discussion). The sketch below illustrates how training instances of the form (T(t), L(t), y_t) can be assembled from a segmented series.
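As a concrete illustration of the notation above, the following Python sketch builds training instances (T(t), L(t), y_t) from a pre-segmented series. The (duration, slope) segment format comes from Section 3, while the helper name, integer durations, and the choice of the next trend's slope as the target are our own illustrative assumptions.

```python
import numpy as np

def build_instances(x, segments, w):
    """x: 1-D array of raw values; segments: list of (duration, slope)
    local trends in time order, jointly covering x (integer durations
    assumed); w: local window size. Yields one instance per boundary."""
    instances = []
    t = 0
    for k in range(len(segments) - 1):
        duration, _slope = segments[k]
        t += duration                      # end time of trend k
        if t - w < 0:
            continue                       # not enough local history yet
        trend_hist = segments[:k + 1]      # T(t): trends ending by time t
        local = x[t - w:t + 1]             # L(t): the last w+1 raw points
        target = segments[k + 1][1]        # y_t: slope of the next trend
        instances.append((trend_hist, local, target))
    return instances
```

Swapping `segments[k + 1][1]` for `segments[k + 1][0]` yields the duration-prediction variant, mirroring how the paper trains TreNet separately per target.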
4 HYBRID NEURAL NETWORKS FOR TREND LEARNING AND FORECASTING
In this section, we first give an overview of the proposed TreNet for trend forecasting and then detail its components.
Overview. The idea of TreNet is to combine a CNN with an LSTM so as to exploit their representational abilities over different aspects of the training data D (D = X ∪ T), and then to learn a joint feature for trend prediction. Technically, TreNet is designed to learn a predictive function ŷ_t = f(R(T(t)), C(L(t))). R(T(t)) is derived by training the LSTM over the sequence T to capture the dependencies in the trend evolution, while C(L(t)) corresponds to the local features extracted by the CNN from L(t). The long-term and local features captured by the LSTM and the CNN, i.e., R(T(t)) and C(L(t)), convey complementary information about the trend's variation, so the feature fusion layer is meant to take advantage of both and produce a fused representation for improved performance. Finally, the trend prediction is realized by the function f(·, ·), which corresponds to the feature fusion and output layers in Figure 2.
Figure 2: Illustration of the hybrid architecture of TreNet. (Best viewed in colour.)
Learning the dependency in the trend sequence. During the training phase, the duration ℓ_k and slope s_k of each local trend k in the sequence T are fed into the LSTM layer of TreNet. Each j-th neuron of the LSTM layer maintains a memory c_k^j at step k. Its output, i.e., the activation h_k^j of this neuron, is expressed as (Hochreiter & Schmidhuber, 1997; Chung et al., 2014):

h_k^j = o_k^j · tanh(c_k^j)    (1)

where o_k^j is an output gate, calculated as

o_k^j = σ(W_o [ℓ_k, s_k] + U_o h_{k−1} + V_o c_k)^j    (2)

Here [ℓ_k, s_k] is the concatenation of the duration and slope of trend k, h_{k−1} and c_k are the vectors collecting the activations {h_{k−1}^j} and memories {c_k^j}, and σ is the logistic sigmoid function. The memory cell c_k^j is updated by partially forgetting the existing memory and adding new memory content c̃_k^j:

c_k^j = f_k^j · c_{k−1}^j + i_k^j · c̃_k^j,   c̃_k^j = tanh(W_c [ℓ_k, s_k] + U_c h_{k−1})^j    (3)

The extent to which the existing memory is forgotten is modulated by a forget gate f_k^j, and the degree to which the new memory content is added is modulated by an input gate i_k^j. These gates are computed as

f_k^j = σ(W_f [ℓ_k, s_k] + U_f h_{k−1} + V_f c_{k−1})^j    (4)
i_k^j = σ(W_i [ℓ_k, s_k] + U_i h_{k−1} + V_i c_{k−1})^j    (5)

At each step k, the hidden activation h_k is the output passed to the feature fusion layer. Specifically, given a T(t) containing n local trends (i.e., |T(t)| = n), the LSTM output is R(T(t)) = h_n. A worked sketch of one such gated update follows.
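To make equations (1)-(5) concrete, here is a minimal NumPy sketch of a single LSTM step. Following a common convention, the peephole matrices V are treated as diagonal and therefore act elementwise; the weight container P is a plain dict of our own devising, not part of the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(trend, h_prev, c_prev, P):
    """One step of equations (1)-(5). trend = [duration, slope];
    P holds matrices W_*, U_* and (diagonal) peephole vectors V_*."""
    x = np.asarray(trend)                                            # [l_k, s_k]
    f = sigmoid(P['Wf'] @ x + P['Uf'] @ h_prev + P['Vf'] * c_prev)   # (4)
    i = sigmoid(P['Wi'] @ x + P['Ui'] @ h_prev + P['Vi'] * c_prev)   # (5)
    c_tilde = np.tanh(P['Wc'] @ x + P['Uc'] @ h_prev)                # (3), new content
    c = f * c_prev + i * c_tilde                                     # (3), cell update
    o = sigmoid(P['Wo'] @ x + P['Uo'] @ h_prev + P['Vo'] * c)        # (2)
    h = o * np.tanh(c)                                               # (1)
    return h, c
```

Iterating this step over the n trends of T(t) and keeping the final h reproduces R(T(t)) = h_n.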
Learning features from the local raw data of time series. When the k-th trend in T is fed to the LSTM, the corresponding local raw data fed to the CNN part of TreNet is L(t), where t = Σ_{i=1}^{k} ℓ_i. The CNN consists of H stacked layers of 1-d convolution, activation and pooling operations. Denote by a^i the input signal of layer i, so that at the first layer a^1 = L(t). Each layer i has a specified number of filters n_i of a specified filter size d_i. Each filter sweeps through the entire input signal to extract local features:

v_m^{i,j} = φ(b^{i,j} + Σ_{z=m−d_i/2}^{m+d_i/2} W_z^{i,j} a_z^i),   ∀m = 1, ..., |a^i|    (6)

where v_m^{i,j} is the activation of the j-th filter of layer i at position m of the input signal, and φ is the leaky rectified linear unit, which has been shown to perform better (Xu et al., 2015). Max-pooling is then performed over the v_m^{i,j} of each filter. Finally, the output of the CNN part of TreNet is the concatenation of the max-pooled outputs of each filter on the last layer H, namely:

C(L(t)) = [p^1, ..., p^{n_H}],   p^j = [max_{1≤z≤q}(v_{m+z}^{H,j})],   ∀j = 1, ..., n_H    (7)

where q is the pooling size.
Feature fusion and output layers. The feature fusion layer combines the representations R(T(t)) and C(L(t)) to form a joint feature, which is then fed to the output layer to produce the trend prediction. In particular, we first map R(T(t)) and C(L(t)) to the same feature space and add them together to obtain the activation of the feature fusion layer (Mao et al., 2014). The output layer is a fully connected layer following the feature fusion layer. Mathematically, the prediction of TreNet is expressed as:

ŷ_t = f(R(T(t)), C(L(t))) = W^o φ(W^r R(T(t)) + W^c C(L(t))) + b^o    (8)

where the term inside φ(·) is the feature fusion, φ(·) is the element-wise leaky ReLU activation function, and + denotes element-wise addition. W^o and b^o are the weights and bias of the output layer.
To train TreNet, we adopt the squared error function plus a regularization term:

J(W, b; T, X) = (1/|T|) Σ_{k=1}^{|T|} (ŷ_k − y_k)² + λ‖W‖²    (9)

where W and b represent the weight and bias parameters of TreNet, λ is the hyperparameter of the regularization term, and y_k is the true value of the trend slope or duration.
The cost function is differentiable, and the architecture of TreNet allows the gradients of the loss (9) to be backpropagated to both the LSTM and the CNN parts. TreNet is trained separately for the slope and the duration of local trends using T and X. When forecasting, T(t) and L(t) are fed to TreNet and the predicted value ŷ_k is either the slope or the duration, depending on the training target. A sketch of the fusion and output computation follows.
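Here is a minimal PyTorch sketch of the fusion and output computation in equation (8), together with the objective in equation (9); the layer dimensions are illustrative stand-ins rather than the paper's tuned values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionHead(nn.Module):
    # Equation (8): map R(T(t)) and C(L(t)) into a shared space, add
    # element-wise, apply leaky ReLU, then a linear output layer.
    def __init__(self, lstm_dim=600, cnn_dim=32, fused_dim=600):
        super().__init__()
        self.Wr = nn.Linear(lstm_dim, fused_dim, bias=False)
        self.Wc = nn.Linear(cnn_dim, fused_dim, bias=False)
        self.out = nn.Linear(fused_dim, 1)   # predicts slope or duration

    def forward(self, r, c):
        return self.out(F.leaky_relu(self.Wr(r) + self.Wc(c)))

def trenet_loss(pred, target, weights, lam=5e-4):
    # Equation (9): mean squared error plus an L2 penalty on the weights.
    return F.mse_loss(pred, target) + lam * sum(w.pow(2).sum() for w in weights)
```

In practice the L2 term would often be folded into the optimizer's weight-decay setting instead of being computed explicitly.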
5 EXPERIMENTAL ANALYSIS
In this section, we conduct extensive experiments to demonstrate the prediction performance of TreNet against a variety of baselines. Due to the page limit, refer to Section 7 for more experimental results.
5.1 EXPERIMENT SETUP
Dataset: We test our method and the baselines on three real time series datasets.
• Daily Household Power Consumption (HousePC). This dataset² contains measurements of electric power consumption in one household, with a one-minute sampling rate, over a period of almost 4 years. Different electrical quantities and some sub-metering values are available. We use the voltage time series throughout the experiments.
• Gas Sensor (GasSensor). This dataset³ contains recordings of chemical sensors exposed to dynamic gas mixtures at varying concentrations. The measurement consists of continuous acquisition of the sensor array signals for about 12 hours without interruption. We mainly use the gas mixture time series for ethylene and methane in air.
• Stock Transaction (Stock). This dataset is extracted from Yahoo Finance and contains daily stock transaction information from the New York Stock Exchange from 1950-10 to 2016-04.
All datasets are preprocessed with the method of Keogh et al. (2001) to extract local trends; alternative segmentation and local trend extraction approaches could be used as well, but we choose Keogh et al. (2001) for its high efficiency. In total, we obtain 42591, 4720 and 1316 local trends respectively from the above datasets. For ease of interpretation, the slope of an extracted local trend is represented by the angle of the corresponding linear function and is thus bounded in [−90, 90]; the duration of a local trend is measured by the number of data points within it. The obtained trend sequences and the sets of local data are then split into training (80%), validation (10%) and test (10%) sets.
Baselines: We compare TreNet with the following six baselines:
• CNN. Predicts the trend using only a CNN over the set of local raw data; the size of the local data is w, as defined in Section 3.
• LSTM. Learns the dependencies in the trend sequence T and predicts the trend using only the trained LSTM.
• Support Vector Regression (SVR). A family of support vector regression approaches with three commonly used kernels (Liu et al., 2015): the radial basis kernel (SVRBF), the polynomial kernel (SVPOLY) and the sigmoid kernel (SVSIG). The trend sequence and the corresponding set of local time series data are concatenated as input features.
• Pattern-based Hidden Markov Model (pHMM). Wang et al. (2011) proposed a pattern-based HMM which segments the time series and models the dependencies between segments via an HMM. The derived model is used to predict the state of the time series and then to estimate the trend from that state.
• Naive. Takes the duration and slope of the last trend as the prediction for the next one.
• ConvNet+LSTM (CLSTM). Based on the cascaded ConvNet-LSTM of Bashivan et al. (2015), which feeds the features learnt by the ConvNet over the time series into an LSTM and obtains the prediction from the LSTM.

Table 1: RMSE of the prediction of local trend duration and slope on each dataset.
Dataset    Model    RMSE @ Duration  RMSE @ Slope
HousePC    CNN       27.51           13.56
           LSTM      27.27           13.27
           SVRBF     31.81           12.94
           SVPOLY    31.81           12.93
           SVSIG     31.80           12.93
           pHMM      34.06           26.00
           Naive     39.68           21.17
           CLSTM     25.97           13.77
           TreNet    25.89           12.89
Stock      CNN       18.87           12.78
           LSTM      11.07            8.40
           SVRBF     11.38            7.40
           SVPOLY    11.40            7.42
           SVSIG     11.49            7.41
           pHMM      36.37            8.70
           Naive     11.36            8.58
           CLSTM      9.26            7.31
           TreNet     8.86            6.84
GasSensor  CNN       53.99           11.51
           LSTM      55.77           11.22
           SVRBF     62.81           10.21
           SVPOLY    70.91           10.95
           SVSIG     85.69           11.92
           pHMM     111.62           13.07
           Naive     53.76           10.57
           CLSTM     54.20           14.86
           TreNet    52.28            9.57

Evaluation metric: We evaluate the predictive performance of TreNet and the baselines in terms of root mean square error (RMSE); the lower the RMSE, the more accurate the predictions.
Training: The training procedure of TreNet and the baselines follows the schema below. The CNN and LSTM components in TreNet share the same network structure (e.g., number of layers and neurons per layer) as the CNN and LSTM baselines. The CNN has two stacked convolutional layers with 32 filters of sizes 2 and 4; the LSTM has 600 memory cells. For the baseline CNN and LSTM, we tune the learning rate for each approach over {10⁻¹, 10⁻², 10⁻³, 10⁻⁴, 10⁻⁵} (Sutskever et al., 2013) so as to achieve the lowest prediction error, and then fix it. For TreNet, in addition to the learning rate, the number of neurons in the feature fusion layer is chosen from {300, 600, 900, 1200} to achieve the best performance.
² https://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption
³ https://archive.ics.uci.edu/ml/datasets/Gas+sensor+array+under+dynamic+gas+mixtures
We use dropout and L2 regularization to control the capacity of the neural networks and prevent overfitting, setting their values to 0.5 and 5·10⁻⁴ respectively for all datasets (Mao et al., 2014). The Adam optimizer (Kingma & Ba, 2014) is used to learn the network weights.
For the SVR-based approaches, we carefully tune the parameters c (error penalty), d (degree of the kernel function) and γ (kernel coefficient), selecting c ∈ {10⁻⁵, 10⁻⁴, ..., 1, ..., 10⁴, 10⁵}, d ∈ {1, 2, 3} and γ ∈ {10⁻⁵, ..., 1, ..., 10⁵}. We iterate through each combination of c, d and γ, keep the combination yielding the lowest RMSE on the validation set, and use it to predict on the test set.
The training data for the SVR and pHMM baselines is the same as for TreNet. Likewise, the CNN and LSTM baselines are fed, respectively, the set of local data and the trend sequence of the same size as TreNet's. In addition, since the window size of the local data is tunable, we vary w over {100, 300, 500, 700, 900} to investigate how the size of the local data influences prediction performance; the results are presented in Section 5.2. Each model's performance on the validation set is evaluated after every training epoch. Every model is trained for at least 50 epochs, and training stops early if no further improvement on the validation set appears within 50 epochs.
5.2 EXPERIMENT RESULTS
Table 1 reports the prediction performance of TreNet and the baselines. For each dataset, the window size of the local data is held constant across the approaches that take local data as input (CNN, SVRBF, SVPOLY, SVSIG, pHMM and TreNet); the results of each approach are obtained by tuning the corresponding parameters as described in Section 5.1.
In Table 1, we observe that TreNet consistently outperforms the baselines on both duration and slope prediction, achieving up to roughly 30% lower errors. This verifies that the hybrid architecture of TreNet improves performance by exploiting the information captured by both the CNN and the LSTM. The pHMM method performs worst, due to the limited representational capability of HMMs. On slope prediction, the SVR-based approaches obtain results comparable to TreNet's.
In the following group of experiments, we investigate the effect of the local data size (i.e., w) on prediction. We tune the local data size for the approaches whose input features contain local data (CNN, SVRBF, SVPOLY, SVSIG, pHMM and TreNet) and observe the prediction errors; LSTM consumes only the trend sequence and is thus not included. Due to the page limit, we report the results on the HousePC dataset in Tables 2 and 3; the results on the Stock and GasSensor datasets are given in Section 7. The Naive baseline takes no raw time series data as input, and CLSTM works on the whole time series and has no local data, so both are excluded from this set of experiments.
In Table 2, we observe that TreNet has the lowest duration prediction errors across all window sizes. pHMM requires sufficient data points to model the relations between segments and fails on windows of size 100.
As the window size increases and more local data points are fed to the training process, the prediction errors of CNN and TreNet decrease or nearly stabilize. This is likely because only a certain amount of local data has predictive power: the filtering and pooling mechanism lets the CNN focus on the local data with strong predictive power, so providing more local data yields only marginal improvements. A similar pattern is observed for slope prediction, as shown in Table 3. For more results and discussion, please refer to Section 7.

Table 2: RMSE of the duration predictions w.r.t. different sizes of local data on the HousePC dataset.
Window size  CNN    SVRBF  SVPOLY  SVSIG  pHMM   TreNet
100          29.37  31.48  31.96   31.88  -      25.93
300          27.33  31.17  31.61   31.66  30.03  25.94
500          27.51  31.81  31.81   31.80  34.06  25.89
700          27.41  31.10  31.09   31.11  27.37  25.72
900          27.42  31.28  31.27   31.27  28.45  25.62

Table 3: RMSE of the slope predictions w.r.t. different sizes of local data on the HousePC dataset.
Window size  CNN    SVRBF  SVPOLY   SVSIG    pHMM   TreNet
100          13.68  12.93  12.9352  12.9346  -      13.14
300          13.60  12.93  12.9346  12.9345  27.75  13.15
500          13.56  12.94  12.9342  12.9346  26.00  12.89
700          13.52  12.93  12.9345  12.9345  35.32  12.86
900          13.60  12.94  12.9350  12.9346  37.60  12.96

6 CONCLUSION
In this paper we propose TreNet, a novel hybrid neural network that learns and predicts the local trend behaviour of time series. The experimental results demonstrate that such a hybrid framework can indeed exploit the complementary information extracted by the CNN and the LSTM to improve prediction performance. Moreover, the architecture is generic and extensible: additional exogenous time series can be fed to TreNet to boost performance and to investigate the effect of different data sources on trend evolution.

REFERENCES
George S Atsalakis and Kimon P Valavanis. Forecasting stock market short-term trends using a neuro-fuzzy based methodology. Expert Systems with Applications, 36(7):10696–10707, 2009.
Nicolas Ballas, Li Yao, Chris Pal, and Aaron Courville. Delving deeper into convolutional networks for learning video representations. arXiv preprint arXiv:1511.06432, 2015.
Pouya Bashivan, Irina Rish, Mohammed Yeasin, and Noel Codella. Learning representations from EEG with deep recurrent-convolutional neural networks. arXiv preprint arXiv:1511.06448, 2015.
Iyad Batal, Dmitriy Fradkin, James Harrison, Fabian Moerchen, and Milos Hauskrecht. Mining recent temporal patterns for event detection in multivariate time series data. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 280–288. ACM, 2012.
Li-Chiu Chang, Pin-An Chen, and Fi-John Chang. Reinforced two-step-ahead weight adjustment technique for online training of recurrent neural networks. IEEE Transactions on Neural Networks and Learning Systems, 23(8):1269–1278, 2012a.
Pei-Chann Chang et al. A novel model by evolving partially connected neural network for stock price trend forecasting. Expert Systems with Applications, 39(1):611–620, 2012b.
Sucheta Chauhan and Lovekesh Vig. Anomaly detection in ECG time signals via deep long short-term memory networks. In IEEE International Conference on Data Science and Advanced Analytics (DSAA), pp. 1–7. IEEE, 2015.
Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling.
arXiv preprint arXiv:1412.3555, 2014.
Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2625–2634, 2015.
A. Graves. Supervised Sequence Labelling with Recurrent Neural Networks. Studies in Computational Intelligence. Springer, 2012.
Alex Graves and Jürgen Schmidhuber. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5):602–610, 2005.
Nils Y Hammerla, Shane Halloran, and Thomas Ploetz. Deep, convolutional, and recurrent models for human activity recognition using wearables. arXiv preprint arXiv:1604.08880, 2016.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
Lakhmi C Jain, Manjeevan Seera, Chee Peng Lim, and P Balasubramaniam. A review of online learning in supervised neural networks. Neural Computing and Applications, 25(3-4):491–509, 2014.
Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732, 2014.
Eamonn Keogh, Selina Chu, David Hart, and Michael Pazzani. An online algorithm for segmenting time series. In Proceedings of the IEEE International Conference on Data Mining (ICDM), pp. 289–296. IEEE, 2001.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Colin Lea, Rene Vidal, Austin Reiter, and Gregory D Hager. Temporal convolutional networks: A unified approach to action segmentation. arXiv preprint arXiv:1608.08242, 2016.
Zhen Li and Yizhou Yu. Protein secondary structure prediction using cascaded convolutional and recurrent neural networks. arXiv preprint arXiv:1604.07176, 2016.
Zachary C Lipton, David C Kale, Charles Elkan, and Randall Wetzell. Learning to diagnose with LSTM recurrent neural networks. arXiv preprint arXiv:1511.03677, 2015.
Jiajun Liu, Kun Zhao, Brano Kusy, Ji-rong Wen, and Raja Jurdak. Temporal embedding in convolutional neural networks for robust learning of abstract snippets. arXiv preprint arXiv:1502.05113, 2015.
Qi Lyu and Jun Zhu. Revisit long short-term memory: An optimization perspective. In Advances in Neural Information Processing Systems Workshop on Deep Learning and Representation Learning, 2014.
Pankaj Malhotra, Lovekesh Vig, Gautam Shroff, and Puneet Agarwal. Long short term memory networks for anomaly detection in time series. In European Symposium on Artificial Neural Networks, volume 23, 2015.
Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, and Alan Yuille. Deep captioning with multimodal recurrent neural networks (m-RNN). arXiv preprint arXiv:1412.6632, 2014.
Yasuko Matsubara, Yasushi Sakurai, and Christos Faloutsos. AutoPlait: Automatic mining of co-evolving time sequences. In Proceedings of the 2014 ACM SIGMOD International Conference on Management of Data, pp. 193–204. ACM, 2014.
Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning (ICML), pp. 1139–1147, 2013.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le.
Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104–3112, 2014.
Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3156–3164, 2015.
Jiang Wang, Yi Yang, Junhua Mao, Zhiheng Huang, Chang Huang, and Wei Xu. CNN-RNN: A unified framework for multi-label image classification. arXiv preprint arXiv:1604.04573, 2016a.
Linlin Wang, Zhu Cao, Yu Xia, and Gerard de Melo. Morphological segmentation with window LSTM neural networks. In Thirtieth AAAI Conference on Artificial Intelligence, 2016b.
Peng Wang, Haixun Wang, and Wei Wang. Finding semantics in time series. In Proceedings of the 2011 ACM SIGMOD International Conference on Management of Data, pp. 385–396. ACM, 2011.
Bing Xu, Naiyan Wang, Tianqi Chen, and Mu Li. Empirical evaluation of rectified activations in convolutional network. arXiv preprint arXiv:1505.00853, 2015.
Jian Bo Yang, Minh Nhut Nguyen, Phyo Phyo San, Xiao Li Li, and Shonali Krishnaswamy. Deep convolutional neural networks on multichannel time series for human activity recognition. In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI), pp. 25–31, 2015.
Chao Yuan. Unsupervised machine condition monitoring using segmental hidden Markov models. In Proceedings of the 24th International Joint Conference on Artificial Intelligence, pp. 4009–4016. AAAI Press, 2015.
Hai-xiang Zhao and Frédéric Magoulès. A review on the prediction of building energy consumption. Renewable and Sustainable Energy Reviews, 16(6):3586–3592, 2012.

7 APPENDIX
7.1 DATA PRE-PROCESSING
In this part, we describe the data pre-processing step, which extracts the local trend sequence from the raw time series data for the subsequent neural network training and testing.
We convert the raw time series into a piecewise linear representation, i.e., consecutive segments (Keogh et al., 2001; Wang et al., 2011). Each segment corresponds to one local trend and is fitted by a linear function of the time series value w.r.t. time, e.g., x_t = β_1 t + β_0 + ε over the time range [t_1, t_2) of the segment. The slope and duration are then derived from the coefficient β_1 and from [t_1, t_2).
Technically, we adopt the bottom-up approach of Keogh et al. (2001), since it achieves lower approximation errors than top-down and sliding-window methods. The process is illustrated in Figure 3. Initially, we approximate the time series X with ⌊T/2⌋ line segments (T is the length of the time series). We then iteratively merge neighbouring segments into longer ones: in each iteration, the pair of neighbouring segments with the minimal approximation error is merged into a new segment. The merging repeats until every possible merge would produce a segment with an error above a specified threshold. We use the relative mean squared error as the error metric and set the threshold to 0.05. A sketch of this procedure follows.
Figure 3: Illustration of local trend extraction via time series segmentation. (Best viewed in colour.)
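The following Python sketch captures this bottom-up merging. It assumes the "relative mean squared error" is the residual MSE of a least-squares line fit normalized by the segment's variance; that normalization, like the initial two-point segmentation, is our reading of the description above rather than the authors' code.

```python
import numpy as np

def fit_error(x, lo, hi):
    """Relative MSE of a least-squares line fit to x[lo:hi]."""
    if hi - lo < 3:
        return 0.0                          # tiny segments fit trivially
    t = np.arange(lo, hi)
    seg = x[lo:hi]
    b1, b0 = np.polyfit(t, seg, 1)
    resid = seg - (b1 * t + b0)
    return np.mean(resid ** 2) / (np.var(seg) + 1e-12)

def bottom_up_segment(x, max_error=0.05):
    """Bottom-up piecewise linear segmentation (Keogh et al., 2001):
    start from ~T/2 two-point segments, then greedily merge the
    neighbouring pair with the smallest merge error."""
    bounds = list(range(0, len(x), 2)) + [len(x)]
    while len(bounds) > 2:
        costs = [fit_error(x, bounds[i], bounds[i + 2])
                 for i in range(len(bounds) - 2)]
        i = int(np.argmin(costs))
        if costs[i] > max_error:
            break
        del bounds[i + 1]                   # merge segments i and i+1
    return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]
```

Each returned (start, end) pair yields one ⟨duration, slope⟩ local trend via the linear fit over that range.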
7.2 ADDITIONAL EXPERIMENT RESULTS
Figure 4: Visualization of the trend prediction by TreNet on (a) HousePC, (b) Stock and (c) GasSensor. The blue line in each figure represents the historical trend sequence; the yellow line represents the predicted local trend.
In this group of experiments, we visualize the trend prediction on one sample test instance from each dataset in Figure 4. We observe that on HousePC, TreNet successfully predicts the change of trend, even though it is preceded by successive upward trends. On the Stock and GasSensor datasets, the succeeding upward and downward trends are correctly predicted as well.

Table 4: RMSE of the duration predictions for different sizes of local data on the Stock dataset.
Window size  CNN    SVRBF  SVPOLY  SVSIG  pHMM   TreNet
100          18.87  11.38  11.40   11.49  -      8.86
300          18.17  11.41  11.44   11.42  39.84  8.85
500          18.06  11.39  11.44   11.36  32.10  8.51
700          18.10  11.45  11.59   11.58  36.37  8.58
900          18.07  11.32  11.47   11.59  38.36  8.78

Table 5: RMSE of the slope predictions for different sizes of local data on the Stock dataset.
Window size  CNN    SVRBF  SVPOLY  SVSIG  pHMM   TreNet
100          12.78  7.40   7.42    7.41   -      6.84
300          12.24  7.42   7.51    7.38   6.67   6.53
500          12.13  7.47   7.41    7.42   7.59   6.58
700          12.24  7.53   7.58    7.51   9.74   6.75
900          12.25  7.61   7.45    7.59   14.00  6.73

We then report the RMSE w.r.t. the varying window size on the Stock and GasSensor datasets in Tables 4, 5, 6 and 7. From these results, we observe that TreNet outperforms the baselines at almost all window sizes. Meanwhile, the prediction errors typically decrease and then stabilize as the window size varies.
Window size of local data: The above observations w.r.t. the varying window size suggest how to choose the window size of the local data. Given the training dataset, we can find the maximum duration of the local trends and take it as the local data size. This ensures that the local data range of each training instance covers the most recent local trend, whose raw data is believed to have strong predictive power for the subsequent trend. Additionally, we observe that setting the window size of CNN and TreNet this way achieves prediction errors comparable to those obtained with larger window sizes.

Table 6: RMSE of the duration predictions for different sizes of local data on the GasSensor dataset.
Window size  CNN    SVRBF  SVPOLY  SVSIG  pHMM    TreNet
100          54.23  57.77  65.99   99.78  -       53.91
300          53.99  62.81  70.91   85.69  -       52.28
500          53.82  61.86  64.33   91.51  111.62  51.77
700          53.14  61.20  63.89   78.20  175.36  51.15
900          53.19  61.45  63.83   68.09  255.73  51.25

Table 7: RMSE of the slope predictions for different sizes of local data on the GasSensor dataset.
Window size  CNN    SVRBF  SVPOLY  SVSIG  pHMM   TreNet
100          11.98  11.16  11.19   12.48  -      10.30
300          11.51  10.21  10.95   11.92  -      9.57
500          11.75  10.08  10.65   11.64  13.07  9.60
700          11.59  9.54   10.44   11.72  12.29  9.55
900          12.10  9.61   10.37   11.54  12.37  9.46

8 DISCUSSION
For multivariate time series, we can augment the input of TreNet with the trend sequences and local data of exogenous time series and then train TreNet for a certain target time series to predict its trend. Another line of research is to equip TreNet with multi-task learning. This is motivated by the observation that if we decompose trend forecasting into classification and regression, for the slope and duration respectively, we can exploit the correlation between slope and duration to boost prediction performance. In addition, there could be alternative frameworks for combining the outputs of the CNN and the LSTM, and our work opens the door to applying hybrid neural networks to trend analysis in time series.
Hk95PK9le
Published as a conference paper at ICLR 2017
DEEP BIAFFINE ATTENTION FOR NEURAL DEPENDENCY PARSING
Timothy Dozat, Stanford University, tdozat@stanford.edu
Christopher D. Manning, Stanford University, manning@stanford.edu

ABSTRACT
This paper builds off recent work from Kiperwasser & Goldberg (2016) using neural attention in a simple graph-based dependency parser. We use a larger but more thoroughly regularized parser than other recent BiLSTM-based approaches, with biaffine classifiers to predict arcs and labels. Our parser gets state of the art or near state of the art performance on standard treebanks for six different languages, achieving 95.7% UAS and 94.1% LAS on the most popular English PTB dataset. This makes it the highest-performing graph-based parser on this benchmark, outperforming Kiperwasser & Goldberg (2016) by 1.8% and 2.2%, and comparable to the highest-performing transition-based parser (Kuncoro et al., 2016), which achieves 95.8% UAS and 94.6% LAS. We also show which hyperparameter choices had a significant effect on parsing accuracy, allowing us to achieve large gains over other graph-based approaches.

1 INTRODUCTION
Dependency parsers, which annotate sentences in a way designed to be easy for humans and computers alike to understand, have been found to be extremely useful for a sizable number of NLP tasks, especially those involving natural language understanding in some way (Bowman et al., 2016; Angeli et al., 2015; Levy & Goldberg, 2014; Toutanova et al., 2016; Parikh et al., 2015). However, frequent incorrect parses can severely inhibit final performance, so improving the quality of dependency parsers is needed for the improvement and success of these downstream tasks.
The current state-of-the-art transition-based neural dependency parser (Kuncoro et al., 2016) substantially outperforms many much simpler neural graph-based parsers. We modify the neural graph-based approach first proposed by Kiperwasser & Goldberg (2016) in a few ways to achieve competitive performance: we build a network that is larger but uses more regularization; we replace the traditional MLP-based attention mechanism and affine label classifier with biaffine ones; and rather than using the top recurrent states of the LSTM in the biaffine transformations, we first put them through MLP operations that reduce their dimensionality. Furthermore, we compare models trained with different architectures and hyperparameters to motivate our approach empirically. The resulting parser maintains most of the simplicity of neural graph-based approaches while approaching the performance of the SOTA transition-based one.

2 BACKGROUND AND RELATED WORK
Transition-based parsers, such as shift-reduce parsers, parse sentences from left to right, maintaining a "buffer" of words that have not yet been parsed and a "stack" of words whose head has not been seen or whose dependents have not all been fully parsed. At each step, transition-based parsers can access and manipulate the stack and buffer and assign arcs from one word to another. One can then train any multi-class machine learning classifier on features extracted from the stack, buffer, and previous arc actions in order to predict the next action.
Chen & Manning (2014) make the first successful attempt at incorporating deep learning into a transition-based dependency parser.
At each step, the (feedforward) network assigns a probability to each action the parser can take, based on word, tag, and label embeddings from certain words on the stack and buffer.
Figure 1: A dependency tree parse for Casey hugged Kim, including part-of-speech tags and a special root token (root/ROOT Casey/NNP hugged/VBD Kim/NNP, with arcs root, nsubj, dobj). Directed edges (or arcs) with labels (or relations) connect the verb to the root and the arguments to the verb head.
A number of other researchers have attempted to address some limitations of Chen & Manning's parser by augmenting it with additional complexity: Weiss et al. (2015) and Andor et al. (2016) augment it with a beam search and a conditional random field loss objective to allow the parser to "undo" previous actions once it finds evidence that they may have been incorrect; and Dyer et al. (2015) and Kuncoro et al. (2016) instead use LSTMs to represent the stack and buffer, getting state-of-the-art performance by building in a way of composing parsed phrases together.
Transition-based parsing processes a sentence sequentially to build up a parse tree one arc at a time. Consequently, these parsers don't use machine learning for directly predicting edges; they use it for predicting the operations of the transition algorithm. Graph-based parsers, by contrast, use machine learning to assign a weight or probability to each possible edge and then construct a maximum spanning tree (MST) from these weighted edges. Kiperwasser & Goldberg (2016) present a neural graph-based parser (in addition to a transition-based one) that uses the same kind of attention mechanism as Bahdanau et al. (2014) for machine translation. In Kiperwasser & Goldberg's 2016 model, the (bidirectional) LSTM's recurrent output vector for each word is concatenated with each possible head's recurrent vector, and the result is used as input to an MLP that scores each resulting arc. The predicted tree structure at training time is the one where each word depends on its highest-scoring head. Labels are generated analogously, with each word's recurrent output vector and its gold or predicted head word's recurrent vector being used in a multi-class MLP.
Similarly, Hashimoto et al. (2016) include a graph-based dependency parser in their multi-task neural model. In addition to training the model with multiple distinct objectives, they replace the traditional MLP-based attention mechanism that Kiperwasser & Goldberg (2016) use with a bilinear one (but still use an MLP label classifier). This makes it analogous to Luong et al.'s (2015) proposed attention mechanism for neural machine translation. Cheng et al. (2016) likewise propose a graph-based neural dependency parser, but one that attempts to circumvent the limitation that other neural graph-based parsers cannot condition the scores of each possible arc on previous parsing decisions. In addition to one bidirectional recurrent network that computes a recurrent hidden vector for each word, they have additional unidirectional recurrent networks (left-to-right and right-to-left) that keep track of the probability of each previous arc, and they use these together to predict the scores for the next arc.

3 PROPOSED DEPENDENCY PARSER
3.1 DEEP BIAFFINE ATTENTION
We make a few modifications to the graph-based architectures of Kiperwasser & Goldberg (2016), Hashimoto et al. (2016), and Cheng et al. (2016), shown in Figure 2:
we use biaffine attention instead of bilinear or traditional MLP-based attention; we use a biaffine dependency label classifier; and we apply dimension-reducing MLPs to each recurrent output vector r_i before applying the biaffine transformation.¹ The choice of biaffine rather than bilinear or MLP mechanisms makes the classifiers in our model analogous to traditional affine classifiers, which use an affine transformation over a single LSTM output state r_i (or other vector input) to predict the vector of scores s_i for all classes (1). We can think of the proposed biaffine attention mechanism as being a traditional affine classifier, but using a (d×d) linear transformation of the stacked LSTM output RU^(1) in place of the weight matrix W and a (d×1) transformation Ru^(2) for the bias term b (2):

s_i = W r_i + b    (fixed-class affine classifier)    (1)
s_i^(arc) = R U^(1) r_i + R u^(2)    (variable-class biaffine classifier)    (2)

Figure 2: BiLSTM with deep biaffine attention to score each possible head for each dependent, applied to the sentence "Casey hugged Kim". We reverse the order of the biaffine transformation here for clarity. [The figure shows the pipeline: embeddings x_i → BiLSTM states r_i → MLPs h_i^(arc-dep), h_i^(arc-head) → biaffine scores S^(arc).]
In addition to being arguably simpler than the MLP-based approach (involving one bilinear layer rather than two linear layers and a nonlinearity), this has the conceptual advantage of directly modeling both the prior probability of a word j receiving any dependents, in the term r_j^⊤ u^(2), and the likelihood of j receiving a specific dependent i, in the term r_j^⊤ U^(1) r_i. Analogously, we also use a biaffine classifier to predict dependency labels given the gold or predicted head y_i (3):

s_i^(label) = r_{y_i}^⊤ U^(1) r_i + (r_{y_i} ⊕ r_i)^⊤ U^(2) + b    (fixed-class biaffine classifier)    (3)

This likewise directly models each of: the prior probability of each class; the likelihood of a class given just word i (how probable a word is to take a particular label); the likelihood of a class given just the head word y_i (how probable a word is to take dependents with a particular label); and the likelihood of a class given both word i and its head (how probable a word is to take a particular label given that word's head).
Applying smaller MLPs to the recurrent output states before the biaffine classifier has the advantage of stripping away information not relevant to the current decision. That is, every top recurrent state r_i needs to carry enough information to identify word i's head, find all of i's dependents, exclude all of i's non-dependents, assign i the correct label, and assign all of i's dependents their correct labels, as well as transfer any relevant information to the recurrent states of the words before and after it. Thus r_i necessarily contains significantly more information than is needed to compute any individual score, and training on this superfluous information needlessly reduces parsing speed and increases the risk of overfitting. Reducing dimensionality and applying a nonlinearity (4-6) addresses both of these problems. We call this a deep bilinear attention mechanism, as opposed to shallow bilinear attention, which uses the recurrent states directly:

h_i^(arc-dep) = MLP^(arc-dep)(r_i)    (4)
h_j^(arc-head) = MLP^(arc-head)(r_j)    (5)
s_i^(arc) = H^(arc-head) U^(1) h_i^(arc-dep) + H^(arc-head) u^(2)    (6)

We apply MLPs to the recurrent states before using them in the label classifier as well. As with other graph-based models, the predicted tree at training time is the one in which each word is a dependent of its highest-scoring head (although at test time we ensure that the parse is a well-formed tree via the MST algorithm). A sketch of the arc scorer follows.
¹ In this paper we follow the convention of using lowercase italic letters for scalars and indices, lowercase bold letters for vectors, uppercase italic letters for matrices, and uppercase bold letters for higher-order tensors. We also maintain this notation when indexing; so row i of matrix R would be represented as r_i.
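Here is a minimal PyTorch sketch of the arc scorer in equations (4)-(6) for a single sentence. The 800-dimensional input follows Table 1 below (two 400-dimensional LSTM directions concatenated), but the zero initialization and single-layer ReLU MLPs are simplifying assumptions rather than the paper's exact training setup.

```python
import torch
import torch.nn as nn

class DeepBiaffineArcScorer(nn.Module):
    # Equations (4)-(6): reduce each recurrent state with two MLPs,
    # then score every (dependent, head) pair with a biaffine form.
    def __init__(self, rnn_dim=800, arc_dim=500):
        super().__init__()
        self.mlp_dep = nn.Sequential(nn.Linear(rnn_dim, arc_dim), nn.ReLU())
        self.mlp_head = nn.Sequential(nn.Linear(rnn_dim, arc_dim), nn.ReLU())
        self.U = nn.Parameter(torch.zeros(arc_dim, arc_dim))   # U^(1)
        self.u = nn.Parameter(torch.zeros(arc_dim))            # u^(2)

    def forward(self, r):
        # r: (n, rnn_dim) stacked BiLSTM states for one sentence.
        h_dep = self.mlp_dep(r)        # (n, arc_dim)
        h_head = self.mlp_head(r)      # (n, arc_dim)
        # scores[i, j] = h_head[j] . U . h_dep[i]  +  h_head[j] . u
        scores = h_dep @ self.U @ h_head.t() + h_head @ self.u
        return scores                  # rows index dependents, columns heads
```

Taking an argmax over each row gives the training-time greedy head assignment; at test time the scores would instead feed an MST decoder to guarantee a well-formed tree.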
3.2 HYPERPARAMETER CONFIGURATION

Table 1: Model hyperparameters.
Param            Value          Param              Value
Embedding size   100            Embedding dropout  33%
LSTM size        400            LSTM dropout       33%
Arc MLP size     500            Arc MLP dropout    33%
Label MLP size   100            Label MLP dropout  33%
LSTM depth       3              MLP depth          1
α                2e-3           β1, β2             .9
Annealing        .75^(t/5000)   t_max              50,000

Aside from the architectural differences between our parser and the other graph-based parsers, we make a number of hyperparameter choices that allow us to outperform them, laid out in Table 1. We use 100-dimensional uncased word vectors² and POS tag vectors; three BiLSTM layers (400 dimensions in each direction); and 500- and 100-dimensional ReLU MLP layers. We also apply dropout at every stage of the model: we drop words and tags (independently); we drop nodes in the LSTM layers (input and recurrent connections), applying the same dropout mask at every recurrent timestep (cf. the Bayesian dropout of Gal & Ghahramani (2015)); and we drop nodes in the MLP layers and classifiers, likewise applying the same dropout mask at every timestep. We optimize the network with annealed Adam (Kingma & Ba, 2014) for about 50,000 steps, rounded up to the nearest epoch.

4 EXPERIMENTS & RESULTS
4.1 DATASETS
We show test results for the proposed model on the English Penn Treebank, converted into Stanford Dependencies using both version 3.3.0 and version 3.5.0 of the Stanford Dependency converter (PTB-SD 3.3.0 and PTB-SD 3.5.0); the Chinese Penn Treebank; and the CoNLL 09 shared task dataset,³ following standard practices for each dataset. We omit punctuation from evaluation only for the PTB-SD and CTB. For the English PTB-SD datasets, we use POS tags generated by the Stanford POS tagger (Toutanova et al., 2003); for the Chinese PTB dataset we use gold tags; and for the CoNLL 09 dataset we use the provided predicted tags. Our hyperparameter search was done on the PTB-SD 3.5.0 validation set in order to minimize overfitting to the more popular PTB-SD 3.3.0 benchmark, and in the hyperparameter analysis in the following section we report performance on the PTB-SD 3.5.0 test set, shown in Tables 2 and 3.
4.2 HYPERPARAMETER CHOICES
4.2.1 ATTENTION MECHANISM
We examined the effect of different classifier architectures on accuracy and performance. We find that the deep bilinear model outperforms the others with respect to both speed and accuracy. The model with shallow bilinear arc and label classifiers achieves the same unlabeled performance as the deep model with the same settings, but because its label classifier is much larger ((801 × c × 801) as opposed to (101 × c × 101)), it runs much more slowly and overfits. One way to decrease this overfitting is to increase the MLP dropout, but that of course does not change parsing speed; another is to decrease the recurrent size to 300, but this hinders unlabeled accuracy without raising parsing speed to the level of our deeper model.
3.2 HYPERPARAMETER CONFIGURATION

Table 1: Model hyperparameters

  Param           Value        | Param              Value
  Embedding size  100          | Embedding dropout  33%
  LSTM size       400          | LSTM dropout       33%
  Arc MLP size    500          | Arc MLP dropout    33%
  Label MLP size  100          | Label MLP dropout  33%
  LSTM depth      3            | MLP depth          1
  α               2e-3         | β1, β2             .9
  Annealing       .75^(t/5000) | t_max              50,000

Aside from architectural differences between ours and the other graph-based parsers, we make a number of hyperparameter choices that allow us to outperform theirs, laid out in Table 1. We use 100-dimensional uncased word vectors [Footnote 2: We compute a "trained" embedding matrix composed of words that occur at least twice in the training dataset and add these embeddings to their corresponding pretrained embeddings. Any words that don't occur in either embedding matrix are replaced with a separate OOV token.] and POS tag vectors; three BiLSTM layers (400 dimensions in each direction); and 500- and 100-dimensional ReLU MLP layers. We also apply dropout at every stage of the model: we drop words and tags (independently); we drop nodes in the LSTM layers (input and recurrent connections), applying the same dropout mask at every recurrent timestep (cf. the Bayesian dropout of Gal & Ghahramani (2015)); and we drop nodes in the MLP layers and classifiers, likewise applying the same dropout mask at every timestep. We optimize the network with annealed Adam (Kingma & Ba, 2014) for about 50,000 steps, rounded up to the nearest epoch.

4 EXPERIMENTS & RESULTS

4.1 DATASETS

We show test results for the proposed model on the English Penn Treebank, converted into Stanford Dependencies using both version 3.3.0 and version 3.5.0 of the Stanford Dependency converter (PTB-SD 3.3.0 and PTB-SD 3.5.0); the Chinese Penn Treebank; and the CoNLL 09 shared task dataset, [Footnote 3: We exclude the Japanese dataset from our evaluation because we do not have access to it.] following standard practices for each dataset. We omit punctuation from evaluation only for the PTB-SD and CTB. For the English PTB-SD datasets, we use POS tags generated from the Stanford POS tagger (Toutanova et al., 2003); for the Chinese PTB dataset we use gold tags; and for the CoNLL 09 dataset we use the provided predicted tags. Our hyperparameter search was done with the PTB-SD 3.5.0 validation dataset in order to minimize overfitting to the more popular PTB-SD 3.3.0 benchmark, and in our hyperparameter analysis in the following section we report performance on the PTB-SD 3.5.0 test set, shown in Tables 2 and 3.

4.2 HYPERPARAMETER CHOICES

4.2.1 ATTENTION MECHANISM

We examined the effect of different classifier architectures on accuracy and performance. We find that the deep bilinear model outperforms the others with respect to both speed and accuracy. The model with shallow bilinear arc and label classifiers gets the same unlabeled performance as the deep model with the same settings, but because the label classifier is much larger ((801 × c × 801) as opposed to (101 × c × 101)), it runs much slower and overfits. One way to decrease this overfitting is to increase the MLP dropout, but that of course doesn't change parsing speed; another way is to decrease the recurrent size to 300, but this hinders unlabeled accuracy without increasing parsing speed to the levels of our deeper model. We also implemented the MLP-based approach to attention and classification used in Kiperwasser & Goldberg (2016). [Footnote 4: In the version of TensorFlow we used, the model's memory requirements during training exceeded the available memory on a single GPU when default settings were used, so we reduced the MLP hidden size to 200.] We found this version to likewise be somewhat slower and to significantly underperform the deep biaffine approach in both labeled and unlabeled accuracy.

Table 2: Test accuracy and speed on PTB-SD 3.5.0. Statistically significant differences are marked with an asterisk.

  Classifier                                  | Size
  Model              UAS    LAS    Sents/sec  | Model           UAS    LAS    Sents/sec
  Deep               95.75  94.22  410.91     | 3 layers, 400d  95.75  94.22  410.91
  Shallow            95.74  94.00* 298.99     | 3 layers, 300d  95.82  94.24  460.01
  Shallow, 50% drop  95.73  94.05* 300.04     | 3 layers, 200d  95.55* 93.89* 469.45
  Shallow, 300d      95.63* 93.86* 373.24     | 2 layers, 400d  95.62* 93.98* 497.99
  MLP                95.53* 93.91* 367.44     | 4 layers, 400d  95.83  94.22  362.09

  Recurrent Cell
  Model      UAS    LAS    Sents/sec
  LSTM       95.75  94.22  410.91
  GRU        93.18* 91.08* 435.32
  Cif-LSTM   95.67  94.06* 463.25

Table 3: Test accuracy on PTB-SD 3.5.0. Statistically significant differences are marked with an asterisk.

  Input Dropout                    | Adam
  Model            UAS    LAS     | Model      UAS    LAS
  Default          95.75  94.22   | β2 = .9    95.75  94.22
  No word dropout  95.74  94.08*  | β2 = .999  95.53* 93.91*
  No tag dropout   95.28* 93.60*  |
  No tags          95.77  93.91*  |

4.2.2 NETWORK SIZE

We also examine more closely how network size influences speed and accuracy. In Kiperwasser & Goldberg's 2016 model, the network uses 2 layers of 125-dimensional bidirectional LSTMs; in Hashimoto et al.'s 2016 model, it has one layer of 100-dimensional bidirectional LSTMs dedicated to parsing (two lower layers are also trained on other objectives); and Cheng et al.'s 2016 model has one layer of 368-dimensional GRU cells. We find that using three or four layers gets significantly better performance than two layers, and increasing the LSTM sizes from 200 to 300 or 400 dimensions likewise significantly improves performance. [Footnote 5: The model with 400-dimensional recurrent states significantly outperforms the 300-dimensional one on the validation set, but not on the test set.]

4.2.3 RECURRENT CELL

GRU cells have been promoted as a faster and simpler alternative to LSTM cells, and are used in the approach of Cheng et al. (2016); however, in our model they drastically underperformed LSTM cells. We also implemented the coupled input-forget gate LSTM cells (Cif-LSTM) suggested by Greff et al. (2015), [Footnote 6: In addition to using a coupled input-forget gate, we remove the first tanh nonlinearity, which is no longer needed when using a coupled gate.] finding that while the resulting model still slightly underperforms the more popular LSTM cells, the difference between the two is much smaller. Additionally, because the gate and candidate cell activations can be computed simultaneously with one matrix multiplication, the Cif-LSTM model is faster than the GRU version even though they have the same number of parameters. We hypothesize that the output gate in the Cif-LSTM model allows it to maintain a sparse recurrent output state, which helps it adapt to the high levels of dropout needed to prevent overfitting in a way that GRU cells are unable to do.
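A minimal sketch of one Cif-LSTM step, following the variant described above (forget gate coupled as 1 − i, and the first tanh on the candidate cell removed). Names, shapes, and the single fused matrix multiplication are our rendering of the description, not the authors' code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cif_lstm_step(x, h, c, W, b, d):
    # One matmul produces input gate, output gate, and candidate cell together.
    z = np.concatenate([x, h]) @ W + b        # W: (len(x) + d, 3d), b: (3d,)
    i = sigmoid(z[:d])                        # input gate
    o = sigmoid(z[d:2 * d])                   # output gate
    g = z[2 * d:]                             # candidate cell, no tanh (see footnote 6)
    c_new = i * g + (1.0 - i) * c             # coupled gate: forget = 1 - input
    h_new = o * np.tanh(c_new)                # the output gate the GRU lacks
    return h_new, c_new
```

Because i, o, and g come out of a single fused multiplication, the per-step cost matches the GRU's despite the extra cell state, consistent with the speed numbers in Table 2.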
Table 4: Results on the English PTB and Chinese PTB parsing datasets

                                             PTB-SD 3.3.0    CTB 5.1
  Type        Model                          UAS     LAS     UAS     LAS
  Transition  Ballesteros et al. (2016)      93.56   91.42   87.65   86.21
  Transition  Andor et al. (2016)            94.61   92.79   –       –
  Transition  Kuncoro et al. (2016)          95.8    94.6    –       –
  Graph       Kiperwasser & Goldberg (2016)  93.9    91.9    87.6    86.1
  Graph       Cheng et al. (2016)            94.10   91.49   88.1    85.7
  Graph       Hashimoto et al. (2016)        94.67   92.90   –       –
  Graph       Deep Biaffine                  95.74   94.08   89.30   88.23

Table 5: Results on the CoNLL '09 shared task datasets

                 Catalan          Chinese          Czech
  Model          UAS     LAS      UAS     LAS      UAS     LAS
  Andor et al.   92.67   89.83    84.72   80.85    88.94   84.56
  Deep Biaffine  94.69   92.02    88.90   85.38    92.08   87.38

                 English          German           Spanish
  Model          UAS     LAS      UAS     LAS      UAS     LAS
  Andor et al.   93.22   91.23    90.91   89.15    92.62   89.95
  Deep Biaffine  95.21   93.20    93.46   91.44    94.34   91.65

4.2.4 EMBEDDING DROPOUT

Because we increase the parser's power, we also have to increase its regularization. In addition to using relatively extreme dropout in the recurrent and MLP layers mentioned in Table 1, we also regularize the input layer. We drop 33% of words and 33% of tags during training: when one is dropped the other is scaled by a factor of two to compensate, and when both are dropped together, the model simply gets an input of zeros. Models trained with only word or tag dropout but not both wind up significantly overfitting, hindering label accuracy and, in the latter case, attachment accuracy. Interestingly, not using any tags at all actually results in better performance than using tags without dropout.
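The input-dropout scheme just described can be sketched as follows. This is an illustrative rendering: whether the word and tag vectors are summed or concatenated, and the exact random-number handling, are assumptions on our part:

```python
import numpy as np

def input_dropout(word_emb, tag_emb, p=0.33, rng=np.random):
    keep_word = rng.random() >= p             # words and tags dropped independently
    keep_tag  = rng.random() >= p
    if keep_word and keep_tag:
        return word_emb + tag_emb             # (sum assumed; could be concatenation)
    if keep_word:
        return 2.0 * word_emb                 # scale by two to compensate for the tag
    if keep_tag:
        return 2.0 * tag_emb                  # scale by two to compensate for the word
    return np.zeros_like(word_emb)            # both dropped together: zero input
```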
4.2.5 OPTIMIZER

We choose to optimize with Adam (Kingma & Ba, 2014), which (among other things) keeps a moving average of the L2 norm of the gradient for each parameter throughout training and divides the gradient for each parameter by this moving average, ensuring that the magnitude of the gradients will on average be close to one. However, we find that the value for β2 recommended by Kingma & Ba, which controls the decay rate for this moving average, is too high for this task (and, we suspect, more generally). When this value is very large, the magnitude of the current update is heavily influenced by the larger magnitude of gradients very far in the past, with the effect that the optimizer can't adapt quickly to recent changes in the model. Thus we find that setting β2 to .9 instead of .999 makes a large positive impact on final performance.

4.3 RESULTS

Our model gets nearly the same UAS performance on PTB-SD 3.3.0 as the current SOTA model from Kuncoro et al. (2016) in spite of its substantially simpler architecture, and gets SOTA UAS performance on CTB 5.1 [Footnote 7: We'd like to thank Zhiyang Teng for finding a bug in the original code that affected the CTB 5.1 dataset.] as well as SOTA performance on all CoNLL 09 languages. It is worth noting that the CoNLL 09 datasets contain many non-projective dependencies, which are difficult or impossible for transition-based (but not graph-based) parsers to predict. This may account for some of the large, consistent difference between our model and Andor et al.'s 2016 transition-based model applied to these datasets.

Where our model appears to lag behind the SOTA model is in LAS, indicating one of a few possibilities. Firstly, it may be the result of inefficiencies or errors in the GloVe embeddings or POS tagger, in which case using alternative pretrained embeddings or a more accurate tagger might improve label classification. Secondly, the SOTA model is specifically designed to capture phrasal compositionality, so another possibility is that ours doesn't capture this compositionality as effectively, and that this results in a worse label score. Similarly, it may be the result of a more general limitation of graph-based parsers, which have access to less explicit syntactic information than transition-based parsers when making decisions. Addressing these latter two limitations would require a more innovative architecture than the relatively simple one used in current neural graph-based parsers.

5 CONCLUSION

In this paper we proposed using a modified version of bilinear attention in a neural dependency parser that increases parsing speed without hurting performance. We showed that our larger but more regularized network outperforms other neural graph-based parsers and gets comparable performance to the current SOTA transition-based parser. We also provided empirical motivation for the proposed architecture and configuration over similar ones in the existing literature. Future work will involve exploring ways of bridging the gap between labeled and unlabeled accuracy and augmenting the parser with a smarter way of handling out-of-vocabulary tokens for morphologically richer languages.

REFERENCES

Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. Globally normalized transition-based neural networks. In Association for Computational Linguistics, 2016. URL https://arxiv.org/abs/1603.06042.

Gabor Angeli, Melvin Johnson Premkumar, and Christopher D. Manning. Leveraging linguistic structure for open domain information extraction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL 2015), 2015.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. International Conference on Learning Representations, 2014.

Miguel Ballesteros, Yoav Goldberg, Chris Dyer, and Noah A. Smith. Training with exploration improves a greedy stack-LSTM parser. Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2016.

Samuel R. Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. A fast unified model for parsing and sentence understanding. ACL 2016, 2016.

Danqi Chen and Christopher D. Manning. A fast and accurate dependency parser using neural networks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 740-750, 2014.

Hao Cheng, Hao Fang, Xiaodong He, Jianfeng Gao, and Li Deng. Bi-directional attention with agreement for dependency parsing. arXiv preprint arXiv:1608.02076, 2016.

Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. Transition-based dependency parsing with stack long short-term memory.
Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2015.

Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. International Conference on Machine Learning, 2015.

Klaus Greff, Rupesh Kumar Srivastava, Jan Koutník, Bas R. Steunebrink, and Jürgen Schmidhuber. LSTM: A search space odyssey. IEEE Transactions on Neural Networks and Learning Systems, 2015.

Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. A joint many-task model: Growing a neural network for multiple NLP tasks. arXiv preprint arXiv:1611.01587, 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations, 2014.

Eliyahu Kiperwasser and Yoav Goldberg. Simple and accurate dependency parsing using bidirectional LSTM feature representations. Transactions of the Association for Computational Linguistics, 4:313-327, 2016.

Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, Graham Neubig, and Noah A. Smith. What do recurrent neural network grammars learn about syntax? CoRR, abs/1611.05774, 2016. URL http://arxiv.org/abs/1611.05774.

Omer Levy and Yoav Goldberg. Dependency-based word embeddings. In ACL 2014, pp. 302-308, 2014.

Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. Empirical Methods in Natural Language Processing, 2015.

Ankur P. Parikh, Hoifung Poon, and Kristina Toutanova. Grounded semantic parsing for complex knowledge extraction. In Proceedings of the North American Chapter of the Association for Computational Linguistics, pp. 756-766, 2015.

Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Volume 1, pp. 173-180. Association for Computational Linguistics, 2003.

Kristina Toutanova, Xi Victoria Lin, and Wen-tau Yih. Compositional learning of embeddings for relation paths in knowledge bases and text. In ACL, 2016.

David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. Structured training for neural network transition-based parsing. Annual Meeting of the Association for Computational Linguistics, 2015.
RETHINKING NUMERICAL REPRESENTATIONS FOR DEEP NEURAL NETWORKS

Parker Hill, Babak Zamirai, Shengshuo Lu, Yu-Wei Chao, Michael Laurenzano, Mehrzad Samadi, Marios Papaefthymiou, Scott Mahlke, Thomas Wenisch, Jia Deng, Lingjia Tang, Jason Mars
Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor
{parkerhh, zamirai, luss, ywchao, mlaurenz, mahrzads}@umich.edu
{marios, mahlke, twenisch, jiadeng, lingjia, profmars}@umich.edu

ABSTRACT

With ever-increasing computational demand for deep learning, it is critical to investigate the implications of the numeric representation and precision of DNN model weights and activations on computational efficiency. In this work, we explore unconventional narrow-precision floating-point representations as they relate to inference accuracy and efficiency to steer the improved design of future DNN platforms. We show that inference using these custom numeric representations on production-grade DNNs, including GoogLeNet and VGG, achieves an average speedup of 7.6× with less than 1% degradation in inference accuracy relative to a state-of-the-art baseline platform representing the most sophisticated hardware using single-precision floating point. To facilitate the use of such customized precision, we also present a novel technique that drastically reduces the time required to derive the optimal precision configuration.

1 INTRODUCTION

Recently, deep neural networks (DNNs) have yielded state-of-the-art performance on a wide array of AI tasks, including image classification Krizhevsky et al. (2012), speech recognition Hannun et al. (2014), and language understanding Sutskever et al. (2014). In addition to algorithmic innovations Nair & Hinton (2010); Srivastava et al. (2014); Taigman et al. (2014), a key driver behind these successes are advances in computing infrastructure that enable large-scale deep learning: the training and inference of large DNN models on massive datasets Dean et al. (2012); Farabet et al. (2013). Indeed, highly efficient GPU implementations of DNNs played a key role in the first breakthrough of deep learning for image classification Krizhevsky et al. (2012). Given the ever-growing amount of data available for indexing, analysis, and training, and the increasing prevalence of ever-larger DNNs as key building blocks for AI applications, it is critical to design computing platforms to support faster, more resource-efficient DNN computation.

A set of core design decisions are common to the design of these infrastructures. One such critical choice is the numerical representation and precision used in the implementation of underlying storage and computation. Several recent works have investigated the numerical representation for DNNs Cavigelli et al. (2015); Chen et al. (2014); Du et al. (2014); Muller & Indiveri (2015). One recent work found that substantially lower precision can be used for training when the correct numerical rounding method is employed Gupta et al. (2015). Their work resulted in the design of a very energy-efficient DNN platform.

This work and other previous numerical representation studies for DNNs have either limited themselves to a small subset of the customized precision design space or drawn conclusions using only small neural networks. For example, the work from Gupta et al. (2015) evaluates 16-bit fixed-point and wider computational precision on LeNet-5 LeCun et al. (1998) and CIFARNET Krizhevsky & Hinton (2009).
The fixed-point representation (Figure 1) is only one of many possible numeric representations. Exploring a limited customized precision design space inevitably results in designs lacking in energy efficiency and computational performance. Evaluating customized precision accuracy based on small neural networks requires the assumption that much larger, production-grade neural networks would operate comparably when subjected to the same customized precision.

In this work, we explore the accuracy-efficiency trade-off made available via specialized custom-precision hardware for inference and present a method to efficiently traverse this large design space to find an optimal design. Specifically, we evaluate the impact of a wide spectrum of customized precision settings for fixed-point and floating-point representations on accuracy and computational performance. We evaluate these customized precision configurations on large, state-of-the-art neural networks. By evaluating the full computational precision design space on a spectrum of these production-grade DNNs, we find that:

1. Precision requirements do not generalize across all neural networks. This prompts designers of future DNN infrastructures to carefully consider the applications that will be executed on their platforms, contrary to works that design for large networks and evaluate accuracy on small networks Cavigelli et al. (2015); Chen et al. (2014).

2. Many large-scale DNNs require considerably more precision for fixed-point arithmetic than previously found from small-scale evaluations Cavigelli et al. (2015); Chen et al. (2014); Du et al. (2014). For example, we find that GoogLeNet requires on the order of 40 bits when implemented with fixed-point arithmetic, as opposed to less than 16 bits for LeNet-5.

3. Floating-point representations are more efficient than fixed-point representations when selecting optimal precision settings. For example, a 17-bit floating-point representation is acceptable for GoogLeNet, while over 40 bits are required for the fixed-point representation, a more expensive computation than the standard single-precision floating-point format. Current platform designers should reconsider the use of floating-point representations for DNN computations instead of the commonly used fixed-point representations Cavigelli et al. (2015); Chen et al. (2014); Du et al. (2014); Muller & Indiveri (2015).

[Figure 1: A fixed-point representation (e.g. 11001.01110). Hardware parameters include the total number of bits and the position of the radix point.]

[Figure 2: A floating-point representation, with a mantissa (e.g. 1.01101) scaled by 2 raised to an exponent (e.g. 10011) offset by a bias. Hardware parameters include the number of mantissa and exponent bits, and the bias.]

To make these conclusions on large-scale customized precision design readily actionable for DNN infrastructure designers, we propose and validate a novel technique to quickly search the large customized precision design space. This technique leverages the activations in the last layer to build a model to predict accuracy, based on the insight that these activations effectively capture the propagation of numerical error from computation. Using this method on deployable DNNs, including GoogLeNet Szegedy et al.
(2015) and VGG Simonyan & Zisserman (2014), we find that using these recommendations to introduce customized precision into a DNN accelerator fabric results in an average speedup of 7.6× with less than 1% degradation in inference accuracy.

2 CUSTOMIZED PRECISION HARDWARE

We begin with an overview of the available design choices in the representation of real numbers in binary and discuss how these choices impact hardware performance.

2.1 DESIGN SPACE

We consider three aspects of customized precision number representations. First, we contrast the high-level choice between fixed-point and floating-point representations. Fixed-point binary arithmetic is computationally identical to integer arithmetic, simply changing the interpretation of each bit position. Floating-point arithmetic, however, represents the sign, mantissa, and exponent of a real number separately. Floating-point calculations involve several steps absent in integer arithmetic. In particular, addition operations require aligning the mantissas of each operand. As a result, floating-point computation units are substantially larger, slower, and more complex than integer units.

In CPUs and GPUs, available sizes for both integers and floating-point calculations are fixed according to the data types supported by the hardware. Thus, the second aspect of precision customization we examine is to consider customizing the number of bits used in representing floating-point and fixed-point numbers. Third, we may vary the interpretation of fixed-point numbers and the assignment of bits to the mantissa and exponent in a floating-point value.

2.2 CUSTOMIZED PRECISION TYPES

In a fixed-point representation, we select the number of bits as well as the position of the radix point, which separates integer and fractional bits, as illustrated in Figure 1. A bit array x encoded in fixed point with the radix point at bit l (counting from the right) represents the value

    2^{-l} \sum_{i=0}^{N-1} 2^i x_i .

[Figure 3: Floating-point multiply-accumulate (MAC) unit with various levels of detail: (a) the high-level mathematical operation, (b) the modules that form a floating-point MAC (comparator, alignment, addition/subtraction, increment/decrement, FSM controller), and (c) the signal propagation of the unit.]

In contrast to floating point, fixed-point representations with a particular number of bits have a fixed level of precision. By varying the position of the radix point, we change the representable range.

An example floating-point representation is depicted in Figure 2. As shown in the figure, there are three parameters to select when designing a floating-point representation: the bit-width of the mantissa, the bit-width of the exponent, and an exponent bias. The widths of the mantissa and exponent control precision and dynamic range, respectively. The exponent bias adjusts the offset of the exponent (which is itself represented as an unsigned integer) relative to zero to facilitate positive and negative exponents. Finally, an additional bit represents the sign. Thus, a floating-point format with N_m mantissa bits, N_e exponent bits, and a bias of b encodes the value

    2^{\left(\sum_{i=0}^{N_e-1} 2^i e_i\right) - b} \left(1 + \sum_{i=1}^{N_m} 2^{-i} m_i\right),

where m and e are the segments of a bit array representing the mantissa and exponent, respectively. Note that the leading bit of the mantissa is assumed to be 1 and hence is not explicitly stored, eliminating redundant encodings of the same value.
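The two encoding formulas above translate directly into code. The following is a minimal Python sketch (bit arrays are lists of 0/1 values, most-significant bit first; special values such as zero and infinity are ignored here for brevity):

```python
def fixed_point_value(bits, l):
    # Radix point at bit l from the right: 2^-l * sum_i 2^i * x_i
    return 2.0 ** (-l) * sum(b << i for i, b in enumerate(reversed(bits)))

def float_value(sign, exp_bits, man_bits, bias):
    e = sum(b << i for i, b in enumerate(reversed(exp_bits)))     # unsigned exponent
    m = sum(b * 2.0 ** -(i + 1) for i, b in enumerate(man_bits))  # fraction bits
    return (-1) ** sign * 2.0 ** (e - bias) * (1.0 + m)           # implicit leading 1

print(fixed_point_value([1, 1, 0, 0, 1, 0, 1, 1, 1, 0], l=5))    # 11001.01110 -> 25.4375
```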
A single-precision value in the IEEE-754 standard (i.e. float) comprises 23 mantissa bits, 8 exponent bits, and a sign bit. IEEE-754 standardized floating-point formats include special encodings for specific values, such as zero and infinity.

Both fixed-point and floating-point representations have limitations in terms of the precision and the dynamic ranges available given particular representations, manifesting themselves computationally as rounding and saturation errors. These errors propagate through the deep neural network in a way that is difficult to estimate holistically, prompting experimentation on the DNN itself.

2.3 HARDWARE IMPLICATIONS

The key hardware building block for implementing DNNs is the multiply-accumulate (MAC) operation. The MAC operation implements the sum-of-products operation that is fundamental to the activation of each neuron. We show a high-level hardware block diagram of a MAC unit in Figure 3 (a). Figure 3 (b) adds detail for the addition operation, the more complex of the two operations. As seen in the figure, floating-point addition operations involve a number of sub-components that compare exponents, align mantissas, perform the addition, and normalize the result. Nearly all of the sub-components of the MAC unit scale in speed, power, and area with the bit width.

Reducing the floating-point bit width improves hardware performance in two ways. First, reduced bit width makes a computation unit faster. Binary arithmetic computations involve chains of logic operations that typically grow at least logarithmically, and sometimes linearly (e.g., the propagation of carries in an addition, see Figure 3 (c)), in the number of bits. Reducing the bit width reduces the length of these chains, allowing the logic to operate at a higher clock frequency. Second, reduced bit width makes a computation unit smaller and less energy-hungry, typically linearly in the number of bits. The circuit delay and area are shown in Figure 4 as the mantissa bit width is varied. As shown in the figure, scaling the length of the mantissa provides substantial opportunity because it defines the size of the internal addition unit. Similar trends follow for bit-widths in other representations. When a unit is smaller, more replicas can fit within the same chip area and power budget, all of which can operate in parallel. Hence, for computations like those in DNNs, where ample parallelism is available, area reductions translate into proportional performance improvement.

This trend of bit width versus speed, power, and area is applicable to every computation unit in hardware DNN implementations. Thus, in designing hardware that uses customized representations, there is a trade-off between accuracy on the one hand and power, area, and speed on the other.

[Figure 4: Delay and area implications of mantissa width, normalized to a 32-bit single-precision MAC with 23 mantissa bits.]

[Figure 5: Speedup calculation with a fixed area budget: one 32-bit MAC (delay 10τ, parallelism 1v) is replaced by four 11-bit MACs (delay 4τ, parallelism 4v), a 10× speedup. The speedup exploits both the improved function delay and the added parallelism.]
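The fixed-area speedup arithmetic illustrated by Figure 5 is simply the product of the clock-frequency gain and the parallelism gain. A one-line sketch, using the figure's example numbers (which are illustrative, not measured):

```python
def fixed_area_speedup(base_delay, custom_delay, base_area, custom_area):
    clock_gain = base_delay / custom_delay       # shorter logic chains run faster
    parallelism_gain = base_area / custom_area   # more replicas fit per chip
    return clock_gain * parallelism_gain

print(fixed_area_speedup(10, 4, 4, 1))           # 10.0, matching Figure 5's 10x
```

Because narrowing the bit width improves both factors at once, total throughput improves roughly quadratically with the width reduction, as noted in Section 3.2 below.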
Our goal is to use precision that delivers sufficient accuracy while attaining large improvements in power, area, and speed over standard floating-point designs.

3 METHODOLOGY

We describe the methodology we use to evaluate the customized precision design space, using image classification tasks of varying complexity as a proxy for computer vision applications. We evaluate DNN implementations using several metrics: classification accuracy, speedup, and energy savings relative to a baseline custom hardware design that uses single-precision floating-point representations. Using the results of this analysis, we propose and validate a search technique to efficiently determine the correct customized precision design point.

3.1 ACCURACY

We evaluate accuracy by modifying the Caffe Jia et al. (2014) deep learning framework to perform calculations with arbitrary fixed-point and floating-point formats. We continue to store values as C floats in Caffe, but truncate the mantissa and exponent to the desired format after each arithmetic operation. Accuracy, using a set of test inputs disjoint from the training input set, is then measured by running the forward pass of a DNN model with the customized format and comparing the outputs with the ground truth. We use the standard accuracy metrics that accompany the dataset for each DNN. For MNIST (LeNet-5) and CIFAR-10 (CIFARNET) we use top-1 accuracy, and for ImageNet (GoogLeNet, VGG, and AlexNet) we use top-5 accuracy. Top-1 accuracy denotes the percent of inputs that the DNN predicts correctly after a single prediction attempt, while top-5 accuracy represents the percent of inputs that the DNN predicts correctly after five attempts.
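The truncation step just described can be sketched as follows. This mimics, rather than reproduces, the modified Caffe: the exponent clamping and the handling of underflow and saturation at the range limits are simplifying assumptions on our part:

```python
import math

def truncate(x, n_man, n_exp, bias):
    """Truncate a float to n_man mantissa bits and n_exp exponent bits."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(abs(x))                 # abs(x) = m * 2**e with m in [0.5, 1)
    E = e - 1                                 # exponent of the leading 1: x = 1.f * 2**E
    E = min(max(E, -bias), (2 ** n_exp - 1) - bias)   # clamp to representable exponents
    step = 2.0 ** (E - n_man)                 # value of the least significant mantissa bit
    y = math.floor(abs(x) / step) * step      # drop the mantissa bits below the format
    return math.copysign(y, x)
```

Applying this function after every multiply and accumulate in the forward pass yields the customized-precision outputs that are compared against the ground truth.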
3.2 EFFICIENCY

We quantify the efficiency advantages of customized floating-point representations by designing a floating-point MAC unit in each candidate precision and determining its silicon area and delay characteristics. We then report speedup and energy savings relative to a baseline custom hardware implementation of a DNN that uses standard single-precision floating-point computations. We design each variant of the MAC unit using Synopsys Design Compiler and Synopsys PrimeTime, industry-standard ASIC design tools, targeting a commercial 28nm silicon manufacturing process. The tools report the power, delay, and area characteristics of each precision variant. As shown in Figure 5, we compute speedups and energy savings relative to the standardized IEEE-754 floating-point representation considering both the clock frequency advantage and the improved parallelism due to the area reduction of the narrower bit-width MAC units. This allows customized precision designs to yield a quadratic improvement in total system throughput.

3.3 EFFICIENT CUSTOMIZED PRECISION SEARCH

To exploit the benefits of customized precision, a mechanism to select the correct configuration must be introduced. There are hundreds of designs among floating-point and fixed-point formats due to designs varying by the total bit width and the allocation of those bits. This spectrum of designs strains the ability to select an optimal configuration. A straightforward approach to select the customized precision design point is to exhaustively compute the accuracy of each design with a large number of neural network inputs. This strategy requires substantial computational resources that are proportional to the size of the network and the variety of output classifications. We describe our technique that significantly reduces the time required to search for the correct configuration in order to facilitate the use of customized precision.

The key insight behind our search method is that customized precision impacts the underlying internal computation, which is hidden when evaluating only the NN's final accuracy metric. Thus, instead of comparing the final accuracy generated by networks with different precision configurations, we compare the original NN activations to the customized precision activations. This circumvents the need to evaluate the large number of inputs required to produce representative neural network accuracy. Furthermore, instead of examining all of the activations, we only analyze the last layer, since the last layer captures the usable output from the neural network as well as the propagation of lost accuracy. Our method summarizes the differences between the last layers of two configurations by calculating the linear coefficient of determination between their activations.

[Figure 6: The inference accuracy versus speedup design space for each of the neural networks ((a) GoogLeNet, (b) VGG, (c) AlexNet, (d) CIFARNET, (e) LeNet-5), comparing custom floating-point and custom fixed-point formats against IEEE 754 single precision, and showing substantial computational performance improvements for minimal accuracy degradation when customized precision floating-point formats are used.]

A method to translate the coefficient of determination to a more desirable metric, such as end-to-end inference accuracy, is necessary. We find that a linear model provides such a transformation. The customized precision setting with the highest speedup that meets a specified accuracy threshold is then selected. In order to account for slight inaccuracies in the model, inference accuracy is evaluated for a subset of configurations. If the configuration provided by the accuracy model results in insufficient accuracy, then an additional bit is added and the process repeats. Similarly, if the accuracy threshold is met, then a bit is removed from the customized precision format.
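A minimal sketch of this search procedure follows. Here `run_last_layer` (which returns last-layer activations for a given format, or full precision when passed None), `speed_of`, and the linear-model coefficients `a` and `b` are placeholders standing in for the measured components described above:

```python
import numpy as np

def r_squared(ref, test):
    # Linear coefficient of determination between two activation vectors.
    r = np.corrcoef(ref.ravel(), test.ravel())[0, 1]
    return r * r

def pick_format(formats, run_last_layer, speed_of, a, b, target_acc):
    ref = run_last_layer(None)                       # full-precision activations
    best = None
    for fmt in formats:
        pred_acc = a * r_squared(ref, run_last_layer(fmt)) + b   # linear model
        if pred_acc >= target_acc and (best is None or speed_of(fmt) > speed_of(best)):
            best = fmt                               # fastest format predicted to pass
    return best
```

In the full procedure, the returned format would then be checked with real inference accuracy and refined by adding or removing one bit, as described above.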
4 EXPERIMENTS

In this section, we evaluate five common neural networks spanning a range of sizes and depths in the context of customized precision hardware. We explore the trade-off between accuracy and efficiency when various customized precision representations are employed. Next, we address the sources of accuracy degradation when customized precision is utilized. Finally, we examine the characteristics of our customized precision search technique.

4.1 EXPERIMENTAL SETUP

We evaluate the accuracy of customized precision operations on five DNNs: GoogLeNet Szegedy et al. (2015), VGG Simonyan & Zisserman (2014), AlexNet Krizhevsky et al. (2012), CIFARNET Krizhevsky & Hinton (2009), and LeNet-5 LeCun et al. (1998). The implementations and pre-trained weights for these DNNs were taken from Caffe Jia et al. (2014). The three largest DNNs (GoogLeNet, VGG, and AlexNet) represent real-world workloads, while the two smaller DNNs (CIFARNET and LeNet-5) are the largest DNNs evaluated in prior work on customized precision. For each DNN, we use the canonical benchmark validation set: ImageNet for GoogLeNet, VGG, and AlexNet; CIFAR-10 for CIFARNET; MNIST for LeNet-5. We utilize the entire validation set for all experiments, except for GoogLeNet and VGG experiments involving the entire design space. In these cases we use a randomly selected 1% of the validation set to make the experiments tractable.

4.2 ACCURACY VERSUS EFFICIENCY TRADE-OFFS

To evaluate the benefits of customized precision hardware, we swept the design space for accuracy and performance characteristics. This performance-accuracy trade-off is shown in Figure 6. This figure shows the DNN inference accuracy across the full input set versus the speedup for each of the five DNN benchmarks. The black star represents the IEEE 754 single-precision representation (i.e. the original accuracy with 1× speedup), while the red circles and blue triangles represent the complete set of our customized precision floating-point and fixed-point representations, respectively.

For GoogLeNet, VGG, and AlexNet it is clear that the floating-point format is superior to the fixed-point format. In fact, the standard single-precision floating-point format is faster than all fixed-point configurations that achieve above 40% accuracy. Although fixed-point computation is simpler and faster than floating-point computation when the number of bits is fixed, customized precision floating-point representations are more efficient because fewer bits are needed for similar accuracy.

[Figure 7: The speedup and energy savings as the two parameters (mantissa and exponent bits for custom floating point; integer and fraction bits for custom fixed point) are adjusted. The marked area denotes configurations where the total loss in AlexNet accuracy is less than 1%.]

[Figure 8: The accumulation of weighted neuron inputs for a specific neuron with various customized precision DNNs as well as the IEEE 754 single-precision floating-point configuration for reference. FL and FI abbreviate floating point and fixed point, respectively. The format parameters are as follows: M = mantissa, E = exponent, L = bits left of the radix point, R = bits right of the radix point.]

[Figure 9: The linear fit from the correlation between normalized accuracy and last-layer activations of the exact and customized precision DNNs.]

By comparing the results across the five different networks in Figure 6, it is apparent that the size and structure of the network impact the customized precision flexibility of the network.
This insight suggests that hardware designers should carefully consider which neural network(s) they expect their device to execute as one of the fundamental steps in the design process. The impact of network size on accuracy is discussed in further detail in the following section.

The specific impact of bit assignments on performance and energy efficiency is illustrated in Figure 7. This figure shows the speedup and energy improvements over the single-precision floating-point representation as the number of allocated bits is varied. For the floating-point representations, the number of bits allocated for the mantissa (x-axis) and exponent (y-axis) are varied. For the fixed-point representations, the number of bits allocated for the integer (x-axis) and fraction (y-axis) are varied. We highlight a region in the plot deemed to have acceptable accuracy. In this case, we define acceptable accuracy to be 99% normalized AlexNet accuracy (i.e., no more than a 1% degradation in accuracy from the IEEE 754 single-precision accuracy on classification in AlexNet).

The fastest and most energy-efficient representation occurs at the bottom-left corner of the region with acceptable accuracy, since a minimal number of bits are used. The configuration with the highest performance that meets this requirement is a floating-point representation with 6 exponent bits and 7 mantissa bits, which yields a 7.2× speedup and a 3.4× savings in energy over the single-precision IEEE 754 floating-point format. If a more stringent accuracy requirement is necessary, 0.3% accuracy degradation, the representation with one additional bit in the mantissa can be used, which achieves a 5.7× speedup and 3.0× energy savings.

4.3 SOURCES OF ACCUMULATION ERROR

In order to understand how customized precision degrades DNN accuracy among numeric representations, we examine the impact of various reduced-precision computations on a neuron. Figure 8 presents the serialized accumulation of neuron inputs in the third convolution layer of AlexNet. The x-axis represents the number of inputs that have been accumulated, while the y-axis represents the current value of the running sum. The black line represents the original DNN computation, a baseline for customized precision settings to match. We find two causes of error between the customized precision fixed-point and floating-point representations: saturation and excessive rounding. In the fixed-point case (green line, representing 16 bits with the radix point in the center), the central cause of error is saturation at the extreme values. The running sum exceeds 255, the maximum representable value in this representation, after 60 inputs are accumulated, as seen in the figure. After reaching saturation, the positive values are discarded and the final output is unpredictable.
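The saturation effect just described is easy to reproduce in a few lines of Python. The sketch below uses synthetic inputs and a simplified signed encoding (saturating the integer part at ±256), so the exact limits are our assumptions rather than the hardware's:

```python
import numpy as np

def saturating_accumulate(values, int_bits=8, frac_bits=8):
    lo, hi = -2.0 ** int_bits, 2.0 ** int_bits
    step = 2.0 ** -frac_bits                          # smallest representable increment
    total = 0.0
    for v in values:
        total = np.floor((total + v) / step) * step   # snap down to the fixed-point grid
        total = min(max(total, lo), hi - step)        # saturate at the format limits
    return total

rng = np.random.default_rng(0)
print(saturating_accumulate(rng.normal(1.0, 2.0, size=3000)))  # ends pinned near the max
```

Once the running total pins at the format maximum, all subsequent positive contributions are lost, which is exactly the failure mode the fixed-point (green) curve in Figure 8 exhibits.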
Although floating-point representations do not saturate as easily, the floating-point configuration with 10 mantissa bits and 4 exponent bits (orange line) saturates after accumulating 1128 inputs. Again, the lost information from saturation causes an unpredictable final output.

For the next case, the floating-point configuration with 2 bits and 14 bits for the mantissa and exponent (blue line), respectively, we find that the lack of precision for large values causes excessive rounding errors. As shown in the figure, after accumulating 120 inputs, this configuration's running sum exceeds 256, which limits the minimum adjustment in magnitude to 64 (the exponent normalizes the mantissa to 256, so the two mantissa bits represent 128 and 64). Finally, one of the customized precision types that has high performance and accuracy for AlexNet, 8 mantissa bits and 6 exponent bits (red line), is shown as well. This configuration almost perfectly matches the IEEE 754 floating-point configuration, as expected based on the final output accuracy.

The other main cause of accuracy loss is from values that are too small to be encoded as a non-zero value in the chosen customized precision configuration. These values, although not critical during addition, cause significant problems when multiplied with a large value, since the output should be encoded as a non-zero value in the specific precision setting. We found that the weighted input is minimally impacted until the precision is reduced low enough for the weight to become zero.

While it may be intuitive based on these results to apply different customized precision settings to various stages of the neural network in order to mitigate the sudden loss in accuracy, the realizable gains of multi-precision configurations present significant challenges. The variability between units will cause certain units to be unused during specific layers of the neural network, causing gains to diminish (e.g., 11-bit units are idle when 16-bit units are required for a particular layer). Also, application-specific hardware design is already an extensive process, and multiple customized precision configurations increase the difficulty of the hardware design and verification process.

4.4 CUSTOMIZED PRECISION SEARCH

Now we evaluate our proposed customized precision search method. The goal of this method is to significantly reduce the time required to navigate the customized precision design space and still provide an optimal design choice in terms of speedup, limited by an accuracy constraint.

Correlation model. First, we present the linear correlation-accuracy model in Figure 9, which shows the relationship between the normalized accuracy of each setting in the design space and the correlation between its last-layer activations and those of the original NN. This model, although built using all of the customized precision configurations from the AlexNet, CIFARNET, and LeNet-5 neural networks, produces a good fit with a correlation of 0.96. It is important that the model matches across networks and precision design choices (e.g., floating point versus fixed point), since creating this model for each DNN individually requires as much time as exhaustive search.

Validation.
To validate our search technique, Figure 10 presents the accuracy-speedup trade-off curves from our method compared to the ideal design points. We first obtain optimal results via exhaustive search. We present our search with a variable number of refinement iterations, where we evaluate the accuracy of the current design point and adjust the precision if necessary. To verify robustness, the accuracy models were generated using cross-validation, where all configurations in the DNN being searched are excluded (e.g., we build the AlexNet model with LeNet and CIFARNET accuracy/correlation pairs). The prediction is made using only ten randomly selected inputs, a tiny subset compared to that needed for classification accuracy, some of which are even incorrectly classified by the original neural network. Thus, the cost of prediction using the model is negligible.

[Figure 10: The speedup achieved by selecting the customized precision using an exhaustive search (i.e. the ideal design) and prediction using the accuracy model with accuracy evaluated for some number of configurations (model + X samples). The floating-point (FL) and fixed-point (FI) results are shown in the top and bottom rows, respectively. The model with two evaluated designs produces the same configurations, but requires <0.6% of the search time.]

We observe that, in all cases, the accuracy model combined with the evaluation of just two customized precision configurations provides the same result as the exhaustive search. Evaluating two designs out of 340 is 170× faster than exhaustively evaluating all designs. When only one configuration is evaluated instead of two (i.e. a further 50% reduction in search time), the selected customized precision setting never violates the target accuracy, but concedes a small amount of performance. Finally, we note that our search mechanism, without evaluating inference accuracy for any of the design points, provides a representative prediction of the optimal customized precision setting. Although occasionally violating the target accuracy (i.e. the cases where the speedup is higher than the exhaustive search), this prediction can be used to gauge the amenability of the NN to customized precision without investing any considerable amount of time in experimentation.

Speedup. We present the final speedup produced by our search method in Figure 11 when the algorithm is configured for 99% target accuracy and to use two samples for refinement. In all cases, the chosen customized precision configuration meets the targeted accuracy constraint. In most cases, we find that the larger networks require more precision (DNNs are sorted from left to right in descending order based on size). VGG requires less precision than expected, but VGG also uses smaller convolution kernels than all of the other DNNs except LeNet-5.

[Figure 11: The speedup resulting from searching for the fastest setting with less than 1% inference accuracy degradation. All selected customized precision DNNs meet this accuracy constraint.]

5 RELATED WORK

To the best of our knowledge, our work is the first to examine the impact of numeric representations on the accuracy-efficiency trade-offs on large-scale, deployed DNNs with over half a million neurons (GoogLeNet, VGG, AlexNet), whereas prior work has only reported results on much smaller networks such as CIFARNET and LeNet-5 Cavigelli et al. (2015); Chen et al. (2014); Courbariaux et al. (2014); Du et al. (2014); Gupta et al. (2015); Muller & Indiveri (2015). Many of these works focused on fixed-point computation due to the fixed-point representation working well on small-scale neural networks.
We find very different conclusions when considering production-ready DNNs.

Other recent works have looked at alternative neural network implementations, such as spiking neural networks, for more efficient hardware implementation Conti & Benini (2015); Diehl & Cook (2014). This is a very different computational model that requires the redevelopment of standard DNNs, unlike our proposed methodologies. Other works have proposed several approaches to improve performance and reduce the energy consumption of deep neural networks by taking advantage of the fact that DNNs usually contain redundancies Chen et al. (2015); Figurnov et al. (2015).

6 CONCLUSION

In this work, we introduced the importance of carefully considering customized precision when realizing neural networks. We show that using the IEEE 754 single-precision floating-point representation in hardware results in surrendering substantial performance. On the other hand, picking a configuration that has lower precision than optimal will result in severe accuracy loss. By reconsidering the representation from the ground up in designing custom-precision hardware and using our search technique, we find an average speedup across deployable DNNs, including GoogLeNet and VGG, of 7.6× with less than 1% degradation in inference accuracy.

REFERENCES

Lukas Cavigelli, David Gschwend, Christoph Mayer, Samuel Willi, Beat Muheim, and Luca Benini. Origami: A convolutional network accelerator. In 25th Edition of the Great Lakes Symposium on VLSI, pp. 199-204, 2015.

Wenlin Chen, James Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. In 32nd International Conference on Machine Learning, pp. 2285-2294, 2015.

Yunji Chen, Tao Luo, Shaoli Liu, Shijin Zhang, Liqiang He, Jia Wang, Ling Li, Tianshi Chen, Zhiwei Xu, Ninghui Sun, et al. DaDianNao: A machine-learning supercomputer. In 47th International Symposium on Microarchitecture, pp. 609-622, 2014.

Francesco Conti and Luca Benini. A ultra-low-energy convolution engine for fast brain-inspired vision in multicore clusters. In Design, Automation & Test in Europe, 2015.

Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Low precision arithmetic for deep learning. CoRR, abs/1412.7024, 2014. URL http://arxiv.org/abs/1412.7024.

Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior, Paul Tucker, Ke Yang, Quoc V. Le, et al. Large scale distributed deep networks. In Advances in Neural Information Processing Systems, pp. 1223-1231, 2012.

Peter U. Diehl and Matthew Cook. Efficient implementation of STDP rules on SpiNNaker neuromorphic hardware. In International Joint Conference on Neural Networks, 2014.

Zidong Du, Krishna Palem, Avinash Lingamneni, Olivier Temam, Yunji Chen, and Chengyong Wu. Leveraging the error resilience of machine-learning applications for designing highly energy efficient accelerators. In 19th Asia and South Pacific Design Automation Conference, 2014.

Clement Farabet, Camille Couprie, Laurent Najman, and Yann LeCun. Learning hierarchical features for scene labeling. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(8):1915-1929, 2013.

Michael Figurnov, Dmitry Vetrov, and Pushmeet Kohli. PerforatedCNNs: Acceleration through elimination of redundant convolutions. arXiv preprint arXiv:1504.08362, 2015.

Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision.
In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 1737-1746, 2015.

Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, et al. Deep Speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567, 2014.

Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Computer Science Department, University of Toronto, Tech. Rep., 1(4):7, 2009.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Lorenz K. Muller and Giacomo Indiveri. Rounding methods for neural networks with low resolution synaptic weights. arXiv preprint arXiv:1504.05767, 2015.

Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted Boltzmann machines. In 27th International Conference on Machine Learning, 2010.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958, 2014.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-9, 2015.

Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, and Lars Wolf. DeepFace: Closing the gap to human-level performance in face verification. In Computer Vision and Pattern Recognition (CVPR), pp. 1701-1708, 2014.
END-TO-END ANSWER CHUNK EXTRACTION AND RANKING FOR READING COMPREHENSION

Yang Yu, Wei Zhang, Bowen Zhou, Kazi Hasan, Mo Yu, Bing Xiang [Note: the first two authors contributed equally.]
{yu, zhangwei, zhou, kshasan, yum, bingxia}@us.ibm.com
IBM Watson, Yorktown Heights, NY, USA

ABSTRACT

This paper proposes dynamic chunk reader (DCR), an end-to-end neural reading comprehension (RC) model that is able to extract and rank a set of answer candidates from a given document to answer questions. DCR is able to predict answers of variable lengths, whereas previous neural RC models primarily focused on predicting single tokens or entities. DCR encodes a document and an input question with recurrent neural networks, and then applies a word-by-word attention mechanism to acquire question-aware representations for the document, followed by the generation of chunk representations and a ranking module to propose the top-ranked chunk as the answer. Experimental results show that DCR achieves a 66.3% exact match score and a 74.7% F1 score on the Stanford Question Answering Dataset (Rajpurkar et al., 2016).

1 INTRODUCTION

Reading comprehension-based question answering (RCQA) is the task of answering a question with a chunk of text taken from related document(s). A variety of neural models have been proposed recently, either for extracting a single entity or a single token as an answer from a given text (Hermann et al., 2015; Kadlec et al., 2016; Trischler et al., 2016b; Dhingra et al., 2016; Chen et al., 2016; Sordoni et al., 2016; Cui et al., 2016a), or for selecting the correct answer by ranking a small set of human-provided candidates (Yin et al., 2016; Trischler et al., 2016a). In both cases, an answer boundary is either easy to determine or already given.

Different from the above two assumptions for RCQA, in the real-world QA scenario, people may ask questions about both entities (factoid) and non-entities such as explanations and reasons (non-factoid); see Table 1 for examples.

In this regard, RCQA has the potential to complement other QA approaches that leverage structured data (e.g., knowledge bases) for both of the above question types. This is because RCQA can exploit textual evidence to ensure increased answer coverage, which is particularly helpful for non-factoid answers. However, it is also challenging for RCQA to identify an answer at an arbitrary position in the passage with arbitrary length, especially for non-factoid answers, which might be clauses or sentences. As a result, apart from a few exceptions (Rajpurkar et al., 2016; Wang & Jiang, 2016), this research direction has not been fully explored yet.

Compared to the relatively easier RC task of predicting single tokens/entities [Footnote 1: State-of-the-art RC models have a decent accuracy of 70% on the widely used CNN/DailyMail dataset (Hermann et al., 2015).], predicting answers of arbitrary lengths and positions significantly increases the search-space complexity: the number of possible candidates to consider is on the order of O(n²), where n is the number of passage words. In contrast, for previous works in which answers are single tokens/entities or come from candidate lists, the complexity is O(n) or the size of the candidate list l (usually l ≤ 5), respectively. To address the above complexity, Rajpurkar et al.
(Rajpurkar et al., 2016) used a two-step chunk-and-rank approach that employs a rule-based algorithm to extract answer candidates from a passage, followed by a ranking approach with hand-crafted features to select the best answer.

Table 1: Example of questions (with answers) which can potentially be answered with RC on a Wikipedia passage. The first question is factoid, asking for an entity. The second and third are non-factoid.

  The United Kingdom (UK) intends to withdraw from the European Union (EU),
  a process commonly known as Brexit, as a result of a June 2016 referendum in
  which 51.9% voted to leave the EU. The separation process is complex, causing
  political and economic changes for the UK and other countries. As of September
  2016, neither the timetable nor the terms for withdrawal have been established: in
  the meantime, the UK remains a full member of the European Union. The term
  "Brexit" is a portmanteau of the words "British" and "exit".

  Q1. Which country withdrew from EU in 2016?
  A1. United Kingdom
  Q2. How did UK decide to leave the European Union?
  A2. as a result of a June 2016 referendum in which 51.9% voted to leave the EU
  Q3. What has not been finalized for Brexit as of September 2016?
  A3. neither the timetable nor the terms for withdrawal

The rule-based chunking approach suffered from low coverage (~70% recall of answer chunks) that cannot be improved during training, and candidate ranking performance depends greatly on the quality of the hand-crafted features. More recently, Wang and Jiang (Wang & Jiang, 2016) proposed two end-to-end neural network models, one of which chunks a candidate answer by predicting the answer's two boundary indices, while the other classifies each passage word into answer/not-answer. Both models improved significantly over the method proposed by Rajpurkar et al. (Rajpurkar et al., 2016).

Our proposed model, called dynamic chunk reader (DCR), not only significantly differs from both the above systems in the way that answer candidates are generated and ranked, but also shares merits with both works. First, our model uses deep networks to learn better representations for candidate answer chunks, instead of using fixed feature representations as in (Rajpurkar et al., 2016). Second, it represents answer candidates as chunks, as in (Rajpurkar et al., 2016), instead of word-level representations (Wang & Jiang, 2016), to make the model aware of the subtle differences among candidates (importantly, overlapping candidates).

The contributions of this paper are three-fold. (1) We propose a novel neural network model for joint candidate answer chunking and ranking, where the candidate answer chunks are dynamically constructed and ranked in an end-to-end manner. (2) We propose a new question-attention mechanism to enhance passage word representation, which is subsequently used to construct chunk representations. (3) We also propose several simple but effective features to strengthen the attention mechanism, which fundamentally improves candidate ranking, with the by-product of higher exact boundary match accuracy.

The experiments on the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016), which contains a variety of human-generated factoid and non-factoid questions, have shown the effectiveness of the above three contributions.

Our paper is organized as follows.
We first formally define the RCQA problem. Next, we describe our baseline with a neural network component. We then present the end-to-end dynamic chunk reader model. Finally, we analyze our experimental results and discuss related work. In the appendix, we give the formal equations and details of the model.

2 PROBLEM DEFINITION

Table 1 shows an example of our RC setting, where the goal is to answer a question $Q_i$, factoid (Q1) or non-factoid (Q2 and Q3), based on a supporting passage $P_i$, by selecting a continuous sequence of text $A_i \subseteq P_i$ as the answer. $Q_i$, $P_i$, and $A_i$ are all word sequences, where each word is drawn from a vocabulary $V$. The $i$-th instance in the training set is a triple $(P_i, Q_i, A_i)$, where $P_i = (p_{i1}, \ldots, p_{i|P_i|})$, $Q_i = (q_{i1}, \ldots, q_{i|Q_i|})$, and $A_i = (a_{i1}, \ldots, a_{i|A_i|})$ with $p_i, q_i, a_i \in V$. Owing to disagreement among annotators, there can be more than one correct answer for the same question; the $k$-th answer to $Q_i$ is denoted by $A_i^k = \{a_{i1}^k, \ldots, a_{i|A_i^k|}^k\}$. An answer candidate for the $i$-th training example is defined as $c_i^{m,n}$, a sub-sequence of $P_i$ that spans from position $m$ to $n$ ($1 \le m \le n \le |P_i|$). The ground truth answer $A_i$ may be included in the set of all candidates $C_i = \{c_i^{m,n} \mid \forall m, n \in \mathbb{N}^+, \; subj(m, n, P_i) \text{ and } 1 \le m \le n \le |P_i|\}$, where $subj(m, n, P_i)$ is a constraint put on the candidate chunk for $P_i$, such as "$c_i^{m,n}$ can have at most 10 tokens" or "$c_i^{m,n}$ must match a pre-defined POS pattern". To evaluate a system's performance, its top answer to a question is matched against the corresponding gold standard answer(s).

Remark: Categories of RC Tasks. Other, simpler variants of the above RC task have been explored in the past. For example, quiz-style datasets (e.g., MCTest (Richardson et al., 2013), MovieQA (Tapaswi et al., 2015)) have multiple-choice questions with answer options. Cloze-style datasets (Hermann et al., 2015; Hill et al., 2015; Onishi et al., 2016), usually automatically generated, have factoid "questions" created by replacing the answer in a sentence from the text with a blank. For the answer selection task this paper focuses on, several datasets exist, e.g., TREC-QA for factoid answer extraction from multiple given passages, bAbI (Weston et al., 2014) designed for inference, and the SQuAD dataset (Rajpurkar et al., 2016) used in this paper. To the best of our knowledge, SQuAD is the only dataset for both factoid and non-factoid answer extraction with a question distribution close to that of real-world applications.

3 BASELINE: CHUNK-AND-RANK PIPELINE WITH NEURAL RC

In this section we modify a state-of-the-art RC system for cloze-style tasks for our answer-extraction purpose, both to measure the gap between the two types of tasks and to motivate our end-to-end system in the next section. To make the cloze-style RC system produce chunk-level decisions, we use the RC model to generate features for chunks, which are then fed into a feature-based ranker as in (Rajpurkar et al., 2016). As a result, this baseline can be viewed as a deep-learning-based counterpart of the system in (Rajpurkar et al., 2016).
It has two main components: 1) a stand-alone answer chunker, which is trained to produce overlapping candidate chunks, and 2) a neural RC model, which is used to score each word in a given passage; the word scores are then used to generate chunk scores.

Answer Chunking. To reduce the errors made by the rule-based chunker in (Rajpurkar et al., 2016), we first capture the part-of-speech (POS) patterns of all answer sub-sequences in the training dataset to form a POS pattern trie tree, and then apply these answer POS patterns to a passage $P_i$ to acquire the collection $C_i$ of all sub-sequences (chunk candidates) whose POS patterns can be matched in the POS pattern trie. This is equivalent to a constraint $subj(m, n, P_i)$ on the candidate answer chunk generation process that only admits chunks with a POS pattern seen for answers in the training data. The sub-sequences $C_i$ are then used as answer candidates for $P_i$. Note that overlapping chunks can be generated for a passage, and we rely on the ranker to choose the best candidate based on features from the cloze-style RC system. Experiments showed that for more than 90% of the questions in the development set, the ground truth answer is included in the candidate set constructed in this manner.

Feature Extraction and Ranking. For chunk ranking, we (1) use the neural RCQA model to annotate each word $p_{ij}$ in passage $P_i$ with a score $s_{ij}$; (2) for every chunk $c_i^{m,n}$ in passage $i$, collect the scores $(s_{im}, \ldots, s_{in})$ of the words $(p_{im}, \ldots, p_{in})$ contained within $c_i^{m,n}$; and (3) extract features over the score sequence $(s_{im}, \ldots, s_{in})$ to characterize its scale and distribution, which serve as the feature representation of $c_i^{m,n}$. In step (1), to acquire $s_{ij}$ we train and apply a word-level single-layer Gated Attention Reader (Dhingra et al., 2016), which has state-of-the-art performance on the CNN/Daily Mail cloze-style RC task. (We tried using more than one layer in the Gated Attention Reader, but observed no improvement.) In step (3), for chunk $c_i^{m,n}$ we designed 5 features, namely 4 statistics over $(s_{im}, \ldots, s_{in})$: maximum, minimum, average, and sum; plus the count of matched POS patterns within the chunk, which serves as an answer prior. We feed these 5 features into a state-of-the-art ranker (Ganjisaffar et al., 2011).

4 DYNAMIC CHUNK READER

The dynamic chunk reader (DCR) model is presented in Figure 1. Inspired by the baseline we built, DCR is expected to be superior to the baseline for three reasons. First, each chunk has a representation constructed dynamically, instead of a set of pre-defined feature values. Second, each passage word's representation is enhanced by word-by-word attention that evaluates the relevance of the passage word to the question. Third, these components all live inside a single, end-to-end model that can be trained jointly.

Figure 1: The main components of the dynamic chunk reader model (from bottom to top) are bi-GRU encoders for passage and question, a word-by-word attention bi-GRU for the passage, dynamic chunk representations transformed from pooled dynamic chunks of hidden states, question attention on every chunk representation, and final answer chunk prediction.

DCR works in five steps.
First, the encoder layer encodes the passage and question separately, using bidirectional recurrent neural networks (RNNs). Second, the attention layer calculates the relevance of each passage word to the question. Third, the convolution layer generates unigram, bigram, and trigram representations for each word; the bigram and trigram of a word end with that word, and proper padding is applied at the first words so that the output of the CNN layer has the same length as its input. Fourth, the chunk representation layer dynamically extracts the candidate chunks from the given passage and creates a chunk representation that encodes the contextual information of each chunk. Fifth, the ranker layer scores the relevance between the representation of a chunk and the given question, and ranks all candidate chunks using a softmax layer. We describe each step below.

Encoder Layer. We use a bi-directional RNN encoder to encode $P_i$ and $Q_i$ of example $i$, and obtain a hidden state for each word position $p_{ij}$ and $q_{ik}$. (We could use separate parameters for the question and passage encoders, but a single encoder shared between the two worked better in our experiments.) As RNN input, a word is represented by a row vector $x \in \mathbb{R}^n$; $x$ can be the concatenation of a word embedding and word features (see Fig. 1). The word vector for the $t$-th word is $x_t$. A word sequence is processed using an RNN encoder with gated recurrent units (GRU) (Cho et al., 2014), which have proved effective in RC and neural machine translation tasks (Bahdanau et al., 2015; Kadlec et al., 2016; Dhingra et al., 2016). For each position $t$, the GRU computes $h_t$ from input $x_t$ and previous state $h_{t-1}$ as:

$r_t = \sigma(W^r x_t + U^r h_{t-1})$ (1)
$u_t = \sigma(W^u x_t + U^u h_{t-1})$ (2)
$\tilde{h}_t = \tanh(W x_t + U(r_t \odot h_{t-1}))$ (3)
$h_t = (1 - u_t) \odot h_{t-1} + u_t \odot \tilde{h}_t$ (4)

where $h_t$, $r_t$, and $u_t \in \mathbb{R}^d$ are the $d$-dimensional hidden state, reset gate, and update gate, respectively; $W^{\{r,u\}}, W \in \mathbb{R}^{n \times d}$ and $U^{\{r,u\}}, U \in \mathbb{R}^{d \times d}$ are the parameters of the GRU; $\sigma$ is the sigmoid function, and $\odot$ denotes the element-wise product. For the word at position $t$, we use the hidden state $\overrightarrow{h}_t$ from the forward RNN as a representation of the preceding context, and $\overleftarrow{h}_t$ from a backward RNN that encodes the text in reverse to incorporate the context after $t$. Their concatenation $h_t = [\overrightarrow{h}_t; \overleftarrow{h}_t]$ forms the bi-directional contextual encoding of $x_t$, where $[\cdot;\cdot]$ is the concatenation operator. To distinguish hidden states from different sources, we denote the $h_j$ of the $j$-th word in $P$ and the $h_k$ of the $k$-th word in $Q$ as $h^p_j$ and $h^q_k$, respectively.

Attention Layer. Attention mechanisms in previous RC work (Kadlec et al., 2016; Hermann et al., 2015; Sordoni et al., 2016; Dhingra et al., 2016; Cui et al., 2016a;b) enable question-aware passage representations. We propose a novel attention mechanism inspired by word-by-word style attention methods (Rocktäschel et al., 2015; Wang & Jiang, 2015; Santos et al., 2016). For each $p_j$, a question-attended representation $v_j$ is computed as follows (the example index $i$ is omitted for simplicity):

$\alpha_{jk} = h^p_j \cdot h^q_k$ (5)
$\beta_j = \sum_{k=1}^{|Q|} \alpha_{jk} h^q_k$ (6)
$v_j = [h^p_j; \beta_j]$ (7)

where $h^p_j$ and $h^q_k$ are hidden states from the bi-directional RNN encoders (see Figure 1). The inner product $\alpha_{jk}$ between $h^p_j$ and every question word state $h^q_k$ indicates how well the passage word $p_j$ matches each question word $q_k$. $\beta_j$ is a weighted pooling of the $|Q|$ question hidden states, which serves as a $p_j$-aware question representation. (A small sketch of this computation is given below.)
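As an illustration, here is a minimal NumPy sketch of the attention step in Eqs. (5)-(7); the array shapes and toy inputs are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def word_by_word_attention(Hp, Hq):
    """Question-aware passage representations, following Eqs. (5)-(7).

    Hp: (P, 2d) array of bi-GRU states for the passage words.
    Hq: (Q, 2d) array of bi-GRU states for the question words.
    Returns V: (P, 4d) array whose j-th row is [h_j^p ; beta_j].
    """
    alpha = Hp @ Hq.T          # Eq. (5): alpha[j, k] = <h_j^p, h_k^q>
    beta = alpha @ Hq          # Eq. (6): weighted pooling of question states
    return np.concatenate([Hp, beta], axis=1)  # Eq. (7)

# Toy usage: a 7-word passage and a 4-word question with 2d = 6.
rng = np.random.default_rng(0)
V = word_by_word_attention(rng.normal(size=(7, 6)), rng.normal(size=(4, 6)))
print(V.shape)  # (7, 12)
```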
The concatenation of $h^p_j$ and $\beta_j$ yields a passage-question joint representation $v_j \in \mathbb{R}^{4d}$. (We tried another word-by-word attention method as in (Santos et al., 2016), which feeds a similar passage representation to the question side. However, this did not lead to improvement, due to the confusion caused by long passages in RC; consequently, we use the proposed simplified word-by-word attention on the passage side only.) Next, we apply a second bi-GRU layer taking the $v_j$ as inputs, and obtain forward and backward representations $\overrightarrow{\gamma}_j$ and $\overleftarrow{\gamma}_j \in \mathbb{R}^d$, and in turn their concatenation $\gamma_j = [\overrightarrow{\gamma}_j; \overleftarrow{\gamma}_j]$.

Convolution Layer. After the attention-layer RNN, every word is encoded with the complete passage context. We model richer word representations by introducing unigram, bigram, and trigram representations. This enhanced representation has two benefits: 1) each word is enriched with local context information that helps identify the boundary of the answer chunk (using previous words is a common feature in POS tagging and named entity recognition); and 2) the information the n-gram brings into the word representation can strengthen the semantic match between the interior of the answer chunk and the question. Imagine a three-word candidate whose last word's representation includes the two previous words through the convolution layer: matching the last word then also matches the semantics of the interior of the chunk. Specifically, we create three representations for every word position $j$, using n-grams ending with the hidden state $\gamma_j$:

$\tilde{\gamma}^1_j = \gamma_j W^{c_1}$ (8)
$\tilde{\gamma}^2_j = [\gamma_{j-1}; \gamma_j] W^{c_2}$ (9)
$\tilde{\gamma}^3_j = [\gamma_{j-2}; \gamma_{j-1}; \gamma_j] W^{c_3}$ (10)

We use three different convolution kernels, one per n-gram order.

Chunk Representation Layer. A candidate answer chunk representation is dynamically created from the convolution layer output. We first decide the text boundary of the candidate chunk, and then form a chunk representation using all or part of the outputs $\tilde{\gamma}_j$ inside the chunk. To decide a candidate chunk (boundary), we tried two approaches: (1) adopt the POS-trie-based approach used in our baseline, and (2) enumerate all possible chunks up to a maximum number of tokens. For (2), we create up to $N$ (the maximum chunk length) chunks starting from each position $j$ in $P_i$. Approach (1) can generate candidates of arbitrary length, but fails to recall candidates whose POS pattern is unseen in the training set; approach (2) considers all possible candidates within a window and is more flexible, but over-generates invalid candidates.

For a candidate answer chunk $c^{m,n}$ spanning positions $m$ to $n$ inclusive, we construct a chunk representation $\gamma^l_{m,n} \in \mathbb{R}^{2d}$ from the $\tilde{\gamma}^l_j$ within range $[m, n]$, using a function $g(\cdot)$, for $l \in \{1, 2, 3\}$. Formally,

$\gamma^l_{m,n} = g(\tilde{\gamma}^l_m, \ldots, \tilde{\gamma}^l_n)$

Each $\tilde{\gamma}^l_j$ is a convolution output over concatenated forward and backward RNN hidden states from the attention layer, so the first half of $\tilde{\gamma}^l_j$ encodes information from the forward RNN hidden states and the second half from the backward RNN hidden states. We experimented with several pooling functions (e.g., max, average) for $g(\cdot)$, and found that, instead of pooling, the best $g(\cdot)$ is to concatenate the first half of the convolution output of the chunk's first word and the second half of the convolution output of the chunk's last word. Formally,

$\gamma^l_{m,n} = g(\tilde{\gamma}^l_m, \ldots, \tilde{\gamma}^l_n) = [\overrightarrow{\tilde{\gamma}}^l_m; \overleftarrow{\tilde{\gamma}}^l_n]$ (11)

where $\overrightarrow{\tilde{\gamma}}^l_m$ is the half of the hidden state for the $l$-gram word representation corresponding to the forward attention RNN output. (A sketch of this chunk construction is shown below.)
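A minimal NumPy sketch of the chunk enumeration and the concatenation in Eq. (11), assuming candidates are enumerated up to a maximum length as in approach (2); the names and toy sizes are illustrative:

```python
import numpy as np

def chunk_representations(Gamma_l, max_len):
    """Candidate chunks and their representations, following Eq. (11).

    Gamma_l: (T, 2d) array of convolution outputs for one n-gram order l,
    whose first half per row comes from the forward direction and second
    half from the backward direction of the attention-layer bi-GRU.
    Returns a dict {(m, n): vector of size 2d} for chunks of <= max_len tokens.
    """
    T, two_d = Gamma_l.shape
    d = two_d // 2
    chunks = {}
    for m in range(T):
        for n in range(m, min(m + max_len, T)):
            # forward half of the first word + backward half of the last word
            chunks[(m, n)] = np.concatenate([Gamma_l[m, :d], Gamma_l[n, d:]])
    return chunks

reps = chunk_representations(np.random.randn(12, 8), max_len=4)
print(len(reps), next(iter(reps.values())).shape)  # 42 chunks of size (8,)
```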
We hypothesize that the hidden states at the two ends can represent the chunk's contexts, which are critical for this task, better than the states within the chunk. This observation also agrees with (Kobayashi et al., 2016).

Ranker Layer. A score $s^l_{m,n}$ for each $l$-gram chunk representation $\gamma^l_{m,n}$, denoting the probability of that chunk being the true answer, is calculated as a dot product with the question representation. The question representation is the concatenation of the last hidden state of the forward RNN and the first hidden state of the backward RNN. Formally, for the chunk $c^{m,n}_i$ we have

$s_l(c^{m,n}_i \mid P_i, Q_i) = \gamma^l_{m,n} \cdot [\overrightarrow{h}^{Q_i}_{|Q_i|}; \overleftarrow{h}^{Q_i}_1]$ (12)

where $s_l$ denotes the score generated from the $l$-gram representation, and $\overrightarrow{h}^{Q_i}_k$ and $\overleftarrow{h}^{Q_i}_k$ are the $k$-th hidden states output by question $Q_i$'s forward and backward RNN encoders, respectively. The final score for $c^{m,n}_i$ is then a linear combination of the three scores, followed by a softmax:

$s(c^{m,n}_i \mid P_i, Q_i) = \mathrm{softmax}(W [s_1; s_2; s_3])$ (13)

where $s_l$ is shorthand for $s_l(c^{m,n}_i \mid P_i, Q_i)$ and $W \in \mathbb{R}^3$. At run time, the chunk with the highest probability is taken as the answer. During training, the following negative log-likelihood is minimized:

$\mathcal{L} = -\sum_{i=1}^{N} \log P(A_i \mid P_i, Q_i)$ (14)

Note that the $i$-th training instance is only used when $A_i$ is included in the corresponding candidate chunk set $C_i$, i.e., $\exists\, m, n:\ A_i = c^{m,n}_i$. The softmax in the final layer serves as a list-wise ranking module, similar in spirit to (Cao et al., 2007).

5 EXPERIMENTS

Dataset. We used the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) for our experiments. We chose SQuAD because it is a mix of factoid and non-factoid questions, is real-world (crowd-sourced) data, and is of large scale (over 100K question-answer pairs collected from 536 Wikipedia articles). Answers range from single words to long, variable-length phrases and clauses. It relaxes the assumptions made by the cloze-style and quiz-style RC datasets discussed in the Problem Definition section.

Table 2: Results on the SQuAD dataset.
Models            Dev EM   Dev F1   Test EM   Test F1
Rajpurkar 2016    39.8%    51.0%    40.4%     51.0%
Wang 2016         59.1%    70.0%    59.5%     70.3%
DCR w/o Conv.     62.5%    71.2%    62.5%     71.0%
DCR               63.4%    72.3%    -         -
DCR Ensemble      66.3%    74.7%    -         -

Features. The input vector representation of each word $w$ fed to the encoder RNNs has six parts: a pre-trained 300-dimensional GloVe embedding (Pennington et al., 2014) and five features (see Figure 1): (1) a one-hot encoding (46 dimensions) of the part-of-speech (POS) tag of $w$; (2) a one-hot encoding (14 dimensions) of the named entity (NE) tag of $w$; (3) a binary value indicating whether $w$'s surface form matches any word in the question; (4) a binary value indicating whether the lemma of $w$ matches any question-word lemma; and (5) a binary value indicating whether $w$ is capitalized. Features (3) and (4) are designed to help the model align the passage text with the question. Note that some question types (e.g., "who", "when" questions) have answers with specific POS/NE tag patterns; for instance, "who" questions mostly have proper nouns/persons as answers, and "when" questions frequently have numbers/dates (e.g., a year) as answers. Thus, we believe the model can exploit the correlation between question types and answer POS/NE patterns more easily with the POS and NE tag features.

Implementation Details. We pre-processed the SQuAD dataset using the Stanford CoreNLP toolkit (stanfordnlp.github.io/CoreNLP/; Manning et al., 2014) with its default settings to tokenize the text and obtain the POS and NE annotations. (A small sketch of the per-word input features is given below.)
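A minimal sketch of assembling the per-word input vector described above; the small tag inventories and embedding table here are toy stand-ins for CoreNLP's tag sets and the real GloVe vectors:

```python
import numpy as np

# Hypothetical stand-ins: the paper uses 46 POS tags, 14 NE tags, 300-d GloVe.
POS_INDEX = {"NN": 0, "NNP": 1, "VBD": 2, "CD": 3}   # toy subset
NE_INDEX = {"O": 0, "PERSON": 1, "DATE": 2}          # toy subset
GLOVE = {"brexit": np.full(300, 0.1), "referendum": np.full(300, 0.2)}

def word_features(word, lemma, pos, ne, q_words, q_lemmas):
    """Per-word input vector: GloVe embedding plus features (1)-(5)."""
    emb = GLOVE.get(word.lower(), np.zeros(300))
    pos_1hot = np.zeros(len(POS_INDEX)); pos_1hot[POS_INDEX[pos]] = 1.0  # (1)
    ne_1hot = np.zeros(len(NE_INDEX)); ne_1hot[NE_INDEX[ne]] = 1.0       # (2)
    binary = np.array([
        float(word.lower() in q_words),   # (3) surface-form match
        float(lemma in q_lemmas),         # (4) lemma match
        float(word[:1].isupper()),        # (5) capitalization
    ])
    return np.concatenate([emb, pos_1hot, ne_1hot, binary])

x = word_features("Brexit", "brexit", "NNP", "O",
                  q_words={"brexit", "when"}, q_lemmas={"brexit", "when"})
print(x.shape)  # (310,) with the toy tag inventories
```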
To train our model, we used stochastic gradient descent with the ADAM optimizer (Kingma & Ba, 2014) and an initial learning rate of 0.001. All GRU weights were initialized from a uniform distribution over (-0.01, 0.01). The hidden state size $d$ was set to 300 for all GRUs. The question bi-GRU shared parameters with the passage bi-GRU, while the attention-based passage bi-GRU had its own parameters. We shuffled all training examples at the beginning of each epoch and adopted a curriculum-learning approach (Bengio et al., 2009), sorting training instances by length within every 10 batches, so that the model starts learning from relatively easy instances before moving to harder ones. We applied dropout with rate 0.2 to the embedding layer of the input bi-GRU encoder, and clipped gradients when their norm exceeded 10. We trained with mini-batches of size 180 and applied zero-padding to the passage and question inputs in each batch. We also set the maximum passage length to 300 tokens and pruned all tokens after the 300th token in the training set to save memory and speed up training; this step reduced the training set size by about 1.6%. At test time we use the full-length passage, so that no potential candidates are pruned. We trained the model for at most 30 epochs, and stopped early if accuracy did not improve for 10 epochs.

For the feature-ranking-based system, we used the jforest ranker (Ganjisaffar et al., 2011) with the LambdaMART-RegressionTree algorithm, with NDCG@10 as the ranking metric. For the Gated Attention Reader in the baseline system, we replicated the method using the same configuration as in (Dhingra et al., 2016).

Results. Table 2 shows our main results on the SQuAD dataset. Compared to the scores reported in (Wang & Jiang, 2016), our exact match (EM) and F1 on the development set and EM score on the test set are better, and F1 on the test set is comparable. We also studied how each component of our model contributes to the overall performance; Table 3 shows the details as well as the results of the baseline ranker. As the first row of Table 3 shows, our baseline system improves 10% (EM) over Rajpurkar et al. (Rajpurkar et al., 2016) (Table 2, row 1), the feature-based ranking system. However, compared to our DCR model (Table 3, row 2), the baseline (row 1) is more than 12% (EM) behind, even though it is based on the state-of-the-art model for cloze-style RC tasks. This can be attributed to DCR's more advanced model structure and end-to-end training.

Table 3: Detailed system experiments on the SQuAD development set.
Models                               EM       F1
Chunk-and-Rank Pipeline Baseline     49.7%    64.9%
DCR w/o Convolution                  62.5%    71.2%
DCR w/o Word-by-Word Attention       57.6%    68.7%
DCR w/o POS feature (1)              59.2%    68.8%
DCR w/o NE feature (2)               60.4%    70.2%
DCR w/o Question-word feature (3)    59.5%    69.0%
DCR w/o Question-lemma feature (4)   61.2%    69.9%
DCR w/o Capitalized feature (5)      61.5%    70.6%
DCR w/o Conv. w/ POS-trie            62.1%    70.8%

Figure 2: (a) Variation of DCR performance with ground-truth answer length (up to 10) in the development set; the curve with diamond knots also shows the percentage of answers of each length in the development set. (b) Performance comparison for different question head words.

We also ran ablation tests on our DCR model.
First, replacing the word-by-word attention with Attentive Reader style attention (Hermann et al., 2015) decreases the EM score by about 4.5%, showing the strength of our proposed attention mechanism. Second, we removed input features one at a time to measure each feature's contribution; the results show that the POS feature (1) and the question-word feature (3) are the two most important. Finally, combining the DCR model with the proposed POS-trie constraints yields a score similar to that of the DCR model over all possible n-gram chunks. This shows that (1) our chunk representations are powerful enough to differentiate even a huge number of chunks when no constraints are applied, and (2) the proposed POS-trie reduces the search space at the cost of a small drop in performance.

Analysis. To better understand our system, we computed the accuracy of the attention mechanism of the Gated Attention Reader used in our deep-learning-based baseline. It is 72% accurate, i.e., 72% of the time the word with the highest attention score lies inside the correct answer span. This means that if we could accurately detect the boundary around the word with the highest attention score to form the answer span, we could achieve an accuracy close to 72%. In addition, we checked the answer recall of our candidate chunking approach: with a window size of 10, 92% of the time the ground truth answer is included in the extracted candidate chunk set. The upper bound on the exact match score of our baseline system is therefore around 66% (92% answer recall x 72% attention accuracy). Our DCR system's exact match score is 62%, which shows that DCR is proficient at differentiating answer spans dynamically.

To further analyze performance when predicting answers of different lengths, we show the exact match (EM) and F1 scores for answers of up to 10 tokens in Figure 2(a). With increasing answer length, both EM and F1 drop, but at different speeds: the gap between F1 and exact match widens as the answer length increases. Nevertheless, the model still yields decent accuracy when the answer is longer than a single word. Additionally, Figure 2(b) shows that the system is better at "when" and "who" questions, but performs poorly on "why" questions. The large gap between exact match and F1 on "why" questions means that perfectly identifying the span is harder than locating the core of the answer span.

Since "what", "which", and "how" questions cover a broad range of question types, we split them further by the bigram each question starts with; Figure 3 shows the breakdown for "what" questions. "What" questions asking for explanations, such as "what happens" and "what happened", have lower EM and F1 scores. In contrast, "what" questions asking for years and numbers have much higher scores, and for these questions exact match scores are close to F1 scores, which means chunking these answers is easier for DCR.

Figure 3: Development set performance comparison for different types of "what" questions (considering the types with more than 20 examples in the development set).

6 RELATED WORK

The Attentive Reader was the first neural model for factoid RCQA (Hermann et al., 2015). It uses bidirectional RNNs (Cho et al., 2014; Chung et al., 2014) to encode the document and query separately, and uses the query representation to match against every token of the document.
The Attention Sum Reader (Kadlec et al., 2016) simplifies the model to directly predicting the positions of the correct answer in the document, greatly improving both training speed and test accuracy on the CNN/Daily Mail dataset. (Chen et al., 2016) also simplified the Attentive Reader and reported higher accuracy. Window-based Memory Networks (MemN2N), introduced along with the CBT dataset (Hill et al., 2015), do not use RNN encoders, but embed contexts as memory and match questions against the embedded contexts. The mechanism of these models is to learn a match between the answer's context and the question/query representation. In contrast, memory-enhanced neural networks such as Neural Turing Machines (Graves et al., 2014) and their variants (Zhang et al., 2015; Gulcehre et al., 2016; Zaremba & Sutskever, 2015; Chandar et al., 2016; Grefenstette et al., 2015) are also potential candidates for the task; Gulcehre et al. (Gulcehre et al., 2016) reported results on the bAbI task that are worse than those of memory networks. Similarly, sequence-to-sequence models have been used (Yu et al., 2015; Hermann et al., 2015), but they did not yield better results either.

Recently, several models have been proposed to enable more complex inference for the RC task. For instance, the gated attention model (Dhingra et al., 2016) employs a multi-layer architecture in which each layer encodes the same document, but the attention is updated from layer to layer. EpiReader (Trischler et al., 2016b) adopts a jointly trained answer extractor and reasoner, where the extractor proposes top candidates and the reasoner weighs each candidate by examining the entailment relationship between the question-answer representation and the document. An iterative alternating attention mechanism with gating strategies was proposed in (Sordoni et al., 2016) to optimize the attention over several hops. In contrast, Cui et al. (Cui et al., 2016a;b) introduced fine-grained document attention from each question word and then aggregated these attentions over question tokens by summation, with or without weights; this system achieved the state-of-the-art score on the CNN dataset. These variations all yield roughly 3-5% improvement over the Attention Sum Reader, but none surpasses that range. Other methods include dynamic entity representations with max-pooling (Kobayashi et al., 2016), which aim to adapt entity representations to context, and Weissenborn's system (Weissenborn, 2016), which separates entities from the context and then matches the question to the context, scoring an accuracy of around 70% on the CNN dataset.

However, all of these models assume that answers are single tokens, which limits the types of questions they can answer. Wang and Jiang (Wang & Jiang, 2016) proposed a match-LSTM and achieved good results on SQuAD. However, their approach predicts chunk boundaries, or whether each word is part of a chunk or not. In contrast, our approach explicitly constructs chunk representations, and similar chunks are compared directly to determine the correct answer boundaries.

7 CONCLUSION

In this paper we proposed a novel neural reading comprehension model for question answering. Unlike previously proposed models for factoid RCQA, the proposed dynamic chunk reader is not restricted to predicting a single named entity as an answer or selecting an answer from a small, pre-defined candidate list.
Instead, it is capable of answering both factoid and non-factoid questions, as it learns to select answer chunks suited to an input question. DCR achieves this goal with a joint deep-learning model enhanced with a novel attention mechanism and five simple yet effective features. Error analysis shows that the DCR model achieves good performance, but still needs to improve on predicting longer answers, which are usually non-factoid in nature.

REFERENCES

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015.
Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 41-48. ACM, 2009.
Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. Learning to rank: from pairwise approach to listwise approach. In Proceedings of the 24th International Conference on Machine Learning, pp. 129-136. ACM, 2007.
Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, and Yoshua Bengio. Hierarchical memory networks. arXiv preprint arXiv:1605.07427, 2016.
Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the CNN/Daily Mail reading comprehension task. ACL, 2016.
Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. Attention-over-attention neural networks for reading comprehension. arXiv preprint arXiv:1607.04423, 2016a.
Yiming Cui, Ting Liu, Zhipeng Chen, Shijin Wang, and Guoping Hu. Consensus attention-based neural networks for Chinese reading comprehension. arXiv preprint arXiv:1607.02250, 2016b.
Bhuwan Dhingra, Hanxiao Liu, William W. Cohen, and Ruslan Salakhutdinov. Gated-attention readers for text comprehension. arXiv preprint arXiv:1606.01549, 2016.
Yasser Ganjisaffar, Rich Caruana, and Cristina Lopes. Bagging gradient-boosted trees for high precision, low variance ranking models. pp. 85-94, 2011. doi: 10.1145/2009916.2009932.
Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.
Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pp. 1828-1836, 2015.
Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, and Yoshua Bengio. Dynamic neural Turing machine with soft and hard addressing schemes. arXiv preprint arXiv:1607.00036, 2016.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1693-1701, 2015.
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The Goldilocks principle: Reading children's books with explicit memory representations. arXiv preprint arXiv:1511.02301, 2015.
Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text understanding with the attention sum reader network. ACL, 2016.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Sosuke Kobayashi, Ran Tian, Naoaki Okazaki, and Kentaro Inui. Dynamic entity representations with max-pooling improves machine reading. NAACL-HLT, 2016.
Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pp. 55-60, 2014. URL http://www.aclweb.org/anthology/P/P14/P14-5010.
T. Onishi, H. Wang, M. Bansal, K. Gimpel, and D. McAllester. Who did What: A large-scale person-centered cloze dataset. In Proc. of EMNLP, 2016.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global vectors for word representation. In EMNLP, volume 14, pp. 1532-1543, 2014.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.
Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. MCTest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, volume 3, pp. 4, 2013.
Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, and Phil Blunsom. Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664, 2015.
Cicero dos Santos, Ming Tan, Bing Xiang, and Bowen Zhou. Attentive pooling networks. arXiv preprint arXiv:1602.03609, 2016.
Alessandro Sordoni, Phillip Bachman, and Yoshua Bengio. Iterative alternating neural attention for machine reading. arXiv preprint arXiv:1606.02245, 2016.
Makarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. MovieQA: Understanding stories in movies through question-answering. arXiv preprint arXiv:1512.02902, 2015.
Adam Trischler, Zheng Ye, Xingdi Yuan, Jing He, Phillip Bachman, and Kaheer Suleman. A parallel-hierarchical model for machine comprehension on sparse data. arXiv preprint arXiv:1603.08884, 2016a.
Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the EpiReader. arXiv preprint arXiv:1606.02270, 2016b.
Shuohang Wang and Jing Jiang. Learning natural language inference with LSTM. arXiv preprint arXiv:1512.08849, 2015.
Shuohang Wang and Jing Jiang. Machine comprehension using match-LSTM and answer pointer. arXiv preprint arXiv:1608.07905, 2016.
Dirk Weissenborn. Separating answers from queries for neural reading comprehension. arXiv preprint arXiv:1607.03316, 2016.
Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. CoRR, abs/1410.3916, 2014. URL http://arxiv.org/abs/1410.3916.
Wenpeng Yin, Sebastian Ebert, and Hinrich Schütze. Attention-based convolutional neural network for machine comprehension. arXiv preprint arXiv:1602.04341, 2016.
Yang Yu, Wei Zhang, Chung-Wei Hang, and Bowen Zhou. Empirical study on deep learning models for question answering. arXiv preprint arXiv:1510.07526, 2015.
Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural Turing machines. arXiv preprint arXiv:1505.00521, 2015.
Wei Zhang, Yang Yu, and Bowen Zhou. Structured memory for neural Turing machines. arXiv preprint arXiv:1510.03931, 2015.
Published as a conference paper at ICLR 2017

EFFICIENT VECTOR REPRESENTATION FOR DOCUMENTS THROUGH CORRUPTION

Minmin Chen
Criteo Research
Palo Alto, CA 94301, USA
m.chen@criteo.com

ABSTRACT

We present an efficient document representation learning framework, Document Vector through Corruption (Doc2VecC). Doc2VecC represents each document as a simple average of word embeddings, and ensures that a representation generated this way captures the semantic meaning of the document during learning. A corruption model is included, which introduces a data-dependent regularization that favors informative or rare words while forcing the embeddings of common and non-discriminative ones toward zero. Doc2VecC produces significantly better word embeddings than Word2Vec. We compare Doc2VecC with several state-of-the-art document representation learning algorithms. The simple model architecture introduced by Doc2VecC matches or outperforms the state-of-the-art in generating high-quality document representations for sentiment analysis, document classification, and semantic relatedness tasks. The simplicity of the model enables training on billions of words per hour on a single machine. At the same time, the model is very efficient at generating representations of unseen documents at test time.

1 INTRODUCTION

Text understanding starts with the challenge of finding machine-understandable representations that capture the semantics of texts. Bag-of-words (BoW) and its N-gram extensions are arguably the most commonly used document representations. Despite its simplicity, BoW works surprisingly well for many tasks (Wang & Manning, 2012). However, by treating words and phrases as unique and discrete symbols, BoW often fails to capture the similarity between words or phrases, and also suffers from sparsity and high dimensionality.

Recent work on using neural networks to learn distributed vector representations of words has gained great popularity. The well-celebrated Word2Vec (Mikolov et al., 2013a), by learning to predict the target word from its neighboring words, maps words of similar meaning to nearby points in a continuous vector space. This surprisingly simple model has succeeded in generating high-quality word embeddings for tasks such as language modeling, text understanding, and machine translation. Word2Vec naturally scales to large datasets thanks to its simple model architecture: it can be trained on billions of words per hour on a single machine.

Paragraph Vectors (Le & Mikolov, 2014) generalize the idea to learn vector representations for documents. A target word is predicted by the word embeddings of its neighbors together with a unique document vector learned for each document. It outperforms established document representations, such as BoW and Latent Dirichlet Allocation (Blei et al., 2003), on various text understanding tasks (Dai et al., 2015). However, two caveats come with this approach: 1) the number of parameters grows with the size of the training corpus, which can easily reach billions; and 2) it is expensive to generate vector representations for unseen documents at test time.

We propose an efficient model architecture, referred to as Document Vector through Corruption (Doc2VecC), to learn vector representations for documents. It is motivated by the observation that linear operations on word embeddings learned by Word2Vec can sustain a substantial amount of the syntactic and semantic meaning of a phrase or a sentence (Mikolov et al., 2013b).
For example, vec("Russia") + vec("river") is close to vec("Volga River") (Mikolov & Dean, 2013), and vec("king") - vec("man") + vec("woman") is close to vec("queen") (Mikolov et al., 2013b). In Doc2VecC, we represent each document as a simple average of the word embeddings of all the words in the document. In contrast to existing approaches, which post-process learned word embeddings to form a document representation (Socher et al., 2013; Mesnil et al., 2014), Doc2VecC enforces during learning that a meaningful document representation can be formed by averaging the word embeddings. Furthermore, we include a corruption model that randomly removes words from a document during learning, a mechanism that is critical to the performance and learning speed of our algorithm.

Doc2VecC has several desirable properties: 1. The model complexity of Doc2VecC is decoupled from the size of the training corpus, depending only on the size of the vocabulary; 2. The model architecture of Doc2VecC resembles that of Word2Vec and can be trained very efficiently; 3. The new framework implicitly introduces a data-dependent regularization which favors rare or informative words and suppresses words that are common but not discriminative; 4. The vector representation of a document can be generated by simply averaging the learned word embeddings of all the words in the document, which significantly boosts test-time efficiency; 5. The vector representations generated by Doc2VecC match or beat the state-of-the-art for sentiment analysis, document classification, and semantic relatedness tasks.

2 RELATED WORK AND NOTATION

Text representation learning has been extensively studied. Popular representations range from the simplest BoW and its term-frequency-based variants (Salton & Buckley, 1988), to language-model-based methods (Croft & Lafferty, 2013; Mikolov et al., 2010; Kim et al., 2015), topic models (Deerwester et al., 1990; Blei et al., 2003), Denoising Autoencoders and their variants (Vincent et al., 2008; Chen et al., 2012), and distributed vector representations (Mesnil et al., 2014; Le & Mikolov, 2014; Kiros et al., 2015). Another prominent line of work learns task-specific document representations with deep neural networks, such as CNN-based (Zhang & LeCun, 2015) or LSTM-based approaches (Tai et al., 2015; Dai & Le, 2015).

In this section, we briefly introduce Word2Vec and Paragraph Vectors, the two approaches most similar to ours. There are two well-known model architectures used for both methods, referred to as Continuous Bag-of-Words (CBoW) and Skipgram (Mikolov et al., 2013a). In this work we focus on CBoW; extending to Skipgram is straightforward. Here is the notation we use throughout the paper:

$\mathcal{D} = \{D_1, \ldots, D_n\}$: a training corpus of size $n$, in which each document $D_i$ contains a variable-length sequence of words $w_i^1, \ldots, w_i^{T_i}$;
$V$: the vocabulary used in the training corpus, of size $v$;
$x \in \mathbb{R}^{v \times 1}$: the BoW vector of a document, where $x_j = 1$ iff word $j$ appears in the document;
$c^t \in \mathbb{R}^{v \times 1}$: the BoW vector of the local context $w^{t-k}, \ldots, w^{t-1}, w^{t+1}, \ldots, w^{t+k}$ at target position $t$, where $c^t_j = 1$ iff word $j$ appears within the sliding window around the target;
$U \in \mathbb{R}^{h \times v}$: the projection matrix from the input space to a hidden space of size $h$. We use $u_w$ to denote the column of $U$ for word $w$, i.e., the "input" vector of word $w$;
$V^\top \in \mathbb{R}^{v \times h}$: the projection matrix from the hidden space to the output. Similarly, we use $v_w$ to denote the column of $V$ for word $w$, i.e., the "output" vector of word $w$.

Word2Vec.
Word2Vec proposed a neural network architecture with an input layer, a projection layer parameterized by the matrix $U$, and an output layer parameterized by $V^\top$. It defines the probability of observing the target word $w^t$ in a document $D$ given its local context $c^t$ as

$P(w^t \mid c^t) = \frac{\exp(v_{w^t}^\top U c^t)}{\sum_{w' \in V} \exp(v_{w'}^\top U c^t)}$

The word vectors are learned to maximize the log-likelihood of observing the target word at each position of the document. Various techniques (Mitchell & Lapata, 2010; Zanzotto et al., 2010; Yessenalina & Cardie, 2011; Grefenstette et al., 2013; Socher et al., 2013; Kusner et al., 2015) have been studied for generating vector representations of documents from word embeddings, among which the simplest is a weighted average of word embeddings. Similarly, our method forms the document representation by averaging the word embeddings of all the words in the document. Differently, as our model encodes the compositionality of words in the learned word embeddings, heuristic weighting at test time is not required.

Paragraph Vectors. Paragraph Vectors, on the other hand, explicitly learns a document vector alongside the word embeddings. It introduces another projection matrix $D \in \mathbb{R}^{h \times n}$, where each column of $D$ acts as a memory of the global topic of the corresponding document. It then defines the probability of observing the target word $w^t$ in a document $D$ given its local context $c^t$ as

$P(w^t \mid c^t, d) = \frac{\exp(v_{w^t}^\top (U c^t + d))}{\sum_{w' \in V} \exp(v_{w'}^\top (U c^t + d))}$

where $d \in D$ is the vector representation of the document. As this formula shows, the complexity of Paragraph Vectors grows not only with the size of the vocabulary but also with the size of the training corpus. While we can reasonably limit the vocabulary to within a million words for most datasets, the size of a training corpus can easily reach billions. More concerning still, in order to obtain vector representations for unseen documents, we need to perform expensive inference, appending additional columns to $D$ and running gradient descent on $D$ while fixing the other parameters of the learned model.

3 METHOD

Several works (Mikolov & Dean, 2013; Mikolov et al., 2013b) showed that the syntactic and semantic regularities of phrases and sentences are reasonably well preserved by adding or subtracting word embeddings learned through Word2Vec. This prompts us to explore the option of simply representing a document as an average of word embeddings. Figure 1 illustrates the new model architecture.

Figure 1: A new framework for learning document vectors.

Similar to Word2Vec and Paragraph Vectors, Doc2VecC consists of an input layer, a projection layer, and an output layer to predict the target word, "ceremony" in this example. The embeddings of the neighboring words ("opening", "for", "the") provide local context, while the vector representation of the entire document (shown in grey) serves as the global context. In contrast to Paragraph Vectors, which directly learns a unique vector for each document, Doc2VecC represents each document as an average of the embeddings of words randomly sampled from the document ("performance" at position $p$, "praised" at position $q$, and "brazil" at position $r$). Huang et al. (2012) also proposed using an average of word embeddings to represent the global context of a document.
Different from their work, we choose to corrupt the original document by randomly removing a significant portion of its words, and represent the document using only the embeddings of the remaining words. This corruption mechanism offers a great speedup during training, as it significantly reduces the number of parameters to update in back-propagation. At the same time, as we detail in the next section, it introduces a special form of regularization that brings a significant performance improvement.

Here we describe the stochastic process used to generate the global context at each update. The global context, which we denote $\tilde{x}$, is generated through an unbiased mask-out/drop-out corruption, in which we randomly overwrite each dimension of the original document $x$ with probability $q$. To make the corruption unbiased, we set the uncorrupted dimensions to $1/(1-q)$ times their original values. Formally,

$\tilde{x}_d = \begin{cases} 0, & \text{with probability } q \\ \frac{x_d}{1-q}, & \text{otherwise} \end{cases}$ (1)

Doc2VecC then defines the probability of observing the target word $w^t$ given its local context $c^t$ as well as the global context $\tilde{x}$ as

$P(w^t \mid c^t, \tilde{x}) = \frac{\exp\!\left(v_{w^t}^\top \left(U c^t + \frac{1}{T} U \tilde{x}\right)\right)}{\sum_{w' \in V} \exp\!\left(v_{w'}^\top \left(U c^t + \frac{1}{T} U \tilde{x}\right)\right)}$ (2)

where $U c^t$ is the local context, $\frac{1}{T} U \tilde{x}$ is the global context, and $T$ is the length of the document. Exactly computing this probability is impractical; instead we approximate it with negative sampling (Mikolov et al., 2013a):

$f(w, c, \tilde{x}) \triangleq \log P(w^t \mid c^t, \tilde{x}) \approx \log \sigma\!\left(v_w^\top \left(U c + \tfrac{1}{T} U \tilde{x}\right)\right) + \sum_{w' \sim P_v} \log \sigma\!\left(-v_{w'}^\top \left(U c + \tfrac{1}{T} U \tilde{x}\right)\right)$ (3)

where $P_v$ stands for a uniform distribution over the terms in the vocabulary. The two projection matrices $U$ and $V$ are then learned to minimize the loss

$\ell = -\sum_{i=1}^{n} \sum_{t=1}^{T_i} f(w_i^t, c_i^t, \tilde{x}_i^t)$ (4)

Given the learned projection matrix $U$, we then represent each document simply as an average of the embeddings of the words in the document:

$d = \frac{1}{T} \sum_{w \in D} u_w$ (5)

(A small sketch of the corruption step in Eq. (1) is given below.) We elaborate next on why we choose to corrupt the original document with the corruption model in Eq. (1) during learning, and how it enables us to simply use the average word embeddings as the vector representation for documents at test time.
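A minimal NumPy sketch of the corruption step, under the unbiasedness convention stated above; the toy BoW vector is illustrative:

```python
import numpy as np

def corrupt(x, q, rng):
    """Unbiased mask-out/drop-out corruption of a BoW vector (Eq. 1).

    Each dimension is overwritten with 0 with probability q; the surviving
    dimensions are scaled by 1/(1-q), so that E[x_tilde] = x.
    """
    mask = rng.random(x.shape) >= q
    return np.where(mask, x / (1.0 - q), 0.0)

rng = np.random.default_rng(0)
x = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 1.0])  # toy BoW over a 6-word vocabulary
x_tilde = corrupt(x, q=0.9, rng=rng)
# During training, the global context (1/T) * U @ x_tilde involves only the
# few surviving words, which is where the training speedup comes from.
print(x_tilde)
```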
3.1 CORRUPTION AS DATA-DEPENDENT REGULARIZATION

We approximate the log-likelihood $f(w, c, \tilde{x})$ of each instance in Eq. (4) by its Taylor expansion with respect to $\tilde{x}$ up to second order (Van Der Maaten et al., 2013; Wager et al., 2013; Chen et al., 2014). Concretely, we expand around the mean of the corruption, $\bar{x} = E_{p(\tilde{x}|x)}[\tilde{x}]$:

$f(w, c, \tilde{x}) \approx f(w, c, \bar{x}) + (\tilde{x} - \bar{x})^\top \nabla_{\tilde{x}} f + \frac{1}{2} (\tilde{x} - \bar{x})^\top \nabla^2_{\tilde{x}} f \, (\tilde{x} - \bar{x})$

where $\nabla_{\tilde{x}} f$ and $\nabla^2_{\tilde{x}} f$ are the first-order (gradient) and second-order (Hessian) derivatives of the log-likelihood with respect to $\tilde{x}$. Expansion at the mean $\bar{x}$ is crucial, as shown in the following steps. Assume that for each instance we sample the global context $\tilde{x}$ infinitely many times, and thus compute the expected log-likelihood with respect to the corrupted $\tilde{x}$:

$E_{p(\tilde{x}|x)}[f(w, c, \tilde{x})] \approx f(w, c, \bar{x}) + \frac{1}{2} \mathrm{tr}\!\left(E[(\tilde{x} - \bar{x})(\tilde{x} - \bar{x})^\top] \, \nabla^2_{\tilde{x}} f\right)$

The linear term disappears, as $E_{p(\tilde{x}|x)}[\tilde{x} - \bar{x}] = 0$. We substitute $x$ for the mean $\bar{x}$ of the corrupting distribution (the corruption is unbiased) and write $\Sigma_x = E[(\tilde{x} - \bar{x})(\tilde{x} - \bar{x})^\top]$ for the variance, obtaining

$E_{p(\tilde{x}|x)}[f(w, c, \tilde{x})] \approx f(w, c, x) + \frac{1}{2} \mathrm{tr}\!\left(\Sigma_x \nabla^2_{\tilde{x}} f\right)$ (6)

As each word in a document is corrupted independently of the others, the variance matrix $\Sigma_x$ simplifies to a diagonal matrix whose $j$-th element equals $\frac{q}{1-q} x_j^2$. As a result, we only need to compute the diagonal terms of the Hessian matrix $\nabla^2_{\tilde{x}} f$. The $j$-th dimension of the Hessian's diagonal, evaluated at the mean $\bar{x}$, is given by

$\frac{\partial^2 f}{\partial \bar{x}_j^2} = -\sigma_{w,c,\bar{x}}(1 - \sigma_{w,c,\bar{x}}) \left(\tfrac{1}{T} v_w^\top u_j\right)^2 - \sum_{w' \sim P_v} \sigma_{w',c,\bar{x}}(1 - \sigma_{w',c,\bar{x}}) \left(\tfrac{1}{T} v_{w'}^\top u_j\right)^2$

Plugging the Hessian matrix and the variance matrix back into Eq. (6), and then back into the loss defined in Eq. (4), we can see that Doc2VecC intrinsically minimizes

$\ell = -\sum_{i=1}^{n} \sum_{t=1}^{T_i} f(w_i^t, c_i^t, x_i) + \frac{q}{1-q} \sum_{j=1}^{v} R(u_j)$ (7)

Each $f(w_i^t, c_i^t, x_i)$ in the first term measures the log-likelihood of observing the target word $w_i^t$ given its local context $c_i^t$ and the document vector $d_i = \frac{1}{T} U x_i$. As such, Doc2VecC enforces that a document vector generated by averaging word embeddings captures the global semantics of the document and fills in information missed in the local context. The second term is a data-dependent regularization. The regularization on the embedding $u_j$ of each word $j$ takes the form

$R(u_j) \propto \sum_{i=1}^{n} \sum_{t=1}^{T_i} x_{ij}^2 \left[ \sigma_{w_i^t, c_i^t, x_i} (1 - \sigma_{w_i^t, c_i^t, x_i}) \left(\tfrac{1}{T} v_{w_i^t}^\top u_j\right)^2 + \sum_{w' \sim P_v} \sigma_{w', c_i^t, x_i} (1 - \sigma_{w', c_i^t, x_i}) \left(\tfrac{1}{T} v_{w'}^\top u_j\right)^2 \right]$

where $\sigma_{w,c,x} = \sigma\!\left(v_w^\top (U c + \tfrac{1}{T} U x)\right)$ is the confidence of predicting the target word $w$ given its neighboring context $c$ as well as the document vector $d = \frac{1}{T} U x$.

Closely examining $R(u_j)$ leads to several interesting findings: 1. the regularizer penalizes the embeddings of common words more: a word $j$ that frequently appears across the training corpus, i.e., with $x_{ij} = 1$ often, receives a larger regularization than a rare word; 2. on the other hand, the regularization is modulated by $\sigma_{w,c,x}(1 - \sigma_{w,c,x})$, which is small when $\sigma_{w,c,x} \to 1$ or $0$. In other words, if $u_j$ is critical to a confident prediction ($\sigma_{w,c,x}$ near 1) when it is active, the regularization is diminished. A similar effect was observed for dropout training of logistic regression models (Wager et al., 2013) and denoising autoencoders (Chen et al., 2014).

4 EXPERIMENTS

We evaluate Doc2VecC on a sentiment analysis task, a document classification task, and a semantic relatedness task, against several document representation learning algorithms. All experiments can be reproduced using the code available at https://github.com/mchen24/iclr2017.

4.1 BASELINES

We compare against the following document representation baselines: bag-of-words (BoW); Denoising Autoencoders (DEA) (Vincent et al., 2008), a representation learned by reconstructing the original document $x$ from its corrupted version $\tilde{x}$; stacked denoising autoencoders have been shown to be state-of-the-art for sentiment analysis tasks (Glorot et al., 2011). We used the Kullback-Leibler divergence as the reconstruction error and an affine encoder; to scale the algorithm to large vocabularies, we only consider the non-zero elements of $x$ in the reconstruction error and employ negative sampling for the rest. We also compare to Word2Vec (Mikolov et al., 2013a) + IDF, a representation generated through an IDF-weighted average of word vectors learned by Word2Vec; Doc2Vec (Le & Mikolov, 2014); and Skip-thought Vectors (Kiros et al., 2015), a generic, distributed sentence encoder that extends the Word2Vec skip-gram model to the sentence level and has been shown to produce highly generic sentence representations applicable to various natural language processing tasks. We also include RNNLM (Mikolov et al., 2010), a recurrent-neural-network-based language model, in the comparison. In the semantic relatedness task, we further compare to the LSTM-based methods (Tai et al., 2015) that have been reported
In the semantic related-ness task, we further compare to LSTM-based methods (Tai et al., 2015) that have been reportedon this dataset.5Published as a conference paper at ICLR 2017Table 1: Classification error of a linear classifier trained on various document representations on theImdb dataset.Model Error rate % (include test) Error rate % (exclude test)Bag-of-Words (BOW) 12.53 12.59RNN-LM 13.59 13.59Denoising Autoencoders (DEA) 11.58 12.54Word2Vec + A VG 12.11 12.69Word2Vec + IDF 11.28 11.92Paragraph Vectors 10.81 12.10Skip-thought Vectors - 17.42Doc2VecC 10.48 11.704.2 S ENTIMENT ANALYSISFor sentiment analysis, we use the IMDB movie review dataset. It contains 100,000 movies reviewscategorized as either positive or negative. It comes with predefined train/test split (Maas et al.,2011): 25,000 reviews are used for training, 25,000 for testing, and the rest as unlabeled data. Thetwo classes are balanced in the training and testing sets. We remove words that appear less than 10times in the training set, resulting in a vocabulary of 43,375 distinct words and symbols.Setup. We test the various representation learning algorithms under two settings: one follows thesame protocol proposed in (Mesnil et al., 2014), where representation is learned using all the avail-able data, including the test set; another one where the representation is learned using training andunlabeled set only. For both settings, a linear support vector machine (SVM) (Fan et al., 2008)is trained afterwards on the learned representation for classification. For Skip-thought Vectors, weused the generic model1trained on a much bigger book corpus to encode the documents. A vector of4800 dimensions, first 2400 from the uni-skip model, and the last 2400 from the bi-skip model, aregenerated for each document. In comparison, all the other algorithms produce a vector representa-tion of size 100. The supervised RNN-LM is learned on the training set only. The hyper-parametersare tuned on a validation set subsampled from the training set.Accuracy. Comparing the two columns in Table 1, we can see that all the representation learn-ing algorithms benefits from including the testing data during the representation learning phrase.Doc2VecC achieved similar or even better performance than Paragraph Vectors. Both methodsoutperforms the other baselines, beating the BOW representation by 15%. In comparison withWord2Vec+IDF, which applies post-processing on learned word embeddings to form document rep-resentation, Doc2VecC naturally enforces document semantics to be captured by averaged wordembeddings during training. This leads to better performance. Doc2VecC reduces to Denoising Au-toencoders (DEA) if the local context words are removed from the paradigm shown in Figure 1. Byincluding the context words, Doc2VecC allows the document vector to focus more on capturing theglobal context. Skip-thought vectors perform surprisingly poor on this dataset comparing to othermethods. We hypothesized that it is due to the length of paragraphs in this dataset. The averagelength of paragraphs in the IMDB movie review dataset is 296:5, much longer than the ones usedfor training and testing in the original paper, which is in the order of 10. As noted in (Tai et al.,2015), the performance of LSTM based method (similarly, the gated RNN used in Skip-thoughtvectors) drops significantly with increasing paragraph length, as it is hard to preserve state over longsequences of words.Time. 
Table 2 summarizes the time required by these algorithms to learn and generate document representations. Word2Vec is the fastest to train; Denoising Autoencoders and Doc2VecC come second. Compared to Word2Vec, the number of parameters to update in each back-propagation step is increased by the number of surviving words in $\tilde{x}$. We found that both models are not sensitive to the corruption rate $q$ of the noise model; since the learning time decreases with a higher corruption rate, we used $q = 0.9$ throughout the experiments. Paragraph Vectors takes longer to train, as there are more parameters (linear in the number of documents in the learning set) to learn.

Table 2: Learning time and representation generation time required by different representation learning algorithms.
Model                     Learning time   Generation time
Denoising Autoencoders    3m 23s          7s
Word2Vec + IDF            2m 33s          7s
Paragraph Vectors         4m 54s          4m 17s
Skip-thought              2h              2h
Doc2VecC                  4m 30s          7s

Table 3: Words with embeddings closest to 0 learned by different algorithms.
Word2Vec: harp(118) distasteful(115) switzerland(101) shabby(103) fireworks(101) heavens(100) thornton(108) endeavor(100) dense(108) circumstance(119) debacle(103)
ParaVectors: harp(118) dense(108) reels(115) fireworks(101) its'(103) unnoticed(112) pony(102) fulfilled(107) heavens(100) bliss(110) canned(114) shabby(103) debacle(103)
Doc2VecC: ,(1099319) .(1306691) the(1340408) of(581667) and(651119) up(49871) to(537570) that(275240) time(48205) endeavor(100) here(21118) way(31302) own(13456)

At test time, Word2Vec+IDF, DEA, and Doc2VecC all use a (weighted) average of word embeddings as the document representation. Paragraph Vectors, on the other hand, requires another round of inference to produce the vector representation of an unseen test document: it takes Paragraph Vectors 4 minutes and 17 seconds to infer vector representations for the 25,000 test documents, compared to 7 seconds for the other methods. As we did not re-train the Skip-thought vector models on this dataset (as reported in the original paper, training the skip-thought vector model on the book corpus dataset takes around 2 weeks on GPU), the training time reported in the table is the time it takes to generate the embeddings for the 25,000 training documents. Due to the repeated high-dimensional matrix operations required to encode long paragraphs, generating representations for these documents takes fairly long, and similarly for testing. The experiments were conducted on a desktop with an Intel i7 2.2GHz CPU.

Data-dependent regularization. As explained in Section 3.1, the corruption introduced in Doc2VecC acts as a data-dependent regularization that suppresses the embeddings of frequent but uninformative words. Here we conduct an experiment to examine the effect, using a frequency cutoff of 100. Table 3 lists the words with the smallest $\ell_2$ norms of their embeddings as found by the different algorithms; the number in parentheses after each word is the number of times it appears in the learning set. In Word2Vec and Paragraph Vectors, the least frequent words have embeddings close to zero, despite some of them being indicative of sentiment, such as debacle, bliss, and shabby. In contrast, Doc2VecC manages to clamp down the representations of words that frequently appear in the training set but are uninformative, such as symbols and stop words.

Subsampling frequent words. Note that for all reported numbers we applied the trick of subsampling frequent words introduced in (Mikolov & Dean, 2013) to counter the imbalance between frequent and rare words. (A small sketch of this heuristic is given below.)
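A minimal sketch of this subsampling heuristic, assuming the commonly used thresholded form from (Mikolov & Dean, 2013) with threshold $t$; the exact variant applied here is an assumption:

```python
import numpy as np

def keep_probability(freq, t=1e-5):
    """Probability of keeping a word occurrence under frequency subsampling.

    freq: relative frequency of the word in the corpus; t: threshold.
    Rare words (freq <= t) are always kept; very frequent words are mostly
    discarded, diminishing their contribution to the learned representations.
    """
    return np.minimum(1.0, np.sqrt(t / np.maximum(freq, 1e-12)))

for f in (1e-2, 1e-4, 1e-6):  # e.g. a stop word, a mid-frequency word, a rare word
    print(f, keep_probability(f))
```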
Subsampling frequent words. Note that for all the numbers reported, we applied the trick of subsampling frequent words introduced in (Mikolov & Dean, 2013) to counter the imbalance between frequent and rare words. It is critical to the performance of simple Word2Vec+AVG, as it is the sole remedy to diminish the contribution of common words in the final document representation. If this step were removed, the error rate of Word2Vec+AVG would increase from 12.1% to 13.2%. Doc2VecC, on the other hand, naturally exerts a stronger regularization toward embeddings of words that are frequent but uninformative, and therefore does not rely on this trick.

4.3 WORD ANALOGY

In Table 3, we demonstrated that the corruption model introduced in Doc2VecC dampens the embeddings of words which are common and non-discriminative (stop words). In this experiment, we quantitatively compare the word embeddings generated by Doc2VecC to the ones generated by Word2Vec or Paragraph Vectors on the word analogy task introduced by Mikolov et al. (2013a). The dataset contains five types of semantic questions and nine types of syntactic questions, with a total of 8,869 semantic and 10,675 syntactic questions. The questions are answered through simple linear algebraic operations on the word embeddings generated by the different methods. Please refer to the original paper for more details on the evaluation protocol.

Figure 2: Accuracy on a subset of the Semantic-Syntactic Word Relationship test set as a function of the number of paragraphs used for learning (1M to 15M), for embedding sizes (a) h = 50 and (b) h = 100. Only questions containing words from the most frequent 30k words are included in the test. At every data size and both embedding sizes, Doc2VecC scores highest, followed by Word2Vec, with Paragraph Vectors trailing both.

Table 4: Top-1 accuracy on the 5 types of semantic and 9 types of syntactic questions.

Semantic questions           Word2Vec  Doc2VecC
capital-common-countries     73.59     81.82
capital-world                67.94     77.96
currency                     17.14     12.86
city-in-state                34.49     42.86
family                       68.71     64.62

Syntactic questions          Word2Vec  Doc2VecC
gram1-adjective-to-adverb    19.25     20.32
gram2-opposite               14.07     25.54
gram3-comparative            60.21     74.47
gram4-superlative            52.87     55.40
gram5-present-participle     56.34     65.81
gram6-nationality-adjective  88.71     91.03
gram7-past-tense             47.05     51.86
gram8-plural                 50.28     61.27
gram9-plural-verbs           25.38     39.69

We trained the word embeddings of the different methods using the English news dataset released under the ACL workshop on statistical machine translation. The training set includes close to 15M paragraphs with 355M tokens. We compare the performance of word embeddings trained by the different methods with increasing embedding dimensionality as well as increasing training data.

We observe similar trends as in Mikolov et al. (2013a): increasing embedding dimensionality as well as training data size improves the performance of the word embeddings on this task, although the improvement is diminishing. Doc2VecC produces word embeddings which perform significantly better than the ones generated by Word2Vec; we observe an uplift of close to 20% when we train on the full training corpus. Paragraph Vectors, on the other hand, performs surprisingly badly on this dataset. Our hypothesis is that, due to the large capacity of the model architecture, Paragraph Vectors relies mostly on the unique document vectors to capture the information in a text document instead of learning the word semantic or syntactic similarities. This also explains why the PV-DBOW model architecture (Le & Mikolov, 2014) proposed in the original work, which completely removes the word embedding layers, performs comparably to the distributed memory version.
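The linear-algebraic scoring mentioned above is commonly the 3CosAdd rule: for a question "a is to b as c is to ?", return the vocabulary word whose embedding is closest in cosine similarity to b - a + c, excluding the three question words. The sketch below assumes this rule; the paper itself only refers to the protocol of Mikolov et al. (2013a).

```python
# Sketch of the standard 3CosAdd analogy answer (our assumption about the
# evaluation rule): answer_analogy("man", "king", "woman", ...) should return
# "queen" when the embeddings encode the analogy.
import numpy as np

def answer_analogy(a, b, c, vocab, E):
    """vocab: list of words; E: (V, dim) row-normalized embedding matrix."""
    idx = {w: i for i, w in enumerate(vocab)}
    query = E[idx[b]] - E[idx[a]] + E[idx[c]]
    query /= np.linalg.norm(query)
    sims = E @ query                   # cosine similarity to every vocabulary word
    for w in (a, b, c):                # question words are excluded by convention
        sims[idx[w]] = -np.inf
    return vocab[int(np.argmax(sims))]
```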
In Table 4, we list a detailed comparison of the performance of word embeddings generated by Word2Vec and Doc2VecC on the 14 subtasks, when trained on the full dataset with embeddings of size 100. We can see that Doc2VecC significantly outperforms the word embeddings produced by Word2Vec across almost all the subtasks.

4.4 DOCUMENT CLASSIFICATION

For the document classification task, we use a subset of the Wikipedia dump, which contains over 300,000 Wikipedia pages in 100 categories. The 100 categories include categories under sports, entertainment, literature, politics, etc. Examples of categories include American drama films, Directorial debut films, Major League Baseball pitchers and Sydney Swans players. Body text (the second paragraph) was extracted from each page as a document. For each category, we select 1,000 documents with a unique category label; 100 documents are used for training and 900 documents for testing. The remaining documents are used as unlabeled data. The 100 classes are balanced in the training and testing sets. For this dataset, we learn the word embedding and document representation for all the algorithms using all the available data. We apply a cutoff of 10, resulting in a vocabulary of size 107,691.

Table 5: Classification error (%) of a linear classifier trained on various document representations on the Wikipedia dataset.

Model     BOW    DEA    Word2Vec + AVG  Word2Vec + IDF  ParagraphVectors  Doc2VecC
h = 100   36.03  32.30  33.20           33.16           35.78             31.92
h = 200   36.03  31.36  32.46           32.48           34.92             30.84
h = 500   36.03  31.10  32.02           32.13           33.93             30.43
h = 1000  36.03  31.13  31.78           32.06           33.02             30.24

Table 5 summarizes the classification error of a linear SVM trained on representations of different sizes. We can see that most of the algorithms are not sensitive to the size of the vector representation; Doc2Vec benefits most from increasing the representation size. Across all sizes of representations, Doc2VecC outperforms the existing algorithms by a significant margin. In fact, Doc2VecC can achieve the same or better performance with a much smaller representation vector.

Figure 3: Visualization of document vectors on the Wikipedia dataset using t-SNE: (a) Doc2Vec, (b) Doc2VecC.

Figure 3 visualizes the document representations learned by Doc2Vec (left) and Doc2VecC (right) using t-SNE (Maaten & Hinton, 2008). We can see that documents from the same category are nicely clustered using the representation generated by Doc2VecC. Doc2Vec, on the other hand, does not produce a clear separation between different categories, which explains its worse performance reported in Table 5.

Figure 4: Visualization of Wikipedia Doc2VecC vectors using t-SNE.

Figure 4 visualizes the vector representation generated by Doc2VecC w.r.t. a coarser categorization. We manually grouped the 100 categories into 7 coarse categories: television, albums, writers, musicians, athletes, species and actors. Categories that do not belong to any of these 7 groups are not included in the figure. We can see that documents belonging to the same coarse category are grouped together. This subset includes a wide range of sports descriptions, ranging from football, cricket and baseball to cycling, which explains why the athletes category is less concentrated. In the projection, we can see that documents belonging to the musicians category are closer to those belonging to the albums category than to those of athletes or species.
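The visualizations above can be reproduced with an off-the-shelf t-SNE implementation; the following sketch (our code, using scikit-learn rather than the authors' plotting setup) projects document vectors to 2D and colors them by category.

```python
# Sketch of the t-SNE visualization step (Maaten & Hinton, 2008) using
# scikit-learn; the authors' exact plotting setup is not specified.
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_embeddings(doc_vecs, labels, perplexity=30.0):
    """doc_vecs: (n_docs, dim) document representations; labels: int categories."""
    xy = TSNE(n_components=2, perplexity=perplexity,
              random_state=0).fit_transform(doc_vecs)
    plt.scatter(xy[:, 0], xy[:, 1], c=labels, s=4, cmap="tab10")
    plt.axis("off")
    plt.show()
```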
4.5 SEMANTIC RELATEDNESS

We test Doc2VecC on the SemEval 2014 Task 1 semantic relatedness (SICK) dataset (Marelli et al., 2014). Given two sentences, the task is to determine how closely they are semantically related. The set contains 9,927 pairs of sentences with human-annotated relatedness scores, ranging from 1 to 5; a score of 1 indicates that the two sentences are not related, while 5 indicates high relatedness. The set is split into a training set of 4,500 instances, a validation set of 500, and a test set of 4,927.

We compare Doc2VecC with several winning solutions of the competition as well as several more recent techniques reported on this dataset, including bi-directional LSTMs and Tree-LSTMs^3 trained from scratch on this dataset, and Skip-thought vectors, which were learned on a large book corpus^4 (Zhu et al., 2015) and produce sentence embeddings of 4,800 dimensions on this dataset. We follow the same protocol as skip-thought vectors, and train Doc2VecC on the larger book corpus dataset. Contrary to the vocabulary expansion technique used in (Kiros et al., 2015) to handle out-of-vocabulary words, we extend the vocabulary of the learned model directly on the target dataset in the following way: we use the pre-trained word embedding as an initialization, and fine-tune the word and sentence representations on the SICK dataset. Notice that the fine-tuning is done for sentence representation learning only; we did not use the relatedness score in the learning. This step brings a small improvement to the performance of our algorithm. Given the sentence embeddings, we use exactly the same training and testing protocol as in (Kiros et al., 2015) to score each pair of sentences: for two sentence embeddings $u_1$ and $u_2$, we concatenate their component-wise product, $u_1 \odot u_2$, and their absolute difference, $|u_1 - u_2|$, as the feature representation.

Table 6 summarizes the performance of various algorithms on this dataset. Despite its simplicity, Doc2VecC significantly outperforms the winning solutions of the competition, which are heavily feature-engineered toward this dataset, as well as several baseline methods, notably the dependency-tree RNNs introduced in (Socher et al., 2014), which rely on expensive dependency parsers to compose sentence vectors from word embeddings. The performance of Doc2VecC is slightly worse than the LSTM-based methods or skip-thought vectors on this dataset, while it significantly outperforms skip-thought vectors on the IMDB movie review dataset (11.70% error rate vs. 17.42%). As we hypothesized in the previous section, while Doc2VecC is better at handling longer paragraphs, LSTM-based methods are superior for relatively short sentences (of length on the order of 10s). We would like to point out that Doc2VecC is much faster to train and test than skip-thought vectors: it takes less than 2 hours to learn the embeddings on the large book corpus for Doc2VecC on a desktop with an Intel i7 2.2GHz CPU, in comparison to the 2 weeks on GPU required by skip-thought vectors.

^3 The word representations were initialized using publicly available 300-dimensional GloVe vectors trained on 840 billion tokens of Common Crawl data.
^4 The dataset contains 11,038 books with over one billion words.
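For clarity, the pair-feature construction used for scoring can be sketched as below; the ridge regressor is our placeholder (Kiros et al. (2015) actually train a classifier over a discretized score distribution).

```python
# Sketch of the sentence-pair features described above: concatenate the
# component-wise product and absolute difference of the two embeddings,
# then fit a simple regressor on the relatedness scores.
import numpy as np
from sklearn.linear_model import Ridge

def pair_features(u1, u2):
    return np.concatenate([u1 * u2, np.abs(u1 - u2)], axis=-1)

rng = np.random.RandomState(0)
U1, U2 = rng.randn(4500, 100), rng.randn(4500, 100)   # sentence embeddings
scores = rng.uniform(1, 5, size=4500)                 # relatedness in [1, 5]
model = Ridge(alpha=1.0).fit(pair_features(U1, U2), scores)
```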
5 CONCLUSION

We introduce a new model architecture, Doc2VecC, for document representation learning. It is very efficient to train and test thanks to its simple model architecture. Doc2VecC intrinsically makes sure that the document representation generated by averaging word embeddings captures the semantics of the document during learning. It also introduces a data-dependent regularization which favors informative or rare words while dampening the embeddings of common and non-discriminative words. As such, each document can be efficiently represented as a simple average of the learned word embeddings. In comparison to several existing document representation learning algorithms, Doc2VecC is superior not only in testing efficiency, but also in the expressiveness of the generated representations.

Table 6: Test set results on the SICK semantic relatedness task. The first group of results are from the submissions to the 2014 SemEval competition; the second group includes several baseline methods reported in (Tai et al., 2015); the third group are methods based on LSTMs reported in (Tai et al., 2015) as well as the skip-thought vectors (Kiros et al., 2015).

Method                                   Pearson's r  Spearman's ρ  MSE
Illinois-LH                              0.7993       0.7538        0.3692
UNAL-NLP                                 0.8070       0.7489        0.3550
Meaning Factory                          0.8268       0.7721        0.3224
ECNU                                     0.8279       0.7689        0.3250
Mean vectors (Word2Vec + avg)            0.7577       0.6738        0.4557
DT-RNN (Socher et al., 2014)             0.7923       0.7319        0.3822
SDT-RNN (Socher et al., 2014)            0.7900       0.7304        0.3848
LSTM (Tai et al., 2015)                  0.8528       0.7911        0.2831
Bidirectional LSTM (Tai et al., 2015)    0.8567       0.7966        0.2736
Dependency Tree-LSTM (Tai et al., 2015)  0.8676       0.8083        0.2532
combine-skip (Kiros et al., 2015)        0.8584       0.7916        0.2687
Doc2VecC                                 0.8381       0.7621        0.3053

REFERENCES

David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993-1022, 2003.

Minmin Chen, Zhixiang Xu, Kilian Weinberger, and Fei Sha. Marginalized denoising autoencoders for domain adaptation. arXiv preprint arXiv:1206.4683, 2012.

Minmin Chen, Kilian Q. Weinberger, Fei Sha, and Yoshua Bengio. Marginalized denoising auto-encoders for nonlinear representations. In ICML, pp. 1476-1484, 2014.

Bruce Croft and John Lafferty. Language Modeling for Information Retrieval, volume 13. Springer Science & Business Media, 2013.

Andrew M. Dai and Quoc V. Le. Semi-supervised sequence learning. In Advances in Neural Information Processing Systems, pp. 3079-3087, 2015.

Andrew M. Dai, Christopher Olah, and Quoc V. Le. Document embedding with paragraph vectors. arXiv preprint arXiv:1507.07998, 2015.

Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391, 1990.

Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. LIBLINEAR: A library for large linear classification. JMLR, 9(Aug):1871-1874, 2008.

Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Domain adaptation for large-scale sentiment classification: A deep learning approach. In ICML, pp. 513-520, 2011.

Edward Grefenstette, Georgiana Dinu, Yao-Zhong Zhang, Mehrnoosh Sadrzadeh, and Marco Baroni. Multi-step regression learning for compositional distributional semantics. arXiv preprint arXiv:1301.6939, 2013.

Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. Improving word representations via global context and multiple word prototypes. In ACL, pp. 873-882, 2012.

Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. Character-aware neural language models. arXiv preprint arXiv:1508.06615, 2015.
Ryan Kiros, Yukun Zhu, Ruslan R. Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Skip-thought vectors. In Advances in Neural Information Processing Systems, pp. 3294-3302, 2015.

Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kilian Q. Weinberger. From word embeddings to document distances. In Proceedings of the 32nd International Conference on Machine Learning (ICML 2015), pp. 957-966, 2015.

Quoc V. Le and Tomas Mikolov. Distributed representations of sentences and documents. In ICML, volume 14, pp. 1188-1196, 2014.

Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In ACL, pp. 142-150, 2011.

Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579-2605, 2008.

Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. SemEval-2014 Task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. SemEval-2014, 2014.

Grégoire Mesnil, Tomas Mikolov, Marc'Aurelio Ranzato, and Yoshua Bengio. Ensemble of generative and discriminative techniques for sentiment analysis of movie reviews. arXiv preprint arXiv:1412.5335, 2014.

T. Mikolov and J. Dean. Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems, 2013.

Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Černocký, and Sanjeev Khudanpur. Recurrent neural network based language model. In Interspeech, volume 2, pp. 3, 2010.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013a.

Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. Linguistic regularities in continuous space word representations. In HLT-NAACL, volume 13, pp. 746-751, 2013b.

Jeff Mitchell and Mirella Lapata. Composition in distributional models of semantics. Cognitive Science, 34(8):1388-1429, 2010.

Gerard Salton and Christopher Buckley. Term-weighting approaches in automatic text retrieval. Information Processing & Management, 24(5):513-523, 1988.

Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, volume 1631, pp. 1642, 2013.

Richard Socher, Andrej Karpathy, Quoc V. Le, Christopher D. Manning, and Andrew Y. Ng. Grounded compositional semantics for finding and describing images with sentences. Transactions of the Association for Computational Linguistics, 2:207-218, 2014.

Kai Sheng Tai, Richard Socher, and Christopher D. Manning. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075, 2015.

Laurens Van Der Maaten, Minmin Chen, Stephen Tyree, and Kilian Q. Weinberger. Learning with marginalized corrupted features. In ICML (1), pp. 410-418, 2013.

Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pp. 1096-1103. ACM, 2008.

Stefan Wager, Sida Wang, and Percy S. Liang. Dropout training as adaptive regularization. In Advances in Neural Information Processing Systems, pp. 351-359, 2013.
Sida Wang and Christopher D. Manning. Baselines and bigrams: Simple, good sentiment and topic classification. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers - Volume 2, pp. 90-94. Association for Computational Linguistics, 2012.

Ainur Yessenalina and Claire Cardie. Compositional matrix-space models for sentiment analysis. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 172-182. Association for Computational Linguistics, 2011.

Fabio Massimo Zanzotto, Ioannis Korkontzelos, Francesca Fallucchi, and Suresh Manandhar. Estimating linear models for compositional distributional semantics. In Proceedings of the 23rd International Conference on Computational Linguistics, pp. 1263-1271, 2010.

Xiang Zhang and Yann LeCun. Text understanding from scratch. arXiv preprint arXiv:1502.01710, 2015.

Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. arXiv preprint arXiv:1506.06724, 2015.
SyWvgP5el
Published as a conference paper at ICLR 2017

EPOPT: LEARNING ROBUST NEURAL NETWORK POLICIES USING MODEL ENSEMBLES

Aravind Rajeswaran^1, Sarvjeet Ghotra^2, Balaraman Ravindran^3, Sergey Levine^4
aravraj@cs.washington.edu, sarvjeet.13it236@nitk.edu.in, ravi@cse.iitm.ac.in, svlevine@eecs.berkeley.edu
^1 University of Washington Seattle, ^2 NITK Surathkal, ^3 Indian Institute of Technology Madras, ^4 University of California Berkeley

ABSTRACT

Sample complexity and safety are major challenges when learning policies with reinforcement learning for real-world tasks, especially when the policies are represented using rich function approximators like deep neural networks. Model-based methods, where the real-world target domain is approximated using a simulated source domain, provide an avenue to tackle the above challenges by augmenting real data with simulated data. However, discrepancies between the simulated source domain and the target domain pose a challenge for simulated training. We introduce the EPOpt algorithm, which uses an ensemble of simulated source domains and a form of adversarial training to learn policies that are robust and generalize to a broad range of possible target domains, including unmodeled effects. Further, the probability distribution over source domains in the ensemble can be adapted using data from the target domain and approximate Bayesian methods, to progressively make it a better approximation. Thus, learning on a model ensemble, along with source domain adaptation, provides the benefits of both robustness and learning/adaptation.

1 INTRODUCTION

Reinforcement learning with powerful function approximators like deep neural networks (deep RL) has recently demonstrated remarkable success in a wide range of tasks like games (Mnih et al., 2015; Silver et al., 2016), simulated control problems (Lillicrap et al., 2015; Mordatch et al., 2015b), and graphics (Peng et al., 2016). However, high sample complexity is a major barrier to directly applying model-free deep RL methods to physical control tasks. Model-free algorithms like Q-learning, actor-critic, and policy gradients are known to suffer from long learning times (Kakade, 2003), which is compounded when used in conjunction with expressive function approximators like deep neural networks (DNNs). The challenge of gathering samples from the real world is further exacerbated by issues of safety for the agent and environment, since sampling with partially learned policies could be unstable (García & Fernández, 2015). Thus, model-free deep RL methods often require a prohibitively large number of potentially dangerous samples for physical control tasks.

Model-based methods, where the real-world target domain is approximated with a simulated source domain, provide an avenue to tackle the above challenges by learning policies using simulated data. The principal challenge with simulated training is the systematic discrepancy between source and target domains, and therefore methods that compensate for systematic discrepancies (modeling errors) are needed to transfer results from simulations to the real world using RL. We show that the impact of such discrepancies can be mitigated through two key ideas: (1) training on an ensemble of models in an adversarial fashion to learn policies that are robust to parametric model errors, as well as to unmodeled effects; and (2) adaptation of the source domain ensemble using data from the target domain to progressively make it a better approximation.
This can be viewed either as an instance of model-based Bayesian RL (Ghavamzadeh et al., 2015), or as transfer learning from a collection of simulated source domains to a real-world target domain (Taylor & Stone, 2009). While a number of model-free RL algorithms have been proposed (see, e.g., Duan et al. (2016) for a survey), their high sample complexity demands the use of a simulator, effectively making them model-based. We show in our experiments that such methods learn policies which are highly optimized for the specific models used in the simulator, but are brittle under model mismatch. This is not surprising, since deep networks are remarkably proficient at exploiting any systematic regularities in a simulator. Addressing the robustness of DNN policies is particularly important for transferring their success from simulated tasks to physical systems.

In this paper, we propose the Ensemble Policy Optimization (EPOpt) algorithm for finding policies that are robust to model mismatch. In line with model-based Bayesian RL, we learn a policy for the target domain by alternating between two phases: (i) given a source (model) distribution (i.e. an ensemble of models), find a robust policy that is competent for the whole distribution; (ii) gather data from the target domain using said robust policy, and adapt the source distribution. EPOpt uses an ensemble of models sampled from the source distribution, and a form of adversarial training, to learn robust policies that generalize to a broad range of models. By robust, we mean insensitivity to parametric model errors and broadly competent performance for direct transfer (also referred to as jumpstart, as in Taylor & Stone (2009)). Direct-transfer performance refers to the average initial performance (return) in the target domain, without any direct training on the target domain. By adversarial training, we mean that model instances on which the policy performs poorly in the source distribution are sampled more often, in order to encourage learning of policies that perform well for a wide range of model instances. This is in contrast to methods which learn policies that are highly optimized for specific model instances, but brittle under model perturbations. In our experiments, we did not observe a significant loss in performance from requiring the policy to work on multiple models (for example, through adopting a more conservative strategy). Further, we show that policies learned using EPOpt are robust even to effects not modeled in the source domain. Such unmodeled effects are a major issue when transferring from simulation to the real world. For the model adaptation step (ii), we present a simple method using approximate Bayesian updates, which progressively makes the source distribution a better approximation of the target domain. We evaluate the proposed methods on the hopper (12-dimensional state space; 3-dimensional action space) and half-cheetah (18-dimensional state space; 6-dimensional action space) benchmarks in MuJoCo.
Our experimental results suggest that adversarial training on model ensembles produces robust policies which generalize better than policies trained on a single, maximum-likelihood model (of the source distribution) alone.

2 PROBLEM FORMULATION

We consider parametrized Markov Decision Processes (MDPs), which are tuples of the form $\mathcal{M}(p) \equiv \langle \mathcal{S}, \mathcal{A}, \mathcal{T}_p, \mathcal{R}_p, \gamma, \mathcal{S}_{0,p} \rangle$, where $\mathcal{S}$, $\mathcal{A}$ are (continuous) states and actions respectively; $\mathcal{T}_p$, $\mathcal{R}_p$, and $\mathcal{S}_{0,p}$ are the state transition function, reward function, and initial state distribution respectively, all parametrized by $p$; and $\gamma$ is the discount factor. Thus, we consider a set of MDPs with the same state and action spaces. Each MDP in this set could potentially have different transition functions, rewards, and initial state distributions. We use transition functions of the form $s_{t+1} \sim \mathcal{T}_p(s_t, a_t)$, where $\mathcal{T}_p$ is a random process and $s_{t+1}$ is a random variable.

We distinguish between source and target MDPs using $\mathcal{M}$ and $\mathcal{W}$ respectively. We also refer to $\mathcal{M}$ and $\mathcal{W}$ as source and target domains respectively, as is common in the transfer learning set-up. Our objective is to learn the optimal policy for $\mathcal{W}$, and to do so we have access to $\mathcal{M}(p)$. We assume that we have a distribution ($\mathcal{D}$) over the source domains (MDPs), generated by a distribution over the parameters $\mathcal{P} \equiv \mathbb{P}(p)$ that captures our subjective belief about the parameters of $\mathcal{W}$. Let $\mathcal{P}$ be parametrized by $\psi$ (e.g. mean, standard deviation). For example, $\mathcal{M}$ could be a hopping task with reward proportional to hopping velocity, where falling down corresponds to a terminal state. For this task, $p$ could correspond to parameters like torso mass, ground friction, and damping in joints, all of which affect the dynamics. Ideally, we would like the target domain to be in the model class, i.e. $\exists p \,\vert\, \mathcal{M}(p) = \mathcal{W}$. However, in practice, there are likely to be unmodeled effects, and we analyze this setting in our experiments. We wish to learn a policy $\pi^*_\theta(s)$ that performs well for all $\mathcal{M} \sim \mathcal{D}$. Note that this robust policy does not have an explicit dependence on $p$, and we require it to perform well without knowledge of $p$.

3 LEARNING PROTOCOL AND EPOPT ALGORITHM

We follow the round-based learning protocol of Bayesian model-based RL. We use the term rounds when interacting with the target domain, and episodes when performing rollouts with the simulator. In each round, we interact with the target domain after computing the robust policy on the current (i.e. posterior) simulated source distribution. Following this, we update the source distribution using data from the target domain collected by executing the robust policy. Thus, in round $i$, we update two sets of parameters: $\theta_i$, the parameters of the robust policy (neural network); and $\psi_i$, the parameters of the source distribution. The two key steps in this procedure are finding a robust policy given a source distribution, and updating the source distribution using data from the target domain. In this section, we present our approach for both of these steps.

3.1 ROBUST POLICY SEARCH

We introduce the EPOpt algorithm for finding a robust policy using the source distribution. EPOpt is a policy-gradient-based meta-algorithm which uses batch policy optimization methods as a subroutine. Batch policy optimization algorithms (Williams, 1992; Kakade, 2001; Schulman et al., 2015) collect a batch of trajectories by rolling out the current policy, and use the trajectories to make a policy update.
The basic structure of EPOpt is to sample a collection of models from the source distribution, sample trajectories from each of these models, and make a gradient update based on a subset of the sampled trajectories. We first define evaluation metrics for the parametrized policy $\pi_\theta$:

$$\eta_{\mathcal{M}}(\theta, p) = \mathbb{E}_{\tilde{\tau}}\Big[\sum_{t=0}^{T-1} \gamma^t r_t(s_t, a_t) \,\Big|\, p\Big], \qquad (1)$$

$$\eta_{\mathcal{D}}(\theta) = \mathbb{E}_{p \sim \mathcal{P}}\big[\eta_{\mathcal{M}}(\theta, p)\big] = \mathbb{E}_{p \sim \mathcal{P}}\Big[\mathbb{E}_{\tilde{\tau}}\Big[\sum_{t=0}^{T-1} \gamma^t r_t(s_t, a_t) \,\Big|\, p\Big]\Big] = \mathbb{E}_{\tau}\Big[\sum_{t=0}^{T-1} \gamma^t r_t(s_t, a_t)\Big].$$

In (1), $\eta_{\mathcal{M}}(\theta, p)$ is the evaluation of $\pi_\theta$ on the model $\mathcal{M}(p)$, with $\tilde{\tau}$ being trajectories generated by $\mathcal{M}(p)$ and $\pi_\theta$: $\tilde{\tau} = \{s_t, a_t, r_t\}_{t=0}^{T}$, where $s_{t+1} \sim \mathcal{T}_p(s_t, a_t)$, $s_0 \sim \mathcal{S}_{0,p}$, $r_t \sim \mathcal{R}_p(s_t, a_t)$, and $a_t \sim \pi_\theta(s_t)$. Similarly, $\eta_{\mathcal{D}}(\theta)$ is the evaluation of $\pi_\theta$ over the source domain distribution. The corresponding expectation is over trajectories $\tau$ generated by $\mathcal{D}$ and $\pi_\theta$: $\tau = \{s_t, a_t, r_t\}_{t=0}^{T}$, where $s_{t+1} \sim \mathcal{T}_{p_t}(s_t, a_t)$, $p_{t+1} = p_t$, $s_0 \sim \mathcal{S}_{0,p_0}$, $r_t \sim \mathcal{R}_{p_t}(s_t, a_t)$, $a_t \sim \pi_\theta(s_t)$, and $p_0 \sim \mathcal{P}$. With this modified notation of trajectories, batch policy optimization can be invoked for policy search.

Optimizing $\eta_{\mathcal{D}}$ allows us to learn a policy that performs best in expectation over models in the source domain distribution. However, this does not necessarily lead to a robust policy, since there could be high variability in performance across different models in the distribution. To explicitly seek a robust policy, we use a softer version of the max-min objective suggested in robust control, and optimize for the conditional value at risk (CVaR) (Tamar et al., 2015):

$$\max_{\theta, y} \int_{\mathcal{F}(\theta)} \eta_{\mathcal{M}}(\theta, p)\, \mathcal{P}(p)\, dp \quad \text{s.t.} \quad P\big(\eta_{\mathcal{M}}(\theta, P) \le y\big) = \epsilon, \qquad (2)$$

where $\mathcal{F}(\theta) = \{p \mid \eta_{\mathcal{M}}(\theta, p) \le y\}$ is the set of parameters corresponding to models that produce the worst $\epsilon$ percentile of returns, and provides the limit for the integral; $\eta_{\mathcal{M}}(\theta, P)$ is the random variable of returns, which is induced by the distribution over model parameters; and $\epsilon$ is a hyperparameter which governs the level of relaxation from the max-min objective. The interpretation is that (2) maximizes the expected return for the worst $\epsilon$-percentile of MDPs in the source domain distribution. We adapt the previous policy gradient formulation to approximately optimize the objective in (2). The resulting algorithm, which we call EPOpt-$\epsilon$, generalizes learning a policy using an ensemble of source MDPs which are sampled from a source domain distribution.

In Algorithm 1, $R(\tau_k) \equiv \sum_{t=0}^{T-1} \gamma^t r_{t,k}$ denotes the discounted return obtained in trajectory sample $\tau_k$. In line 7, we compute the $\epsilon$ percentile value of returns from the $N$ trajectories. In line 8, we find the subset of sampled trajectories which have returns lower than $Q_\epsilon$. Line 9 calls one step of an underlying batch policy optimization subroutine on the subset of trajectories from line 8. For the CVaR objective, it is important to use a good baseline for the value function. Tamar et al. (2015) show that without a baseline, the resulting policy gradient is biased and not consistent. We use a linear function as the baseline, with a time-varying feature vector, to approximate the value function, similar to Duan et al. (2016). The parameters of the baseline are estimated using only the subset of trajectories with return less than $Q_\epsilon$. We found that this approach led to empirically good results.
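To make the per-iteration structure concrete before the formal pseudocode, here is a minimal Python sketch of the $\epsilon$-percentile sub-sampling step; `sample_params`, `rollout`, and `batch_pol_opt` are placeholders of ours standing in for the model sampler, the simulator rollout, and one step of a batch policy optimizer such as TRPO.

```python
# Sketch of one EPOpt-epsilon iteration (our simplification of Algorithm 1):
# roll out on N sampled models, keep only the worst epsilon-fraction of
# trajectories, and hand those to the batch policy optimizer.
import numpy as np

def epopt_iteration(theta, sample_params, rollout, batch_pol_opt, N=240, eps=0.1):
    trajs, returns = [], []
    for _ in range(N):
        p = sample_params()                    # sample a model from the ensemble
        tau, R = rollout(theta, p)             # one trajectory on M(p), with return R
        trajs.append(tau)
        returns.append(R)
    returns = np.asarray(returns)
    Q_eps = np.percentile(returns, 100 * eps)  # epsilon-percentile of returns
    worst = [t for t, R in zip(trajs, returns) if R <= Q_eps]
    return batch_pol_opt(theta, worst)         # gradient step on the worst subset
```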
Algorithm 1: EPOpt-$\epsilon$ for Robust Policy Search

1   Input: $\psi$, $\theta_0$, $n_{iter}$, $N$, $\epsilon$
2   for iteration $i = 0, 1, 2, \ldots, n_{iter}$ do
3       for $k = 1, 2, \ldots, N$ do
4           sample model parameters $p_k \sim \mathcal{P}_\psi$
5           sample a trajectory $\tau_k = \{s_t, a_t, r_t, s_{t+1}\}_{t=0}^{T-1}$ from $\mathcal{M}(p_k)$ using policy $\pi(\theta_i)$
6       end
7       compute $Q_\epsilon = \epsilon$ percentile of $\{R(\tau_k)\}_{k=1}^{N}$
8       select the sub-set $\mathbb{T} = \{\tau_k : R(\tau_k) \le Q_\epsilon\}$
9       update policy: $\theta_{i+1} = \text{BatchPolOpt}(\theta_i, \mathbb{T})$
10  end

For small values of $\epsilon$, we observed that using the sub-sampling step from the beginning led to unstable learning. Policy gradient methods adjust the parameters of the policy to increase the probability of trajectories with high returns and reduce the probability of poor trajectories. EPOpt, due to the sub-sampling step, emphasizes penalizing poor trajectories more. This might constrain the initial exploration needed to find good trajectories. Thus, we initially use a setting of $\epsilon = 1$ for a few iterations before setting $\epsilon$ to the desired value. This corresponds to exploring initially to find promising trajectories, and then rapidly reducing the probability of trajectories that do not generalize.

3.2 ADAPTING THE SOURCE DOMAIN DISTRIBUTION

In line with model-based Bayesian RL, we can adapt the ensemble distribution after observing trajectory data from the target domain. The Bayesian update can be written as:

$$P(P \mid \tau_k) = \frac{1}{Z} \, P(\tau_k \mid P) \, P(P) = \frac{1}{Z} \prod_{t=0}^{T-1} P\big(S_{t+1} = s_{t+1}^{(k)} \,\big|\, s_t^{(k)}, a_t^{(k)}, p\big) \, P(P = p), \qquad (3)$$

where $\frac{1}{Z}$ is the partition function (normalization) required to make the probabilities sum to 1, $S_{t+1}$ is the random variable representing the next state, and $\{s_t^{(k)}, a_t^{(k)}, s_{t+1}^{(k)}\}_{t=0}^{T}$ are the data observed along trajectory $\tau_k$. We try to explain the target trajectory using the stochasticity in the state-transition function, which also models sensor errors. This provides the following expression for the likelihood:

$$P(S_{t+1} \mid s_t, a_t, p) \equiv \mathcal{T}_p(s_t, a_t). \qquad (4)$$

We follow a sampling-based approach to calculate the posterior, by sampling a set of model parameters $p_i = [p_1, p_2, \ldots, p_M]$ from a sampling distribution $P_S(p_i)$. Consequently, using Bayes rule and importance sampling, we have:

$$P(p_i \mid \tau_k) \propto \mathcal{L}(\tau_k \mid p_i) \, \frac{P_{\mathcal{P}}(p_i)}{P_S(p_i)}, \qquad (5)$$

where $P_{\mathcal{P}}(p_i)$ is the probability of drawing $p_i$ from the prior distribution, and $\mathcal{L}(\tau_k \mid p_i)$ is the likelihood of generating the observed trajectory with model parameters $p_i$. The weighted samples from the posterior can be used to estimate a parametric model, as we do in this paper. Alternatively, one could approximate the continuous probability distribution using discrete weighted samples, as in particle filters. In cases where the prior has very low probability density in certain parts of the parameter space, it might be advantageous to choose a sampling distribution different from the prior. The likelihood can be factored using the Markov property as $\mathcal{L}(\tau_k \mid p_i) = \prod_t P(S_{t+1} = s_{t+1}^{(k)} \mid s_t^{(k)}, a_t^{(k)}, p_i)$. This simple model adaptation rule allows us to illustrate the utility of EPOpt for robust policy search, as well as its integration with model adaptation, to learn policies in cases where the target model could be very different from the initially assumed distribution.
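A minimal sketch of the sampling-based update in (5) follows. Purely for illustration, we assume a Gaussian transition density around a deterministic simulator prediction `sim_step(s, a, p)`; the paper only requires that the transition density $\mathcal{T}_p$ be evaluable, and the noise scale and function names here are our placeholders.

```python
# Sketch of the importance-sampling posterior update in Eq. (5), under an
# assumed Gaussian transition density around the simulator's prediction.
import numpy as np
from scipy.stats import norm

def log_lik(traj, p, sim_step, noise_std=0.01):
    """Sum of log transition densities of observed (s, a, s') triples under p."""
    return sum(norm.logpdf(s_next, loc=sim_step(s, a, p), scale=noise_std).sum()
               for (s, a, s_next) in traj)

def posterior_weights(traj, samples, log_prior, log_sampling, sim_step):
    """samples: list of parameter vectors drawn from the sampling distribution."""
    logw = np.array([log_lik(traj, p, sim_step) + log_prior(p) - log_sampling(p)
                     for p in samples])
    logw -= logw.max()                  # subtract max for numerical stability
    w = np.exp(logw)
    return w / w.sum()                  # normalized importance weights
```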
4 EXPERIMENTS

We evaluated the proposed EPOpt-$\epsilon$ algorithm on the 2D hopper (Erez et al., 2011) and half-cheetah (Wawrzynski, 2009) benchmarks using the MuJoCo physics simulator (Todorov et al., 2012).^1 Both tasks involve complex second-order dynamics and direct torque control. Under-actuation, high dimensionality, and contact discontinuities make these tasks challenging reinforcement learning benchmarks. These challenges, when coupled with systematic parameter discrepancies, can quickly degrade the performance of policies and make them unstable, as we show in the experiments. The batch policy optimization sub-routine is implemented using TRPO. We parametrize the stochastic policy using the scheme presented in Schulman et al. (2015). The policy is represented with a Gaussian distribution, the mean of which is parametrized using a neural network with two hidden layers. Each hidden layer has 64 units with a tanh non-linearity, and the final output layer is made of linear units. Normally distributed independent random variables are added to the output of this neural network, and we also learn the standard deviation of their distributions. Our experiments are aimed at answering the following questions:

1. How does the performance of standard policy search methods (like TRPO) degrade in the presence of systematic physical differences between the training and test domains, as might be the case when training in simulation and testing in the real world?
2. Does training on a distribution of models with EPOpt improve the performance of the policy when tested under various model discrepancies, and how much does ensemble training degrade overall performance (e.g. due to acquiring a more conservative strategy)?
3. How does the robustness of the policy to physical parameter discrepancies change when using the robust EPOpt-$\epsilon$ variant of our method?
4. Can EPOpt learn policies that are robust to unmodeled effects, that is, discrepancies in physical parameters between source and target domains that do not vary in the source domain ensemble?
5. When the initial model ensemble differs substantially from the target domain, can the ensemble be adapted efficiently, and how much data from the target domain is required for this?

In all the comparisons, performance refers to the average undiscounted return per trajectory or episode (we consider finite-horizon episodic problems). In addition to the previously defined performance, we also use the 10th percentile of the return distribution as a proxy for the worst-case return.

^1 Supplementary video: https://youtu.be/w1YJ9vwaoto

4.1 COMPARISON TO STANDARD POLICY SEARCH

In Figure 1, we evaluate the performance of standard TRPO and EPOpt ($\epsilon = 0.1$) on the hopper task, in the presence of a simple parametric discrepancy in the physics of the system between the training (source) and test (target) domains. The plots show the performance of various policies on test domains with different torso masses. The first three plots show policies that are each trained on a single torso mass in the source domain, while the last plot illustrates the performance of EPOpt, which is trained on a Gaussian mass distribution.

Figure 1: Performance of hopper policies when testing on target domains with different torso masses (x-axis: torso mass 3 to 9; y-axis: performance 0 to 4000). The first three plots (blue, green, and red) show the performance of policies trained with TRPO on source domains with torso mass 3, 6, and 9, respectively (denoted by m = in the legend). The rightmost plot shows the performance of EPOpt($\epsilon$ = 0.1) trained on a Gaussian source distribution with mean mass $\mu$ = 6 and standard deviation $\sigma$ = 1.5. The shaded regions show the 10th and 90th percentiles of the return distribution. Policies trained using traditional approaches on a single mass value are unstable for even slightly different masses, making the hopper fall over when trying to move forward. In contrast, the EPOpt policy is stable and achieves a high level of performance over the entire range of masses considered. Further, the EPOpt policy does not suffer from degradation in performance as a consequence of adopting a more robust policy.
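For reference, the policy class described at the start of this section (a Gaussian whose mean is a two-hidden-layer tanh network of 64 units each, with a learned state-independent standard deviation) can be sketched as follows; the initialization scale and the omission of bias terms are simplifications of ours, not taken from the paper.

```python
# Sketch of the Gaussian MLP policy used in the experiments: mean from a
# 2x64 tanh network with a linear output layer, plus a learned
# state-independent log standard deviation.
import numpy as np

class GaussianMLPPolicy:
    def __init__(self, obs_dim, act_dim, hidden=64, seed=0):
        rng = np.random.RandomState(seed)
        self.W1 = rng.randn(obs_dim, hidden) * 0.1
        self.W2 = rng.randn(hidden, hidden) * 0.1
        self.W3 = rng.randn(hidden, act_dim) * 0.1
        self.log_std = np.zeros(act_dim)       # learned jointly with the weights

    def act(self, s, rng):
        h = np.tanh(np.tanh(s @ self.W1) @ self.W2)
        mean = h @ self.W3                     # linear output layer
        return mean + np.exp(self.log_std) * rng.randn(mean.shape[-1])
```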
Figure 2: On the left is an illustration of the simulated 2D hopper task studied in this paper. On the right, we depict the performance of policies for various model instances of the hopper task. The performance is depicted as a heat map for various model configurations, the parameters of which are given on the x and y axes. The adversarially trained policy, EPOpt($\epsilon$ = 0.1), is observed to generalize to a wider range of models and is more robust.

The results show that no single torso mass value produces a policy that is successful in all target domains. However, the EPOpt policy succeeds almost uniformly for all tested mass values. Furthermore, the results show that there is almost no degradation in the performance of EPOpt for any mass setting, suggesting that the EPOpt policy does not suffer substantially from adopting a more robust strategy.

4.2 ANALYSIS OF ROBUSTNESS

Next, we analyze the robustness of policies trained using EPOpt on the hopper domain. For this analysis, we construct a source distribution which varies four different physical parameters: torso mass, ground friction, foot joint damping, and joint inertia (armature). This distribution is presented in Table 1. Using this source distribution, we compare three different methods: (1) standard policy search (TRPO) trained on a single model corresponding to the mean parameters in Table 1; (2) EPOpt($\epsilon$ = 1) trained on the source distribution; (3) EPOpt($\epsilon$ = 0.1), i.e. the adversarially trained policy, again trained on the previously described source distribution. The aim of the comparison is to study direct-transfer performance, similar to the robustness evaluations common in robust controller design (Zhou et al., 1996). Hence, we learn a policy using each of the methods, and then test the policies on different model instances (i.e. different combinations of physical parameters) without any adaptation. The results of this comparison are summarized in Figure 2, where we present the performance of the policy for testing conditions corresponding to different torso mass and friction values, which we found to have the most pronounced impact on performance. The results indicate that EPOpt($\epsilon$ = 0.1) produces highly robust policies. A similar analysis for the 10th percentile of the return distribution (a softer version of worst-case performance), the half-cheetah task, and different $\epsilon$ settings is presented in the appendix.

Table 1: Initial source domain distribution.

Hopper           $\mu$   $\sigma$  low    high
mass             6.0     1.5       3.0    9.0
ground friction  2.0     0.25      1.5    2.5
joint damping    2.5     1.0       1.0    4.0
armature         1.0     0.25      0.5    1.5

Half-Cheetah     $\mu$   $\sigma$  low    high
mass             6.0     1.5       3.0    9.0
ground friction  0.5     0.1       0.3    0.7
joint damping    1.5     0.5       0.5    2.5
armature         0.125   0.04      0.05   0.2

Figure 3: Comparison between policies trained on a fixed maximum-likelihood model with mass 6, and an ensemble where all models have the same mass (6) and the other parameters vary as described in Table 1 (x-axis: torso mass 3 to 9; y-axis: performance 0 to 4000; legend: Ensemble (unmodeled), Maximum-Likelihood).
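Sampling from the source distribution in Table 1 can be implemented, under the reasonable assumption that each Gaussian is truncated to its listed [low, high] range, as in the sketch below.

```python
# Sketch of drawing hopper model parameters from the source distribution in
# Table 1, assuming the Gaussians are truncated to the listed [low, high]
# ranges (simple rejection sampling).
import numpy as np

HOPPER_DIST = {                     # name: (mu, sigma, low, high)
    "mass":            (6.0,  1.5,  3.0, 9.0),
    "ground_friction": (2.0,  0.25, 1.5, 2.5),
    "joint_damping":   (2.5,  1.0,  1.0, 4.0),
    "armature":        (1.0,  0.25, 0.5, 1.5),
}

def sample_model(rng):
    p = {}
    for name, (mu, sigma, low, high) in HOPPER_DIST.items():
        x = rng.normal(mu, sigma)
        while not (low <= x <= high):      # reject draws outside the bounds
            x = rng.normal(mu, sigma)
        p[name] = x
    return p

print(sample_model(np.random.RandomState(0)))
```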
4.3 ROBUSTNESS TO UNMODELED EFFECTS

To analyze the robustness to unmodeled effects, our next experiment considers the setting where the source domain distribution is obtained by varying friction, damping, and armature as in Table 1, but does not include a distribution over torso mass. Specifically, all models in the source domain distribution have the same torso mass (a value of 6), but we evaluate the policy trained on this distribution on target domains where the torso mass is different. Figure 3 indicates that the EPOpt($\epsilon$ = 0.1) policy is robust to a broad range of torso masses even when its variation is not considered. However, as expected, this policy is not as robust as in the case where mass is also modeled as part of the source domain distribution.

4.4 MODEL ADAPTATION

The preceding experiments show that EPOpt can find robust policies, but the source distribution in these experiments was chosen to be broad enough that the target domain is not too far from high-density regions of the distribution. However, for real-world problems, we might not have the domain knowledge to identify a good source distribution in advance. In such settings, model (source) adaptation allows us to change the parameters of the source distribution using data gathered from the target domain. Additionally, model adaptation is helpful when the parameters of the target domain could change over time, for example due to wear and tear in a physical system. To illustrate model adaptation, we performed an experiment where the target domain was very far from the high-density regions of the initial source distribution, as depicted in Figure 4(a). In this experiment, the source distribution varies the torso mass and ground friction. We observe that, progressively, the source distribution becomes a better approximation of the target domain, and consequently the performance improves. In this case, since we followed a sampling-based approach, we used a uniform sampling distribution and weighted each sample with the importance weight, as described in Section 3.2. Eventually, after 10 iterations, the source domain distribution is able to accurately match the target domain. Figure 4(b) depicts the learning curve, and we see that a robust policy with a return of more than 2500, which roughly corresponds to a situation where the hopper is able to move forward without falling down for the duration of the episode, can be discovered with just 5 trajectories from the target domain. Subsequently, the policy improves nearly monotonically, and EPOpt finds a good policy with just 11 episodes worth of data from the target domain. In contrast, to achieve the same level of performance on the target domain, completely model-free methods like TRPO would require more than $2 \times 10^4$ trajectories when the neural network parameters are initialized randomly.

Figure 4: (a) Visualization of the source distribution during model adaptation on the hopper task, where mass and friction coefficient are varied in the source domain (panels show iterations 0, 1, 2, and 7; x-axis: torso mass; y-axis: friction). The red cross indicates the unknown parameters of the target domain. The contours in the plot indicate the distribution over models (we assume a Gaussian distribution). Lighter colors and more concentrated contour lines indicate regions of higher density. Each iteration corresponds to one round (episode) of interaction with the target domain. The high-density regions gradually move toward the true model, while maintaining probability mass over a range of parameters which can explain the behavior of the target domain. (b) The corresponding learning curve, where the shaded region describes the 10th and 90th percentiles of the performance distribution, and the solid line is the average performance.
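The parametric refit mentioned in Section 3.2 (estimating a new Gaussian source distribution from the importance-weighted parameter samples, matching the Gaussian assumption in Figure 4) could look like the following sketch; it consumes the normalized weights produced by the earlier `posterior_weights` sketch.

```python
# Sketch of refitting a Gaussian source distribution from importance-weighted
# parameter samples (the parametric-model estimate mentioned in Section 3.2).
import numpy as np

def refit_gaussian(samples, weights):
    """samples: (M, d) parameter draws; weights: (M,) normalized weights."""
    mu = np.average(samples, axis=0, weights=weights)
    diff = samples - mu
    cov = (weights[:, None] * diff).T @ diff   # weighted covariance estimate
    return mu, cov
```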
5 RELATED WORK

Robust control is a branch of control theory which formally studies the development of robust policies (Zhou et al., 1996; Nilim & Ghaoui, 2005; Lim et al., 2013). However, typically no distribution over source or target tasks is assumed, and a worst-case analysis is performed. Most results from this field have been concentrated around linear systems or finite MDPs, which often cannot adequately model the complexities of real-world tasks. The set-up of model-based Bayesian RL maintains a belief over models for decision making under uncertainty (Vlassis et al., 2012; Ghavamzadeh et al., 2015). In Bayesian RL, through interaction with the target domain, the uncertainty is reduced to find the correct or closest model. Application of this idea in its full general form is difficult, and requires either restrictive assumptions like finite MDPs (Poupart et al., 2006) or Gaussian dynamics (Ross et al., 2008), or task-specific innovations. Previous methods have also suggested treating uncertain model parameters as unobserved state variables in a continuous POMDP framework, and solving the POMDP to get an optimal exploration-exploitation trade-off (Duff, 2003; Porta et al., 2006). While this approach is general, and allows automatic learning of epistemic actions, extending such methods to large continuous control tasks like those considered in this paper is difficult.

Risk-sensitive RL methods (Delage & Mannor, 2010; Tamar et al., 2015) have been proposed to act as a bridge between robust control and Bayesian RL. These approaches allow for using subjective model belief priors, prevent overly conservative policies, and enjoy some strong guarantees typically associated with robust control. However, their application to high-dimensional continuous control tasks has not been sufficiently explored. We refer readers to García & Fernández (2015) for a survey of related risk-sensitive RL methods in the context of robustness and safety.

Standard model-based control methods typically operate by finding a maximum-likelihood estimate of the target model (Ljung, 1998; Ross & Bagnell, 2012; Deisenroth et al., 2013), followed by policy optimization. The use of model ensembles to produce robust controllers was explored recently in robotics. Mordatch et al. (2015a) use a trajectory optimization approach and an ensemble with a small finite set of models, whereas we follow a sampling-based direct policy search approach over a continuous distribution of uncertain parameters, and also show domain adaptation. Sampling-based approaches can be applied to complex models and discrete MDPs which cannot easily be planned through. Similarly, Wang et al. (2010) use an ensemble of models, but their goal is to optimize for average-case performance as opposed to transferring to a target MDP. Wang et al. (2010) use a hand-engineered policy class whose parameters are optimized with CMA-ES; EPOpt, on the other hand, can directly optimize expressive neural network policies.
In addition, we show model adaptation, the effectiveness of the sub-sampling step (the $\epsilon < 1$ case), and robustness to unmodeled effects, all of which are important for transferring to a target MDP.

Learning of parametrized skills (da Silva et al., 2012) is also concerned with finding policies for a distribution of parametrized tasks. However, it is primarily geared towards situations where the task parameters are revealed during test time. Our work is motivated by situations where target task parameters (e.g. friction) are unknown. A number of methods have also been suggested to reduce sample complexity when provided with either a baseline policy (Thomas et al., 2015; Kakade & Langford, 2002), expert demonstrations (Levine & Koltun, 2013; Argall et al., 2009), or an approximate simulator (Tamar et al., 2012; Abbeel et al., 2006). These are complementary to our work, in the sense that our policy, which has good direct-transfer performance, can be used to sample from the target domain, and other off-policy methods could be explored for policy improvement.

6 CONCLUSIONS AND FUTURE WORK

In this paper, we presented the EPOpt-$\epsilon$ algorithm for training robust policies on ensembles of source domains. Our method provides for the training of robust policies, and supports an adversarial training regime designed to provide good direct-transfer performance. We also describe how our approach can be combined with Bayesian model adaptation to adapt the source domain ensemble to a target domain using a small amount of target domain experience. Our experimental results demonstrate that the ensemble approach provides highly robust and generalizable policies in fairly complex simulated robotic tasks. Our experiments also demonstrate that Bayesian model adaptation can produce distributions over models that lead to better policies on the target domain than more standard maximum likelihood estimation, particularly in the presence of unmodeled effects.

Although our method exhibits good generalization performance, the adaptation algorithm we use currently relies on sampling the parameter space, which is computationally intensive as the number of variable physical parameters increases. We observed that (adaptive) sampling from the prior leads to fast and reliable adaptation if the true model does not have very low probability in the prior. However, when this assumption breaks, we require a different sampling distribution which could produce samples from all regions of the parameter space. This is a general drawback of Bayesian adaptation methods. In future work, we plan to explore alternative sampling and parameterization schemes, including non-parametric distributions. An eventual end-goal would be to replace the physics simulator entirely with learned Bayesian neural network models, which could be adapted with limited data from the physical system. These models could be pre-trained using physics-based simulators like MuJoCo to get a practical initialization of the neural network parameters. Such representations are likely useful when dealing with high-dimensional inputs like simulated vision from rendered images, or tasks with complex dynamics like deformable bodies, which are needed to train highly generalizable policies that can successfully transfer to physical robots acting in the real world.

ACKNOWLEDGMENTS

The authors would like to thank Emo Todorov, Sham Kakade, and students of Emo Todorov's research group for insightful comments about the work. The authors would also like to thank Emo Todorov for the MuJoCo simulator.
Aravind Rajeswaran and Balaraman Ravindran acknowledge financial support from ILDS, IIT Madras.

REFERENCES

Pieter Abbeel, Morgan Quigley, and Andrew Y. Ng. Using inaccurate models in reinforcement learning. In ICML, 2006.

Brenna D. Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5):469-483, 2009.

Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym, 2016.

Bruno Castro da Silva, George Konidaris, and Andrew G. Barto. Learning parameterized skills. In ICML, 2012.

Marc Peter Deisenroth, Gerhard Neumann, and Jan Peters. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1-2):1-142, 2013.

Erick Delage and Shie Mannor. Percentile optimization for Markov decision processes with parameter uncertainty. Operations Research, 58(1):203-213, 2010.

Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. In ICML, 2016.

Michael O. Duff. Design for an optimal probe. In ICML, 2003.

Tom Erez, Yuval Tassa, and Emanuel Todorov. Infinite-horizon model predictive control for periodic tasks with contacts. In Proceedings of Robotics: Science and Systems, 2011.

Javier García and Fernando Fernández. A comprehensive survey on safe reinforcement learning. Journal of Machine Learning Research, 2015.

Mohammad Ghavamzadeh, Shie Mannor, Joelle Pineau, and Aviv Tamar. Bayesian reinforcement learning: A survey. Foundations and Trends in Machine Learning, 8(5-6):359-483, 2015.

Sham Kakade. A natural policy gradient. In NIPS, 2001.

Sham Kakade. On the Sample Complexity of Reinforcement Learning. PhD thesis, University College London, 2003.

Sham Kakade and John Langford. Approximately optimal approximate reinforcement learning. In ICML, 2002.

Sergey Levine and Vladlen Koltun. Guided policy search. In ICML, 2013.

T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. arXiv e-prints, September 2015.

Shiau Hong Lim, Huan Xu, and Shie Mannor. Reinforcement learning in robust Markov decision processes. In NIPS, 2013.

Lennart Ljung. System Identification, pp. 163-173. Birkhäuser Boston, Boston, MA, 1998.

Volodymyr Mnih et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, Feb 2015.

I. Mordatch, K. Lowrey, and E. Todorov. Ensemble-CIO: Full-body dynamic motion planning that transfers to physical humanoids. In IROS, 2015a.

Igor Mordatch, Kendall Lowrey, Galen Andrew, Zoran Popovic, and Emanuel V. Todorov. Interactive control of diverse complex characters with neural networks. In NIPS, 2015b.

Arnab Nilim and Laurent El Ghaoui. Robust control of Markov decision processes with uncertain transition matrices. Operations Research, 53(5):780-798, 2005.

Xue Bin Peng, Glen Berseth, and Michiel van de Panne. Terrain-adaptive locomotion skills using deep reinforcement learning. ACM Transactions on Graphics (Proc. SIGGRAPH 2016), 2016.

Josep M. Porta, Nikos A. Vlassis, Matthijs T. J. Spaan, and Pascal Poupart. Point-based value iteration for continuous POMDPs. Journal of Machine Learning Research, 7:2329-2367, 2006.

Pascal Poupart, Nikos A. Vlassis, Jesse Hoey, and Kevin Regan. An analytic solution to discrete Bayesian reinforcement learning. In ICML, 2006.
S. Ross, B. Chaib-draa, and J. Pineau. Bayesian reinforcement learning in continuous POMDPs with application to robot navigation. In ICRA, 2008.

Stephane Ross and Drew Bagnell. Agnostic system identification for model-based reinforcement learning. In ICML, 2012.

John Schulman, Sergey Levine, Philipp Moritz, Michael Jordan, and Pieter Abbeel. Trust region policy optimization. In ICML, 2015.

David Silver et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, Jan 2016.

Aviv Tamar, Dotan Di Castro, and Ron Meir. Integrating a partial model into model free reinforcement learning. Journal of Machine Learning Research, 2012.

Aviv Tamar, Yonatan Glassner, and Shie Mannor. Optimizing the CVaR via sampling. In AAAI Conference on Artificial Intelligence, 2015.

Matthew E. Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10:1633-1685, December 2009.

Philip Thomas, Georgios Theocharous, and Mohammad Ghavamzadeh. High-confidence off-policy evaluation. In AAAI Conference on Artificial Intelligence, 2015.

E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026-5033, Oct 2012.

Nikos Vlassis, Mohammad Ghavamzadeh, Shie Mannor, and Pascal Poupart. Bayesian Reinforcement Learning, pp. 359-386. Springer Berlin Heidelberg, Berlin, Heidelberg, 2012.

Jack M. Wang, David J. Fleet, and Aaron Hertzmann. Optimizing walking controllers for uncertain inputs and environments. ACM Trans. Graph., 2010.

Pawel Wawrzynski. Real-time reinforcement learning by sequential actor-critics and experience replay. Neural Networks, 22:1484-1497, 2009.

Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229-256, 1992.

Kemin Zhou, John C. Doyle, and Keith Glover. Robust and Optimal Control. Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 1996. ISBN 0-13-456567-3.

A APPENDIX

A.1 DESCRIPTION OF SIMULATED ROBOTIC TASKS CONSIDERED IN THIS WORK

Hopper: The hopper task is to make a 2D planar hopper with three joints and 4 body parts hop forward as fast as possible (Erez et al., 2011). This problem has a 12-dimensional state space and a 3-dimensional action space that corresponds to torques at the joints. We construct the source domain by considering a distribution over 4 parameters: torso mass, ground friction, armature (inertia), and damping of the foot.

Half-Cheetah: The half-cheetah task (Wawrzynski, 2009) requires us to make a 2D cheetah with two legs run forward as fast as possible. The simulated robot has 8 body links with an 18-dimensional state space and a 6-dimensional action space that corresponds to joint torques. Again, we construct the source domain using a distribution over the following parameters: torso and head mass, ground friction, damping, and armature (inertia) of the foot joints.
These challenges, when coupled with parameter uncertainties, lead to a dramatic degradation in the quality of policies when robustness is not explicitly considered.

A video demonstration of the trained policies on these tasks is available as a supplementary video (https://youtu.be/w1YJ9vwaoto).

Reward functions: For both tasks, we used the standard reward functions implemented with OpenAI Gym (Brockman et al., 2016), with minor modifications. The reward structure for the hopper task is:

$$r(s, a) = v_x - 0.001\,\|a\|^2 + b,$$

where $s$ are the states comprising joint positions and velocities, $a$ are the actions (controls), and $v_x$ is the forward velocity. $b$ is a bonus for being alive ($b = 1$). The episode terminates when $z_{torso} < 0.7$ or when $|\theta_y| < 0.2$, where $\theta_y$ is the forward pitch of the body.

For the cheetah task, we use the reward function:

$$r(s, a) = v_x - 0.1\,\|a\|^2 + b;$$

the alive bonus is 1 if the head of the cheetah is above 0.25 (relative to the torso), and similarly the episode terminates if the alive condition is violated.

Our implementation of the algorithms and environments is public in this repository to facilitate reproduction of results: https://github.com/aravindr93/robustRL

A.2 HYPERPARAMETERS

1. Neural network architecture: We used a neural network with two hidden layers, each with 64 units and tanh non-linearity. The policy updates are implemented using TRPO.
2. Trust region size in TRPO: The maximum KL divergence between successive policy updates is constrained to be 0.01.
3. Number and length of trajectory rollouts: In each iteration, we sample N = 240 models from the ensemble, and one rollout is performed on each such model. This was implemented in parallel on multiple (6) CPUs. Each trajectory is of length 1000, the same as the standard implementations of these tasks in Gym and rllab.

The results in Fig. 1 and Fig. 2 were generated after 150 and 200 iterations of TRPO respectively, with each iteration consisting of 240 trajectories as specified in (3) above.

A.3 WORST-CASE ANALYSIS FOR HOPPER TASK

Figure 2 illustrates the performance of the three considered policies: TRPO on mean parameters, EPOpt(ε = 1), and EPOpt(ε = 0.1). We similarly analyze the 10th percentile of the return distribution as a proxy for worst-case analysis, which is important for a robust control policy (here, the distribution of returns for a given model instance is due to variations in initial conditions). The corresponding results are presented below:

Figure 6: 10th percentile of the return distribution for the hopper task. EPOpt(ε = 0.1) clearly outperforms the other approaches. The 10th percentile of the return distribution for EPOpt(ε = 0.1) also nearly overlaps with the expected return, indicating that the policies trained using EPOpt(ε = 0.1) are highly robust and reliable.

A.4 ROBUSTNESS ANALYSIS FOR HALF-CHEETAH TASK

Figure 7: Performance of policies for various model instances for the half-cheetah domain, similar to Figure 2. Again, it is observed that the adversarially trained policy is robust and generalizes well to all models in the source distribution.

A.5 DIFFERENT SETTINGS FOR ε

Here, we analyze how different settings of ε influence the robustness of learned policies. The policies in this section have been trained for 200 iterations with 240 trajectory samples per iteration. Similar to the description in Section 3.1, the first 100 iterations use ε = 1, and the final 100 iterations use the desired ε. The source distribution is described in Table 1. We test the performance on a grid over the model parameters.
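Before turning to the results, here is a minimal sketch of the sub-sampling step that ε controls. This is an illustration only, not the paper's implementation: `sample_model`, `rollout`, and `batch_polopt` are hypothetical placeholders for the source-distribution sampler, the simulator rollout, and one TRPO update.

```python
import numpy as np

def epopt_iteration(policy, source_dist, sample_model, rollout, batch_polopt,
                    n_traj=240, epsilon=0.1):
    # Sample one model instance per trajectory from the source distribution
    # (e.g. perturbed torso mass, friction, damping) and roll the policy out once.
    trajs = []
    for _ in range(n_traj):
        model = sample_model(source_dist)
        trajs.append(rollout(policy, model))
    returns = np.array([traj["return"] for traj in trajs])
    # CVaR-style sub-sampling: keep only the worst epsilon-fraction of
    # trajectories by return; epsilon = 1 recovers the average-return objective.
    cutoff = np.percentile(returns, 100.0 * epsilon)
    worst = [traj for traj, ret in zip(trajs, returns) if ret <= cutoff]
    # One batch policy optimization step (e.g. TRPO) on the retained trajectories.
    return batch_polopt(policy, worst)
```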
Our results, summarized in Table 2, indicate that decreasing ε decreases the variance in performance, along with a small decrease in average performance, and hence enhances robustness.

Table 2: Performance statistics for different ε settings for the hopper task. Performance (return): mean, standard deviation, and percentiles.

ε         mean    std     5th     10th    25th    50th    75th    90th
0.05      2889    502     1662    2633    2841    2939    2966    3083
0.1       3063    579     1618    2848    3223    3286    3336    3396
0.2       3097    665     1527    1833    3259    3362    3423    3483
0.3       3121    706     1461    1635    3251    3395    3477    3513
0.4       3126    869     1013    1241    3114    3412    3504    3546
0.5       3122    1009    984     1196    1969    3430    3481    3567
0.75      3133    952     1005    1516    2187    3363    3486    3548
1.0       3224    1060    1198    1354    1928    3461    3557    3604
Max-Lik   1710    1140    352     414     646     1323    3088    3272

A.6 IMPORTANCE OF THE BASELINE FOR BATCHPOLOPT

As described in Section 3.1, it is important to use a good baseline estimate of the value function for the batch policy optimization step. When optimizing for the expected return, we can interpret the baseline as a variance reduction technique. Intuitively, policy gradient methods adjust the parameters of the policy to improve the probability of trajectories in proportion to their performance. By using a baseline for the value function, we make updates that increase the probability of trajectories that perform better than average, and vice versa. In practice, this variance reduction is essential for getting policy gradients to work. For the CVaR case, Tamar et al. (2015) showed that without a baseline, the policy gradient is biased. To study the importance of the baseline, we first consider the case where we do not employ the adversarial sub-sampling step and fix ε = 1. We use a linear baseline with a time-varying feature vector, as described in Section 3.1 (sketched below). Figure 8(a) depicts the learning curve for the source distribution in Table 1. The results indicate that the use of a baseline is important to make policy gradients work well in practice.

Next, we turn to the case of ε < 1. As mentioned in Section 3.1, setting a low ε from the start leads to unstable learning. The adversarial nature encourages penalizing poor trajectories more, which constrains the initial exploration needed to find promising trajectories. Thus we "pre-train" by using ε = 1 for some iterations before switching to the desired ε setting. From Figure 8(a), it is clear that pre-training without a baseline is unlikely to help, since the performance is poor. Thus, we use the following setup for comparison: for 100 iterations, EPOpt(ε = 1) is used with the baseline. Subsequently, we switch to EPOpt(ε = 0.1) and run for another 100 iterations, totaling 200 iterations. The results of this experiment are depicted in Figure 8(b). This result indicates that the use of a baseline is crucial for the CVaR case, without which the performance degrades very quickly. We repeated the experiment with 100 iterations of pre-training with ε = 1 and without a baseline, and observed the same effect. These empirical results reinforce the theoretical findings of Tamar et al. (2015).

A.7 ALTERNATE POLICY GRADIENT SUBROUTINES FOR BATCHPOLOPT

As emphasized previously, EPOpt is a generic policy-gradient-based meta-algorithm for finding robust policies. The BatchPolOpt step (line 9, Algorithm 1) calls one gradient step of a policy gradient method, the choice of which is largely orthogonal to the main contributions of this paper.
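As a concrete reference for the linear baseline discussed in Appendix A.6, a minimal sketch follows. The featurization shown (observations, their squares, and polynomial-in-time terms) is a common choice from the continuous-control literature and is an assumption here, not necessarily the paper's exact parametrization.

```python
import numpy as np

def baseline_features(obs, t, horizon=1000):
    # Assumed hand-coded features for a linear value baseline:
    # observations, their squares, and polynomial terms in normalized time.
    ts = t / float(horizon)
    return np.concatenate([obs, obs ** 2, [ts, ts ** 2, ts ** 3, 1.0]])

def fit_linear_baseline(trajectories, reg=1e-5):
    # Regularized least-squares fit of returns-to-go on the features above;
    # each trajectory is a dict with "observations" and "returns_to_go".
    X = np.vstack([baseline_features(obs, t)
                   for traj in trajectories
                   for t, obs in enumerate(traj["observations"])])
    y = np.concatenate([traj["returns_to_go"] for traj in trajectories])
    A = X.T @ X + reg * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)  # baseline weights
```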
Figure 8: (a) depicts the learning curve for EPOpt(ε = 1) with and without a baseline. The learning curves indicate that the use of a baseline provides a better ascent direction, thereby enabling faster learning. (b) depicts the learning curves when using the average return and CVaR objectives: EPOpt(ε = 1) with a baseline, and EPOpt(ε = 0.1) with and without a baseline. For this comparison, we "pre-train" for 100 iterations with the ε = 1 setting and using a baseline. The results indicate that a baseline is very important for the CVaR objective (ε < 1), without which the performance drops very quickly. Here, performance is the average return in the source distribution.

Figure 9: Learning curves for EPOpt(ε = 1) when using the TRPO and REINFORCE methods for the BatchPolOpt step.

For the reported results, we have used TRPO as the policy gradient method. Here, we compare the results to the case of using the classic REINFORCE algorithm. For this comparison, we use the same value function baseline parametrization for both TRPO and REINFORCE. Figure 9 depicts the learning curves when using the two policy gradient methods. We observe that performance with TRPO is significantly better. When optimizing over probability distributions, the natural gradient can navigate the warped parameter space better than the "vanilla" gradient. This observation is consistent with the findings of Kakade (2001), Schulman et al. (2015), and Duan et al. (2016).
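For completeness, the "vanilla" gradient referred to above is the REINFORCE estimator (Williams, 1992) with a baseline; a minimal sketch under the stated conventions:

```python
import numpy as np

def reinforce_gradient(grad_log_pi, returns, baselines):
    # Monte-Carlo policy gradient with a value baseline:
    #   g = (1/N) * sum_t grad log pi(a_t | s_t) * (R_t - b(s_t)).
    # grad_log_pi: (N, d) array of per-step score functions.
    # Subtracting the baseline reduces variance without biasing the
    # expected-return gradient; TRPO instead follows the natural-gradient
    # direction, preconditioning by the inverse Fisher information.
    advantages = np.asarray(returns) - np.asarray(baselines)
    return (grad_log_pi * advantages[:, None]).mean(axis=0)
```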
BkUDvt5gg
Under review as a conference paper at ICLR 2017

WAV2LETTER: AN END-TO-END CONVNET-BASED SPEECH RECOGNITION SYSTEM

Ronan Collobert
Facebook AI Research, Menlo Park
locronan@fb.com

Christian Puhrsch
Facebook AI Research, Menlo Park
cpuhrsch@fb.com

Gabriel Synnaeve
Facebook AI Research, New York
gab@fb.com

ABSTRACT

This paper presents a simple end-to-end model for speech recognition, combining a convolutional network based acoustic model and a graph decoding. It is trained to output letters, with transcribed speech, without the need for force alignment of phonemes. We introduce an automatic segmentation criterion for training from sequence annotation without alignment that is on par with CTC (Graves et al., 2006) while being simpler. We show competitive results in word error rate on the LibriSpeech corpus (Panayotov et al., 2015) with MFCC features, and promising results from the raw waveform.

1 INTRODUCTION

We present an end-to-end system for speech recognition, going from the speech signal (e.g. Mel-Frequency Cepstral Coefficients (MFCC), power spectrum, or raw waveform) to the transcription. The acoustic model is trained using letters (graphemes) directly, which removes the need for an intermediate (human or automatic) phonetic transcription. Indeed, the classical pipeline for building state-of-the-art speech recognition systems consists in first training an HMM/GMM model to force align the units on which the final acoustic model operates (most often context-dependent phone states). This approach takes its roots in HMM/GMM training (Woodland & Young, 1993). The improvements brought by deep neural networks (DNNs) (Mohamed et al., 2012; Hinton et al., 2012) and convolutional neural networks (CNNs) (Sercu et al., 2015; Soltau et al., 2014) for acoustic modeling only extend this training pipeline.

The current state of the art on LibriSpeech (the dataset that we used for our evaluations) uses this approach too (Panayotov et al., 2015; Peddinti et al., 2015b), with an additional step of speaker adaptation (Saon et al., 2013; Peddinti et al., 2015a). Recently, Senior et al. (2014) proposed GMM-free training, but the approach still requires generating a force alignment. An approach that cut ties with the HMM/GMM pipeline (and with force alignment) was to train with a recurrent neural network (RNN) (Graves et al., 2013) for phoneme transcription. There are now competitive end-to-end approaches with acoustic models topped with RNN layers, as in (Hannun et al., 2014; Miao et al., 2015; Saon et al., 2015; Amodei et al., 2015), trained with a sequence criterion (Graves et al., 2006). However, these models are computationally expensive, and thus take a long time to train.

Compared to classical approaches that need phonetic annotation (often derived from a phonetic dictionary, rules, and generative training), we propose to train the model end-to-end, using graphemes directly. Compared to sequence-criterion-based approaches that train directly from the speech signal to graphemes (Miao et al., 2015), we propose a simple(r) architecture (23 million parameters for our best model, vs. 100 million parameters in (Amodei et al., 2015)) based on convolutional networks
for the acoustic model, topped with a graph transformer network (Bottou et al., 1997), trained with a simpler sequence criterion. Our word error rate on clean speech is slightly better than (Hannun et al., 2014), and slightly worse than (Amodei et al., 2015), in particular factoring in that they train on 12,000 hours while we only train on the 960h available in LibriSpeech's train set. Finally, some of our models are also trained on the raw waveform, as in (Palaz et al., 2013; 2015; Sainath et al., 2015).

The rest of the paper is structured as follows: the next section presents the convolutional networks used for acoustic modeling, along with the automatic segmentation criterion. The following section shows experimental results comparing different features, the criterion, and our current best word error rates on LibriSpeech.

2 ARCHITECTURE

Our speech recognition system is a standard convolutional neural network (LeCun & Bengio, 1995) fed with various different features, trained through an alternative to the Connectionist Temporal Classification (CTC) criterion (Graves et al., 2006), and coupled with a simple beam-search decoder. In the following sub-sections, we detail each of these components.

2.1 FEATURES

Figure 1: Our neural network architecture for raw wave, from input to output: conv(kw=250, dw=160, 1:250), conv(kw=48, dw=2, 250:250), seven conv(kw=7, 250:250) layers, conv(kw=32, 250:2000), conv(kw=1, 2000:2000), conv(kw=1, 2000:40). The first two layers are convolutions with strides. The last two layers are convolutions with kw = 1, which are equivalent to fully connected layers. Power spectrum and MFCC based networks do not have the first layer.

We consider three types of input features for our model: MFCCs, power spectrum, and raw wave. MFCCs are carefully designed speech-specific features, often found in classical HMM/GMM speech systems (Woodland & Young, 1993) because of their dimensionality compression (13 coefficients are often enough to span speech frequencies). Power spectrum features are found in most recent deep learning acoustic models (Amodei et al., 2015). Raw wave has been somewhat explored in a few recent works (Palaz et al., 2013; 2015). ConvNets have the advantage of being flexible enough to be used with any of these input feature types. Our acoustic models output letter scores (one score per letter, given a dictionary L).

2.2 CONVNET ACOUSTIC MODEL

The acoustic models we consider in this paper are all based on standard 1D convolutional neural networks (ConvNets). ConvNets interleave convolution operations with pointwise non-linearity operations. Often ConvNets also embark pooling layers: these types of layers allow the network to "see" a larger context without increasing the number of parameters, by locally aggregating the output of the previous convolution operation. Instead, our networks leverage striding convolutions. Given an input sequence $(x_t)_{t=1\ldots T_x}$ with $T_x$ frames of $d_x$-dimensional vectors, a convolution with kernel width $kw$, stride $dw$ and $d_y$ output frame size computes the following:

$$y_t^i = b_i + \sum_{j=1}^{d_x} \sum_{k=1}^{kw} w_{i,j,k}\, x^j_{dw \times (t-1)+k} \qquad \forall\, 1 \le i \le d_y, \tag{1}$$

where $b \in \mathbb{R}^{d_y}$ and $w \in \mathbb{R}^{d_y \times d_x \times kw}$ are the parameters of the convolution (to be learned).

Pointwise non-linear layers are added after convolutional layers. In our experience, we surprisingly found that using hyperbolic tangents, their piecewise linear counterpart HardTanh (as in (Palaz et al., 2015)), or ReLU units leads to similar results.

There are some slight variations between the architectures, depending on the input features.
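Before turning to these variations, here is a minimal NumPy sketch of the strided convolution in Eq. (1). It is a reference illustration for clarity, not the Torch implementation used in the paper.

```python
import numpy as np

def conv1d(x, w, b, dw):
    # x: (Tx, dx) input sequence; w: (dy, dx, kw) kernel; b: (dy,) bias;
    # dw: stride. Implements (the 0-indexed version of) Eq. (1):
    #   y[t, i] = b[i] + sum_{j,k} w[i, j, k] * x[dw*t + k, j].
    Tx, dx = x.shape
    dy, _, kw = w.shape
    Ty = (Tx - kw) // dw + 1
    y = np.empty((Ty, dy))
    for t in range(Ty):
        window = x[dw * t: dw * t + kw]            # (kw, dx) local context
        y[t] = b + np.einsum('ijk,kj->i', w, window)
    return y
```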
MFCC-based networks need less striding, as standard MFCC filters are applied with large strides on the raw input sequence. With power-spectrum-based and raw-wave-based networks, we observed that the overall stride of the network was more important than where the strided convolutions were placed. We thus found it preferable to set the strided convolutions near the first input layers of the network, as this leads to the fastest architectures: with power spectrum features or raw wave, the input sequences are very long and the first convolutions are thus the most expensive ones.

The last layer of our convolutional network outputs one score per letter in the letter dictionary ($d_y = |L|$). Our architecture for raw wave is shown in Figure 1 and is inspired by (Palaz et al., 2015). The architectures for both power spectrum and MFCC features do not include the first layer. The full network can be seen as a non-linear convolution, with a kernel width of size 31280 and stride equal to 320; given that the sample rate of our data is 16 kHz, label scores are produced using a window of 1955 ms, with steps of 20 ms.

2.3 INFERRING SEGMENTATION WITH THE AUTOSEG CRITERION

Most large labeled speech databases provide only a text transcription for each audio file. In a classification framework (and given that our acoustic model produces letter predictions), one would need the segmentation of each letter in the transcription to properly train the model. Unfortunately, manually labeling the segmentation of each letter would be tedious. Several solutions have been explored in the speech community to alleviate this issue. HMM/GMM models use an iterative EM procedure: (i) during the Estimation step, the best segmentation is inferred according to the current model by maximizing the joint probability of the letter (or any sub-word unit) transcription and the input sequence; (ii) during the Maximization step, the model is optimized by minimizing a frame-level criterion based on the (now fixed) inferred segmentation. This approach is also often used to bootstrap the training of neural network-based acoustic models.

Other alternatives have been explored in the context of hybrid HMM/NN systems, such as the MMI criterion (Bahl et al., 1986), which maximizes the mutual information between the acoustic sequence and word sequences, or the Minimum Bayes Risk (MBR) criterion (Gibson & Hain, 2006).

More recently, standalone neural network architectures have been trained using criteria which jointly infer the segmentation of the transcription while increasing the overall score of the right transcription (Graves et al., 2006; Palaz et al., 2014). The most popular one is certainly the Connectionist Temporal Classification (CTC) criterion, which is at the core of Baidu's Deep Speech architecture (Amodei et al., 2015). CTC assumes that the network outputs probability scores, normalized at the frame level. It considers all possible sequences of letters (or any sub-word units) which can lead to a given transcription. CTC also allows a special "blank" state to be optionally inserted between letters. The rationale behind the blank state is two-fold: (i) modeling "garbage" frames which might occur between letters, and (ii) identifying the separation between two identical consecutive letters in a transcription. Figure 2a shows an example of the sequences accepted by CTC for a given transcription. In practice, this graph is unfolded, as shown in Figure 2b, over the available frames output by the acoustic model.
We denote $\mathcal{G}_{ctc}(\theta, T)$ the unfolded graph over $T$ frames for a given transcription $\theta$, and $\pi = \pi_1, \ldots, \pi_T \in \mathcal{G}_{ctc}(\theta, T)$ a path in this graph representing a (valid) sequence of letters for this transcription. At each time step $t$, each node of the graph is assigned the corresponding log-probability letter score (denoted $f_t(\cdot)$) output by the acoustic model. CTC aims at maximizing the "overall" score of paths in $\mathcal{G}_{ctc}(\theta, T)$; for that purpose, it minimizes the Forward score:

$$CTC(\theta, T) = -\mathop{\mathrm{logadd}}_{\pi \in \mathcal{G}_{ctc}(\theta, T)} \sum_{t=1}^{T} f_{\pi_t}(x), \tag{2}$$

where the "logadd" operation, also often called "log-sum-exp", is defined as $\mathrm{logadd}(a, b) = \log(\exp(a) + \exp(b))$. This overall score can be efficiently computed with the Forward algorithm. To put things in perspective, if one replaced the logadd(·) by a max(·) in (2) (which can then be efficiently computed by the Viterbi algorithm, the counterpart of the Forward algorithm), one would maximize the score of the best path according to the model belief. The logadd(·) can be seen as a smooth version of the max(·): paths with similar scores will be attributed the same weight in the overall score (and hence receive the same gradient), while paths with much larger scores will have much more overall weight than paths with low scores. In practice, using the logadd(·) works much better than the max(·). It is also worth noting that maximizing (2) does not diverge, as the acoustic model is assumed to output normalized scores (log-probabilities) $f_i(\cdot)$.

Figure 2: The CTC criterion graph. (a) Graph which represents all the acceptable sequences of letters (with the blank state denoted "∅") for the transcription "cat". (b) The same graph unfolded over 5 frames. There are no transition scores. At each time step, nodes are assigned a conditional probability output by the neural network acoustic model.

In this paper, we explore an alternative to CTC, with three differences: (i) there are no blank labels, (ii) there are un-normalized scores on the nodes (and possibly un-normalized transition scores on the edges), and (iii) there is global normalization instead of per-frame normalization:

- The advantage of (i) is that it produces a much simpler graph (see Figure 3a and Figure 3b). We found that in practice there was no advantage to having a blank class to model the possible "garbage" frames between letters. Modeling letter repetitions (which is also an important quality of the blank label in CTC) can easily be replaced by repetition character labels (we used two extra labels for two and three repetitions). For example, "caterpillar" could be written as "caterpil2ar", where "2" is a label representing the repetition of the previous letter. Not having blank labels also simplifies the decoder.
- With (ii) one can easily plug in an external language model, which would insert transition scores on the edges of the graph. This could be particularly useful in future work, if one wanted to model representations more high-level than letters. In that respect, avoiding normalized transitions is important to alleviate the problem of "label bias" (Bottou, 1991; Lafferty et al., 2001). In this work, we limited ourselves to transition scalars, which are learned together with the acoustic model.
- The normalization evoked in (iii) is necessary when using un-normalized scores on nodes or edges; it ensures that incorrect transcriptions will have low confidence.

In the following, we name our criterion the "Auto Segmentation Criterion" (ASG).
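Both CTC and ASG rest on the same Forward machinery. A minimal toy sketch follows; the explicit predecessor-list graph encoding is an illustrative assumption, and real implementations exploit the band structure of the CTC/ASG graphs (transition scores, used by ASG, are omitted here).

```python
import numpy as np

def logadd(a, b):
    # log(exp(a) + exp(b)), computed stably by factoring out the max.
    if a == -np.inf:
        return b
    if b == -np.inf:
        return a
    m = max(a, b)
    return m + np.log(np.exp(a - m) + np.exp(b - m))

def forward_score(emissions, predecessors):
    # emissions[t]: per-node (log-)scores at frame t;
    # predecessors[t][n]: nodes at frame t-1 connected to node n.
    # Returns the logadd over all paths of the summed node scores,
    # i.e. the quantity inside Eq. (2).
    alpha = np.asarray(emissions[0], dtype=float)
    for t in range(1, len(emissions)):
        new = np.full(len(emissions[t]), -np.inf)
        for n, preds in enumerate(predecessors[t]):
            for p in preds:
                new[n] = logadd(new[n], alpha[p])
            new[n] += emissions[t][n]
        alpha = new
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())
```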
Considering the same notation as for CTC in (2), an unfolded graph $\mathcal{G}_{asg}(\theta, T)$ over $T$ frames for a given transcription $\theta$ (as in Figure 3b), and a fully connected graph $\mathcal{G}_{full}(\theta, T)$ over $T$ frames (representing all possible sequences of letters, as in Figure 3c), ASG aims at minimizing:

$$ASG(\theta, T) = -\mathop{\mathrm{logadd}}_{\pi \in \mathcal{G}_{asg}(\theta, T)} \sum_{t=1}^{T} \big(f_{\pi_t}(x) + g_{\pi_{t-1},\pi_t}(x)\big) + \mathop{\mathrm{logadd}}_{\pi \in \mathcal{G}_{full}(\theta, T)} \sum_{t=1}^{T} \big(f_{\pi_t}(x) + g_{\pi_{t-1},\pi_t}(x)\big), \tag{3}$$

where $g_{i,j}(\cdot)$ is a transition score model for jumping from label $i$ to label $j$. The left-hand part of (3) promotes sequences of letters leading to the right transcription, and the right-hand part demotes all sequences of letters. As for CTC, these two parts can be efficiently computed with the Forward algorithm. Derivatives with respect to $f_i(\cdot)$ and $g_{i,j}(\cdot)$ can be obtained (the math is a bit tedious) by applying the chain rule through the Forward recursion.

Figure 3: The ASG criterion graph. (a) Graph which represents all the acceptable sequences of letters for the transcription "cat". (b) The same graph unfolded over 5 frames. (c) The corresponding fully connected graph, which describes all possible sequences of letters; this graph is used for normalization purposes. Un-normalized transition scores are possible on the edges. At each time step, nodes are assigned a conditional un-normalized score output by the neural network acoustic model.

2.4 BEAM-SEARCH DECODER

We wrote our own one-pass decoder, which performs a simple beam search with beam thresholding, histogram pruning and language model smearing (Steinbiss et al., 1994). We kept the decoder as simple as possible (under 1000 lines of C code). We did not implement any sort of model adaptation before decoding, nor any word graph rescoring. Our decoder relies on KenLM (Heafield et al., 2013) for the language modeling part. It also accepts un-normalized acoustic scores (transitions and emissions from the acoustic model) as input. The decoder attempts to maximize the following:

$$\mathcal{L}(\theta) = \mathop{\mathrm{logadd}}_{\pi \in \mathcal{G}_{asg}(\theta, T)} \sum_{t=1}^{T} \big(f_{\pi_t}(x) + g_{\pi_{t-1},\pi_t}(x)\big) + \alpha \log P_{lm}(\theta) + \beta |\theta|, \tag{4}$$

where $P_{lm}(\theta)$ is the probability of the language model given a transcription $\theta$, and $\alpha$ and $\beta$ are two hyper-parameters which control the weight of the language model and the word insertion penalty, respectively.

3 EXPERIMENTS

3.1 SETUP

We implemented everything using Torch7 (http://www.torch.ch). The ASG criterion as well as the decoder were implemented in C (and then interfaced into Torch).

We consider as benchmark LibriSpeech, a large speech database freely available for download (Panayotov et al., 2015). LibriSpeech comes with its own train, validation and test sets. Except when specified, we used all the available data (about 1000h of audio files) for training and validating our models. We use the original 16 kHz sampling rate. The vocabulary L contains 30 graphemes: the standard English alphabet plus the apostrophe, silence, and two special "repetition" graphemes which encode the duplication (once or twice) of the previous letter (see Section 2.3).

The architecture hyper-parameters, as well as the decoder ones, were tuned using the validation set. In the following, we report either letter error rates (LERs) or word error rates (WERs). WERs have been obtained by using our own decoder (see Section 2.4), with the standard 4-gram language model provided with LibriSpeech (http://www.openslr.org/11).

Table 1: CTC vs. ASG. CTC is Baidu's implementation. ASG is implemented on CPU (C with OpenMP).
Timings (in ms) for small sequences (input frames: 150, letter vocabulary size: 28, transcription size: 40) and long sequences (input frames: 700, letter vocabulary size: 28, transcription size: 200) are reported in (a) and (b) respectively. (c) reports performance in LER. Timings include both forward and backward passes. CPU implementations use 8 threads.

(a) Small sequences:

batch size   CTC (CPU)   CTC (GPU)   ASG (CPU)
1            1.9         5.9         2.5
4            2.0         6.0         2.8
8            2.0         6.1         2.8

(b) Long sequences:

batch size   CTC (CPU)   CTC (GPU)   ASG (CPU)
1            40.9        97.9        16.0
4            41.6        99.6        17.7
8            41.7        100.3       19.2

(c) LER:

             ASG    CTC
dev-clean    10.4   10.7
test-clean   10.1   10.5

MFCC features are computed with 13 coefficients, a 25 ms sliding window and a 10 ms stride. We included first and second order derivatives. Power spectrum features are computed with a 25 ms window, a 10 ms stride, and have 257 components. All features are normalized (mean 0, std 1) per input sequence.

3.2 RESULTS

Table 1 reports a comparison between CTC and ASG, in terms of LER and speed. Our ASG criterion is implemented in C (CPU only), leveraging SSE instructions when possible. Our batching is done with an OpenMP parallel for. We picked the CTC criterion implementation provided by Baidu (https://github.com/baidu-research/warp-ctc). Both criteria lead to the same LER. For comparing the speed, we report performance for the sequence sizes reported initially by Baidu, but also for longer sequence sizes, which correspond to our average use case. ASG appears faster on long sequences, even though it runs on CPU only. Baidu's GPU CTC implementation seems more aimed at larger vocabularies (e.g. 5000 Chinese characters).

We also investigated the impact of the training set size, as well as the effect of a simple data augmentation procedure, where shifts were introduced in the input frames, as well as stretching. For that purpose, we tuned the size of our architectures (given a particular size of the dataset) to avoid over-fitting. Figure 4a shows that augmentation helps for small training set sizes. However, with enough training data, the effect of data augmentation vanishes, and both types of features appear to perform similarly. Figure 4b reports the WER with respect to the available training data size. We observe that we compare very well against Deep Speech 1 & 2, which were trained with much more data (Hannun et al., 2014; Amodei et al., 2015).

Finally, we report in Table 2 the best results of our system so far, trained on 1000h of speech, for each type of features. The overall stride of the architectures is 320 (see Figure 1), which produces a label every 20 ms. We found that one could squeeze out about 1% in performance by refining the precision of the output. This is efficiently achieved by shifting the input sequence and feeding it to the network several times. Results in Table 2 were obtained with a single extra shift of 10 ms. Both power spectrum and raw features perform slightly worse than MFCCs. One could expect, however, that with enough data (see Figure 4) the gap would vanish.

Figure 4: Valid LER (a) and WER (b) vs. training set size (10h, 100h, 200h, 1000h). This compares MFCC-based and power spectrum-based (POW) architectures. AUG experiments include data augmentation. In (b) we provide Baidu Deep Speech 1 and 2 numbers on LibriSpeech as a comparison (Hannun et al., 2014; Amodei et al.,
2015).

Table 2: LER/WER of the best sets of hyper-parameters for each feature type.

             MFCC          Power spectrum   Raw
             LER    WER    LER    WER       LER    WER
dev-clean    6.9    -      9.3    -         10.3   -
test-clean   6.9    7.2    9.1    9.4       10.6   10.1

4 CONCLUSION

We have introduced a simple end-to-end automatic speech recognition system, which combines a standard 1D convolutional neural network, a sequence criterion which can infer the segmentation, and a simple beam-search decoder. The decoding results are competitive on the LibriSpeech corpus with MFCC features (7.2% WER), and promising with power spectrum and raw speech (9.4% WER and 10.1% WER respectively). We showed that our AutoSegCriterion can be faster than CTC (Graves et al., 2006), and as accurate (Table 1). Our approach breaks free from HMM/GMM pre-training and force alignment, and is not as computationally intensive as RNN-based approaches (Amodei et al., 2015) (on average, one LibriSpeech sentence is processed in less than 60 ms by our ConvNet, and the decoder runs at 8.6x on a single thread).

REFERENCES

Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, et al. Deep speech 2: End-to-end speech recognition in English and Mandarin. arXiv preprint arXiv:1512.02595, 2015.

L. R. Bahl, P. F. Brown, P. V. de Souza, and R. L. Mercer. Maximum mutual information estimation of hidden Markov model parameters for speech recognition. In Acoustics, Speech and Signal Processing (ICASSP), 1986 IEEE International Conference on, pp. 49–52. IEEE, 1986.

Léon Bottou. Une approche théorique de l'apprentissage connexionniste et applications à la reconnaissance de la parole. PhD thesis, 1991.

Léon Bottou, Yoshua Bengio, and Yann Le Cun. Global training of document processing systems using graph transformer networks. In Computer Vision and Pattern Recognition, 1997. Proceedings., 1997 IEEE Computer Society Conference on, pp. 489–494. IEEE, 1997.

M. Gibson and T. Hain. Hypothesis spaces for minimum Bayes risk training in large vocabulary speech recognition. In Proceedings of INTERSPEECH, pp. 2406–2409. IEEE, 2006.

Alan Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pp. 6645–6649. IEEE, 2013.

Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd International Conference on Machine Learning, pp. 369–376. ACM, 2006.

Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, et al. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567, 2014.

Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. Scalable modified Kneser-Ney language model estimation. In ACL (2), pp. 690–696, 2013.

Geoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82–97, 2012.

J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data.
In Eighteenth International Conference on Machine Learning, ICML, 2001.

Yann LeCun and Yoshua Bengio. Convolutional networks for images, speech, and time series. The Handbook of Brain Theory and Neural Networks, 3361(10), 1995.

Yajie Miao, Mohammad Gowayyed, and Florian Metze. EESEN: End-to-end speech recognition using deep RNN models and WFST-based decoding. arXiv preprint arXiv:1507.08240, 2015.

Abdel-rahman Mohamed, George E. Dahl, and Geoffrey Hinton. Acoustic modeling using deep belief networks. Audio, Speech, and Language Processing, IEEE Transactions on, 20(1):14–22, 2012.

Dimitri Palaz, Ronan Collobert, and Mathew Magimai Doss. Estimating phoneme class conditional probabilities from raw speech signal using convolutional neural networks. arXiv preprint arXiv:1304.1018, 2013.

Dimitri Palaz, Mathew Magimai-Doss, and Ronan Collobert. Joint phoneme segmentation inference and classification using CRFs. In Signal and Information Processing (GlobalSIP), 2014 IEEE Global Conference on, pp. 587–591. IEEE, 2014.

Dimitri Palaz, Ronan Collobert, et al. Analysis of CNN-based speech recognition system using raw speech as input. In Proceedings of Interspeech, number EPFL-CONF-210029, 2015.

Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. Librispeech: An ASR corpus based on public domain audio books. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 5206–5210. IEEE, 2015.

Vijayaditya Peddinti, Guoguo Chen, Vimal Manohar, Tom Ko, Daniel Povey, and Sanjeev Khudanpur. JHU ASpIRE system: Robust LVCSR with TDNNs, i-vector adaptation, and RNN-LMs. In Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop, 2015a.

Vijayaditya Peddinti, Daniel Povey, and Sanjeev Khudanpur. A time delay neural network architecture for efficient modeling of long temporal contexts. In Proceedings of INTERSPEECH, 2015b.

Tara N. Sainath, Ron J. Weiss, Andrew Senior, Kevin W. Wilson, and Oriol Vinyals. Learning the speech front-end with raw waveform CLDNNs. In Proc. Interspeech, 2015.

George Saon, Hagen Soltau, David Nahamoo, and Michael Picheny. Speaker adaptation of neural network acoustic models using i-vectors. In ASRU, pp. 55–59, 2013.

George Saon, Hong-Kwang J. Kuo, Steven Rennie, and Michael Picheny. The IBM 2015 English conversational telephone speech recognition system. arXiv preprint arXiv:1505.05899, 2015.

Andrew Senior, Georg Heigold, Michiel Bacchiani, and Hank Liao. GMM-free DNN training. In Proceedings of ICASSP, pp. 5639–5643, 2014.

Tom Sercu, Christian Puhrsch, Brian Kingsbury, and Yann LeCun. Very deep multilingual convolutional neural networks for LVCSR. arXiv preprint arXiv:1509.08967, 2015.

Hagen Soltau, George Saon, and Tara N. Sainath. Joint training of convolutional and non-convolutional neural networks. In ICASSP, pp. 5572–5576, 2014.

Volker Steinbiss, Bach-Hiep Tran, and Hermann Ney. Improvements in beam search. In ICSLP, volume 94, pp. 2143–2146, 1994.

Philip C. Woodland and Steve J. Young. The HTK tied-state continuous speech recogniser. In Eurospeech, 1993.
HyoST_9xl
Published as a conference paper at ICLR 2017

DSD: DENSE-SPARSE-DENSE TRAINING FOR DEEP NEURAL NETWORKS

Song Han, Huizi Mao, Enhao Gong, Shijian Tang, William J. Dally†
Stanford University
{songhan,huizi,enhaog,sjtang,dally}@stanford.edu

Jeff Pool, John Tran, Bryan Catanzaro
NVIDIA
{jpool,johntran,bcatanzaro}@nvidia.com

Sharan Narang, Erich Elsen‡
Baidu Research
sharan@baidu.com

Peter Vajda, Manohar Paluri
Facebook
{vajdap,mano}@fb.com

ABSTRACT

Modern deep neural networks have a large number of parameters, making them very hard to train. We propose DSD, a dense-sparse-dense training flow, for regularizing deep neural networks and achieving better optimization performance. In the first D (Dense) step, we train a dense network to learn connection weights and importance. In the S (Sparse) step, we regularize the network by pruning the unimportant connections with small weights and retraining the network given the sparsity constraint. In the final D (re-Dense) step, we increase the model capacity by removing the sparsity constraint, re-initializing the pruned parameters from zero, and retraining the whole dense network. Experiments show that DSD training can improve the performance of a wide range of CNNs, RNNs and LSTMs on the tasks of image classification, caption generation and speech recognition. On ImageNet, DSD improved the Top-1 accuracy of GoogLeNet by 1.1%, VGG-16 by 4.3%, ResNet-18 by 1.2% and ResNet-50 by 1.1%, respectively. On the WSJ'93 dataset, DSD improved the DeepSpeech and DeepSpeech2 WER by 2.0% and 1.1%. On the Flickr-8K dataset, DSD improved the NeuralTalk BLEU score by over 1.7. DSD is easy to use in practice: at training time, DSD incurs only one extra hyper-parameter, the sparsity ratio in the S step. At testing time, DSD doesn't change the network architecture or incur any inference overhead. The consistent and significant performance gains of the DSD experiments show the inadequacy of current training methods for finding the best local optimum, while DSD effectively achieves superior optimization performance for finding a better solution. DSD models are available to download at https://songhan.github.io/DSD.

1 INTRODUCTION

Deep neural networks (DNNs) have shown significant improvements in many application domains, ranging from computer vision (He et al. (2015)) to natural language processing (Luong et al. (2015)) and speech recognition (Amodei et al. (2015)). The abundance of powerful hardware makes it easier to train complicated DNN models with large capacities. The upside of complicated models is that they are very expressive and can capture highly non-linear relationships between features and output. The downside of such large models is that they are prone to capturing the noise, rather than the intended pattern, in the training dataset. This noise does not generalize to new datasets, leading to over-fitting and high variance.

*Indicates equal contribution. †Also at NVIDIA. ‡Now at Google Brain (eriche@google.com).

Figure 1: Dense-Sparse-Dense Training Flow.
The sparse training regularizes the model, and the final dense training restores the pruned weights (red), increasing the model capacity without overfitting.

Algorithm 1: Workflow of DSD training

Initialization: W(0) with W(0) ~ N(0, Σ). Output: W(t).

Initial Dense Phase:
  while not converged do
    W(t) = W(t-1) - η(t) ∇f(W(t-1); x(t-1)); t = t + 1
  end

Sparse Phase:
  // initialize the mask by sorting and keeping the top-k weights
  S = sort(|W(t-1)|); λ = S_k; Mask = 1(|W(t-1)| > λ)
  while not converged do
    W(t) = W(t-1) - η(t) ∇f(W(t-1); x(t-1))
    W(t) = W(t) · Mask; t = t + 1
  end

Final Dense Phase:
  while not converged do
    W(t) = W(t-1) - η(t) ∇f(W(t-1); x(t-1)); t = t + 1
  end
  goto Sparse Phase for iterative DSD

In contrast, simply reducing the model capacity would lead to the other extreme, causing the machine learning system to miss the relevant relationships between features and target outputs, leading to under-fitting and high bias. Bias and variance are hard to optimize at the same time.

To solve this problem, we propose a dense-sparse-dense training flow (DSD), a novel training strategy that starts from a dense model obtained by conventional training, then regularizes the model with sparsity-constrained optimization, and finally increases the model capacity by restoring and retraining the pruned weights. At testing time, the final model produced by DSD still has the same architecture and dimensions as the original dense model, and DSD training doesn't incur any inference overhead. We experimented with DSD training on 7 mainstream CNNs / RNNs / LSTMs and found consistent performance gains over comparable counterparts for image classification, image captioning and speech recognition.

2 DSD TRAINING FLOW

Our DSD training employs a three-step process: dense, sparse, re-dense. Each step is illustrated in Figure 1 and Algorithm 1. The progression of the weight distribution is plotted in Figure 2.

Figure 2: Weight distribution of a layer of GoogLeNet at different points in DSD training: the original GoogLeNet (a), pruned (b), after retraining with the sparsity constraint (c), ignoring the sparsity constraint and recovering the zero weights (d), and after retraining the dense network (e).

Initial Dense Training: The first D step learns the connection weights and importance via normal network training on the dense network. Unlike conventional training, however, the goal of this D step is not only to learn the values of the weights; we are also learning which connections are important. We use a simple heuristic to quantify the importance of the weights using their absolute value.

Sparse Training: The S step prunes the low-weight connections and trains a sparse network. We applied the same sparsity to all the layers, so there is a single hyper-parameter: the sparsity, the percentage of weights that are pruned to 0. For each layer W with N parameters, we sort the parameters, pick the k-th largest one, λ = S_k, as the threshold, where k = N(1 - sparsity), and generate a binary mask to remove all the weights smaller than λ. Details are shown in Algorithm 1.
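A minimal NumPy sketch of the masking step in Algorithm 1 follows; the surrounding SGD loop and framework integration are elided, and `prune_mask` is an illustrative helper name, not the paper's code.

```python
import numpy as np

def prune_mask(W, sparsity):
    # Keep the top (1 - sparsity) fraction of weights by magnitude; e.g.
    # sparsity = 0.3 zeroes out the 30% smallest-magnitude weights.
    k = int(round(W.size * (1.0 - sparsity)))
    threshold = np.sort(np.abs(W), axis=None)[-k]   # k-th largest |W|
    return (np.abs(W) >= threshold).astype(W.dtype)

# During the sparse phase, the mask is re-applied after every update:
#   W -= lr * grad
#   W *= mask
```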
We remove small weights because of the Taylor expansion. The loss function and its Taylor expansion are shown in Equations (1) and (2):

$$Loss = f(x, W_1, W_2, W_3, \ldots) \tag{1}$$

$$\Delta Loss = \frac{\partial Loss}{\partial W_i}\,\Delta W_i + \frac{1}{2}\,\frac{\partial^2 Loss}{\partial W_i^2}\,\Delta W_i^2 + \ldots \tag{2}$$

We want to minimize the increase in loss when conducting a hard thresholding on the weights, so we need to minimize the first and second terms in Equation (2). Since we are zeroing out parameters, $\Delta W_i$ is actually $W_i - 0 = W_i$. At a local minimum where $\partial Loss / \partial W_i \approx 0$ and $\partial^2 Loss / \partial W_i^2 > 0$, only the second-order term matters. Since the second-order gradient $\partial^2 Loss / \partial W_i^2$ is expensive to calculate and $W_i$ appears with a power of 2, we use $|W_i|$ as the metric for pruning: smaller $|W_i|$ means a smaller increase in the loss function.

By retraining while enforcing the binary mask in each iteration, we convert a dense network into a sparse network that has a known sparsity support and can fully recover, or even increase, the original accuracy of the initial dense model under the sparsity constraint. The sparsity is the same for all the layers and can be tuned using validation. We find that a sparsity value between 25% and 50% generally works well in our experiments.

Final Dense Training: The final D step recovers the pruned connections, making the network dense again. These previously-pruned connections are initialized to zero and the entire network is retrained with 1/10 of the original learning rate (since the sparse network is already at a good local minimum). Hyper-parameters like dropout ratios and weight decay remain unchanged. By restoring the pruned connections, the final D step increases the model capacity of the network and makes it possible to arrive at a better local minimum compared with the sparse model from the S step.

To visualize the DSD training flow, we plotted the progression of the weight distribution in Figure 2. The figure is plotted using GoogLeNet's inception_5b 3x3 layer, and we found this progression of the weight distribution to be representative of VGGNet and ResNet as well. The original distribution of weights is centered on zero with tails dropping off quickly. Pruning is based on absolute value, so after pruning the large center region is truncated away. The un-pruned network parameters adjust themselves during the retraining phase, so in (c) the boundary becomes soft and forms a bimodal distribution. In (d), at the beginning of the re-dense training step, all the pruned weights come back and are reinitialized to zero. Finally, in (e), the pruned weights are retrained together with the un-pruned weights. In this step, we kept the same learning hyper-parameters (weight decay, learning rate, etc.) for pruned and un-pruned weights. Comparing (d) and (e), the un-pruned weights' distribution remains almost the same, while the pruned weights become distributed further around zero. The overall mean absolute value of the weight distribution is much smaller. This is a good phenomenon: choosing the smallest vector that solves the learning problem suppresses irrelevant components of the weight vector (Moody et al. (1995)).

Table 1: Overview of the neural networks, datasets and performance improvements from DSD.

Neural Network   Domain    Dataset     Type   Baseline   DSD     Abs. Imp.   Rel. Imp.
GoogLeNet        Vision    ImageNet    CNN    31.1%¹     30.0%   1.1%        3.6%
VGG-16           Vision    ImageNet    CNN    31.5%¹     27.2%   4.3%        13.7%
ResNet-18        Vision    ImageNet    CNN    30.4%¹     29.2%   1.2%        4.1%
ResNet-50        Vision    ImageNet    CNN    24.0%¹     22.9%   1.1%        4.6%
NeuralTalk       Caption   Flickr-8K   LSTM   16.8²      18.5    1.7         10.1%
DeepSpeech       Speech    WSJ'93      RNN    33.6%³     31.6%   2.0%        5.8%
DeepSpeech-2     Speech    WSJ'93      RNN    14.5%³     13.4%   1.1%        7.4%

¹ Top-1 error.
VGG/GoogLeNet baselines are from the Caffe Model Zoo; ResNet is from Facebook.
² BLEU score baseline from the NeuralTalk model zoo; higher is better.
³ Word error rate: DeepSpeech2 is trained with a portion of Baidu's internal dataset, with only max decoding, to show the effect of the DNN improvement.

3 RELATED WORK

Dropout and DropConnect: DSD, Dropout (Srivastava et al. (2014)) and DropConnect (Wan et al. (2013)) can all regularize neural networks and prevent over-fitting. The difference is that Dropout and DropConnect use a random sparsity pattern at each SGD iteration, while DSD training learns with a deterministic, data-driven sparsity pattern throughout sparse training. Our experiments on VGG-16, GoogLeNet and NeuralTalk show that DSD training can work together with Dropout.

Distillation: Model distillation (Hinton et al. (2015)) is a method that can transfer the learned knowledge from a large model to a small model, which is more efficient for deployment. This is another method that allows for performance improvements in neural networks without architectural changes.

Model Compression: Both model compression (Han et al. (2016; 2015)) and DSD training use network pruning (LeCun et al. (1990); Hassibi et al. (1993)). The difference is that the focus of DSD training goes beyond maintaining the accuracy: DSD is able to further improve the accuracy by considerable margins. Another difference is that DSD training doesn't require aggressive pruning; a modestly pruned network (50%–60% sparse) can work well. In contrast, model compression requires aggressively pruning the network to achieve high compression rates.

Sparsity Regularization and Hard Thresholding: Truncation-based sparse networks have been theoretically analyzed for learning a broad range of statistical models in high dimensions (Langford et al. (2009); Yuan & Zhang (2013); Wang et al. (2014)). A similar training strategy with iterative hard thresholding and connection restoration was proposed by Jin et al. (2016) during the same time period as, but independently from, DSD. Sparsity-regularized optimization is heavily applied in compressed sensing (Candes & Romberg (2007)) to find optimal solutions to inverse problems in highly under-determined systems, based on the sparsity assumption.

4 EXPERIMENTS

We applied DSD training to different kinds of neural networks in different domains. We found that DSD training improved the accuracy for all of these networks compared to baseline networks that were not trained with DSD. The neural networks were chosen from CNNs, RNNs and LSTMs; the datasets cover image classification, speech recognition, and caption generation. For networks trained on ImageNet, we focus on GoogLeNet, VGG and ResNet, which are widely used in research and production. An overview of the networks, datasets and accuracy results is shown in Table 1. For the convolutional networks, we do not prune the first layer during the sparse phase, since it has only 3 channels and is very sensitive to pruning. The sparsity is the same for all the other layers, including convolutional and fully-connected layers. We do not change any other training hyper-parameters, and the initial learning rate at each stage is decayed the same as in conventional training. The number of epochs is decided by when the loss converges; when the loss no longer decreases, we stop training.

4.1 GOOGLENET

We experimented with the BVLC GoogLeNet (Szegedy et al. (2015)) model obtained from the Caffe Model Zoo (Jia (2013)).
It has 13 million parameters and 57 convolutional layers. We pruned each layer (except the first) to 30% sparsity. Retraining the sparse network gave some improvement in accuracy due to regularization, as shown in Table 2. After the final dense training step, GoogLeNet's error rates were reduced by 1.12% (Top-1) and 0.62% (Top-5) over the baseline.

We compared DSD against conventional training for the same number of epochs by dropping the learning rate upon "convergence" and continuing to learn. This result is shown as LLR (lower the learning rate). The number of training epochs for LLR is equal to that of Sparse + re-Dense, as a fair comparison. LLR cannot achieve the same accuracy as DSD.

Table 2: DSD results on GoogLeNet

GoogLeNet       Top-1 Err   Top-5 Err   Sparsity   Epochs   LR
Baseline        31.14%      10.96%      0%         250      1e-2
Sparse          30.58%      10.58%      30%        11       1e-3
DSD             30.02%      10.34%      0%         22       1e-4
LLR             30.20%      10.41%      0%         33       1e-5
Improve (abs)   1.12%       0.62%       -          -        -
Improve (rel)   3.6%        5.7%        -          -        -

4.2 VGGNET

We explored DSD training on VGG-16 (Simonyan & Zisserman (2014)), which is widely used in detection, segmentation and transfer learning. The baseline model is obtained from the Caffe Model Zoo (Jia (2013)). Similar to GoogLeNet, each layer is pruned to 30% sparsity. DSD training greatly reduced the error, by 4.31% (Top-1) and 2.65% (Top-5), as detailed in Table 3. DSD also wins over the LLR result by a large margin.

Table 3: DSD results on VGG-16

VGG-16          Top-1 Err   Top-5 Err   Sparsity   Epochs   LR
Baseline        31.50%      11.32%      0%         74       1e-2
Sparse          28.19%      9.23%       30%        1.25     1e-4
DSD             27.19%      8.67%       0%         18       1e-5
LLR             29.33%      10.00%      0%         20       1e-7
Improve (abs)   4.31%       2.65%       -          -        -
Improve (rel)   13.7%       23.4%       -          -        -

4.3 RESNET

Deep Residual Networks (ResNets, He et al. (2015)) were the top performer in the 2015 ImageNet challenge. The baseline ResNet-18 and ResNet-50 models are provided by Facebook (2016). We prune to 30% sparsity uniformly, and a single DSD pass for these networks reduced Top-1 error by 1.26% (ResNet-18) and 1.12% (ResNet-50), as shown in Table 4. A second DSD iteration can further improve the accuracy. As a fair comparison, we continued training the original model, lowering the learning rate by another decade, but it cannot reach the same accuracy as DSD, as shown in the LLR row.

Table 4: DSD results on ResNet-18 and ResNet-50

                ResNet-18               ResNet-50
                Top-1 Err   Top-5 Err   Top-1 Err   Top-5 Err   Sparsity   Epochs   LR
Baseline        30.43%      10.76%      24.01%      7.02%       0%         90       1e-1
Sparse          30.15%      10.56%      23.55%      6.88%       30%        45       1e-2
DSD             29.17%      10.13%      22.89%      6.47%       0%         45       1e-3
LLR             30.04%      10.49%      23.58%      6.84%       0%         90       1e-5
Improve (abs)   1.26%       0.63%       1.12%       0.55%       -          -        -
Improve (rel)   4.14%       5.86%       4.66%       7.83%       -          -        -

Baseline: a man and a woman are sitting on a bench. Sparse: a man is sitting on a bench with his hands in the air. DSD: a man is sitting on a bench with his arms folded.
Baseline: two dogs are playing together in a field. Sparse: two dogs are playing in a field. DSD: two dogs are playing in the grass.
Baseline: a boy in a red shirt is climbing a rock wall. Sparse: a young girl is jumping off a tree. DSD: a young girl in a pink shirt is swinging on a swing.
Baseline: a basketball player in a red uniform is playing with a ball. Sparse: a basketball player in a blue uniform is jumping over the goal. DSD: a basketball player in a white uniform is trying to make a shot.
Baseline: a person in a red jacket is riding a bike through the woods. Sparse: a car drives through a mud puddle.
DSD: a car drives through a forest.

Figure 3: Visualization of DSD training improving the performance of image captioning.

Table 5: DSD results on NeuralTalk

NeuralTalk      BLEU-1   BLEU-2   BLEU-3   BLEU-4   Sparsity   Epochs   LR
Baseline        57.2     38.6     25.4     16.8     0%         19       1e-2
Sparse          58.4     39.7     26.3     17.5     80%        10       1e-3
DSD             59.2     40.7     27.4     18.5     0%         6        1e-4
Improve (abs)   2.0      2.1      2.0      1.7      -          -        -
Improve (rel)   3.5%     5.4%     7.9%     10.1%    -          -        -

4.4 NEURALTALK

We evaluated DSD training on RNNs and LSTMs beyond CNNs. We applied DSD to NeuralTalk (Karpathy & Fei-Fei (2015)), an LSTM for generating image descriptions. It uses a CNN as an image feature extractor and an LSTM to generate captions. To verify DSD training on LSTMs, we fixed the CNN weights and only trained the LSTM weights. The baseline NeuralTalk model we used is the flickr8k_cnn_lstm_v1.p downloaded from the NeuralTalk Model Zoo.

In the pruning step, we pruned all layers except Ws, the word embedding lookup table, to 80% sparsity. We used a higher sparsity than in the CNN experiments based on the validation set of Flickr-8K. We retrained the remaining sparse network using the same weight decay and batch size as the original paper. The learning rate is tuned based on the validation set, as shown in Table 5. Retraining the sparse network improved the BLEU score by [1.2, 1.1, 0.9, 0.7]. After removing the sparsity constraint and retraining the dense network, the final DSD results further improved the BLEU score by [2.0, 2.1, 2.0, 1.7] over the baseline.

The BLEU score is not the sole criterion for measuring an auto-captioning system. We visualize the captions generated by DSD training in Figure 3. In the first image, the baseline model mistakes the girl for a boy and the girl's hair for a rock wall; the sparse model can tell that it's a girl; and the DSD model can further identify the swing. In the second image, DSD training can more accurately tell that the player is in a white uniform and trying to make a shot, rather than the baseline just saying he's in a red uniform and playing with a ball. The performance of DSD training generalizes beyond these examples; more image caption results generated by DSD training are provided in the Appendix.

4.5 DEEPSPEECH

We explore DSD training on speech recognition tasks using both the Deep Speech 1 (DS1) and Deep Speech 2 (DS2) networks (Hannun et al. (2014); Amodei et al. (2015)).

The DS1 model is a 5-layer network with 1 Bidirectional Recurrent layer, as described in Table 6. The training dataset used for this model is the Wall Street Journal (WSJ), which contains 81 hours of speech. The validation set consists of 1 hour of speech. The test sets are from WSJ'92 and WSJ'93 and contain 1 hour of speech combined.

Table 6: Deep Speech 1 Architecture

Layer ID   0         1         2         3                         4         5
Type       Conv      FC        FC        Bidirectional Recurrent   FC        CTCCost
#Params    1814528   1049600   1049600   3146752                   1049600   29725

Table 7: DSD results on Deep Speech 1: Word Error Rate (WER)

DeepSpeech 1    WSJ'92   WSJ'93   Sparsity   Epochs   LR
Dense Iter 0    29.82    34.57    0%         50       8e-4
Sparse Iter 1   27.90    32.99    50%        50       5e-4
Dense Iter 1    27.90    32.20    0%         50       3e-4
Sparse Iter 2   27.45    32.99    25%        50       1e-4
Dense Iter 2    27.45    31.59    0%         50       3e-5
Baseline        28.03    33.55    0%         150      8e-4
Improve (abs)   0.58     1.96     -          -        -
Improve (rel)   2.07%    5.84%    -          -        -

The Word Error Rate (WER) reported on the test sets for the baseline models is different from Amodei et al. (2015) due to two factors. First, in DeepSpeech2, the models were trained using much larger datasets containing approximately 12,000 hours of multi-speaker speech data.
Secondly, WER was evaluated with beam search and a language model in DeepSpeech2; here the network output is obtained using only max decoding, to show the improvement in neural network accuracy in isolation and factor out the other components.

The first dense phase was trained for 50 epochs. In the sparse phase, weights are pruned in the Fully Connected layers and the Bidirectional Recurrent layer only (they contain the majority of the weights). Each layer is pruned to the same 50% sparsity and trained for 50 epochs. In the final dense phase, the pruned weights are initialized to zero and trained for another 50 epochs. For a fair baseline comparison, we used Nesterov SGD for training, reduced the learning rate with each re-training, and kept all other hyper-parameters unchanged. The learning rate is picked using our validation set.

We first compare the DSD results with a baseline model trained for the same number of epochs. The first 3 rows of Table 7 show the WER when the DSD model is trained for 50+50+50 = 150 epochs, and the 6th row shows the baseline model trained for 150 epochs (the same number of epochs as DSD). DSD training improves WER by 0.13 (WSJ'92) and 1.35 (WSJ'93) given the same number of epochs as conventional training.

Given a second DSD iteration, accuracy can be further improved. In the second DSD iteration, each layer has 25% of its weights pruned away. Similar to the first iteration, the sparse model and subsequent dense model are further retrained for 50 epochs. The learning rate is scaled down for each re-training step. The results are shown in Table 7. Compared with the fully trained and converged baseline, the second DSD iteration improves WER by 0.58 (WSJ'92) and 1.96 (WSJ'93), a relative improvement of 2.07% (WSJ'92) and 5.84% (WSJ'93). So we can run more DSD iterations (DSDSD) to further improve the performance, though adding more DSD iterations has diminishing returns.

4.6 DEEPSPEECH 2

To show how DSD works on deeper networks, we evaluated DSD on the Deep Speech 2 (DS2) network, described in Table 8. This network has 7 Bidirectional Recurrent layers with approximately 67 million parameters, around 8 times larger than the DS1 model. A subset of the internal English training set is used: the training set comprises 2,100 hours of speech, and the validation set comprises 3.46 hours of speech. The test sets are from WSJ'92 and WSJ'93, which contain 1 hour of speech combined.

Table 8: Deep Speech 2 Architecture

Layer ID   0        1        2         3 - 8     9         10
Type       2DConv   2DConv   BR        BR        FC        CTCCost
#Params    19616    239168   8507840   9296320   3101120   95054

Table 9: DSD results on Deep Speech 2 (WER)

DeepSpeech 2    WSJ'92   WSJ'93   Sparsity   Epochs   LR
Dense Iter 0    11.83    17.42    0%         20       3e-4
Sparse Iter 1   10.65    14.84    50%        20       3e-4
Dense Iter 1    9.11     13.96    0%         20       3e-5
Sparse Iter 2   8.94     14.02    25%        20       3e-5
Dense Iter 2    9.02     13.44    0%         20       6e-6
Baseline        9.55     14.52    0%         60       3e-4
Improve (abs)   0.53     1.08     -          -        -
Improve (rel)   5.55%    7.44%    -          -        -

Table 9 shows the results of two iterations of DSD training. For the first sparse re-training, similar to DS1, 50% of the parameters from the Bidirectional Recurrent layers and Fully Connected layers are pruned. The baseline model is trained for 60 epochs to provide a fair comparison with DSD training; it shows no improvement after 40 epochs. With one iteration of DSD training, WER improves by 0.44 (WSJ'92) and 0.56 (WSJ'93) compared to the fully trained baseline.

Here we show again that DSD can be applied multiple times, or iteratively, for further performance gains.
Here we show again that DSD can be applied multiple times, or iteratively, for further performance gain. A second iteration of DSD training achieves better accuracy, as shown in Table 9. For the second sparse iteration, 25% of the parameters in the Fully Connected layers and Bidirectional Recurrent layers are pruned. Overall, DSD training achieves a relative improvement of 5.55% (WSJ '92) and 7.44% (WSJ '93) on the DS2 architecture. These results are in line with the DSD experiments on the smaller DS1 network. We conclude that DSD re-training continues to improve accuracy with larger layers and deeper networks.

5 DISCUSSION
Dense-Sparse-Dense training changes the optimization process and improves performance by significant margins by nudging the network with pruning and re-densifying. We conjecture that the following aspects contribute to the efficacy of DSD training.
Escape Saddle Points: Based on previous studies, one of the most profound difficulties in optimizing deep networks is the proliferation of saddle points (Dauphin et al. (2014)). Advanced optimization methods have been proposed to overcome saddle points. With a similar purpose but a different approach, the proposed DSD method overcomes saddle points through its pruning and re-densifying framework. Pruning the converged model perturbs the learning dynamics and allows the network to jump away from saddle points, which gives the network a chance to converge at a better local or global minimum. This idea is also similar to Simulated Annealing (Hwang (1988)). While Simulated Annealing randomly jumps with decreasing probability on the search graph, DSD deterministically deviates from the converged solution of the first dense training phase by removing the small weights and enforcing a sparsity support. Like Simulated Annealing, which can escape sub-optimal solutions multiple times over the course of optimization, DSD can also be applied iteratively to achieve further performance gains, as shown in the Deep Speech results.
Significantly Better Minima: After escaping saddle points, DSD reaches better minima. We measured both the training loss and the validation loss: DSD training decreased the loss and error on both the training and validation sets on ImageNet. We have also validated the significance of the improvements against conventional fine-tuning with a t-test, shown in the appendix.
Regularized and Sparse Training: The sparsity regularization in the sparse training step moves the optimization to a lower-dimensional space where the loss surface is smoother and tends to be more robust to noise. Further numerical experiments verified that both sparse training and the final DSD step reduce the variance and lead to lower error (shown in the appendix).
Robust Re-initialization: Weight initialization plays a big role in deep learning (Mishkin & Matas (2015)). Conventional training has only one chance of initialization. DSD gives the optimization a second (or more) chance during training to re-initialize from a more robust sparse training solution. We re-densify the network from the sparse solution, which can be seen as a zero initialization for the pruned weights. Other initialization methods are also worth trying.
Break Symmetry: The permutation symmetry of the hidden units makes the weights symmetrical, and thus prone to co-adaptation in training.
In DSD, pruning the weights breaks the symmetry of thehidden units associated with the weights, and the weights are asymmetrical in the final dense phase.6 C ONCLUSIONWe introduce DSD, a dense-sparse-dense training framework that regularizes neural networks bypruning and then restoring connections. Our method learns which connections are important duringthe initial dense solution. Then it regularizes the network by pruning the unimportant connectionsand retraining to a sparser and more robust solution with same or better accuracy. Finally, the prunedconnections are restored and the entire network is retrained again. This increases the dimensionalityof parameters, and thus model capacity, from the sparser model.DSD training achieves superior optimization performance. We highlight our experiments usingGoogLeNet, VGGNet, and ResNet on ImageNet; NeuralTalk on Flickr-8K; and DeepSpeech-1&2on the WSJ dataset. This shows that the accuracy of CNNs, RNNs, and LSTMs can be significantlyimproved with DSD training. Our numerical results and empirical tests show the inadequacy ofcurrent training methods for which we have provided an effective solution.9Published as a conference paper at ICLR 2017REFERENCESDario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen,Mike Chrzanowski, Adam Coates, Greg Diamos, et al. Deep speech 2: End-to-end speech recognition inenglish and mandarin. arXiv preprint arXiv:1512.02595 , 2015.Emmanuel Candes and Justin Romberg. Sparsity and incoherence in compressive sampling. Inverse problems ,23(3):969, 2007.Yann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio.Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advancesin neural information processing systems , pp. 2933–2941, 2014.Facebook. Facebook.ResNet.Torch. https://github.com/facebook/fb.resnet.torch, 2016.Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neuralnetwork. In Advances in Neural Information Processing Systems , pp. 1135–1143, 2015.Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning,trained quantization and huffman coding. International Conference on Learning Representations , 2016.Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, SanjeevSatheesh, Shubho Sengupta, Adam Coates, and Andrew Ng. Deep speech: Scaling up end-to-end speechrecognition. arXiv, preprint arXiv:1412.5567 , 2014.Babak Hassibi, David G Stork, et al. Second order derivatives for network pruning: Optimal brain surgeon.Advances in neural information processing systems , pp. 164–164, 1993.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXivpreprint arXiv:1512.03385 , 2015.Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprintarXiv:1503.02531 , 2015.Chii-Ruey Hwang. Simulated annealing: theory and applications. Acta Applicandae Mathematicae , 12(1):108–111, 1988.Yangqing Jia. BVLC caffe model zoo. http://caffe.berkeleyvision.org/model_zoo, 2013.Xiaojie Jin, Xiaotong Yuan, Jiashi Feng, and Shuicheng Yan. Training skinny deep neural networks with iterativehard thresholding methods. arXiv preprint arXiv:1607.05423 , 2016.Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. 
InProceedings of the IEEE Conference on Computer Vision and Pattern Recognition , 2015.John Langford, Lihong Li, and Tong Zhang. Sparse online learning via truncated gradient. In Advances in neuralinformation processing systems , pp. 905–912, 2009.Yann LeCun, John S. Denker, and Sara A. Solla. Optimal brain damage. In Advances in Neural InformationProcessing Systems , pp. 598–605. Morgan Kaufmann, 1990.Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-based neuralmachine translation. arXiv preprint arXiv:1508.04025 , 2015.Dmytro Mishkin and Jiri Matas. All you need is a good init. arXiv preprint arXiv:1511.06422 , 2015.J Moody, S Hanson, Anders Krogh, and John A Hertz. A simple weight decay can improve generalization.Advances in neural information processing systems , 4:950–957, 1995.Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition.arXiv preprint arXiv:1409.1556 , 2014.Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: Asimple way to prevent neural networks from overfitting. JMLR , 15:1929–1958, 2014.Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan,Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEEConference on Computer Vision and Pattern Recognition , pp. 1–9, 2015.Li Wan, Matthew Zeiler, Sixin Zhang, Yann L Cun, and Rob Fergus. Regularization of neural networks usingdropconnect. In ICML , pp. 1058–1066, 2013.Zhaoran Wang, Quanquan Gu, Yang Ning, and Han Liu. High dimensional expectation-maximization algorithm:Statistical optimization and asymptotic normality. arXiv preprint arXiv:1412.8729 , 2014.Xiao-Tong Yuan and Tong Zhang. Truncated power method for sparse eigenvalue problems. The Journal ofMachine Learning Research , 14(1):899–925, 2013.10Published as a conference paper at ICLR 2017A. A PPENDIX : SIGNIFICANCE OF DSD IMPROVEMENTSDSD training improves the baseline model performance by consecutively pruning and re-densifying the networkweights. We conducted more intensive experiments to validate that the improvements are significant and not dueto any randomness in the optimization. In order to evaluate the significance, we repeated the baseline training,DSD training (retraining on baseline) and conventional fine-tuning (retraining on the same baseline) multipletimes. The statistical significance of DSD improvements are quantified on the Cifar-10 dataset using ResNet.1. S IGNIFICANT IMPROVEMENTS ON CIFAR -10 USING RESNET-20Cifar-10 is a smaller image recognition benchmark with 50,000 32x32 color images for training and 10,000 fortesting. Training on Cifar-10 is fast enough that it is feasible to conduct intensive experiments within a reasonabletime to evaluate DSD performance. The baseline models were trained with the standard 164 epochs and initialLR of 0.1 as recommended in the released code (Facebook, 2016). After 164 epochs, we obtained the model witha 8.26% top-1 testing error that is consistent with the Facebook result. Initialized from this baseline model, werepeated 16 times of re-training using DSD training and 16 times using conventional fine-tuning. The DSD usedsparsity of 50% and 90 epochs (45 for sparse training and 45 for re-densing training). 
As a fair comparison, the conventional fine-tuning is also based on the same baseline model, with the same hyperparameters and settings (90 epochs: 45 at LR 0.001 and 45 at LR 0.0001).
Detailed results are listed below. On Cifar-10, using the ResNet-20 architecture, DSD training on average achieved a top-1 testing error of 7.89%, a 0.37% absolute improvement (4.5% relative improvement) over the baseline model and relatively 1.1% better than conventional fine-tuning. The experiment also shows that DSD training can reduce the variance of learning: the models after the sparse training and after the final DSD training both have a lower standard deviation of errors than their counterparts trained with conventional fine-tuning.

Table 10: Validation of DSD on Cifar-10 data using ResNet-20
ResNet-20                      Avg. Top-1 Err  SD. Top-1 Err  Sparsity  Epochs  LR
Baseline                       8.26%           -              0%        164     1e-1
Direct Finetune (First half)   8.16%           0.08%          0%        45      1e-3
Direct Finetune (Second half)  7.97%           0.04%          0%        45      1e-4
DSD (First half, Sparse)       8.12%           0.05%          50%       45      1e-3
DSD (Second half, Dense)       7.89%           0.03%          0%        45      1e-4
Improve from baseline (abs)    0.37%           -              -         -       -
Improve from baseline (rel)    4.5%            -              -         -       -

We used an unpaired t-test to compare the top-1 testing error rate of the models trained using DSD and conventional methods. The results demonstrate that DSD training achieves significant improvements over both the baseline model (p<0.001) and conventional fine-tuning (p<0.001).
Figure 4: Significance of DSD improvements over baseline and fine-tuning.
Based on the results above, DSD significantly improves on conventional baseline training and is also significantly better and more robust than conventional fine-tuning.
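As a concrete illustration of the significance test described above, the following sketch runs an unpaired two-sample t-test over two sets of repeated top-1 error measurements with SciPy. The arrays here are placeholders for readability, not the paper's measurements (the appendix used 16 repetitions of each method).

```python
from scipy import stats
import numpy as np

# Placeholder error rates (%) from repeated runs; the experiment above used
# 16 repetitions each of DSD re-training and conventional fine-tuning.
dsd_errors = np.array([7.86, 7.91, 7.88, 7.92, 7.87, 7.90, 7.89, 7.93])
finetune_errors = np.array([7.95, 8.01, 7.98, 7.94, 7.99, 7.97, 7.96, 8.00])

# Unpaired (independent two-sample) t-test, as in the appendix.
t_stat, p_value = stats.ttest_ind(dsd_errors, finetune_errors)
print(f"t = {t_stat:.3f}, p = {p_value:.5f}")  # small p -> significant difference
```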
B. APPENDIX: MORE EXAMPLES OF DSD TRAINING IMPROVING THE CAPTIONS GENERATED BY NEURALTALK (IMAGES FROM THE FLICKR-8K TEST SET)

Baseline: a man in a red shirt and jeans is riding a bicycle down a street. Sparse: a man in a red shirt and a woman in a wheelchair. DSD: a man and a woman are riding on a street.
Baseline: two girls in bathing suits are playing in the water. Sparse: two children are playing in the sand. DSD: two children are playing in the sand.
Baseline: a group of people are standing in front of a building. Sparse: a group of people are standing in front of a building. DSD: a group of people are walking in a park.
Baseline: a dog runs through the grass. Sparse: a dog runs through the grass. DSD: a white and brown dog is running through the grass.
Baseline: a group of football players in red uniforms. Sparse: a group of football players in a field. DSD: a group of football players in red and white uniforms.
Baseline: a group of people sit on a bench in front of a building. Sparse: a group of people are standing in front of a building. DSD: a group of people are standing in a fountain.
Baseline: a man in a black jacket and a black jacket is smiling. Sparse: a man and a woman are standing in front of a mountain. DSD: a man in a black jacket is standing next to a man in a black shirt.
Baseline: a young girl in a red dress is holding a camera. Sparse: a little girl in a pink dress is standing in front of a tree. DSD: a little girl in a red dress is holding a red and white flowers.
Baseline: a man in a red jacket is standing in front of a white building. Sparse: a man in a black jacket is standing in front of a brick wall. DSD: a man in a black jacket is standing in front of a white building.
Baseline: a man in a red shirt is standing on a rock. Sparse: a man in a red jacket is standing on a mountaintop. DSD: a man is standing on a rock overlooking the mountains.
Baseline: a group of people are sitting in a subway station. Sparse: a man and a woman are sitting on a couch. DSD: a group of people are sitting at a table in a room.
Baseline: a soccer player in a red and white uniform is running on the field. Sparse: a soccer player in a red uniform is tackling another player in a white uniform. DSD: a soccer player in a red uniform kicks a soccer ball.
Baseline: a young girl in a swimming pool. Sparse: a young boy in a swimming pool. DSD: a girl in a pink bathing suit jumps into a pool.
Baseline: a soccer player in a red and white uniform is playing with a soccer ball. Sparse: two boys playing soccer. DSD: two boys playing soccer.
Baseline: a girl in a white dress is standing on a sidewalk. Sparse: a girl in a pink shirt is standing in front of a white building. DSD: a girl in a pink dress is walking on a sidewalk.
Baseline: a boy is swimming in a pool. Sparse: a small black dog is jumping into a pool. DSD: a black and white dog is swimming in a pool.
Baseline: a snowboarder flies through the air. Sparse: a person is snowboarding down a snowy hill. DSD: a person on a snowboard is jumping over a snowy hill.
Baseline: two young girls are posing for a picture. Sparse: a young girl with a blue shirt is blowing bubbles. DSD: a young boy and a woman smile for the camera.
Baseline: a man in a red shirt is sitting in a subway station. Sparse: a woman in a blue shirt is standing in front of a store. DSD: a man in a black shirt is standing in front of a restaurant.
Baseline: a surfer is riding a wave. Sparse: a man in a black wetsuit is surfing on a wave. DSD: a man in a black wetsuit is surfing a wave.
Baseline: a man in a red shirt is standing on top of a rock. Sparse: a man in a red shirt is standing on a cliff overlooking the mountains. DSD: a man is standing on a rock overlooking the mountains.
Baseline: a group of people sit on a bench. Sparse: a group of people are sitting on a bench. DSD: a group of children are sitting on a bench.
Baseline: a little boy is playing with a toy. Sparse: a little boy in a blue shirt is playing with bubbles. DSD: a baby in a blue shirt is playing with a toy.
Baseline: a brown dog is running through the grassy. Sparse: a brown dog is playing with a ball. DSD: a brown dog is playing with a ball.
Baseline: a boy in a red shirt is jumping on a trampoline. Sparse: a boy in a red shirt is jumping in the air. DSD: a boy in a red shirt is jumping off a swing.
Baseline: a man is standing on the edge of a cliff. Sparse: a man is standing on the shore of a lake. DSD: a man is standing on the shore of the ocean.
Baseline: two people are riding a boat on the beach. Sparse: two people are riding a wave on a beach. DSD: a man in a yellow kayak is riding a wave.
Baseline: a black and white dog is running on the beach. Sparse: a black and white dog running on the beach. DSD: a black dog is running on the beach.
Baseline: a man and a dog are playing with a ball. Sparse: a man and a woman are playing tug of war. DSD: a man and a woman are playing with a dog.
Baseline: a group of people are standing in a room. Sparse: a group of people gather together. DSD: a group of people are posing for a picture.
Baseline: a man in a red jacket is riding a bike through the woods. Sparse: a man in a red jacket is doing a jump on a snowboard. DSD: a person on a dirt bike jumps over a hill.
Baseline: a man in a red jacket and a helmet is standing in the snow. Sparse: a man in a red jacket and a helmet is standing in the snow. DSD: a man in a red jacket is standing in front of a snowy mountain.
rJJ3YU5ge
Under review as a conference paper at ICLR 2017IS A PICTURE WORTH A THOUSAND WORDS ?A D EEPMULTI -MODAL FUSION ARCHITECTURE FORPRODUCT CLASSIFICATION IN E -COMMERCETom Zahavy & Shie MannorDepartment of Electrical EngineeringThe Technion - Israel Institute of TechnologyHaifa 32000, Israelftomzahavy@tx,shie@ee g.technion.ac.ilAlessandro Magnani & Abhinandan KrishnanWalmart LabsSunnyvale, CaliforniafAMagnani,AKrishnan g@walmartlabs.comABSTRACTClassifying products into categories precisely and efficiently is a major challengein modern e-commerce. The high traffic of new products uploaded daily and thedynamic nature of the categories raise the need for machine learning models thatcan reduce the cost and time of human editors. In this paper, we propose a decisionlevel fusion approach for multi-modal product classification using text and imageinputs. We train input specific state-of-the-art deep neural networks for each inputsource, show the potential of forging them together into a multi-modal architectureand train a novel policy network that learns to choose between them. Finally, wedemonstrate that our multi-modal network improves the top-1 accuracy %overboth networks on a real-world large-scale product classification dataset that wecollected from Walmart.com. While we focus on image-text fusion that character-izes e-commerce domains, our algorithms can be easily applied to other modalitiessuch as audio, video, physical sensors, etc.1 I NTRODUCTIONProduct classification is a key issue in e-commerce domains. A product is typically representedby metadata such as its title, image, color, weight and so on, and most of them are assigned man-ually by the seller. Once a product is uploaded to an e-commerce website, it is typically placedin multiple categories. Categorizing products helps e-commerce websites to provide costumers abetter shopping experience, for example by efficiently searching the products catalog or by develop-ing recommendation systems. A few examples of categories are internal taxonomies (for businessneeds), public taxonomies (such as groceries and office equipment) and the product’s shelf (a groupof products that are presented together on an e-commerce web page). These categories vary withtime in order to optimize search efficiency and to account for special events such as holidays andsports events. In order to address these needs, e-commerce websites typically hire editors and usecrowdsourcing platforms to classify products. However, due to the high amount of new productsuploaded daily and the dynamic nature of the categories, machine learning solutions for productclassification are very appealing as means to reduce the time and economic costs. Thus, preciselycategorizing items emerges as a significant issue in e-commerce domains.A shelf is a group of products presented together on an e-commerce website page, and usuallycontain products with a given theme/category (e.g., Women boots, folding tables). Product to shelfclassification is a challenging problem due to data size, category skewness, and noisy metadataand labels. In particular, it presents three fundamental challenges for machine learning algorithms.First, it is typically a multi-class problem with thousands of classes. Second, a product may belongto multiple shelves making it a multi-label problem. 
And last, a product has both an image and atext input making it a multi-modal problem.Products classification is typically addressed as a text classification problem because most metadataof items are represented as textual features (Pyo et al., 2010). Text classification is a classic topicfor natural language processing, in which one needs to assign predefined categories to text inputs.1Under review as a conference paper at ICLR 2017Figure 1: Predicting shelves from product metadata obtained from Walmart.com. Left: productsthat have both an image and a title that contain useful information for predicting the product’s shelf.Center, top: the boots title gives specific information about the boots but does not mention that theproduct is a boot, making it harder to predict the shelf. Center, bottom: the baby toddler shirt’stitle is only refers to the text on the toddler shirt and does not mention that it is a product for babies.Right, top: the umbrella image contains information about its color but it is hard to understand thatthe image is referring to an umbrella. Right, bottom: the lips pencil image looks like a regularpencil, making it hard to predict that it belongs to the moisturizers shelf.Standard methods follow a classical two-stage scheme of extraction of (handcrafted) features, fol-lowed by a classification stage. Typical features include bag-of-words or n-grams, and their TF-IDF.On the other hand, Deep Neural Networks use generic priors instead of specific domain knowledge(Bengio et al., 2013) and have been shown to give competitive results on text classification tasks(Zhang et al., 2015). In particular, Convolutional neural networks (CNNs) (Kim, 2014; Zhang et al.,2015; Conneau et al., 2016) and Recurrent NNs (Lai et al., 2015; Pyo et al., 2010; Xiao & Cho,2016) can efficiently capture the sequentiality of the text. These methods are typically applied di-rectly to distributed embedding of words (Kim, 2014; Lai et al., 2015; Pyo et al., 2010) or characters(Zhang et al., 2015; Conneau et al., 2016; Xiao & Cho, 2016), without any knowledge on the syn-tactic or semantic structures of a language. However, all of these architectures were only appliedon problems with a small amount of labels ( 20) while e-commerce shelf classification problemstypically have thousands of labels with multiple labels per product.In Image classification, CNNs are widely considered the best models, and achieve state-of-the-art results on the ImageNet Large-Scale Visual Recognition Challenge (Russakovsky et al., 2015;Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2015). However, as good as theyare, the classification accuracy of machine learning systems is often limited in problems with manyclasses of object categories. One remedy is to leverage data from other sources, such as text data.However, the studies on multi-modal deep learning for large-scale item categorization are still rare tothe best of our belief. In particular in a setting where there is a significant difference in discriminativepower between the two types of signals.In this work, we propose a multi-modal deep neural network model for product classification. Ourdesign principle is to leverage the specific prior for each data type by using the current state-of-2Under review as a conference paper at ICLR 2017the-art classifiers from the image and text domains. 
The final architecture has 3 main components(Figure 2, Right): a text CNN (Kim, 2014), an image CNN (Simonyan & Zisserman, 2014) anda policy network that learns to choose between them. We collected a large-scale data set of 1:2million products from the Walmart.com website. Each product has a title and an image and needs tobe classified to a shelf (label) with 2890 possible shelves. Examples from this dataset can be seenin Figure 1 and are also available on-line at the Walmart.com website. For most of the products,both the image and the title of each product contain relevant information for customers. However, itis interesting to observe that for some of the products, both input types may not be informative forshelf prediction (Figure 1). This observation motivates our work and raises interesting questions:which input type is more useful for product classification? is it possible to forge the inputs into abetter architecture?In our experiments, we show that the text CNN outperforms the image one. However, for a relativelylarge number of products ( 8%), the image CNN is correct while the text CNN is wrong, indicatinga potential gain from using a multi-modal architecture. We also show that the policy is able to choosebetween the two models and give a performance improvement over both state-of-the-art networks.To the best of our knowledge, this is the first work that demonstrates a performance improvementon top-1 classification accuracy by using images and text on a large-scale classification problem. Inparticular, our main contributions are:We demonstrate that the text classification CNN (Kim, 2014) outperforms the VGG net-work (Simonyan & Zisserman, 2014) on a real-world large-scale product to shelf classifi-cation problem.We analyze the errors made by the different networks and show the potential gain of multi-modality.We propose a novel decision-level fusion policy that learns to choose between the text andimage networks and improve over both.2 M ULTI -MODALITYOver the years, a large body of research has been devoted to improving classification using en-sembles of classifiers (Kittler et al., 1998; Hansen & Salamon, 1990). Inspired by their success,these methods have also been used in multi-modal settings (e.g.,Guillaumin et al. (2010); Poria et al.(2016)), where the source of the signals, or alternatively their modalities, are different. Some exam-ples include audio-visual speech classification (Ngiam et al., 2011), image and text retrieval (Kiroset al.), sentiment analysis and semi-supervised learning (Guillaumin et al., 2010).Combining classifiers from different input sources presents multiple challenges. First, classifiersvary in their discriminative power, thus, an optimal unification method should be able to adaptitself for specific combinations of classifiers. Second, different data sources have different state-of-the-art architectures, typically deep neural networks, which vary in depth, width, and optimizationalgorithm; making it non-trivial to merge them. Moreover, a multi-modal architecture potentiallyhas more local minima that may give unsatisfying results. Finally, most of the publicly availablereal-world big data classification datasets, an essential building block of deep learning systems,typically contain only one data type.Nevertheless, the potential performance boost of multi-modal architectures has motivated re-searchers over the years. Frome et al. 
(2013) combined an image network (Krizhevsky et al., 2012)with a Skip-gram Language Model in order to improve classification results on ImageNet. However,they were not able to improve the top-1 accuracy prediction, possibly because the text input theyused (image labels) didn’t contain a lot of information. Other works, used multi-modality to learngood embedding but did not present results on classification benchmarks (Lynch et al., 2015; Kiroset al.; Gong et al., 2014). Kannan et al. (2011) suggested to improve text-based product classifica-tion by adding an image signal, training an image classifier and learning a decision rule between thetwo. However, they only experimented with a small dataset and a low number of labels, and it isnot clear how to scale their method for extreme multi-class multi-label applications that characterizereal-world problems in e-commerce.3Under review as a conference paper at ICLR 2017PolicyT ext CNNVGG16T ext ImageClass probabilities Prediction Class probabilities PredictionFinal predictionImage Input T ext InputShared representationMulti-modallayersNetwork predictionImage Input T ext InputNetwork prediction Network predictionNetwork predictionPolicyInputFeature-level fusionDecision-level fusionFigure 2: Multi-modal fusion architectures.Left, top: Feature-level fusion. Each modality is processed in a different pipe. After a certaindepth, the pipes are concatenated followed by multi-modal layers. Left, bottom: Decision-levelfusion. Each modality is processed in a different pipe and gives a prediction. A policy network islearning to decide which classifier to use. Right: The proposed multi-modal architecture.Adding modalities can improve the classification of products that have a non-informative inputsource (e.g., image or text). In e-commerce, for example, classifiers that rely exclusively on textsuffer from short and non-informative titles, differences in style between vendors and overlappingtext across categories (i.e., a word that helps to classify a certain class may appear in other classes).Figure 1 presents a few examples of products that have only one informative input type. These ex-amples suggest that a multi-modal architecture can potentially outperform a classifier with a singleinput type.Most unification techniques for multi-modal learning are partitioned between feature-level fusiontechniques and decision-level fusion techniques (Figure 2, Left).2.1 F EATURE LEVEL FUSIONFeature-level fusion is characterized by three phases: (a) learning a representation, (b) supervisedtraining, and (c) testing. The different unification techniques are distinguished by the availabilityof the data in each phase (Guillaumin et al., 2010). For example, in cross-modality training, therepresentation is learned from all the modalities, but only one modality is available for supervisedtraining and testing. In other cases, all of the modalities are available at all stages but we may want(or not) to limit their usage given a certain budget. Another source for the distinction is the orderin which phases (a) and (b) are made. For example, one may first learn the representation and thenlearn a classifier from it, or learn both the representation and the classifier in parallel. In the deeplearning context, there are two common approaches. In the first approach, we learn an end-to-enddeep NN; the NN has multiple input-specific pipes that include a data source followed by inputspecific layers. 
After a certain depth, the pipes are concatenated, followed by additional layers, such that the NN is trained end-to-end. In the second approach, input-specific deep NNs are learned first, and a multi-modal representation vector is created by concatenating the input-specific feature vectors (e.g., each neural network's last hidden layer). Then, an additional classifier learns to classify from the multi-modal representation vector. While multi-modal methods have shown potential to boost performance on small datasets (Poria et al., 2016) or on top-k accuracy measures (Frome et al., 2013), we are not familiar with works that succeeded in applying them to a large-scale classification problem and obtained a performance improvement in top-1 accuracy.

2.2 DECISION-LEVEL FUSION
In this approach, an input-specific classifier is learned for each modality, and the goal is to find a decision rule between them. The decision rule is typically a pre-defined rule (Guillaumin et al., 2010) and is not learned from the data. For example, Poria et al. (2016) chose the classifier with the maximal confidence, while Krizhevsky et al. (2012) average classifier predictions. However, in this work we show that learning the decision rule yields significantly better results on our data.

3 METHODS AND ARCHITECTURES
In this section, we give the details of our multi-modal product classification architecture. The architecture is composed of a text CNN and an image CNN which are forged together by a policy network, as can be seen in Figure 2, Right.

3.1 MULTI-LABEL COST FUNCTION
Our cost function is the weighted sigmoid cross entropy with logits, a common cost function for multi-label problems. Let $x$ be the logits, $z$ be the targets, $q$ be a positive weight coefficient used as a multiplier for the positive targets, and $\sigma(x) = \frac{1}{1+\exp(-x)}$. The loss is given by:

$\mathrm{Cost}(x, z; q) = -qz\log(\sigma(x)) - (1-z)\log(1-\sigma(x)) = (1-z)x + (1 + (q-1)z)\log(1 + \exp(-x)).$

The positive coefficient $q$ allows one to trade off recall and precision by up- or down-weighting the cost of a positive error relative to a negative error. We found it to have a significant effect in practice.
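For concreteness, here is a small sketch of this loss in Python under the definitions above; the second, numerically stable form is computed via softplus(-x) = log(1 + exp(-x)). This is an illustration, not the authors' code; note that PyTorch's built-in binary_cross_entropy_with_logits with a pos_weight argument implements essentially the same weighting.

```python
import torch
import torch.nn.functional as F

def weighted_sigmoid_ce_with_logits(x: torch.Tensor, z: torch.Tensor, q: float) -> torch.Tensor:
    """Cost(x, z; q) = -q*z*log(sigmoid(x)) - (1-z)*log(1 - sigmoid(x)),
    computed in the stable form (1-z)*x + (1 + (q-1)*z) * log(1 + exp(-x))."""
    return (1 - z) * x + (1 + (q - 1) * z) * F.softplus(-x)

logits = torch.randn(4, 2890)                       # one logit per shelf
targets = torch.randint(0, 2, (4, 2890)).float()    # multi-label targets
loss = weighted_sigmoid_ce_with_logits(logits, targets, q=30.0).mean()

# Equivalent result via the built-in (pos_weight plays the role of q):
builtin = F.binary_cross_entropy_with_logits(
    logits, targets, pos_weight=torch.tensor(30.0))
```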
3.2 TEXT CLASSIFICATION
For the text signal, we use the text CNN architecture of Kim (2014). The first layer embeds words into low-dimensional vectors using a random embedding (different from the original paper). The next layer performs convolutions over time on the embedded word vectors using multiple filter sizes (3, 4 and 5), where we use 128 filters of each size. Next, we max-pool-over-time the result of each convolution filter and concatenate all the results together. We add a dropout regularization layer (0.5 dropping rate), followed by a fully connected layer, and classify the result using a softmax layer. An illustration of the text CNN can be seen in Figure 2.

3.3 IMAGE CLASSIFICATION
For the image signal, we use the VGG network (Simonyan & Zisserman, 2014). The input to the network is a fixed-size 224x224 RGB image. The image is passed through a stack of convolutional layers with a very small receptive field: 3x3. The convolution stride is fixed to 1 pixel; the spatial padding of the convolutional layers is 1 pixel. Spatial pooling is carried out by five max-pooling layers, which follow some of the convolutional layers. Max-pooling is performed over a 2x2 pixel window, with stride 2. The stack of convolutional layers is followed by three Fully-Connected (FC) layers: the first two have 4096 channels each; the third performs 2890-way product classification and thus contains 2890 channels (one for each class). All hidden layers are followed by a ReLU non-linearity. The exact details can be seen in Figure 2.

3.4 MULTI-MODAL ARCHITECTURE
We experimented with four types of multi-modal architectures. (1) Learning decision-level fusion policies from different inputs. (1a) Policies that use the text and image CNNs' class probabilities as input (Figure 2). We experimented with architectures that have one or two fully connected layers (the two-layered policy uses 10 hidden units and a ReLU non-linearity between them). (1b) Policies that use the text and/or image as input. For these policies, the architecture of the policy network was either the text CNN or the VGG network. In order to train the policies, labels are collected from the image and text networks' predictions, i.e., the label is 1 if the image network made a correct prediction while the text network made a mistake, and 0 otherwise. At evaluation, we use the policy predictions to select between the models, i.e., if the policy prediction is 1 we use the image network, and we use the text network otherwise. (2) Pre-defined policies that average the predictions of the different CNNs or choose the CNN with the highest confidence. (3) End-to-end feature-level fusion: each input type is processed by its specific CNN. We concatenate the last hidden layers of the CNNs and add one or two fully connected layers. All the layers are trained together end-to-end (we also tried to initialize the input-specific weights from pre-trained single-modal networks). (4) Multi-step feature-level fusion. As in (3), we create a shared representation vector by concatenating the last hidden layers. However, we now keep the shared representation fixed and learn a new classifier from it.

4 EXPERIMENTS
4.1 SETUP
Our dataset contains 1.2 million products (title, image and shelf) that we collected from Walmart.com (offered online and viewable at the website) and that were deemed the hardest to classify by the current production system. We divide the data into training (1.1 million), validation (50k) and test (50k). We train both the image network and the text network on the training dataset and evaluate them on the test dataset. The policy is trained on the validation dataset and is also evaluated on the test dataset. The objective is to classify the product's shelf, from 2890 possible choices. Each product is typically assigned to more than one shelf (3 on average), and the network is considered accurate if its most probable shelf is one of them.

4.2 TRAINING THE TEXT ARCHITECTURE
Preprocess: we build a dictionary of all the words in the training data and embed each word using a random embedding into a one-hundred-dimensional vector. We trim titles with more than 40 words and pad shorter titles with nulls.
We experimented with different batch sizes, dropout rates, and filter strides, but found that the vanilla architecture (Kim, 2014) works well on our data. This is consistent with Zhang & Wallace (2015), who showed that text CNNs are not very sensitive to hyperparameters. We tuned the cost function's positive coefficient parameter q and found that the value 30 performed best in practice (we will also use this value for the image network).
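The following is a minimal PyTorch sketch of the text CNN described above: random embeddings, filter sizes 3/4/5 with 128 filters each, max-over-time pooling, 0.5 dropout, and a fully connected layer over the 2890 shelves. The vocabulary size and other constants are illustrative assumptions, not the paper's settings.

```python
import torch
from torch import nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size=100_000, embed_dim=100,
                 filter_sizes=(3, 4, 5), num_filters=128, num_classes=2890):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)  # random init
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, kernel_size=k) for k in filter_sizes)
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.Linear(num_filters * len(filter_sizes), num_classes)

    def forward(self, tokens):                            # tokens: (batch, seq_len=40)
        x = self.embedding(tokens).transpose(1, 2)        # -> (batch, embed_dim, seq_len)
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        h = self.dropout(torch.cat(pooled, dim=1))        # max-over-time, then concat
        return self.fc(h)                                 # logits over 2890 shelves

model = TextCNN()
logits = model(torch.randint(0, 100_000, (8, 40)))        # 8 padded titles of 40 tokens
```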
The best CNN that we trained classified 70.1% of the products from the test set correctly (Table 1).

4.3 TRAINING THE IMAGE ARCHITECTURE
Preprocess: we resize all the images to 224x224 pixels and subtract the image mean.
The VGG network that we trained classified 57% of the products from the test set correctly. This is a bit disappointing when compared to the performance of the VGG network on ImageNet (~75%). There are a few differences between these two datasets that may explain this gap. First, our data has 3 times more classes and contains multiple labels per image, making the classification harder; second, Figure 1 implies that some of our images are not informative for shelf classification. Some works claim that the features learned by VGG on ImageNet are global feature extractors (Lynch et al., 2015). We therefore decided to use the weights learned by VGG on ImageNet and learn only the last layer. This configuration yielded only 36.7% accuracy. We believe that the reason is that some of the ImageNet classes are irrelevant for e-commerce (e.g., vehicles and animals) while some relevant categories are misrepresented (e.g., electronics and office equipment). It could also be that our images follow some specific pattern of white background, well-lit studio, etc., that characterizes e-commerce.

4.4 ERROR ANALYSIS
Is a picture worth a thousand words? Inspecting Figure 3, we can see that the text network outperformed the image network on this dataset, classifying more products correctly. Similar results were reported before (Pyo et al., 2010; Kannan et al., 2011), but to the best of our knowledge, this is the first work that compares state-of-the-art text and image CNNs on a real-world large-scale e-commerce dataset.
What is the potential of multi-modality? We identified that for 7.8% of the products the image network made a correct prediction while the text network was wrong. This observation is encouraging, since it implies that there is a relatively big potential to harness via multi-modality. We find this large gap surprising, since different neural networks applied to the same problem tend to make the same mistakes (Szegedy et al., 2013).
Unification techniques for multi-modal problems typically use the last hidden layer of each network as features (Frome et al., 2013; Lynch et al., 2015; Pyo et al., 2010). We therefore decided to visualize the activations of this layer using a tSNE map (Maaten & Hinton, 2008). Figure 3 depicts such a map for the activations of the text model (the image model yielded similar results).

Figure 3: Error analysis using a tSNE map, created from the last hidden layer neural activations of the text model. Both models are correct: 47.9%; title is correct, image is not: 21.9%; image is correct, title is not: 7.8%; both models are wrong: 22.4%.

In particular, we were looking for regions in the tSNE map where the image predictions are correct and the text is wrong (Figure 3, green). Finding such a region would imply that a policy network can learn good decision boundaries. However, we can see that there are no well-defined regions in the tSNE map where the image network is correct and the title is wrong (green), implying that it might be hard to identify these products using the activations of the last layers.
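As a small illustration of the error-overlap computation behind the breakdown in Figure 3: given the two networks' top-1 predictions and the set of correct shelves per product, count the four agreement/disagreement cases. The function name, variable names and toy inputs below are ours.

```python
import numpy as np

def error_overlap(text_pred, image_pred, gold_labels):
    """gold_labels: one set of correct shelves per product (multi-label, ~3 each).
    A prediction counts as correct if the top-1 shelf is one of them."""
    text_ok = np.array([p in g for p, g in zip(text_pred, gold_labels)])
    image_ok = np.array([p in g for p, g in zip(image_pred, gold_labels)])
    return {
        "both correct":             np.mean(text_ok & image_ok),
        "title correct, image not": np.mean(text_ok & ~image_ok),
        "image correct, title not": np.mean(~text_ok & image_ok),
        "both wrong":               np.mean(~text_ok & ~image_ok),
    }

stats = error_overlap([3, 7, 7], [3, 2, 9], [{3, 5}, {7}, {9, 1}])
```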
4.5 MULTI-MODAL UNIFICATION TECHNIQUES
Our error analysis experiment highlights the potential of merging image and text. Still, we found it hard to achieve the upper bound provided by the error analysis in practice. We now describe the policies that managed to achieve a performance boost in top-1 accuracy % over the text and image networks, and then discuss other approaches that we tried that did not work.
Decision-level fusion: We trained policies from different data sources (e.g., title, image, and each CNN's class probabilities), using different architectures and different hyperparameters. Looking at Table 1, we can see that the best policies were trained using the class probabilities (the softmax probabilities) of the image and text CNNs as inputs. The number of class probabilities that were used (top-1, top-3 or all) did not have a significant effect on the results, indicating that the top-1 probability contains enough information to learn good policies. This result makes sense, since the top-1 probability measures the confidence of the network in making a prediction. Still, the top-3 probabilities performed slightly better, indicating that the difference between the top probabilities may also matter. We can also see that the 2-layer architecture outperformed the 1-layer one, indicating that a linear policy is too simple and that deeper models can yield better results. Last, the cost function's positive coefficient q had a big impact on the results. We can see that for q=1 the policy network is more accurate in its predictions, yet it achieves worse results on shelf classification. For q=5 we get the best results, while higher values of q (e.g., 7 or 10) resulted in inaccurate policies that did not perform well in practice.

Table 1: Decision-level fusion results. Each row presents a different policy configuration (defined by the policy input, the number of layers and the value of q), followed by the accuracy % of the image, text, policy and oracle (optimal policy) classifiers on the test dataset. The policy accuracy column presents the accuracy % of the policy in making correct predictions, i.e., choosing the image network when it made a correct prediction while the text network didn't. Numbers in (+) refer to the performance gain over the text CNN. Class Probabilities (CP) refer to the number of class probabilities used as input.
Policy input  # layers  q   Text  Image  Policy       Oracle       Policy accuracy
CP-1          1         5   70.1  56.7   71.4 (+1.3)  77.5 (+7.8)  86.4
CP-1          2         5   70.1  56.6   71.5 (+1.4)  77.6 (+7.5)  84.2
CP-all        2         5   70.1  56.6   71.4 (+1.3)  77.6 (+7.5)  84.6
CP-3          2         5   70.2  56.7   71.8 (+1.6)  77.7 (+7.5)  84.2
CP-3          2         1   70.2  56.7   70.2 (+0)    77.7 (+7.5)  92.5
CP-3          2         7   70.0  56.6   71.0 (+1.0)  77.5 (+7.5)  79.1
CP-3          2         10  70.1  56.6   70.7 (+0.6)  77.6 (+7.5)  75.0
Image         -         5   70.1  56.6   68.5 (-1.6)  77.6 (+7.5)  80.3
Text          -         5   70.1  56.6   69.0 (-1.1)  77.6 (+7.5)  83.7
Both          -         5   70.1  56.6   66.1 (-4)    77.6 (+7.5)  73.7
Fixed-Mean    -         -   70.1  56.7   65.4 (+0)    77.6 (+7.5)  -
Fixed-Max     -         -   70.1  56.7   60.1 (-10)   77.7 (+7.6)  38.2

While it may not seem surprising that combining text and image improves accuracy, in practice we found it extremely hard to leverage this potential. To the best of our knowledge, this is the first work that demonstrates a direct performance improvement in top-1 classification accuracy from using images and text on a large-scale classification problem.
We experimented with pre-defined policies that do not learn from the data. Specifically, we tried to average the logits, following Krizhevsky et al. (2012) and Simonyan & Zisserman (2014), and to choose the network with the maximal confidence, following Poria et al. (2016).
Both of these experimentsyielded significantly worse results, probably, since the text network is much more accurate than theimage one (Table 1). We also tried to learn policies from the text and/or the image input, usinga policy network which is either a text CNN, a VGG network or a combination. However, all ofthese experiments resulted in policies that overfit the data and performed worse than the title modelon the test data (Table 1). We also experimented with early stopping criteria, various regularizationmethods (dropout, l1, l2) and reduced model size but none could make the policy network generalize.Feature-level fusion: Training a CNN end-to-end can be tricky. For example, each input sourcehas its own specific architecture, with specific learning rate and optimization algorithm. We exper-imented with training the network end-to-end, but also with first training each part separately andthen learning the concatenated parts. We tried different unification approaches such as gating func-tions (Srivastava et al., 2015), cross products and a different number of fully connected layers afterthe concatenation. These experiments resulted in models that were inferior to the text model. Whilethis may seem surprising, the only successful feature level fusion that we are aware of (Frome et al.,2013), was not able to gain accuracy improvement on top-1 accuracy.5 C ONCLUSIONSIn this work, we investigated a multi-modal multi-class multi-label product classification problemand presented results on a challenging real-world dataset that we collected from Walmart.com. Wediscovered that the text network outperforms the image network on our dataset, and observed a bigpotential of fusing text and image inputs. Finally, we suggested a multi-modal decision-level fusionapproach that leverages state-of-the-art results from image and text classification and forges theminto a multi-modal architecture that outperforms both.State-of-the-art image CNNs are much larger than text CNNs, and take more time to train and torun. Thus, extracting image features during run time, or getting the image network predictions maybe prohibitively expensive. In this context, an interesting observation is that feature level fusionmethods require using the image signal for each product, while decision level fusion methods re-quire using the image network selectively making them more appealing. Moreover, our experimentssuggest that decision-level fusion performs better than feature-level fusion in practice.Finally, we were only able to realize a fraction of the potential of multi-modality. In the future, weplan to investigate deeper policy networks and more sophisticated measures of confidence. We alsoplan to investigate ensembles of image networks (Krizhevsky et al., 2012) and text networks (Pyoet al., 2010). We believe that the insights from training policy networks will eventually lead us totrain end to end differential multi-modal networks.REFERENCESYoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and newperspectives. IEEE transactions on pattern analysis and machine intelligence , 35(8), 2013.Alexis Conneau, Holger Schwenk, Lo ̈ıc Barrault, and Yann Lecun. Very deep convolutional net-works for natural language processing. arXiv preprint arXiv:1606.01781 , 2016.Andrea Frome, Greg S Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Tomas Mikolov, et al. Devise:A deep visual-semantic embedding model. 
In Advances in neural information processing systems ,2013.Yunchao Gong, Liwei Wang, Micah Hodosh, Julia Hockenmaier, and Svetlana Lazebnik. Improv-ing image-sentence embeddings using large weakly annotated photo collections. In EuropeanConference on Computer Vision , pp. 529–545. Springer, 2014.8Under review as a conference paper at ICLR 2017Matthieu Guillaumin, Jakob Verbeek, and Cordelia Schmid. Multimodal semi-supervised learningfor image classification. In CVPR 2010-23rd IEEE Conference on Computer Vision & PatternRecognition , pp. 902–909. IEEE Computer Society, 2010.Lars Kai Hansen and Peter Salamon. Neural network ensembles. IEEE transactions on patternanalysis and machine intelligence , 12:993–1001, 1990.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog-nition. arXiv preprint arXiv:1512.03385 , 2015.Anitha Kannan, Partha Pratim Talukdar, Nikhil Rasiwasia, and Qifa Ke. Improving product clas-sification using images. In 2011 IEEE 11th International Conference on Data Mining . IEEE,2011.Yoon Kim. Convolutional neural networks for sentence classification. arXiv preprintarXiv:1408.5882 , 2014.Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. Multimodal neural language models.Josef Kittler, Mohamad Hatef, Robert PW Duin, and Jiri Matas. On combining classifiers. IEEEtransactions on pattern analysis and machine intelligence , 20(3):226–239, 1998.Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convo-lutional neural networks. In Advances in neural information processing systems , 2012.Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. Recurrent convolutional neural networks for textclassification. 2015.Corey Lynch, Kamelia Aryafar, and Josh Attenberg. Images don’t lie: Transferring deep visualsemantic features to large-scale multimodal learning to rank. arXiv preprint arXiv:1511.06746 ,2015.Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of MachineLearning Research , 9(Nov):2579–2605, 2008.Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y Ng. Multi-modal deep learning. In Proceedings of the 28th international conference on machine learning(ICML-11) , pp. 689–696, 2011.Soujanya Poria, Erik Cambria, Newton Howard, Guang-Bin Huang, and Amir Hussain. Fusingaudio, visual and textual clues for sentiment analysis from multimodal content. Neurocomputing ,174:50–59, 2016.Hyuna Pyo, Jung-Woo Ha, and Jeonghee Kim. Large-scale item categorization in e-commerce usingmultiple recurrent neural networks. 2010.Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, ZhihengHuang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei.ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision(IJCV) , 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y.Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale imagerecognition. arXiv preprint arXiv:1409.1556 , 2014.Rupesh Kumar Srivastava, Klaus Greff, and J ̈urgen Schmidhuber. Highway networks. arXiv preprintarXiv:1505.00387 , 2015.Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow,and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 , 2013.Yijun Xiao and Kyunghyun Cho. Efficient character-level document classification by combiningconvolution and recurrent layers. 
arXiv preprint arXiv:1602.00367 , 2016.Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text clas-sification. In Advances in Neural Information Processing Systems , pp. 649–657, 2015.Ye Zhang and Byron Wallace. A sensitivity analysis of (and practitioners’ guide to) convolutionalneural networks for sentence classification. arXiv preprint arXiv:1510.03820 , 2015.9
rJsiFTYex
Under review as a conference paper at ICLR 2017
A WAY OUT OF THE ODYSSEY: ANALYZING AND COMBINING RECENT INSIGHTS FOR LSTMS
Shayne Longpre, Salesforce Research, Palo Alto, California, slongpre@cs.stanford.edu
Sabeek Pradhan, Stanford University, Palo Alto, California, sabeekp@cs.stanford.edu
Caiming Xiong, Richard Socher, Salesforce Research, Palo Alto, California, {cxiong,rsocher}@salesforce.com

ABSTRACT
LSTMs have become a basic building block for many deep NLP models. In recent years, many improvements and variations have been proposed for deep sequence models in general, and LSTMs in particular. We propose and analyze a series of augmentations and modifications to LSTM networks resulting in improved performance for text classification datasets. We observe compounding improvements on traditional LSTMs using Monte Carlo test-time model averaging, average pooling, and residual connections, along with four other suggested modifications. Our analysis provides a simple, reliable, and high quality baseline model.

1 INTRODUCTION
When exploring a new problem, having a simple yet competitive off-the-shelf baseline is fundamental to new research. For instance, Caruana et al. (2008) showed random forests to be a strong baseline for many high-dimensional supervised learning tasks. For computer vision, off-the-shelf convolutional neural networks (CNNs) have earned their reputation as a strong baseline (Sharif Razavian et al., 2014) and basic building block for more complex models like visual question answering (Xiong et al., 2016). For natural language processing (NLP) and other sequential modeling tasks, recurrent neural networks (RNNs), and in particular Long Short-Term Memory (LSTM) networks, with a linear projection layer at the end, have begun to attain a similar status. However, the standard LSTM is in many ways lacking as a baseline. Zaremba (2015), Gal (2015), and others show that large improvements are possible using a forget bias, inverted dropout regularization, or bidirectionality. We add three major additions with similar improvements to off-the-shelf LSTMs: Monte Carlo model averaging, embed average pooling, and residual connections. We analyze these and other more common improvements.

2 LSTM NETWORK
LSTM networks are among the most commonly used models for tasks involving variable-length sequences of data, such as text classification. The basic LSTM layer consists of six equations:

i_t = tanh(W_i x_t + R_i h_{t-1} + b_i)   (1)
j_t = σ(W_j x_t + R_j h_{t-1} + b_j)      (2)
f_t = σ(W_f x_t + R_f h_{t-1} + b_f)      (3)
o_t = tanh(W_o x_t + R_o h_{t-1} + b_o)   (4)
c_t = i_t ⊙ j_t + f_t ⊙ c_{t-1}           (5)
h_t = o_t ⊙ tanh(c_t)                      (6)

Figure 1: A comparison of the performance of Monte Carlo averaging, over sample size, to regular single-sample inverted dropout at test-time. (a) Monte Carlo for SST fine-grained error (x-axis: Monte Carlo samples; y-axis: SST 5-class error rate); (b) Monte Carlo for IMDB binary error (y-axis: binary error rate).

Where σ is the sigmoid function, ⊙ is element-wise multiplication, and v_t is the value of variable v at timestep t. Each layer receives x_t from the layer that came before it and h_{t-1} and c_{t-1} from the previous timestep, and it outputs h_t to the layer that comes after it and h_t and c_t to the next timestep. The c and h values jointly constitute the recurrent state of the LSTM that is passed from one timestep to the next. Since the h value completely updates at each timestep while the c value maintains part of its own value through multiplication by the forget gate f, h and c complement each other very well, with h forming a "fast" state that can quickly adapt to new information and c forming a "slow" state that allows information to be retained over longer periods of time (Zaremba, 2015). While various papers have tried to systematically experiment with the 6 core equations constituting an LSTM (Greff et al., 2015; Zaremba, 2015), in general the basic LSTM equations have proven extremely resilient and, if not optimal, at least a local maximum.
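For reference, equations (1)-(6) transcribe directly into a PyTorch cell as below, keeping the naming above (note the placement of tanh on i_t and o_t, and of σ on j_t and f_t, exactly as written). The sizes, bias placement (a single bias per gate, carried by the recurrent projection) and initialization are illustrative assumptions.

```python
import torch
from torch import nn

class PaperLSTMCell(nn.Module):
    """One step of the LSTM exactly as in equations (1)-(6)."""
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        def gate():
            # W has no bias; R carries the gate's single bias term b.
            return (nn.Linear(input_size, hidden_size, bias=False),
                    nn.Linear(hidden_size, hidden_size))
        self.Wi, self.Ri = gate()   # (1) cell input, tanh
        self.Wj, self.Rj = gate()   # (2) input gate, sigmoid
        self.Wf, self.Rf = gate()   # (3) forget gate, sigmoid
        self.Wo, self.Ro = gate()   # (4) output gate, tanh, as written above

    def forward(self, x_t, h_prev, c_prev):
        i = torch.tanh(self.Wi(x_t) + self.Ri(h_prev))      # (1)
        j = torch.sigmoid(self.Wj(x_t) + self.Rj(h_prev))   # (2)
        f = torch.sigmoid(self.Wf(x_t) + self.Rf(h_prev))   # (3)
        o = torch.tanh(self.Wo(x_t) + self.Ro(h_prev))      # (4)
        c = i * j + f * c_prev                               # (5)
        h = o * torch.tanh(c)                                # (6)
        return h, c

cell = PaperLSTMCell(input_size=100, hidden_size=128)
h, c = cell(torch.randn(4, 100), torch.zeros(4, 128), torch.zeros(4, 128))
```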
3 MONTE CARLO MODEL AVERAGING
It is common practice when applying dropout in neural networks to scale the weights up at train time (inverted dropout). This ensures that the expected magnitude of the inputs to any given layer is equivalent between train and test, allowing for an efficient computation of test-time predictions. However, for a model trained with dropout, test-time predictions generated without dropout merely approximate the ensemble of smaller models that dropout is meant to provide. A higher-fidelity method requires that test-time dropout be conducted in a manner consistent with how the model was trained. To achieve this, we sample k neural nets with dropout applied for each test example and average the predictions. With sufficiently large k, this Monte Carlo average should approach the true model average (Srivastava et al., 2014). We show in Figure 1 that this technique can yield more accurate predictions on test-time data than the standard practice. This is demonstrated over a number of datasets, suggesting its applicability to many types of sequential architectures. While running multiple Monte Carlo samples is more computationally expensive, the overall increase is minimal, as the process is only run on test-time forward passes and is highly parallelizable. We show that higher performance can be achieved with relatively few Monte Carlo samples, and that this number of samples is similar across different NLP datasets and tasks.

Figure 2: An illustration of the embed average pooling extension to a standard RNN model. The output of the multilayer perceptron is concatenated to the final hidden state output by the RNN.

We encountered one ambiguity of Monte Carlo model averaging that, to our knowledge, remains unaddressed in prior literature: there is relatively little exploration as to where and how the model averaging is most appropriately handled. We investigated averaging over the output of the final recurrent layer (just before the projection layer), over the output of the projection layer (the pre-softmax unnormalized logits), and over the post-softmax normalized probabilities, which is the approach taken by Gal (2015) for language modeling. We saw no discernible difference in performance between averaging the pre-projection and post-projection outputs. Averaging over the post-softmax probabilities showed marginal improvements over these two methods, but interestingly only for bidirectional models. We also explored using majority voting among the sampled models. This involves tallying the maximum post-softmax probabilities and selecting the class that received the most votes. This method differs from averaging the post-softmax probabilities in the same way max-margin differs from maximum likelihood estimation (MLE), de-emphasizing the points well inside the decision boundary or the models that predicted a class with extremely high probability. With sufficiently large k, this voting method seemed to work best of the averaging methods we tried, and thus all of our displayed models use this technique. However, for classification problems with more classes, more Monte Carlo samples might be necessary to guarantee a meaningful plurality of class predictions. We conclude that the majority-vote Monte Carlo averaging method is preferable in the case where the ratio of Monte Carlo samples to the number of classification labels (k/output size) is large.
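Below is a small PyTorch sketch of this test-time procedure: dropout is kept stochastic, k forward passes are run, and the class winning the most top-1 votes is returned. Enabling train mode is one common way to keep dropout active; note that it would also affect layers such as batch normalization if the model had them. The code is an illustration, not the authors' implementation.

```python
import torch

@torch.no_grad()
def monte_carlo_vote(model: torch.nn.Module, x: torch.Tensor,
                     num_classes: int, k: int = 20) -> torch.Tensor:
    """Majority-vote Monte Carlo averaging: run k stochastic forward passes
    with dropout active and return the per-example plurality class."""
    model.train()  # keep dropout stochastic at test time
    votes = torch.zeros(x.size(0), num_classes)
    for _ in range(k):
        top1 = model(x).argmax(dim=-1)                 # (batch,)
        votes[torch.arange(x.size(0)), top1] += 1
    model.eval()
    return votes.argmax(dim=-1)

# Example with a toy dropout classifier:
net = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.ReLU(),
                          torch.nn.Dropout(0.5), torch.nn.Linear(64, 5))
pred = monte_carlo_vote(net, torch.randn(8, 16), num_classes=5, k=50)
```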
This method differs from averaging the post-softmax probabilities in the same waymax-margin differs from maximum likelihood estimation (MLE), de-emphasizing the points wellinside the decision boundary or the models that predicted a class with extremely high probability.With sufficiently large k, this voting method seemed to work best of the averaging methods we tried,and thus all of our displayed models use this technique. However, for classification problems withmore classes, more Monte Carlo samples might be necessary to guarantee a meaningful plurality ofclass predictions. We conclude that the majority-vote Monte Carlo averaging method is preferablein the case where the ratio of Monte Carlo samples to number of classification labels is large(k=outputsize).The Monte Carlo model averaging experiments, shown in Figure 1, were conducted as follows. Wedrewk= 400 separate test samples for each example, differentiated by their dropout masks. For eachsample size p(whose values, plotted on the x-axis, were in the range from 2to200with step-size2) we selected pof ourksamples randomly without replacement and performed the relevant MonteCarlo averaging technique for that task, as discussed above. We do this m= 20 times for each point,to establish the mean and variance for that number of Monte Carlo iterations/samples p. The varianceis used to visualize the 90% confidence interval in blue, while the red line denotes the test accuracycomputed using the traditional approximation method (inverted dropout at train-time, and no dropoutat test-time).4 E MBED AVERAGE POOLINGReliably retaining long-range information is a well documented weakness of LSTM networks(Karpathy et al., 2015). This is especially the case for very long sequences like the IMDB sentimentdataset (Maas et al., 2011), where deep sequential models fail to capture uni- and bi-gram occurrencesover long sequences. This is likely why n-gram based models, such as a bi-gram NBSVM (Wangand Manning, 2012), outperform RNN models on such datasetes. It was shown by Iyyer et al. (2015)and others that for general NLP classification tasks, the use of a deep, unordered composition (or bag-of-words) of a sequence can yield strong results. Their solution, the deep averaging network (DAN),combines the observed effectiveness of depth, with the unreasonable effectiveness of unorderedrepresentations of long sequences.We suspect that the primary advantage of DANs is their ability to keep track of information thatwould have otherwise been forgotten by a sequential model, such as information early in the sequencefor a unidirectional RNN or information in the middle of the sequence for a bidirectional RNN. Ourembed average pooling supplements the bidirectional RNN with the information from a DAN at arelatively negligible computational cost.3Under review as a conference paper at ICLR 2017LSTMLSTMLSTMSoftmaxh(1)th(1)th(2)th(2)th(3)th(3)txtxt......xt+1xt+1xt1xt1h(1)t1h(1)t1h(1)th(1)th(2)t1h(2)t1h(3)t1h(3)t1h(2)th(2)th(3)th(3)t(a) Res-V1: An illustration of vertical residual connec-tionsLSTMLSTMLSTMSoftmax......xtxtxt1xt1xt+1xt+1h(1)t1h(1)t1h(1)th(1)th(1)th(1)th(2)t1h(2)t1h(2)th(2)th(2)th(2)th(3)t1h(3)t1h(3)th(3)th(3)th(3)t(b) Res-V2: An illustration of vertical and lateral resid-ual connectionsFigure 3: An illustration of vertical (ResV) and lateral residual (ResL) connections added to a 3-layerRNN. 
A model with only vertical residuals is denoted Res-V1, whereas a model with vertical andlateral residuals is denoted “Res-V2”.As shown in Figure 2, embed average pooling works by averaging the sequence of word vectors andpassing this average through an MLP. The averaging is similar to an average pooling layer in a CNN(hence the name), but with the averaging being done temporally rather than spatially. The output ofthis MLP is concatenated to the final output of the RNN, and the combined vector is then passedinto the projection and softmax layer. We apply the same dropout mask to the word vectors whenpassing them to the RNN as when averaging them, and we apply a different dropout mask on theoutput of the MLP. We experimented with applying the MLP before rather than after averaging theword vectors but found the latter to be most effective.5 R ESIDUAL CONNECTIONSFor feed-forward convolutional neural networks used in computer vision tasks, residual networks, orResNets, have obtained state of the art results (He et al., 2015). Rather than having each layer learn awholly new representation of the data, as is customary for neural networks, ResNets have each layer(or group of layers) learn a residual which is added to the layer’s input and then passed on to the nextlayer. More formally, if the input to a layer (or group of layers) is xand the output of that layer (orgroup of layers) is F(x), then the input to the next layer (or group of layers) is x+F(x), whereas itwould beF(x)in a conventional neural network. This architecture allows the training of far deepermodels. He et al. (2015) trained convolutional neural networks as deep as 151 layers, compared to 16layers used in VGGNets (Simonyan and Zisserman, 2014) or 22 layers used in GoogLeNet (Szegedyet al., 2015), and won the 2015 ImageNet Challenge. Since then, various papers have tried to buildupon the ResNet paradigm (Huang et al., 2016; Szegedy et al., 2016), and various others have tried tocreate convincing theoretical reasons for ResNet’s success (Liao and Poggio, 2016; Veit et al., 2016).4Under review as a conference paper at ICLR 2017We explored many different ways to incorporate residual connections in an RNN. The two mostsuccessful ones, which we call Res-V1 and Res-V2 are depicted in Figure 6. Res-V1 incorporatesonly vertical residuals, while Res-V2 incorporates both vertical and lateral residuals. With verticalresidual connections, the input to a layer is added to its output and then passed to the next layer,as is done in feed-forward ResNets. Thus, whereas the input to a layer is normally the htfromthe previous layer, with vertical residuals the input becomes the ht+xtfrom the previous layer.This maintains many of the attractive properties of ResNets (e.g. unimpeded gradient flow acrosslayers, adding/averaging the contributions of each layer) and thus lends itself naturally to deepernetworks. However, it can interact unpredictably with the LSTM architecture, as the “fast” state ofthe LSTM no longer reflects the network’s full representation of the data at that point. To mitigate thisunpredictability, Res-V2 also includes lateral residual connections. With lateral residual connections,the input to a layer is added to its output and then passed to the next timestep as the fast state of theLSTM. It is equivalent to replacing equation 6 with ht=ottanh (ct) +xt. 
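To pin down the two variants, here is a minimal sketch of a single layer step under each scheme; lstm_step compacts the six equations from Section 2, and the equal input/hidden widths are an assumption the residual additions require.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    # Equations 1-6 from Section 2, packed for brevity.
    i = np.tanh(p["Wi"] @ x_t + p["Ri"] @ h_prev + p["bi"])
    j = sigmoid(p["Wj"] @ x_t + p["Rj"] @ h_prev + p["bj"])
    f = sigmoid(p["Wf"] @ x_t + p["Rf"] @ h_prev + p["bf"])
    o = np.tanh(p["Wo"] @ x_t + p["Ro"] @ h_prev + p["bo"])
    c = i * j + f * c_prev
    return o * np.tanh(c), c

def res_v1_step(x_t, h_prev, c_prev, p):
    """Res-V1: vertical residual only. The layer input is added to the
    output passed up; the recurrent fast state stays h_t."""
    h, c = lstm_step(x_t, h_prev, c_prev, p)
    return h + x_t, h, c   # (input to next layer, h_t, c_t)

def res_v2_step(x_t, h_prev, c_prev, p):
    """Res-V2: vertical and lateral residuals. Equation 6 becomes
    h_t = o_t * tanh(c_t) + x_t, so the identical value goes both to
    the next layer and to the next timestep as the fast state."""
    h, c = lstm_step(x_t, h_prev, c_prev, p)
    return h + x_t, h + x_t, c

# Toy usage; residual adds require layer width == input width.
rng = np.random.default_rng(0)
p = {n: rng.normal(size=(3, 3)) * 0.1
     for n in ["Wi", "Ri", "Wj", "Rj", "Wf", "Rf", "Wo", "Ro"]}
p.update({n: np.zeros(3) for n in ["bi", "bj", "bf", "bo"]})
up, h, c = res_v2_step(rng.normal(size=3), np.zeros(3), np.zeros(3), p)
```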
Thus, applying bothvertical and lateral residuals ensures that the same value is passed both to the next layer as input andto the next timestep as the “fast” state.In addition to these two, we explored various other, ultimately less successful, ways of adding residualconnections to an LSTM, the primary one being horizontal residual connections. In this architecture,rather than adding the input from the previous layer to a layer’s output, we added the fast statefrom the previous timestep. The hope was that adding residual connections across timesteps wouldallow information to flow more effectively across timesteps and thus improve the performance ofRNNs that are deep across timesteps, much as ResNets do for networks that are deep across layers.Thus, we believed horizontal residual connections could solve the problem of LSTMs not learninglong-term dependencies, the same problem we also hoped to mitigate with embed average pooling.Unfortunately, horizontal residuals failed, possibly because they blurred the distinction betweenthe LSTM’s “fast” state and “slow” state and thus prevented the LSTM from quickly adapting tonew data. Alternate combinations of horizontal, vertical, and lateral residual connections were alsoexperimented with but yielded poor results.6 E XPERIMENTAL RESULTS6.1 D ATASETSWe chose two commonly used benchmark datasets for our experiments: the Stanford SentimentTreebank (SST) (Socher et al., 2013) and the IMDB sentiment dataset (Maas et al., 2011). Thisallowed us to compare the performance of our models to existing work and review the flexibility ofour proposed model extensions across fairly disparate types of classification datasets. SST containsrelatively well curated, short sequence sentences, in contrast to IMDB’s comparatively colloquialand lengthy sequences (some up to 2;000tokens). To further differentiate the classification tasks wechose to experiment with fine-grained, five-class sentiment on SST, while IMDB only offered binarylabels. For IMDB, we randomly split the training set of 25;000examples into training and validationsets containing 22;500and2;500examples respectively, as done in Maas et al. (2011).6.2 M ETHODOLOGYOur objective is to show a series of compounding extensions to the standard LSTM baseline thatenhance accuracy. To ensure scientific reliability, the addition of each feature is the only changefrom the previous model (see Figures 4 and 5). The baseline model is a 2-layer stacked LSTM withhidden size 170for SST and 120for IMDB, as used in Tai et al. (2015). All models in this paper usedpublicly available 300 dimensional word vectors, pre-trained using Glove on 840 million tokens ofCommon Crawl Data (Pennington et al., 2014), and both the word vectors and the subsequent weightmatrices were trained using Adam with a learning rate of 104.The first set of basic feature additions were adding a forget bias and using dropout. Adding a bias of1:0to the forget gate (i.e. adding 1.0 to the inside of the sigmoid function in equation 3) improvesresults across NLP tasks, especially for learning long-range dependencies (Zaremba, 2015). Dropout(Srivastava et al., 2014) is a highly effective regularizer for deep models. For SST and IMDB we usedgrid search to select dropout probabilities of 0:5and0:7respectively, applied to the input of eachlayer, including the projection/softmax layer. 
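Both basic additions are one-line changes; the sketch below shows the forget-gate bias of 1.0 added inside the sigmoid of equation 3 and inverted dropout applied to a layer input at train time (toy shapes; the 0.5 rate is the SST value found by the grid search above).

```python
import numpy as np

rng = np.random.default_rng(0)

def forget_gate(x_t, h_prev, Wf, Rf, bf, forget_bias=1.0):
    # Equation 3 with the extra +1.0 inside the sigmoid.
    z = Wf @ x_t + Rf @ h_prev + bf + forget_bias
    return 1.0 / (1.0 + np.exp(-z))

def inverted_dropout(x, p_drop=0.5, train=True):
    # Kept units are scaled up by 1/(1 - p_drop) at train time, so
    # test-time activations need no rescaling.
    if not train:
        return x
    mask = rng.binomial(1, 1.0 - p_drop, size=x.shape)
    return x * mask / (1.0 - p_drop)

x = inverted_dropout(rng.normal(size=10), p_drop=0.5)
f = forget_gate(x, np.zeros(3), rng.normal(size=(3, 10)),
                rng.normal(size=(3, 3)), np.zeros(3))
```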
While forget bias appears to hurt performance in Figure5Under review as a conference paper at ICLR 20175, the combination of dropout and forget bias yielded better results in all cases than dropout withoutforget bias. Our last two basic optimizations were increasing the hidden sizes and then adding shared-weight bidirectionality to the RNN. The hidden sizes for SST and IMDB were increased to 800and360respectively; we found significantly diminishing returns to performance from increases beyondthis. We chose shared-weight bidirectionality to ensure the model size did not increase any further.Specifically, the forward and backward weights are shared, and the input to the projection/softmaxlayer is a concatenation of the forward and backward passes’ final hidden states.All of our subsequent proposed model extensions are described at length in their own sections. Forboth datasets, we used 60Monte Carlo samples, and the embed average pooling MLP had onehidden layer and both a hidden dimension and an output dimension of 300as the output dimensionof the embed average pooling MLP. Note that although the MLP weights increased the size of theirrespective models, this increase is negligible (equivalent to increasing the hidden size for SST from800to804or the hidden size of IMDB from 360to369), and we found that such a size increase hadno discernible effect on accuracy when done without the embed average pooling.6.3 R ESULTSSince each of our proposed modifications operate independently, they are well suited to use incombination as well as in isolation. In Figures 4 and 5 we compound these features on top of themore traditional enhancements. Due to the expensiveness of bidirectional models, Figure 4 alsoshows these compounding features on SST with and without bidirectionality. The validation accuracydistributions show that each augmentation usually provides some small but noticeable improvementon the previous model, as measured by consistent improvements in mean and median accuracy.Baseline: 2-LSTM + Forget Bias + Dropout + Hidden Size + Bidirectional + Monte Carlo + Embed Averaging + Vertical Residual + Lateral ResidualFeatures0.450.460.470.480.490.500.510.520.535-Class Val AccuracySST: Full Compounding Model Features(a) Compounding feature models on 5-Class SST.Baseline: 2-LSTM + Forget Bias + Dropout + Hidden Size + Monte Carlo + Embed Averaging + Vertical Residual + Lateral ResidualFeatures0.450.460.470.480.490.500.510.520.535-Class Val AccuracySST: Compounding Model Features (b) Compounding feature models (minus bidirectional)for 5-Class SST.Figure 4: These box-plots show the performance of compounding model features on fine-grain SSTvalidation accuracy. The red points, red lines, blue boxes, whiskers and plus-shaped points indicatethe mean, median, quartiles, range, and outliers, respectively.We originally suspected that MC would provide marginal yet consistent improvements across datasets,while embed average pooling would especially excel for long sequences like in IMDB, where n-grambased models and deep unordered compositions have benefited from their ability to retain informationfrom disparate parts of the text. The former hypothesis was largely confirmed. 
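For reference, a sketch of the embed average pooling path of Section 4 with the 300-dimensional hidden and output sizes reported above; the ReLU nonlinearity, the weight initialization, and the rnn_final placeholder are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_average_pool(word_vecs, W1, b1, W2, b2):
    """Average the word vectors over time, then pass the average
    through a one-hidden-layer MLP (ReLU assumed)."""
    avg = word_vecs.mean(axis=0)             # temporal average pooling
    hidden = np.maximum(0.0, W1 @ avg + b1)  # hidden dim 300
    return W2 @ hidden + b2                  # output dim 300

# Toy usage: a length-12 sequence of 300-d GloVe-style vectors.
seq = rng.normal(size=(12, 300))
W1, b1 = rng.normal(size=(300, 300)) * 0.01, np.zeros(300)
W2, b2 = rng.normal(size=(300, 300)) * 0.01, np.zeros(300)
pooled = embed_average_pool(seq, W1, b1, W2, b2)

# The pooled vector is concatenated to the RNN's final output before
# the projection/softmax layer; rnn_final is a placeholder here.
rnn_final = rng.normal(size=800)
combined = np.concatenate([rnn_final, pooled])
```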
However, whileembed average pooling was generally performance-enhancing, the performance boost it yielded forIMDB was not significantly larger than the one it yielded for SST, though that may have been becausethe other enhancements already encompassed most of the advantages provided by deep unorderedcompositions.The only evident exceptions to the positive trend are the variations of residual connections. Which ofRes-V1 (vertical only) and Res-V2 (vertical and residual) outperformed the other depended on thedataset and whether the network was bidirectional. The Res-V2 architecture dominated in experiments4b and 5 while the Res-V1 (only vertical residuals) architecture is most performant in Figure 4a. This6Under review as a conference paper at ICLR 2017Baseline: 2-LSTM + Forget Bias + Dropout + Hidden Size + Bidirectional + Monte Carlo + Embed Averaging + Vertical Residual + Lateral ResidualFeatures0.8700.8750.8800.8850.8900.8950.9000.9050.910Binary Val AccuracyIMDB: Compounding Model FeaturesFigure 5: These box-plots show the performance of compounding model features on binary IMDBvalidation accuracy.Figure 6: Comparing the effects of layer depth between Vanilla RNNs, Res-V1 and Res-V2 modelson fine-grained sentiment classification (SST). As we increase the layers, we decrease the hidden sizeto maintain equivalent model sizes. The points indicate average validation accuracy, while the shadedregions indicate 90% confidence intervals.suggests for short sequences, bidirectionality and lateral residuals conflict. Further analysis of theeffect of residual connections and model depth can be found in Figure 6. In that figure, the number ofparameters, and hence model size, are kept uniform by modifying the hidden size as the layer depthchanged. The hidden sizes used for 1,2,4,6, and 8layer models were 250,170,120,100, and 85respectively, maintaining 550;000total parameters for all models. As the graph demonstrates,7Under review as a conference paper at ICLR 2017Model # Params (M) Train Time / Epoch (sec) Test Acc (%)RNTN (Socher et al., 2013) 45:7CNN-MC (Kim, 2014) 47:4DRNN (Irsoy and Cardie, 2014) 49:8CT-LSTM (Tai et al., 2015) 0:317 51:0DMN (Kumar et al., 2016) 52:1NTI-SLSTM-LSTM (Munkhdalai andYu, 2016) 53:1Baseline 2-LSTM 0:553 2;100 46 :4Large 2-LSTM 8:650 3;150 48 :7Bi-2-LSTM 8:650 6;100 50 :9Bi-2-LSTM+MC+Pooling+ResV 8:740 8;050 52:22-LSTM+MC+Pooling+ResV+ResL 8:740 4;800 51:6Table 1: Test performance on the Stanford Sentiment Treebank (SST) sentiment classification task.Model # Params (M) Train Time / Epoch (sec) Test Acc (%)SVM-bi (Wang and Manning, 2012) 89:2DAN-RAND (Iyyer et al., 2015) 88:8DAN (Iyyer et al., 2015) 89:4NBSVM-bi (Wang and Manning, 2012) 91:2NBSVM-tri, RNN, Sentence-Vec En-semble (Mesnil et al., 2014) 92:6Baseline 2-LSTM 0:318 1;800 85 :3Large 2-LSTM 2:00 2;500 87 :6Bi-2-LSTM 2:00 5;100 88 :9Bi-2-LSTM+MC+Pooling+ResV+ResL 2:08 5;500 90 :1Table 2: Test performance on the IMDB sentiment classification task.normal LSTMs (“Vanilla”) perform drastically worse as they become deeper and narrower, whileRes-V1 and Res-V2 both see their performance stay much steadier or even briefly rise. 
While depthwound up being far from a panacea for the datasets we experimented on, the ability of an LSTM withresidual connections to maintain its performance as it gets deeper holds promise for other domainswhere the extra expressive power provided by depth might prove more crucial.Selecting the best results for each model, we see results competitive with state-of-the-art performancefor both IMDB1and SST, even though many state-of-the-art models use either parse-tree information(Tai et al., 2015), multiple passes through the data (Kumar et al., 2016) or tremendous train andtest-time computational and memory expenses (Le and Mikolov, 2014). To our knowledge, ourmodels constitute the best performance of purely sequential, single-pass, and computationally feasiblemodels, precisely the desired features of a solid out-of-the-box baseline. Furthermore, for SST, thecompounding enhancement model without bidirectionality, the final model shown in Figure 4b, greatlyexceeded the performance of the large bidirectional model ( 51:6%vs50:9%), with significantly lesstraining time (Table 1). This suggests our enhancements could provide a similarly reasonable andefficient alternative to shared-weight bidirectionality for other such datasets.7 C ONCLUSIONWe explore several easy to implement enhancements to the basic LSTM network that positivelyimpact performance. These include both fairly well established extensions (biasing the forget gate,dropout, increasing the model size, bidirectionality) and several more novel ones (Monte Carlo1For IMDB, we benchmark only against results obtained from training exclusively on the labeled training set.Thus, we omit results from unsupervised models that leveraged the additional 50;000unlabeled examples, suchas Miyato et al. (2016).8Under review as a conference paper at ICLR 2017model averaging, embed average pooling, residual connections). We find that these enhancementsimprove the performance of the LSTM in classification tasks, both in conjunction or isolation, withan accuracy close to state of the art despite being more lightweight and using less information thanthe current state of the art models. Our results suggest that these extensions should be incorporatedinto LSTM baselines.REFERENCESRich Caruana, Nikos Karampatziakis, and Ainur Yessenalina. An empirical evaluation of supervisedlearning in high dimensions. In Proceedings of the 25th international conference on Machinelearning , pages 96–103. ACM, 2008.Yarin Gal. A theoretically grounded application of dropout in recurrent neural networks. arXivpreprint arXiv:1512.05287 , 2015.Klaus Greff, Rupesh Kumar Srivastava, Jan Koutn ́ık, Bas R Steunebrink, and J ̈urgen Schmidhuber.Lstm: A search space odyssey. arXiv preprint arXiv:1503.04069 , 2015.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for imagerecognition. CoRR , abs/1512.03385, 2015. URL http://arxiv.org/abs/1512.03385 .Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks withstochastic depth. arXiv preprint arXiv:1603.09382 , 2016.Ozan Irsoy and Claire Cardie. Modeling compositionality with multiplicative recurrent neuralnetworks. arXiv preprint arXiv:1412.6577 , 2014.Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum ́e III. Deep unordered composi-tion rivals syntactic methods for text classification. In Association for Computational Linguistics ,2015.Andrej Karpathy, Justin Johnson, and Fei-Fei Li. Visualizing and understanding recurrent networks.CoRR , abs/1506.02078, 2015.Yoon Kim. 
Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882 ,2014.Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, VictorZhong, Romain Paulus, and Richard Socher. Ask me anything: Dynamic memory networks fornatural language processing. In ICML , 2016.Quoc V Le and Tomas Mikolov. Distributed representations of sentences and documents. In ICML ,volume 14, pages 1188–1196, 2014.Qianli Liao and Tomaso A. Poggio. Bridging the gaps between residual learning, recurrent neuralnetworks and visual cortex. CoRR , abs/1604.03640, 2016. URL http://arxiv.org/abs/1604.03640 .Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y . Ng, and ChristopherPotts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting ofthe Association for Computational Linguistics: Human Language Technologies , pages 142–150,Portland, Oregon, USA, June 2011. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/P11-1015 .Gr ́egoire Mesnil, Tomas Mikolov, Marc’Aurelio Ranzato, and Yoshua Bengio. Ensemble of gen-erative and discriminative techniques for sentiment analysis of movie reviews. arXiv preprintarXiv:1412.5335 , 2014.Takeru Miyato, Andrew M Dai, and Ian Goodfellow. Virtual adversarial training for semi-supervisedtext classification. arXiv preprint arXiv:1605.07725 , 2016.Tsendsuren Munkhdalai and Hong Yu. Neural tree indexers for text understanding. CoRR ,abs/1607.04492, 2016. URL http://arxiv.org/abs/1607.04492 .9Under review as a conference paper at ICLR 2017Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for wordrepresentation. In EMNLP , volume 14, pages 1532–43, 2014.Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. Cnn featuresoff-the-shelf: An astounding baseline for recognition. In The IEEE Conference on ComputerVision and Pattern Recognition (CVPR) Workshops , June 2014.Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale imagerecognition. CoRR , abs/1409.1556, 2014. URL http://arxiv.org/abs/1409.1556 .Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng,and Christopher Potts. Recursive deep models for semantic compositionality over a sentimenttreebank. In Proceedings of the conference on empirical methods in natural language processing(EMNLP) , volume 1631, page 1642. Citeseer, 2013.Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov.Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine LearningResearch , 15(1):1929–1958, 2014.Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Du-mitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. InProceedings of the IEEE Conference on Computer Vision and Pattern Recognition , pages 1–9,2015.Christian Szegedy, Sergey Ioffe, and Vincent Vanhoucke. Inception-v4, inception-resnet and theimpact of residual connections on learning. CoRR , abs/1602.07261, 2016. URL http://arxiv.org/abs/1602.07261 .Kai Sheng Tai, Richard Socher, and Christopher D Manning. Improved semantic representationsfrom tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075 , 2015.Andreas Veit, Michael J. Wilber, and Serge J. Belongie. Residual networks are exponential ensemblesof relatively shallow networks. CoRR , abs/1605.06431, 2016. 
URL http://arxiv.org/abs/1605.06431 .Sida Wang and Christopher D Manning. Baselines and bigrams: Simple, good sentiment and topicclassification. In Proceedings of the 50th Annual Meeting of the Association for ComputationalLinguistics: Short Papers-Volume 2 , pages 90–94. Association for Computational Linguistics,2012.Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual andtextual question answering. In ICML , 2016.Wojciech Zaremba. An empirical exploration of recurrent network architectures. 2015.10
Published as a conference paper at ICLR 2017LIE-ACCESS NEURAL TURING MACHINESGreg Yang and Alexander M. Rushfgyang@college,srush@seas g.harvard.eduHarvard UniversityCambridge, MA 02138, USAABSTRACTExternal neural memory structures have recently become a popular tool for algo-rithmic deep learning (Graves et al., 2014; Weston et al., 2014). These models gen-erally utilize differentiable versions of traditional discrete memory-access struc-tures (random access, stacks, tapes) to provide the storage necessary for computa-tional tasks. In this work, we argue that these neural memory systems lack specificstructure important for relative indexing, and propose an alternative model, Lie-access memory, that is explicitly designed for the neural setting. In this paradigm,memory is accessed using a continuous head in a key-space manifold. The head ismoved via Lie group actions, such as shifts or rotations, generated by a controller,and memory access is performed by linear smoothing in key space. We argue thatLie groups provide a natural generalization of discrete memory structures, such asTuring machines, as they provide inverse and identity operators while maintainingdifferentiability. To experiment with this approach, we implement a simplifiedLie-access neural Turing machine (LANTM) with different Lie groups. We findthat this approach is able to perform well on a range of algorithmic tasks.1 I NTRODUCTIONRecent work on neural Turing machines (NTMs) (Graves et al., 2014; 2016) and memory networks(MemNNs) (Weston et al., 2014) has repopularized the use of explicit external memory in neuralnetworks and demonstrated that these networks can be effectively trained in an end-to-end fash-ion. These methods have been successfully applied to question answering (Weston et al., 2014;Sukhbaatar et al., 2015; Kumar et al., 2015), algorithm learning (Graves et al., 2014; Kalchbrenneret al., 2015; Kaiser & Sutskever, 2015; Kurach et al., 2015; Zaremba & Sutskever, 2015; Grefen-stette et al., 2015; Joulin & Mikolov, 2015), machine translation (Kalchbrenner et al., 2015), andother tasks. This methodology has the potential to extend deep networks in a general-purpose waybeyond the limitations of fixed-length encodings such as standard recurrent neural networks (RNNs).A shared theme in many of these works (and earlier exploration of neural memory) is to re-frametraditional memory access paradigms to be continuous and possibly differentiable to allow for back-propagation. In MemNNs, traditional random-access memory is replaced with a ranking approachthat finds the most likely memory. In the work of Grefenstette et al. (2015), classical stack-,queue- , and deque-based memories are replaced by soft-differentiable stack, queue, and deque data-structures. In NTMs, sequential local-access memory is simulated by an explicit tape data structure.This work questions the assumption that neural memory should mimic the structure of traditionaldiscrete memory. We argue that a neural memory should provide the following: (A) differentiabilityfor end-to-end training and (B) robust relative indexing (perhaps in addition to random-access).Surprisingly many neural memory systems fail one of these conditions, either lacking Criterion B,discussed below, or employing extensions like REINFORCE to work around lack of differentiability(Zaremba & Sutskever, 2015).We propose instead a class of memory access techniques based around Lie groups, i.e. groups withdifferentiable operations, which provide a natural structure for neural memory access. 
By definition,their differentiability satisfies the concerns of Criterion A. Additionally the group axioms provideidentity, invertibility, and associativity, all of which are desirable properties for a relative indexingscheme (Criterion B), and all of which are satisfied by standard Turing machines. Notably though,1Published as a conference paper at ICLR 2017simple group properties like invertibility are not satisfied by neural Turing machines, differentiableneural computers, or even by simple soft-tape machines. In short, in our method, we constructmemory systems with keys placed on a manifold, and where relative access operations are providedby Lie groups.To experiment with this approach, we implement a neural Turing machine with an LSTM con-troller and several versions of Lie-access memory, which we call Lie-access neural Turing machines(LANTM). The details of these models are exhibited in Section 4.1Our main experimental resultsare presented in Section 5. The LANTM model is able to learn non-trivial algorithmic tasks suchas copying and permutating sequences with higher accuracy than more traditional memory-basedapproaches, and significantly better than fixed memory LSTM models. The memory structures andkey transformation learned by the model resemble interesting continuous space representations oftraditional discrete memory data structures.2 B ACKGROUND : RECURRENT NEURAL NETWORKS WITH MEMORYThis work focuses particularly on recurrent neural network (RNN) controllers of abstract neuralmemories. Formally, an RNN is a differentiable function RNN :XH!H , whereXis anarbitrary input space and His the hidden state space. On input (x(1);:::;x(T))2XTand withinitial stateh(0)2H, the RNN produces states h(1);:::;h(T)based on the recurrence,h(t):= RNN(x(t);h(t1)):These states can be used for downstream tasks, for example sequence prediction which producesoutputs (y(1);:::;y(T))based on an additional transformation and prediction layer y(t)=F(h(t))such as a linear-layer followed by a softmax. RNNs can be trained end-to-end by backpropagation-through-time (BPTT) (Werbos, 1990). In practice, we use long short-term memory (LSTM) RNNs(Hochreiter & Schmidhuber, 1997). LSTM’s hidden state consists of two variables (c(t);h(t)), whereh(t)is also the output to the external world; we however use the above notation for simplicity.An RNN can also serve as the controller for an external memory system (Graves et al., 2014; Grefen-stette et al., 2015; Zaremba & Sutskever, 2015), which enables: (1) the entire system to carry stateover time from both the RNN and the external memory, and (2) the RNN controller to collect read-ings from and compute additional instructions to the external memory. Formally, we extend therecurrence to,h(t):= RNN([x(t);(t1)];h(t1));(t);(t):= RW((t1);h(t));where is the abstract memory state, and (t)is the value read from memory, and his used as anabstract controller command to a read/write function RW. Writing occurs in the mutation of ateach time step. Throughout this work, will take the form of an ordered set f(ki;vi;si)giwhereki2K is an arbitrary key, vi2Rmis a memory value, and si2R+is a memory strength.In order for the model to be trainable with backpropagation, the memory function RW must alsobe differentiable. Several forms of differentiable memory have been proposed in the literature. Webegin by describing two simple forms: (neural) random-access memory and (neural) tape-basedmemory. 
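Before specializing RW, the abstract recurrence above can be written as a short scaffold; RNN and RW are abstract callables here, and the toy instantiation at the end exists only so the sketch runs under assumed interfaces.

```python
from typing import Callable, List, Tuple
import numpy as np

# A memory Lambda is an ordered set of (key, value, strength) triples.
Memory = List[Tuple[np.ndarray, np.ndarray, float]]

def run_controller(xs, h0, rho0, mem0: Memory,
                   rnn: Callable, rw: Callable):
    """Unroll h_t = RNN([x_t; rho_{t-1}], h_{t-1}) and
              rho_t, mem_t = RW(mem_{t-1}, h_t)."""
    h, rho, mem = h0, rho0, mem0
    states = []
    for x in xs:
        h = rnn(np.concatenate([x, rho]), h)
        rho, mem = rw(mem, h)
        states.append(h)
    return states, mem

# Toy instantiation: a tanh "RNN" and a no-op read/write.
rnn = lambda inp, h: np.tanh(inp[: h.size] + h)
rw = lambda mem, h: (np.zeros(2), mem)
states, mem = run_controller([np.ones(3)] * 4, np.zeros(5),
                             np.zeros(2), [], rnn, rw)
```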
For this section, we focus on the read step and assume is fixed.Random-Access Memory Random-access memory consists of using a now standard attention-mechanism or MemNN to read a memory (our description follows Miller et al. (2016)). The con-troller hidden state is used to output a random-access pointer, q0(h)that determines a weighting ofmemory vectors via dot products with the corresponding keys. This weighting in turn determinesthe read values via linear smoothing based on a function w,wi(q;) :=siexphq;kiiPjsjexphq;kji:=Xiwi(q0(h);)vi:The final read memory is based on how “close” the read pointer was to each of the keys, wherecloseness in key space is determined by w.1Our implementations are available at https://github.com/harvardnlp/lie-access-memory2Published as a conference paper at ICLR 2017Tape-Based Memory Neural memories can also be extended to support relative access by main-taining read state. Following notation from Turing machines, we call this state the head ,q. In thesimplest case the recurrence now has the form,0;q0;= RW(;q;h);and this can be extended to support multiple heads.In the simplest case of soft tape-based memory (a naive version of the much more complicated neuralTuring machine), the keys kiindicate one-hot positions along a tape with ki=i. The headqis aprobability distribution over tape positions. It determines the read value by directly specifying theweights. The controller can only “shift” the head by outputting a kernel K(h) = (K1;K0;K+1)in the probability simplex 2and applying convolution.q0(q;h) :=qK(h); i.e. q0j=qj1K+1+qjK0+qj+1K1We can view this as the soft version of a single-step discrete Turing machine where the kernel cansoftly shift the “head” of the machine one to the left, one to the right, or remain in the same location.The value returned can then be computed with linear smoothing as above,wi(q;) :=sihq;kiiPjsjhq;kji:=Xiwi(q0(q;h);)vi:3 L IEGROUPS FOR MEMORYLet us now take a brief digression and consider the standard (non-neural) Turing machine (TM) andthe movement of its head over a tape. A TM has a head q2Zindicating the position on a tape.Between reads, the head can move any number of steps left or right. Moving a+bsteps and thencsteps eventually puts the head at the same location as moving asteps and then b+csteps — i.e.the head movement is associative . In addition, the machine should be able to reverse a head shift,for example, in a stack simulation algorithm, going from push to pop — i.e. each head movementshould also have a corresponding inverse . Finally, the head should also be allowed to stay put, forexample, to read a single data item and use it for multiple time points, an identity .These movements correspond directly to group actions: the possible head movements should beassociative, and contain inverse and identity elements. This group acts on the set of possible headlocations. In a TM, the set of Z-valued head movement acts on the set of locations on the Z-indexedinfinite tape. By our reasoning above, if a Turing machine is to store data contents at points in ageneral spaceK(instead of an infinite Z-indexed tape), then its head movements should form agroup and act onKvia group actions.For a neural memory system, we desire the network to be (almost everywhere) differentiable. Thenotion of “differentiable” groups is well-studied in mathematics, where they are known as Liegroups , and “differentiable group actions” are correspondingly called Lie group actions . 
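For reference before turning to Lie-access addressing, the two baseline mechanisms just described reduce to a few lines each; memories are stored as parallel key/value/strength arrays, the tape is made circular for simplicity, and all shapes are illustrative.

```python
import numpy as np

def ram_read(q, keys, values, strengths):
    # Random-access read: w_i proportional to s_i * exp(<q, k_i>).
    scores = strengths * np.exp(keys @ q)
    return (scores / scores.sum()) @ values

def tape_read(q, keys, values, strengths):
    # Soft tape read: w_i proportional to s_i * <q, k_i>.
    scores = strengths * (keys @ q)
    return (scores / scores.sum()) @ values

def tape_shift(q, kernel):
    """q'_j = q_{j-1} K_{+1} + q_j K_0 + q_{j+1} K_{-1}, with kernel
    (K_{-1}, K_0, K_{+1}) a distribution over {-1, 0, +1}; np.roll
    makes the tape circular, an assumption of this sketch."""
    k_m1, k_0, k_p1 = kernel
    return np.roll(q, 1) * k_p1 + q * k_0 + np.roll(q, -1) * k_m1

# Toy usage: 6 one-hot tape cells, 4-dim values.
rng = np.random.default_rng(0)
keys, values = np.eye(6), rng.normal(size=(6, 4))
strengths = np.ones(6)
q = tape_shift(np.eye(6)[2], (0.0, 0.1, 0.9))   # mostly shift right
print(tape_read(q, keys, values, strengths))
```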
In ourcase, using Lie group actions as generalized head movements on a general key space (more accu-rately, manifolds) would most importantly mean that we can take derivatives of these movementsand perform the usual backpropagation algorithm.4 L IE-ACCESS NEURAL TURING MACHINESThese properties motivate us to propose Lie access as an alternative formalism to popular neuralmemory systems, such as probabilistic tapes, which surprisingly do not satisfy invertibility and oftendo not provide an identity.2Our Lie-access memory will consist of a set of points in a manifold K.2The Markov kernel convolutional soft head shift mechanism proposed in Graves et al. (2014) and sketchedin Section 2 does not in general have inverses. Indeed, the authors reported problems with the soft head losing“sharpness” over time, which they dealt with by sharpening coefficients. In the followup work, Graves et al.(2016) utilize a temporal memory link matrix for actions. They note, “the operation Lwsmoothly shifts thefocus forwards to the locations written ... whereas L>wshifts the focus backwards” but do not enforce this asa true inverse. They also explicitly do not include an identity, noting “Self-links are excluded (the diagonal ofthe link matrix is always 0)”; however, they could ignore the link matrix with an interpolation gate, which ineffect acts as the identity.3Published as a conference paper at ICLR 2017We replace the discrete head with a continuous head q2K . The head moves based on a set ofLie group actions a2A generated by the controller. To read memories, we will rely on a distancemeasure in this space, d:KK! R0.3Together these properties describe a general class ofpossible neural memory architectures.Formally a Lie-access neural Turing machine (LANTM) computes the following function,0;q0;q0(w);:= RW(;q;q (w);h)whereq;q (w)2K are resp. read and write heads, and is the memory itself. We implement , asabove, as a weighted dictionary =f(ki;vi;si)gi.4.1 A DDRESSING PROCEDUREThe LANTM maintains a read head qwhich at every step is first updated to q0and then used to readfrom the memory table. This update occurs by selecting a Lie group action from Awhich then actssmoothly on the key space K. We parametrize the action transformation, a:H7!A by the hiddenstate to produce the Lie action, a(h)2A. In the simplest case, the head is then updated based onthis action (heredenotes group action): q0:=a(h)q.For instance, consider two possible Lie groups:(1) A shift group R2acting additively on R2. This means thatA=R2so thata(h) = (;)actsupon a head q= (x;y)by,a(h)q= (;) + (x;y) = (x+;y+):(2) A rotation group SO(3)acting on the sphere S2=fv2R3:kvk= 1g. Each rotation can bedescribed by its axis (a unit vector) and angle . An action (;)qis just the appropriate rotationof the pointq, and is given by Rodrigues’ rotation formula,a(h)q= (;)q=qcos+ (q) sin+h;qi(1cos):Heredenotes cross product.4.2 R EADING AND WRITING MEMORIESRecall that memories are stored in , each with a key, ki, memory vector, vi, and strength, si, andthat memories are read using linear smoothing over vectors based on a key weighting function w,:=Piwi(q0;)vi. While there are many possible weighting schemes, we use one based onthe distance of each memory address from the head in key-space assuming a metric donK. Weconsider two different weighting functions (1) inverse-square and (2) softmax. 
There first uses thepolynomial law and the second an annealed softmax of the squared distances:w(1)i(q;) :=sid(q;ki)2Pjsjd(q;kj)2w(2)i(q;;T) :=siexp(d(q;ki)2=T)Pjsjexp(d(q;kj)2=T);where we use the convention that it takes the limit value when q!kiandTis atemperature thatrepresents the certainty of its reading, i.e. higher Tcreates more uniform w.The writing procedure is similar to reading. The LANTM maintains a separate write headq(w)thatmoves analogously to the read head, i.e. with action function a(w)(h)and updated value q0(w). Ateach call to RW, a new memory is automatically appended to withk=q0(w). The corresponding3This metric should satisfy a compatibility relation with the Lie group action. When points x;y2Xare simultaneously moved by the same Lie group action v, their distance should stay the same (One possiblemathematical formalization is that Xshould be a Riemannian manifold and the Lie group should be a subgroupofX’s isometry group.): d(vx;vy ) =d(x;y):This condition ensures that if the machine writes a sequence ofdata along a “straight line” at points x;vx;v2x;:::;vkx, then it can read the same sequence by emitting a readlocationyclose toxand then follow the “ v-trail”y;vy;v2y;:::;vky.4Published as a conference paper at ICLR 2017mem. vec.viread valueaddresskikey manifold Kread keyqweight schemeFigure 1: Retrieval of value from memory via a key. Weightings with unit sum are assigned to differentmemories depending on the distances from the addresses to the read key. Linear smoothing over values is usedto emit the final read value. Both inverse-square and softmax schemes follow this method, but differ in theircomputations of the weightings.memoryvand strength sare created by MLP’s v(h)2Rmands(h)2[0;1]takinghas input. Afterwriting, the new memory set is,0:= [f(q0(w);v(h);s(h))g:No explicit erase mechanism is provided, but to erase a memory (k;v;s ), the controller may intheory write (k;v;s).4.3 C OMBINING WITH RANDOM ACCESSFinally we combine this relative addressing procedure with direct random-access to give the modelthe ability for absolute address access. We do this by outputting an absolute address each stepand simply interpolating with our current head. Write t(h)2[0;1]for the interpolation gate and~q(h)2K for our proposed random-access layer. For key space manifolds KlikeRn,4there’s awell defined straight-line interpolation between two points, so we can setq0:=a(tq+ (1t)~q)where we have omitted the implied dependence on h. For other manifolds like the spheres Snthat have well-behaved projection functions :Rn!Sn, we can just project the straight-lineinterpolation to the sphere:q0:=a(tq+ (1t)~q):In the case of a sphere Sn,is justL2-normalization.55 E XPERIMENTSWe experiment with Lie-access memory on a variety of algorithmic learning tasks. We are partic-ularly interested in: (a) how Lie-access memory can be trained, (b) whether it can be effectivelyutilized for algorithmic learning, and (c) what internal structures the model learns compared to sys-tems based directly on soft discrete memory. In particular Lie access is not equipped with an explicitstack or tape, so it would need to learn continuous patterns that capture these properties.Setup. Our experiments utilize an LSTM controller in a version of the encoder-decoder setup(Sutskever et al., 2014), i.e. an encoding input pass followed by a decoding output pass. The encoderreads and writes memories at each step; the decoder only reads memories. 
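Pulling Section 4 together, the sketch below implements the two example actions (the additive shift on R^2 and the Rodrigues rotation on S^2), both weighting schemes, and the interpolated head update q' = a(t q + (1 - t) q~) with L2 projection; this is a simplified sketch of the equations above, not the released implementation.

```python
import numpy as np

def shift_action(q, delta):
    """Shift group R^2 acting additively on key space R^2."""
    return q + delta

def rotation_action(q, axis, theta):
    """SO(3) acting on the sphere S^2 via Rodrigues' formula."""
    axis = axis / np.linalg.norm(axis)
    return (q * np.cos(theta)
            + np.cross(axis, q) * np.sin(theta)
            + axis * np.dot(axis, q) * (1.0 - np.cos(theta)))

def inv_square_weights(q, keys, strengths, eps=1e-12):
    """w_i proportional to s_i / d(q, k_i)^2 (Euclidean d); eps stands
    in for the q -> k_i limit convention in the text."""
    d2 = np.sum((keys - q) ** 2, axis=1) + eps
    w = strengths / d2
    return w / w.sum()

def softmax_weights(q, keys, strengths, T=1.0):
    """w_i proportional to s_i * exp(-d(q, k_i)^2 / T); higher T gives
    a more uniform (less certain) read."""
    w = strengths * np.exp(-np.sum((keys - q) ** 2, axis=1) / T)
    return w / w.sum()

def head_update_sphere(q, q_tilde, t, axis, theta):
    """q' = a(t q + (1 - t) q_tilde), projected to S^2 by
    L2 normalization before the rotation is applied."""
    mix = t * q + (1.0 - t) * q_tilde
    return rotation_action(mix / np.linalg.norm(mix), axis, theta)

# Toy read on the sphere with unit-strength memories.
rng = np.random.default_rng(0)
keys = rng.normal(size=(5, 3))
keys /= np.linalg.norm(keys, axis=1, keepdims=True)
values, strengths = rng.normal(size=(5, 4)), np.ones(5)
q = head_update_sphere(keys[0], keys[1], t=0.7,
                       axis=np.array([0.0, 0.0, 1.0]), theta=0.3)
rho = inv_square_weights(q, keys, strengths) @ values
```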
The encoder is given hsi,4Or in general, manifolds with convex embeddings in Rn.5Technically, in the sphere case, dom=Rdf0g. But in practice one almost never gets 0 from astraight-line interpolation, so computationally this makes little difference.5Published as a conference paper at ICLR 2017followed by an the input sequence, and then h=sito terminate input. The decoder is not re-fed itsoutput or the correct symbol, i.e. we do not use teacher forcing, so x(t)is a fixed placeholder inputsymbol. The decoder must correctly emit an end-of-output symbol h=eito terminate.Models and Baselines. We implement three main baseline models including: (a) a standard LSTMencoder-decoder, without explicit external memory, (b) a random access memory network, RAMusing the key-value formulation as described in the background, roughly analogous to an attention-based encoder-decoder, and (c) an interpolation of a RAM/Tape -based memory network as describedin the background, i.e. a highly simplified version of a true NTM (Graves et al., 2014) with asharpening parameter. Our models include four versions of Lie-access memory. The main model,LANTM , has an LSTM controller, with a shift group A=R2acting additively on key space K=R2. We also consider a model SLANTM with spherical memory, utilizing a rotation group A=SO(3)acting on keys in the sphere K=S2. For both of the models, the distance function dis theEuclidean (L2) distance, and we experiment with smoothing using inverse-square (default) and withan annealed softmax .6Model Setup. For all tasks, the LSTM baseline has 1 to 4 layers, each with 256 cells. Each ofthe other models has a single-layer, 50-cell LSTM controller, with memory width (i.e. the size ofeach memory vector) 20. Other parameters such as learning rate, decay, and intialization are foundthrough grid search. Further hyperparameter details are give in the appendix.Tasks. Our experiments are on a series of algorithmic tasks shown in Table 1a. The C OPY, RE-VERSE , and B IGRAM FLIPtasks are based on Grefenstette et al. (2015); the D OUBLE and I NTER -LEAVED ADDtasks are designed in a similar vein. Additionally we also include three harder tasks:ODDFIRST , REPEAT COPY, and P RIORITY SORT. In O DDFIRST , the model must output the odd-indexed elements first, followed by the even-indexed elements. In R EPEAT COPY, each model mustrepeat a sequence of length 20, Ntimes. In P RIORITY SORT, each item of the input sequence isgiven a priority, and the model must output them in priority order.We train each model in two regimes, one with a small number of samples (16K) and one with a largenumber of samples (320K). In the former case, the samples are iterated through 20 times, while inthe latter, the samples are iterated through only once. Thus in both regimes, the total training timesare the same. Training is done by minimizing negative log likelihood with RMSProp.Prediction is performed via argmax/greedy prediction at each step. To evaluate the performance ofthe models, we compute the fraction of tokens correctly predicted and the fraction of all answerscompletely correctly predicted, respectively called fine and coarse scores. We assess the models on3.2K randomly generated out-of-sample 2x length examples, i.e. with sequence lengths 2k(or repeatnumber 2Nin the case of R EPEAT COPY) to test the generalization of the system. More precisely,for all tasks other than repeat copy, during training, the length kis varied in the interval [lk;uk](asshown in table 1ba). 
During test time, the length kis varied in the range [uk+ 1;2uk]. For repeatcopy, the repetition number Nis varied similarly, instead of k.Results. Main results comparing the different memory systems and read computations on a seriesof tasks are shown in Table 1b. Consistent with previous work the fixed-memory LSTM systemfails consistently when required to generalize to the 2x samples, unable to solve any 2x problemcorrectly, and only able to predict at most 50% of the symbols for all tasks except interleavedaddition, regardless of training regime. The RAM (attention-based) and the RAM/tape hybrid aremuch stronger baselines, answering more than 50% of the characters correctly for all but the 6-O DDFIRST task. Perhaps surprisingly, RAM and RAM/tape learned the 7-R EPEAT COPY task withalmost perfect generalization scores when trained in the large sample regime. In general, it does notseem that the simple tape memory confers much advantage to the RAM model, as the generalizationperformances of both models are similar for the most part, which motivates more advanced NTMenhancements beyond sharpening.The last four columns illustrate the performance of the LANTM models. We found the inverse-square LANTM and SLANTM models to be the most effective, achieving >90% generalization6Note that the read weight calculation of a SLANTM with softmax is essentially the same as the RAMmodel: For head q,exp(d(q;ki)2=T) = exp(kqkik2=T) = exp((22hq;kii)=T), wherethe last equality comes from kqk=kkik= 1 (key-space is on the sphere). Therefore the weightswi=siexp(d(q;ki)2=T)Pjsjexp(d(q;kj)2=T)=siexp(2hq;kii=T)Pjsjexp(2hq;kji=T), which is the RAM weighting scheme.6Published as a conference paper at ICLR 2017Task Input Output Size kjVj1 - C OPY a1a2a3ak a1a2a3ak [2;64] 1282 - R EVERSE a1a2a3ak akak1ak2a1 [2;64] 1283 - B IGRAM FLIP a1a2a3a4a2k1a2ka2a1a4a3a2ka2k1 [1;16] 1284 - D OUBLE a1a2ak 2jaka1j [2;40] 105 - I NTERLEAVED ADDa1a2a3a4a2k1a2kja2ka2k2a2j+ja2k1a1j [2;16] 106 - O DDFIRST a1a2a3a4a2k1a2ka1a3a2k1a2a4a2k [1;16] 1287 - R EPEAT COPY Na1a20 a1a20a1a20(Ntimes)N2[1;5] 1288 - P RIORITY SORT 5a52a29a9 a1a2a3ak [2;10] 128(a) Task descriptions and parameters. jaka1jmeans the decimal number repesented by decimal digitsaka1. Arithmetic tasks have all numbers formatted with the least significant digits on the left and with zeropadding. The D OUBLE task takes an integer x2[0;10k)padded tokdigits and outputs 2xink+ 1 digits,zero padded to k+ 1digits. The I NTERLEAVED ADDtask takes two integers x;y2[0;10k)padded tokdigitsand interleaved, forming a length 2kinput sequence and outputs x+yzero padded to k+ 1digits. The lasttwo tasks use numbers in unary format: Nis the shorthand for a length Nsequence of a special symbol @,encodingNin unary, e.g. 3 = @@@ .Base Memory LieLSTM RAM RAM/Tape LANTM LANTM-s SLANTM SLANTM-sS L S L S L S L S L S L S L1 16/0 21/0 61/0 61/1 70/2 70/1 ? ? ? ? ? ? ? ?2 26/0 32/0 58/2 54/2 24/1 43/2 ? ? 97/44 98/88 99/96 ? ? ?3 30/0 39/0 56/5 54/9 64/8 69/9 ? ? ? 99/94 99/99 97/67 93/60 90/434 44/0 47/0 72/8 74/15 70/12 71/6 ? ? ? ? ? ? ? ?5 60/0 61/0 74/13 76/17 77/23 67/19 99/93 99/93 90/38 94/57 99/91 99/97 98/78 ?6 29/0 42/0 31/5 46/4 43/8 62/8 99/91 99/95 90/29 50/0 49/7 56/8 74/15 76/167 24/0 37/0 98/56 99/98 71/18 99/93 67/0 70/0 17/0 48/0 99/91 99/78 96/41 99/518 46/0 53/0 60/5 80/22 78/15 66/9 87/35 98/72 99/95 99/99 ? 99/99 98/79 ?(b) Main results. Numbers represent the accuracy percentages on the fine/coarse evaluations on the out-of-sample 2tasks. The S and L columns resp. 
indicate small and large sample training regimes. Symbol ?indicates exact 100% accuracy (Fine scores above 99.5 are not rounded up). Baselines are described in thebody. LANTM and SLANTM use inverse-square while LANTM-s and SLANTM-s use softmax weightingscheme. The best scores, if not 100% (denoted by stars), are bolded for each of the small and large sampleregimes.accuracy on most tasks, and together they solve all of the tasks here with >90% coarse score. Inparticular, LANTM is able to solve the 6-O DDFIRST problem when no other model can correctlysolve 20% of the 2x instances; SLANTM on the other hand is the only Lie access model able tosolve the 7-R EPEAT COPY problem.The best Lie access model trained with the small sample regime beats or is competitive with any ofthe baseline trained under the large sample regime. In all tasks other than 7-R EPEAT COPY, the gapin the coarse score between the best Lie access model in small sample regime and the best baselinein any sample regime is 70%. However, in most cases, training under the large sample regimedoes not improve much. For a few tasks, small sample regime actually produces a model with bettergeneralization than large sample regime. We observed in these instances, the generalization errorcurve under a large sample regime reaches an optimum at around 2/3 to 3/4 of training time, andthen increases almost monotonically from there. Thus, the model likely has found an algorithm thatworks only for the training sizes; in particular, this phenomenon does not seem to be due to lack oftraining time.6 D ISCUSSIONQualitative Analysis. We did further visual analysis of the different Lie-access techniques to seehow the models were learning the underlying tasks, and to verify that they were using the relativeaddressing scheme. Figure 2 shows two diagrams of the LANTM model of the tasks of priority sortand repeat copy. Figure 3 shows two diagrams of the SLANTM model for the same two tasks. Fig-7Published as a conference paper at ICLR 2017@@79@@@@98@5@@@107119dec./uni00A0reads119 /uni00A05 79 107 98 /uni00A0$(a) (b)Figure 2: Analysis of the LANTM model. (a)PCA projection from key space R2to 1D for the memories and read heads qof LANTM for the unary 8-P RIORITY SORT task. In this task, the encoder reads a priority,encoded in unary, and then a value; the decoder must output these values in priority order. In this examplethe sequence is [@;@;79;@;@;@;@;98;@;5;@;@;@;107;@;119], where the special symbol @ is a unaryencoding of the priority. From top to bottom, each row indicates the movement of the encoder write head q(w)as it is fed each input character. Fill indicates the strength siof memory write (black indicates high strength).Position of a dot within its row indicates the PCA projection of the key ki. The last line indicates the movementof decoder read head q. Interestingly, we note that, instead of writing to memory, the controller remembersthe item 119 itself. (b)Raw coordinates in key space R2of writes (red) and reads (blue) from LANTM on7-R EPEAT COPY. Red line indicates the writes, which occur along a straight line during the encoding phase.Blue line indicates the reads, which zip back and forth in the process of copying the input sequence 6 times.Enc./uni00A0Writes Dec./uni00A0Reads287443102883980/uni00A076273(a) (b)Figure 3: Analysis of the SLANTM model. (a)PCA projection from the spherical key space S2to 2D of thememories and read heads qof SLANTM for the task of 7-R EPEAT COPY. Here the model is to repeatedlyoutput the sequence 10 times. 
Input is 10 repetitions of special symbol @ followed by [28, 74, 43, 102, 88, 39,... ]. Left: the positions of write head q(w)during the encoding phase. Fill indicates strength si(black meanshigh strength); number indicates the character stored. SLANTM traverses in a circle clockwise starting at point28, and stores data at regular intervals. Right : the positions of read head qduring the decoding phase. Startingfrom the blue dot, the reads move clockwise around the sphere, and end at the red dot. For the sake of clarity,read positions are indicated by bends in the blue line, instead of by dots. Intriguingly, the model implementsa cyclic list data structure, taking advantage of the spherical structure of the memory. (b)Raw coordinates inkey spaceS2of writes (red) and reads (blue) from SLANTM on a non-unary encoded variant of the prioritysort task. Red line indicates the movements of the write-head q(w)to place points along a sub-manifold of K(an arc ofS2) during the encoding phase. Notably, this movement is not sequential, but random-access, so asto store elements in correct priority order. Blue line indicates the simple traversal of this arc during decoding.8Published as a conference paper at ICLR 2017Figure 4: Memory access pattern of LANTM on 6-O DDFIRST . Left: In the middle of training. LANTMlearns to store data in a zigzag such that odd-indexed items fall on one side and even-indexed items fall on theother. However reading is only half correct. Right: After training. During reading, the model simply reads theodd-indexed items in a straight line, followed by the even-indexed items in a parallel line.ure 4 shows the memory access pattern of LANTM on 6-O DDFIRST task. Additionally, animationstracing the evolution of the memory access pattern of models over training time can be found athttp://nlp.seas.harvard.edu/lantm . They demonstrate that the models indeed learn relativeaddressing and internally are constructing geometric data structures to solve these algorithmic tasks.Unbounded storage One possible criticism of the LANTM framework could be that the amountof information stored increases linearly with time, which limits the usefulness of this framework forlong timescale tasks. This is indeed the case with our implementations, but need not be the case ingeneral. There can be many ways of limiting physical memory usage. For example, a simple way isto discard the least recently used memory, as in the work of Graves et al. (2016) and Gulcehre et al.(2016). Another way is to approximate with fixed number of bits the read function that takes a headposition and returns the read value. For example, noting that this function is a rational function onthe head position, keys, and memory vectors, we can approximate the numerators and denominatorswith a fixed degree polynomial.Content address Our Lie-access framework is not mutually exclusive from content addressingmethods. For example, in each of our implementations, we could have the controllers output both aposition in the key space and a content addresser of the same size as memory vectors, and interpo-lated the read values from Lie-access and the read values from content addressing.7 C ONCLUSIONThis paper introduces Lie-access memory as an alternative neural memory access paradigm, andexplored several different implementations of this approach. LANTMs follow similar axioms asdiscrete Turing machines while providing differentiability. Experiments show that simple modelscan learn algorithmic tasks. 
Internally these models naturally learn equivalence of standard datastructures like stack and cyclic lists. In future work we hope to experiment with more groups and toscale these methods to more difficult reasoning tasks. For instance, we hope to build a general pur-pose encoder-decoder model for tasks like question answering and machine translation that makesuse of differentiable relative-addressing schemes to replace RAM-style attention.9Published as a conference paper at ICLR 2017REFERENCESAlex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing Machines. arXiv:1410.5401 [cs] ,October 2014. URL http://arxiv.org/abs/1410.5401 . arXiv: 1410.5401.Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwi ́nska, Sergio G ́omez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou,et al. Hybrid computing using a neural network with dynamic external memory. Nature , 538(7626):471–476, 2016.Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning toTransduce with Unbounded Memory. arXiv:1506.02516 [cs] , June 2015. URL http://arxiv.org/abs/1506.02516 . arXiv: 1506.02516.Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, and Yoshua Bengio. Dynamic Neural TuringMachine with Soft and Hard Addressing Schemes. arXiv:1607.00036 [cs] , June 2016. URLhttp://arxiv.org/abs/1607.00036 . arXiv: 1607.00036.Sepp Hochreiter and Jrgen Schmidhuber. Long Short-Term Memory. Neural Comput. , 9(8):1735–1780, November 1997. ISSN 0899-7667. doi: 10.1162/neco.1997.9.8.1735. URL http://dx.doi.org/10.1162/neco.1997.9.8.1735 .Armand Joulin and Tomas Mikolov. Inferring Algorithmic Patterns with Stack-Augmented Recur-rent Nets. arXiv:1503.01007 [cs] , March 2015. URL http://arxiv.org/abs/1503.01007 .arXiv: 1503.01007.ukasz Kaiser and Ilya Sutskever. Neural GPUs Learn Algorithms. arXiv:1511.08228 [cs] , Novem-ber 2015. URL http://arxiv.org/abs/1511.08228 . arXiv: 1511.08228.Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. Grid Long Short-Term Memory.arXiv:1507.01526 [cs] , July 2015. URL http://arxiv.org/abs/1507.01526 . arXiv:1507.01526.Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, andRichard Socher. Ask Me Anything: Dynamic Memory Networks for Natural Language Process-ing. arXiv:1506.07285 [cs] , June 2015. URL http://arxiv.org/abs/1506.07285 . arXiv:1506.07285.Karol Kurach, Marcin Andrychowicz, and Ilya Sutskever. Neural Random-Access Machines.arXiv:1511.06392 [cs] , November 2015. URL http://arxiv.org/abs/1511.06392 . arXiv:1511.06392.John Lee. Introduction to Smooth Manifolds . Number 218 in Graduate Texts in Mathematics.Springer, 2 edition, 2012. ISBN 978-1-4419-9981-8.A. Marthinsen. Interpolation in Lie Groups. SIAM Journal on Numerical Analysis , 37(1):269–285,January 1999. ISSN 0036-1429. doi: 10.1137/S0036142998338861. URL http://epubs.siam.org/doi/abs/10.1137/S0036142998338861 .Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and JasonWeston. Key-value memory networks for directly reading documents. CoRR , abs/1606.03126,2016. URL http://arxiv.org/abs/1606.03126 .Tatiana Shingel. Interpolation in special orthogonal groups. IMA Journal of Numerical Analysis ,29(3):731–745, July 2009. ISSN 0272-4979, 1464-3642. doi: 10.1093/imanum/drn033. URLhttp://imajna.oxfordjournals.org/content/29/3/731 .Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-To-End Memory Net-works. arXiv:1503.08895 [cs] , March 2015. 
URL http://arxiv.org/abs/1503.08895.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to Sequence Learning with Neural Networks. arXiv:1409.3215 [cs], September 2014. URL http://arxiv.org/abs/1409.3215.

Paul J. Werbos. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10):1550–1560, 1990. URL http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=58337.

Jason Weston, Sumit Chopra, and Antoine Bordes. Memory Networks. arXiv:1410.3916 [cs, stat], October 2014. URL http://arxiv.org/abs/1410.3916.

Wojciech Zaremba and Ilya Sutskever. Reinforcement Learning Neural Turing Machines - Revised. arXiv:1505.00521 [cs], May 2015. URL http://arxiv.org/abs/1505.00521.

Appendices

A EXPERIMENTAL DETAILS

We obtain our results by performing a grid search over the hyperparameters specified in Table A.1 and also over seeds 1 to 3, and take the best scores. We bound the norm of the LANTM head shifts by 1, whereas we try both bounding and not bounding the angle of rotation in our grid for SLANTM. We initialize the Lie-access models to favor Lie access over random access through the interpolation mechanism discussed in section 4.3.

The RAM model read mechanism is as discussed in section 2, and writing is done by appending new $(k, v, s)$ tuples to the memory. The only addition to this model in RAM/tape is that left and right keys are now computed using a shifted convolution with the read weights:

$k_L := \sum_i w_{i+1} k_i \qquad k_R := \sum_i w_{i-1} k_i$

and these keys $k_L$ and $k_R$ are available (along with the random-access key output by the controller) to the controller on the next turn to select from via interpolation. We also considered weight sharpening in the RAM/tape model according to Graves et al. (2014): the controller can output a sharpening coefficient $\gamma \ge 1$ each turn, so that the final weights are $\tilde w_i = w_i^\gamma / \sum_j w_j^\gamma$. We included this as a feature to grid search over.

Table A.1: Parameter grid for grid search.

             rnn size       embed   decay delay   init     learning rate   key dim   custom
LANTM(-s)    50x1           14      {300, 600}    {1, *}   {1, 2, 4}e-2    2         -
SLANTM(-s)   50x1           14      {300, 600}    {1, *}   {1, 2, 4}e-2    3         angle bound
RAM(/tape)   50x1           14      {300, 600}    {1, *}   {1, 2, 4}e-2    {2, 20}   sharpen
LSTM         256x{1 to 4}   128     {500, 700}    *        2e-{1 to 4}     -         -

LANTM(-s) means LANTM with invnorm or SoftMax; similarly for SLANTM(-s). RAM(/tape) means the RAM and hybrid RAM/tape models. Initialization: both initialization options set the forget gate of the LSTMs to 1. The number 1 in the init column means initialization of all other parameters uniformly from $[-1, 1]$. The symbol * in the init column means initialization of all linear layers was done using the Torch default, which initializes weights uniformly from $(-\epsilon, \epsilon)$, where $\epsilon$ is (input size)$^{-1/2}$. For models with memory, this means that the LSTM input-to-hidden layer is initialized approximately from $[-0.07, 0.07]$ (other than the forget gate). Angle bound is a setting only available in SLANTM. If angle bound is true, we bound the angle of rotation by a learnable magnitude value. Sharpening is a setting only available in RAM/tape, and it works as explained in the main text.

We found that weight sharpening only confers a small advantage over vanilla on the COPY, BIGRAM FLIP, and DOUBLE tasks, but deteriorates performance on all other tasks.

B ACTION INTERPOLATION

We also experimented with adding an interpolation between the last action $a^{(t-1)}$ and a candidate action $a^{(h)}$ via a gate $r^{(h)} \in [0, 1]$ to produce the final action $a^{(t)}$.
Then the final equation for the new head is

$q' := a^{(t)} \big( t\, q + (1-t)\, \tilde q \big).$

This allows the controller to easily move in "a straight line" by just saturating both $t$ and $r$.

For example, for the translation group we have straight-line interpolation, $a^{(t)} := r\,a + (1-r)\,a^{(t-1)}$. For the rotation group SO(3), each rotation is represented by its axis $\alpha \in S^2$ and angle $\theta \in (-\pi, \pi]$, and we just interpolate each separately: $\alpha^{(t)} := \Pi\big(r\,\alpha + (1-r)\,\alpha^{(t-1)}\big)$ and $\theta^{(t)} := r\,\theta + (1-r)\,\theta^{(t-1)}$, where $\Pi$ is $L_2$-normalization.7

We perform the same experiments, with the same grid as specified in the last section, and with the initial action interpolation gates biased toward the previous action. The results are given in Table B.2. Figure B.1 shows action interpolation's impact on performance. Most notably, interpolation seems to improve the performance of most models on the 5-INTERLEAVED ADD task and of the spherical memory models on the 6-ODDFIRST task, but causes failure to learn in many situations, most significantly the failure of LANTM to learn 6-ODDFIRST.

Table B.2: Comparison between scores of models with action interpolation and without action interpolation. Numbers represent the accuracy percentages on the fine/coarse evaluations on the out-of-sample 2x tasks. The S and L columns respectively indicate small and large sample training regimes. The symbol * indicates exact 100% accuracy (fine scores above 99.5 are not rounded up). Each entry is of the format A:B/C:D, where A and C are respectively the fine and coarse scores of the model without action interpolation (same as in Table 1b), and B and D are those for the model with action interpolation.

    LANTM                    LANTM-s                  SLANTM                   SLANTM-s
    S           L            S            L           S            L           S            L
1   *:*/*:*     *:*/*:*      *:*/*:*      *:*/*:*     *:*/*:*      *:*/*:*     *:99/*:83    *:99/*:99
2   *:*/*:*     *:*/*:*      97:85/44:60  98:91/88:55 99:99/96:98  *:*/*:*     *:*/*:*      *:*/*:*
3   *:*/*:*     *:99/*:77    *:99/*:93    99:92/94:17 99:*/99:*    97:99/67:73 93:99/60:62  90:92/43:57
4   *:*/*:*     *:*/*:*      *:*/*:*      *:*/*:*     *:*/*:*      *:*/*:*     *:*/*:*      *:*/*:*
5   99:*/93:*   99:*/93:*    90:99/38:80  94:99/57:84 99:96/91:61  99:99/97:99 98:99/78:99  *:*/*:*
6   99:50/91:0  99:54/95:0   90:56/29:0   50:57/0:0   49:73/7:33   56:76/8:27  74:92/15:45  76:81/16:31
7   67:52/0:0   70:22/0:0    17:82/0:0    48:98/0:8   99:*/91:*    99:97/78:21 96:90/41:22  99:99/51:99
8   87:97/35:76 98:93/72:38  99:81/95:24  99:50/99:0  *:99/*:99    99:99/99:95 98:95/79:60  *:98/*:80

Figure B.1: The additive difference between the fine (left) and coarse (right) scores of models without action interpolation vs. models with action interpolation, for LANTM, LANTM-s, SLANTM and SLANTM-s on tasks 1-8. A positive value means the model without interpolation performs better. For each model, the left column displays the difference in the small sample regime, while the right column displays the difference in the large sample regime.

7There is, in fact, a canonical way to interpolate the most common Lie groups, including all of the groups mentioned above, based on the exponential map and the Baker-Campbell-Hausdorff formula (Lee, 2012), but the details are outside the scope of this paper and the computational cost, while acceptable in control theory settings, is too hefty for us. Interested readers are referred to Shingel (2009) and Marthinsen (1999).
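To make the gating above concrete, here is a minimal numpy sketch of the action interpolation (our illustration, not the paper's code; the function names are hypothetical), for the translation group and for SO(3) in axis-angle form:

```python
import numpy as np

def interpolate_translation(a_prev, a_cand, r):
    # Straight-line interpolation: a_t = r * a_cand + (1 - r) * a_prev.
    return r * a_cand + (1.0 - r) * a_prev

def interpolate_rotation(axis_prev, theta_prev, axis_cand, theta_cand, r):
    # Axis and angle are interpolated separately; the axis is projected
    # back onto the sphere S^2 by L2-normalization (the Pi operator above).
    axis = r * axis_cand + (1.0 - r) * axis_prev
    axis = axis / np.linalg.norm(axis)
    theta = r * theta_cand + (1.0 - r) * theta_prev
    return axis, theta

# Example: a gate r = 1 keeps the candidate action, r = 0 repeats the
# previous action, so saturating the gate yields straight-line movement.
axis, theta = interpolate_rotation(
    np.array([0.0, 0.0, 1.0]), np.pi / 4,
    np.array([0.0, 1.0, 0.0]), np.pi / 2, r=0.5)
```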
Bygq-H9eg
Under review as a conference paper at ICLR 2017

AN ANALYSIS OF DEEP NEURAL NETWORK MODELS FOR PRACTICAL APPLICATIONS

Alfredo Canziani & Eugenio Culurciello
Weldon School of Biomedical Engineering
Purdue University
{canziani,euge}@purdue.edu

Adam Paszke
Faculty of Mathematics, Informatics and Mechanics
University of Warsaw
a.paszke@students.mimuw.edu.pl

ABSTRACT

Since the emergence of Deep Neural Networks (DNNs) as a prominent technique in the field of computer vision, the ImageNet classification challenge has played a major role in advancing the state-of-the-art. While accuracy figures have steadily increased, the resource utilisation of winning models has not been properly taken into account. In this work, we present a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption. Key findings are: (1) power consumption is independent of batch size and architecture; (2) accuracy and inference time are in a hyperbolic relationship; (3) energy constraints set an upper bound on the maximum achievable accuracy and model complexity; (4) the number of operations is a reliable estimate of the inference time. We believe our analysis provides a compelling set of information that helps design and engineer efficient DNNs.

1 INTRODUCTION

Since the breakthrough in the 2012 ImageNet competition (Russakovsky et al., 2015) achieved by AlexNet (Krizhevsky et al., 2012) — the first entry that used a Deep Neural Network (DNN) — several other DNNs with increasing complexity have been submitted to the challenge in order to achieve better performance.

In the ImageNet classification challenge, the ultimate goal is to obtain the highest accuracy in a multi-class classification problem framework, regardless of the actual inference time. We believe that this has given rise to several problems. Firstly, it is now normal practice to run several trained instances of a given model over multiple similar instances of each validation image. This practice, also known as model averaging or ensemble of DNNs, dramatically increases the amount of computation required at inference time to achieve the published accuracy. Secondly, model selection is hindered by the fact that different submissions are evaluating their (ensemble of) models a different number of times on the validation images, and therefore the reported accuracy is biased on the specific sampling technique (and ensemble size). Thirdly, there is currently no incentive in speeding up inference time, which is a key element in practical applications of these models, and affects resource utilisation, power consumption, and latency.

This article aims to compare state-of-the-art DNN architectures, submitted for the ImageNet challenge over the last 4 years, in terms of computational requirements and accuracy. We compare these architectures on multiple metrics related to resource utilisation in actual deployments: accuracy, memory footprint, parameters, operations count, inference time and power consumption. The purpose of this paper is to stress the importance of these figures, which are essential hard constraints for the optimisation of these networks in practical deployments and applications.

2 METHODS

In order to compare the quality of different models, we collected and analysed the accuracy values reported in the literature. We immediately found that different sampling techniques do not allow for a direct comparison of resource utilisation.
For example, central-crop (top-5 validation) errors of a single run of VGG-16¹ (Simonyan & Zisserman, 2014) and GoogLeNet (Szegedy et al., 2014) are 8.70% and 10.07% respectively, revealing that VGG-16 performs better than GoogLeNet. When models are run with 10-crop sampling,² the errors become 9.33% and 9.15% respectively, and therefore VGG-16 will perform worse than GoogLeNet, using a single central-crop. For this reason, we decided to base our analysis on re-evaluations of top-1 accuracies³ for all networks with a single central-crop sampling technique (Zagoruyko, 2016).

Figure 1: Top-1 vs. network. Single-crop top-1 validation accuracies for top scoring single-model architectures. We introduce with this chart our choice of colour scheme, which will be used throughout this publication to distinguish effectively different architectures and their correspondent authors. Notice that networks of the same group share the same hue, for example ResNet are all variations of pink.

Figure 2: Top-1 vs. operations, size proportional to parameters. Top-1 one-crop accuracy versus amount of operations required for a single forward pass. The size of the blobs is proportional to the number of network parameters; a legend is reported in the bottom right corner, spanning from 5 x 10^6 to 155 x 10^6 params. Both these figures share the same y-axis, and the grey dots highlight the centre of the blobs.

For inference time and memory usage measurements we have used Torch7 (Collobert et al., 2011) with cuDNN-v5 (Chetlur et al., 2014) and CUDA-v8 back-end. All experiments were conducted on a JetPack-2.3 NVIDIA Jetson TX1 board (nVIDIA): an embedded visual computing system with a 64-bit ARM A57 CPU, a 1 T-Flop/s 256-core NVIDIA Maxwell GPU and 4 GB LPDDR4 of shared RAM. We use this resource-limited device to better underline the differences between network architectures, but similar results can be obtained on most recent GPUs, such as the NVIDIA K40 or Titan X, to name a few. Operation counts were obtained using an open-source tool that we developed (Paszke, 2016). For measuring the power consumption, a Keysight 1146B Hall effect current probe has been used with a Keysight MSO-X 2024A 200 MHz digital oscilloscope with a sampling period of 2 s and a 50 kSa/s sample rate. The system was powered by a Keysight E3645A GPIB-controlled DC power supply.

3 RESULTS

In this section we report our results and comparisons. We analysed the following DNNs: AlexNet (Krizhevsky et al., 2012), batch normalised AlexNet (Zagoruyko, 2016), batch normalised Network In Network (NIN) (Lin et al., 2013), ENet (Paszke et al., 2016) for ImageNet (Culurciello, 2016), GoogLeNet (Szegedy et al., 2014), VGG-16 and -19 (Simonyan & Zisserman, 2014), ResNet-18, -34, -50, -101 and -152 (He et al., 2015), Inception-v3 (Szegedy et al., 2015) and Inception-v4 (Szegedy et al., 2016), since they obtained the highest performance, in these four years, on the ImageNet (Russakovsky et al., 2015) challenge.

¹In the original paper this network is called VGG-D, which is the best performing network.
Here we prefer to highlight the number of layers utilised, so we will call it VGG-16 in this publication.
²From a given image multiple patches are extracted: four corners plus the central crop and their horizontally mirrored twins.
³Accuracy and error rate always sum to 100, therefore in this paper they are used interchangeably.

Figure 3: Inference time vs. batch size. This chart shows inference time across different batch sizes with a logarithmic ordinate and logarithmic abscissa. Missing data points are due to a lack of enough system memory required to process larger batches. A speed-up of 3x is achieved by AlexNet due to better optimisation of its fully connected layers for larger batches.

Figure 4: Power vs. batch size. Net power consumption (due only to the forward processing of several DNNs) for different batch sizes. The idle power of the TX1 board, with no HDMI screen connected, was 1.30 W on average. The max frequency component of the power supply current was 1.4 kHz, corresponding to a Nyquist sampling frequency of 2.8 kHz.

3.1 ACCURACY

Figure 1 shows one-crop accuracies of the most relevant entries submitted to the ImageNet challenge, from the AlexNet (Krizhevsky et al., 2012), on the far left, to the best performing Inception-v4 (Szegedy et al., 2016). The newest ResNet and Inception architectures surpass all other architectures by a significant margin of at least 7%.

Figure 2 provides a different, but more informative view of the accuracy values, because it also visualises computational cost and the number of the networks' parameters. The first thing that is very apparent is that VGG, even though it is widely used in many applications, is by far the most expensive architecture — both in terms of computational requirements and number of parameters. Its 16- and 19-layer implementations are in fact isolated from all other networks. The other architectures form a steep straight line, that seems to start to flatten with the latest incarnations of Inception and ResNet. This might suggest that models are reaching an inflection point on this data set. At this inflection point, the costs — in terms of complexity — start to outweigh gains in accuracy. We will later show that this trend is hyperbolic.

3.2 INFERENCE TIME

Figure 3 reports inference time per image on each architecture, as a function of image batch size (from 1 to 64). We notice that VGG processes one image in a fifth of a second, making it a less likely contender in real-time applications on an NVIDIA TX1. AlexNet shows a speed-up of roughly 3x going from a batch of 1 to 64 images, due to weak optimisation of its fully connected layers. It is a very surprising finding, that will be further discussed in the next subsection.

3.3 POWER

Power measurements are complicated by the high frequency swings in current consumption, which required a high sampling current read-out to avoid aliasing. In this work, we used a 200 MHz digital oscilloscope with a current probe, as reported in section 2.
Other measuring instruments, such as an AC power strip with a 2 Hz sampling rate, or a GPIB-controlled DC power supply with a 12 Hz sampling rate, did not provide enough bandwidth to properly conduct power measurements.

In figure 4 we see that the power consumption is mostly independent of the batch size. Low power values for AlexNet (batch of 1) and VGG (batch of 2) are associated with slower forward times per image, as shown in figure 3.

Figure 5: Memory vs. batch size. Maximum system memory utilisation for batches of different sizes. Memory usage shows a knee graph, due to the static allocation of the network model's memory and the variable memory that grows with batch size.

Figure 6: Memory vs. parameters count. Detailed view on static parameters allocation and corresponding memory utilisation. Minimum memory of 200 MB, linear afterwards with slope 1.30.

Figure 7: Operations vs. inference time, size proportional to parameters. Relationship between operations and inference time, for batches of size 1 and 16 (the biggest size for which all architectures can still run). Not surprisingly, we notice a linear trend, and therefore operations count represents a good estimate of inference time. Furthermore, we can notice an increase in the slope of the trend for larger batches, which corresponds to shorter inference time due to batch processing optimisation.

3.4 MEMORY

We analysed system memory consumption of the TX1 device, which uses shared memory for both CPU and GPU. Figure 5 shows that the maximum system memory usage is initially constant and then rises with the batch size. This is due to the initial memory allocation of the network model — which is the large static component — and the contribution of the memory required while processing the batch, proportionally increasing with the number of images. In figure 6 we can also notice that the initial allocation never drops below 200 MB, for networks sized below 100 MB, and it is linear afterwards, with respect to the parameters, with a slope of 1.30.

3.5 OPERATIONS

Operations count is essential for establishing a rough estimate of inference time and hardware circuit size, in the case of custom implementation of neural network accelerators. In figure 7, for a batch of 16 images, there is a linear relationship between operations count and inference time per image. Therefore, at design time, we can pose a constraint on the number of operations to keep processing speed in a usable range for real-time applications or resource-limited deployments.

Figure 8: Operations vs. power consumption, size proportional to parameters. The independence of power and operations is shown by a lack of directionality of the distributions shown in these scatter charts.
Full resource utilisation and lower inference time for the AlexNet architecture are reached with larger batches.

Figure 9: Accuracy vs. inferences per second, size proportional to operations. A non-trivial linear upper bound is shown in these scatter plots, illustrating the relationship between prediction accuracy and throughput of all examined architectures. These are the first charts in which the area of the blobs is proportional to the amount of operations, instead of the parameters count. We can notice that larger blobs are concentrated on the left side of the charts, in correspondence of low throughput, i.e. longer inference times. Most of the architectures lay on the linear interface between the grey and white areas. If a network falls in the shaded area, it means it achieves exceptional accuracy or inference speed. The white area indicates a suboptimal region. E.g. both AlexNet architectures improve processing speed as larger batches are adopted, gaining 80 Hz.

3.6 OPERATIONS AND POWER

In this section we analyse the relationship between power consumption and the number of operations required by a given model. Figure 8 reports that there is no specific power footprint for different architectures. When full resource utilisation is reached, generally with larger batch sizes, all networks consume roughly an additional 11.8 W, with a standard deviation of 0.7 W. Idle power is 1.30 W. This corresponds to the maximum system power at full utilisation. Therefore, if energy consumption is one of our concerns, for example for battery-powered devices, one can simply choose the slowest architecture which satisfies the application's minimum requirements.

3.7 ACCURACY AND THROUGHPUT

We note that there is a non-trivial linear upper bound between accuracy and number of inferences per unit time. Figure 9 illustrates that for a given frame rate, the maximum accuracy that can be achieved is linearly proportional to the frame rate itself. All networks analysed here come from several publications, and have been independently trained by other research groups. A linear fit of the accuracy shows that all architectures trade accuracy against speed. Moreover, having chosen a specific inference time, one can now come up with the theoretical accuracy upper bound when resources are fully utilised, as seen in section 3.6.

Figure 10: Accuracy per parameter vs. network. Information density (accuracy per parameter) is an efficiency metric that highlights the capacity of a specific architecture to better utilise its parametric space. Models like VGG and AlexNet are clearly oversized, and do not take full advantage of their potential learning ability. On the far right, ResNet-18, BN-NIN, GoogLeNet and ENet (marked by grey arrows) do a better job at "squeezing" all their neurons to learn the given task, and are the winners of this section.
Since the power consumption is constant, we can even go one step further, and obtain an upper bound on accuracy even for an energy constraint, which could possibly be an essential design factor for a network that needs to run on an embedded system.

As the spoiler in section 3.1 already gave away, the linear nature of the accuracy vs. throughput relationship translates into a hyperbolic one when the forward inference time is considered instead. Then, given that the operations count is linear with the inference time, we get that the accuracy has a hyperbolic dependency on the amount of computations that a network requires.

3.8 PARAMETERS UTILISATION

DNNs are known to be highly inefficient in utilising their full learning power (number of parameters / degrees of freedom). Prominent work (Han et al., 2015) exploits this flaw to reduce network file size up to 50x, using weight pruning, quantisation and variable-length symbol encoding. It is worth noticing that, using more efficient architectures to begin with may produce even more compact representations. In figure 10 we clearly see that, although VGG has a better accuracy than AlexNet (as shown by figure 1), its information density is worse. This means that the amount of degrees of freedom introduced in the VGG architecture brings a lesser improvement in terms of accuracy. Moreover, ENet (Paszke et al., 2016) — which we have specifically designed to be highly efficient and which has been adapted and retrained on ImageNet (Culurciello, 2016) for this work — achieves the highest score, showing that 24x fewer parameters are sufficient to provide state-of-the-art results.

4 CONCLUSIONS

In this paper we analysed multiple state-of-the-art deep neural networks submitted to the ImageNet challenge, in terms of accuracy, memory footprint, parameters, operations count, inference time and power consumption. Our goal is to provide insights into the design choices that can lead to efficient neural networks for practical applications, and the optimisation of the often-limited resources in actual deployments, which led us to the creation of ENet — or Efficient-Network — for ImageNet. We show that accuracy and inference time are in a hyperbolic relationship: a little increment in accuracy costs a lot of computational time. We show that the number of operations in a network model can effectively estimate inference time. We show that an energy constraint will set a specific upper bound on the maximum achievable accuracy and model complexity, in terms of operations counts. Finally, we show that ENet is the best architecture in terms of parameter space utilisation, squeezing up to 13x more information per parameter used with respect to the reference model AlexNet, and 24x with respect to VGG-19.

ACKNOWLEDGMENTS

This paper would not have looked so pretty without the Python Software Foundation, the matplotlib library and the communities of stackoverflow and TeX of StackExchange, which I ought to thank. This work is partly supported by the Office of Naval Research (ONR) grants N00014-12-1-0167, N00014-15-1-2791 and MURI N00014-10-1-0278. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the TX1, Titan X, K40 GPUs used for this research.

REFERENCES

Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN: Efficient Primitives for Deep Learning. arXiv:1410.0759, 2014.

Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet.
Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, number EPFL-CONF-192376, 2011.

Eugenio Culurciello. Training enet. https://culurciello.github.io/tech/2016/06/20/training-enet.html, 2016.

Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012.

Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.

nVIDIA. Jetson tx1 module. http://www.nvidia.com/object/jetson-tx1-module.html.

Adam Paszke. torch-opcounter. https://github.com/apaszke/torch-opCounter, 2016.

Adam Paszke, Abhishek Chaurasia, Sangpil Kim, and Eugenio Culurciello. Enet: A deep neural network architecture for real-time semantic segmentation. arXiv preprint arXiv:1606.02147, 2016.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.

Christian Szegedy, Sergey Ioffe, and Vincent Vanhoucke. Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016.

Sergey Zagoruyko. imagenet-validation.torch. https://github.com/szagoruyko/imagenet-validation.torch, 2016.
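To make the information-density metric of section 3.8 concrete, here is a minimal Python sketch (our illustration, not the authors' tool; the accuracy and parameter figures below are rough placeholders, not the paper's measured values) computing top-1 accuracy per million parameters, the quantity plotted in figure 10:

```python
def information_density(top1_accuracy_pct, n_params):
    # Top-1 accuracy per million parameters [%/M-params] (figure 10).
    return top1_accuracy_pct / (n_params / 1e6)

# Rough placeholder values, for illustration only:
models = {
    "AlexNet":   (57.0,  61e6),
    "VGG-16":    (71.0, 138e6),
    "GoogLeNet": (69.0,   7e6),
}
for name, (acc, params) in models.items():
    print(f"{name:10s} {information_density(acc, params):6.2f} %/M-params")
```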
rkFd2P5gl
Under review as a conference paper at ICLR 2017

LEVERAGING ASYNCHRONICITY IN GRADIENT DESCENT FOR SCALABLE DEEP LEARNING

Jeff Daily, Abhinav Vishnu, Charles Siegel
Pacific Northwest National Laboratory
902 Battelle Blvd
Richland, WA 99352
{jeff.daily,abhinav.vishnu,charles.siegel}@pnnl.gov

ABSTRACT

In this paper, we present multiple approaches for improving the performance of gradient descent when utilizing multiple compute resources. The proposed approaches span a solution space ranging from equivalence to running on a single compute device to delaying gradient updates a fixed number of times. We present a new approach, asynchronous layer-wise gradient descent, that maximizes overlap of layer-wise backpropagation (computation) with gradient synchronization (communication). This approach provides maximal theoretical equivalence to the de facto gradient descent algorithm, requires limited asynchronicity across multiple iterations of gradient descent, and theoretically improves overall speedup, while minimizing the additional space requirements for asynchronicity. We implement all of our proposed approaches using Caffe — a high performance Deep Learning library — and evaluate them on both an Intel Sandy Bridge cluster connected with InfiniBand as well as an NVIDIA DGX-1 connected with NVLink. The evaluations are performed on a set of well known workloads including AlexNet and GoogLeNet on the ImageNet dataset. Our evaluation of these neural network topologies indicates asynchronous gradient descent has a speedup of up to 1.7x compared to synchronous.

1 INTRODUCTION

Deep Learning (DL) algorithms are a class of Machine Learning and Data Mining (MLDM) algorithms, which use an inter-connection of neurons and synapses to emulate the computational structure of a mammalian brain. DL algorithms have demonstrated resounding success in many computer vision tasks and science domains such as high energy physics, computational chemistry and high performance computing use-cases. Several DL implementations such as TensorFlow, Caffe, Theano, and Torch have become available. These implementations are primarily geared towards compute nodes that may contain multi-core architectures (such as Intel Xeon/KNC/KNL) and/or many-core architectures (GPUs).

DL algorithms are undergoing a tremendous revolution of their own. Widely used DL algorithms such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are computationally expensive. Their computational requirements are further worsened by: 1) very deep neural networks such as the recently proposed 1000-layer complex Residual Networks (ResNet), and 2) the increasing volume of data produced by simulations, experiments and handheld devices. An important solution to these problems is the design and implementation of DL algorithms that are capable of execution on distributed memory large scale cluster/cloud computing systems. A few distributed DL implementations such as CaffeonSpark, Distributed TensorFlow, CNTK, Machine Learning Toolkit on Extreme Scale (MaTEx), and FireCaffe have become available. Implementations such as CNTK, FireCaffe and MaTEx use MPI (Gropp et al., 1996; Geist et al., 1996) — which makes them a natural fit for high-end systems.

DL algorithms primarily use gradient descent — an iterative technique in which the weights of synapses are updated using the difference between the ground truth (actual value) and the predicted value (using the current state of the neural network).
The larger the difference, the steeper the descent to a minimum (a low value of the minimum generates the solution). An important type of gradient descent is batch gradient descent — where a random subset of samples is used for iterative feed-forward (calculation of the predicted value) and back-propagation (update of synaptic weights). A small batch is prone to severe perturbations of the descent, while a large batch results in slow convergence. Hence, a data scientist tends to use a fairly average batch — which finds the balance between these two conflicting metrics.

A large scale parallelization of gradient descent must maximize the equivalence to the default algorithm, such that the convergence property is maintained. Consider a scenario where a batch ($b$) in the original algorithm is split across multiple compute nodes ($n$) — an example of data parallelism. To provide equivalence to the default algorithm, the batch must be split equally to $b/n$, although the communication — which would require an all-to-all reduction — would increase as $\Theta(\log n)$. Naturally, as $n$ is increased and $b$ is held constant (strong scaling), this becomes prohibitive, whereas keeping the batch size per node $b/n$ constant (weak scaling) increases the convergence time.

Several researchers have proposed methods to alleviate the communication requirements of distributed gradient descent. Parameter-server based approaches use a server to hold the latest version of the model while clients send computed gradients and request the latest model. This approach has been proposed and extended by several researchers. While theoretically this provides O(1) time-complexity since all batch updates can be computed simultaneously, this approach fails to scale beyond a few compute nodes when considering the time to convergence relative to having run the computation on a single device. Others have proven divergence from the original algorithm. Remote Direct Memory Access (RDMA) based approaches have been proposed, but they also diverge from the original algorithm. Several other implementations are primarily geared towards shared memory systems, and address the thread contention issue for gradient descent.

Our objective is to design a non-parameter-server based technique, which maximizes the equivalence to the default algorithm, while leveraging high performance architectures — including computational units such as GPUs and high performance interconnects such as InfiniBand and Intel Omni-Path — by using MPI.

1.1 CONTRIBUTIONS

Specifically, we make the following contributions in this paper:

- We design a baseline asynchronous gradient descent, which delays the gradient updates of the entire model by one or more iterations adaptively on the basis of available overlap and user-defined input.
- We propose a layer-wise gradient descent method, which overlaps weight updates of a layer with inter-node synchronization of other layers. The proposed method is exactly equivalent to the default sequential algorithm.
- We implement our approaches and other baseline techniques using the Machine Learning Toolkit for Extreme Scale (MaTEx), which consists of a distributed memory implementation of Caffe using MPI (Gropp et al., 1996; Geist et al., 1996).
- We evaluate our approaches and other baseline implementations on a large scale CPU-based InfiniBand cluster as well as on NVIDIA's DGX-1 multi-GPU system.
We use several well studied datasets and DNN topologies, such as ImageNet (1.3M images, 250GB dataset) with the AlexNet and GoogLeNet DNNs.

Our evaluation indicates the efficacy of the proposed approach. Specifically, the best asynchronous approach is up to 1.7x faster than the synchronous approach while achieving up to 82% parallel efficiency.

The rest of the paper is organized as follows: in section 2, we present work related to our proposed research. We present the background in section 3, followed by an in-depth solution space in section 4. In section 6, we present a detailed performance evaluation of asynchronous gradient descent, and conclusions with future directions in section 7.

2 RELATED WORK

Batch gradient descent is the most widely used algorithm for training Deep Learning models. This algorithm has been implemented several times for sequential, multi-core and many-core systems such as GPUs. The most widely used implementations are Caffe (Jia et al., 2014) (CPUs/GPUs), Warp-CTC (GPUs), Theano (Bastien et al., 2012; Bergstra et al., 2010) (CPUs/GPUs), Torch (Collobert et al., 2002) (CPUs/GPUs), CNTK (Agarwal et al., 2014) (GPUs and distributed memory using MPI) and Google TensorFlow (Abadi et al., 2015), which use the NVIDIA CUDA Deep Neural Network library (cuDNN).

Caffe is one of the leading software tools for training and deploying deep learning algorithms, and it can be used to develop novel extensions to these algorithms such as the ones described below. Caffe supports execution on a single node (connected with several GPUs) and a version has been implemented that takes full advantage of Intel systems. While the research described below was performed using Caffe, the extensions can be applied to TensorFlow as well.

Caffe (and other deep learning software) is also equipped with several optimizations designed to avoid significant problems in training deep networks. The vanishing gradient problem (Bianchini & Scarselli, 2014) causes deep networks to fail to learn much at all in the early layers, and was solved in (Hinton & Osindero, 2006) and (Bengio et al., 2007), where it was shown that a network could be trained one layer at a time with autoencoders (Hinton & Salakhutdinov, 2006), and then put together to form a single network (Vincent et al., 2010). Another optimization that helps to solve this problem is switching from sigmoidal neurons to rectified linear neurons.

The problem of accelerating gradient descent, especially distributed across compute resources, is of interest to many researchers. Approaches generally fall into two categories, depending on whether or not they are equivalent to having run using a single compute device; utilizing a single compute device necessarily computes gradient updates and applies them immediately to the model. Further, the gradient updates can be classified as either synchronous or asynchronous depending on whether the communication of the gradients can be overlapped with any computation of the gradients. For example, the DistBelief parameter server approach (Dean et al., 2012) computes gradient updates asynchronously based on an out-of-date copy of the model and applies them to the latest model. Though this is not equivalent to having run on a single device, it is able to process samples much faster.

Chen et al. (2016) revisit asynchronous gradient descent and propose a few synchronous variants in order to improve time to convergence.
Notably, they show that waiting for all workers to complete, aggregating the gradients, and applying the gradients to the same common model (so that each worker has a copy of the latest model) provides a good time to convergence while also leveraging multiple compute devices. Their approach is where this paper begins, while additionally proposing approaches ranging from synchronous to parameter server variants.

3 FUNDAMENTALS

3.1 NEURAL NETWORKS

Machine Learning algorithms designed to emulate the computational structure of the brain to model data are called "Neural Networks." The basic unit of a neural network is the neuron, and neurons are connected to one another via synapses.

3.1.1 BACKPROPAGATION

Neural networks are trained through an algorithm called backpropagation. This is a means of computing gradients layer by layer to implement the gradient descent algorithm's update rule of

$w' = w + \eta \nabla_w C \quad (1)$
$b' = b + \eta \nabla_b C \quad (2)$

where $w$ are the weights, $b$ the biases, $\eta$ the learning rate, and $C$ is a cost function to be optimized, usually square error or cross-entropy. This rule is often replaced by a slightly more complex rule, such as Adaptive Gradient Descent (AdaGrad) (Duchi et al., 2011) or Momentum (Qian, 1999).

To compute the gradients, we set $W^{(\ell)}$, $b^{(\ell)}$ to be the weights and biases for each layer, $z^{(\ell+1)} = W^{(\ell)} a^{(\ell)} + b^{(\ell)}$ and $a^{(\ell)} = \sigma(z^{(\ell)})$, where $\sigma$ is the activation function. Let $n_\ell$ represent the number of layers. Then, we use Algorithm 1.

Algorithm 1 Back Propagation
1: input: Data $X \in \mathbb{R}^{n \times p}$ and labels $Y \in \mathbb{R}^{n \times \ell}$
2: for $i$ from 1 to $n$ do
3:   Compute all $z^{(\ell)}$ and $a^{(\ell)}$.
4:   $\delta^{(n_\ell)} = (y - a^{(n_\ell)}) \odot \sigma'(z^{(n_\ell)})$
5:   for $\ell$ from $n_\ell - 1$ to 2 do
6:     $\delta^{(\ell)} = W^{(\ell)\top} \delta^{(\ell+1)} \odot \sigma'(z^{(\ell)})$
7:   end for
8:   $\nabla_{W^{(\ell)}} C = \delta^{(\ell+1)} a^{(\ell)\top}$
9:   $\nabla_{b^{(\ell)}} C = \delta^{(\ell+1)}$
10: end for

Although there are several nonlinear activation functions in common use, the networks examined in this paper only include rectified linear units (ReLU), where ReLU(x) = max(0, x).

3.2 CAFFE

Caffe (Jia et al., 2014) is one of the leading software packages for building and training neural networks. It provides abstractions for a wide range of topologies and for training them with many different types of optimizers. Caffe provides abstractions for operations on multi-dimensional arrays (tensors) which are essential for implementing Deep Learning algorithms. From an input tensor, an output tensor, and tensors for each hidden layer, Caffe constructs a computational graph that manages these tensors and their updates as a single object. Caffe is particularly useful for researchers, because it is heavily optimized and can be modified through an open source C++ backend.

As Caffe's runtime is implemented in C++, it can extract native performance from the computation environment it is run on. Furthermore, Caffe abstracts GPU computations, leveraging the NVIDIA CUDA Deep Neural Network library (cuDNN) for the task. We have modified this code for distributed memory computation on large scale systems using MPI to natively use network hardware for optimal performance. The base, synchronous implementation is similar to FireCaffe (Iandola et al., 2015), another distributed memory implementation of Caffe. Further modifications are described in Section 4.

There are three phases of computation within Caffe that pass over the enumerated layers of the network. First, the forward pass computes the output result given the samples from the input batch, starting at the first layer. Next, starting at the last (output) layer, based on the difference between the output result and the ground truth, the backward pass uses the backpropagation technique to compute the gradients for each layer.
Lastly, one final pass is made over the network to apply the gradients to the weights and biases before starting the process over again with the next batch.

4 SOLUTION SPACE

The goal of improving gradient descent is to accelerate the time to solution without sacrificing the accuracy of the model. The base case to consider is then computing and applying gradients one batch at a time on a single compute device. One way to accelerate the computation while also maintaining equivalence to the sequential case is to use data parallelism. Data parallelism is where the traditional batch is further subdivided into equally-sized mini-batches; each mini-batch is computed on a separate device, then the gradients resulting from each mini-batch are averaged together. Since each gradient update is itself an average, taking the average of the mini-gradients results in an update that is effectively the same as having computed the original batch size. This is called the effective batch size. Data parallelism is the approach we explore in this paper, attempting many ways of hiding the latency of the gradient communication that occurs between compute devices. We use MPI to communicate the gradients.

Caffe provides callback methods in its C++ interface that interject user-defined functionality into key phases of the computation (see 3.2). Specifically, one user-defined function is executed immediately before the forward pass when the batch computation begins. The other user-defined function executes after the backward pass finishes, but before the application of the gradients to the weights and biases. Additional callback functions were added to support finer-grained control over the three phases of computation. One of the additional callbacks executes after each gradient is computed during the backward phase, once per set of learnable parameters, such as the weights or biases of a given layer. Another callback function that was added is called once per learnable parameter during the apply phase, just before the gradients are applied. Lastly, a callback function was added that turns the gradient application into a task queue, requesting additional tasks in an unspecified order until all gradients have been applied.

A critical implementation detail for any of our proposed approaches is to make sure the individual network models maintained by each compute device start from the same random initial conditions for the weights and biases. Before the first batch is computed, the weights and biases from the master process are copied (broadcast) to the other processes. That way any gradients that are computed, when averaged together, are based on the same initial conditions.

4.1 SYNCHRONOUS GRADIENT DESCENT

Similar to what Chen et al. (2016) propose and what is implemented in FireCaffe (Iandola et al., 2015), synchronous gradient descent averages the gradients from each mini-batch together before applying them, forming one complete batch at a time. The way this is implemented in Caffe is to use the callback function that executes when all gradients are ready to be applied. During this callback, MPI_Allreduce is used to sum the gradients, placing the same resulting sum on each compute device. This function is blocking, meaning it returns control back to Caffe only after the sum is computed across all devices. Since the result is a sum and not the intended average, it is then scaled down based on the number of compute devices in use.
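A minimal sketch of this blocking sum-then-scale step, assuming mpi4py and a flat numpy gradient buffer (our illustration, not the paper's Caffe/C++ code):

```python
from mpi4py import MPI
import numpy as np

def average_gradients(grad, comm=MPI.COMM_WORLD):
    # Blocking sum of the gradient buffer across all ranks; MPI.IN_PLACE
    # avoids a costly extra copy of a potentially large buffer.
    comm.Allreduce(MPI.IN_PLACE, grad, op=MPI.SUM)
    # The reduction produces a sum, so scale it down to an average.
    grad /= comm.Get_size()

# Usage: grad is the flat float32 buffer filled by the backward pass.
# grad = np.zeros(num_params, dtype=np.float32)
# average_gradients(grad)   # then apply the update rule of Eqs. (1)-(2)
```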
It is important to note that the reduction operation can be performed in-place, meaning it can use the memory location directly holding the gradient without performing any costly memory copies, especially for networks with a large number of parameters such as AlexNet. This approach also has the important quality that the gradients are averaged after they have been used by each layer of the backpropagation, preserving the importance of any activations within the network against the mini-batch instead of against the effective batch.

4.2 LAYER-WISE GRADIENT DESCENT

Chen et al. (2016) propose pipelining the gradient computation and application. For example, the gradients of upper layers can be concurrently applied while computing the gradients of lower layers. This approach must be done carefully to maintain equivalence with the sequential base case. We make the observation that gradients can be averaged as soon as they are computed during the backward phase, instead of waiting for all gradients to be computed. However, adjacent layers will use and/or update the gradients of layers that have otherwise finished computing their gradients. This implies the averaging of the gradients must be performed on a copy of the gradients rather than in-place. Further, the averaging of the copied gradients must finish before they can be applied.

We utilize a background thread of computation in order to perform the gradient averaging concurrently with the remaining gradient computation. This provides maximal overlap of the communication latency with useful computation. There are a few options for when to apply the averaged gradients. Waiting for all communication to finish before applying all gradients is straightforward and similar to the synchronous approach described previously, though perhaps at least some of the communication latency would be overlapped. Another approach is to wait, one layer at a time, for the gradients of a particular layer to finish averaging and then apply them. It is intuitive to perform the waiting in the same order in which backpropagation was performed, from the last layer to the first layer. Lastly, since all gradient updates are independent, we can perform them in an arbitrary order. This takes advantage of the observation that not all layers have the same number of parameters, and further, the gradients for the weights and the gradients for the biases can be averaged separately; the size of the weight gradients is typically larger than that of the bias gradients, implying that the bias gradients will complete their communication more quickly. Since the communication of the various parameters can finish somewhat arbitrarily, based on when the communication was initiated and the size of the communication, we can apply the gradients as soon as they complete their averaging. We evaluate these strategies in section 6.

4.3 ASYNCHRONOUS GRADIENT DESCENT

As stated in (Chen et al., 2016), parameter server implementations suffer from poor convergence since gradient updates are calculated based on out-of-date networks. Continuing with our data parallel approach, there is a lower limit to the size of the mini-batches and therefore to the number of compute devices that can be utilized. As the amount of work per compute device decreases in proportion to the decreasing size of the mini-batches, there is less computation available to mask the latency of the gradient averaging across the devices.
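The layer-wise averaging of Section 4.2 can be sketched with non-blocking reductions (a simplified illustration assuming mpi4py with an MPI-3 backend; the paper's implementation uses a background communication thread within Caffe instead):

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD

def start_layer_average(grad_layer):
    # Begin a non-blocking sum of one layer's gradient as soon as the
    # backward phase produces it. The sum runs on a copy, since adjacent
    # layers may still read or update the original buffer (Section 4.2).
    send = grad_layer.copy()
    recv = np.empty_like(send)
    request = comm.Iallreduce(send, recv, op=MPI.SUM)
    return request, recv

def finish_layer_average(request, recv, grad_layer):
    # Wait for this layer's reduction, then scale the sum down to an
    # average and write it back so the apply phase can use it.
    request.Wait()
    grad_layer[:] = recv / comm.Get_size()
```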
Initiating the averaging layer-wise as described above may not be enough to mitigate this problem.

We propose delaying the application of the gradients by a fixed number of iterations, much smaller than the number of compute devices as would have been done in a parameter server approach. The gradients are delayed by using a concurrent communication thread and applying the gradient one, two, or three iterations later, thus giving the averaging enough time to complete as needed. If the gradient needs to be delayed by one iteration, this requires one communication thread and one additional buffer to hold the gradient; delaying by two iterations requires two communication threads and two additional buffers, and so on. This approach is somewhere between a parameter server (Dean et al., 2012) and the various approaches that maintain equivalency with a sequential computation.

5 IMPLEMENTATION DETAILS

The implementations evaluated in this paper focus on data parallelism and the averaging of gradients across compute devices. This is achieved using MPI and parallel I/O.

5.1 HANDLING I/O

The data parallelism is achieved by distributing datasets across compute devices, partitioning them based on the number of devices utilized; each device receives a disjoint subset of the dataset and no samples are shuffled or exchanged between compute devices outside of the gradient averaging. Caffe frequently uses a database in LMDB format for its datasets; however, this format cannot be used on remote (network) filesystems or even between processes on the same host. Caffe mitigates this issue when using more than one GPU on the same host by using a single I/O reading thread and a round-robin deal of the samples to device-specific queues. Our implementations mitigate this issue by first converting an LMDB database into a netCDF file (Rew & Davis, 1990). netCDF files can be read and partitioned using parallel MPI-IO via the parallel netCDF library (Li et al., 2003).

5.2 DISTRIBUTED MEMORY IMPLEMENTATION USING MPI

For single-node GPU computation, using one or more GPU devices in a single host, Caffe provides a means of allocating one contiguous buffer to hold the data for the weights and biases and a second buffer to hold their gradients. We extended this approach for CPU hosts. A single contiguous buffer allows the non-layer-wise, i.e., network-wise, gradient averages to be performed using a single MPI reduction operation. The layer-wise implementations require one MPI reduction operation per network parameter. There is a fixed cost to start a communication primitive regardless of how much data is communicated. It is sometimes beneficial to aggregate otherwise many small communication requests into a larger one.

Although Caffe provides a way of utilizing all GPUs within the host, it does not currently leverage NVIDIA's NCCL package (NVIDIA Corporation, 2015) for optimized, high-bandwidth collective communication routines. We used the NCCL equivalent of the MPI all-reduce to sum gradients across GPU devices on the DGX-1 platform.

6 EXPERIMENTAL EVALUATION

In this section, we present an experimental evaluation and analysis of the heuristics described in section 4.

6.1 HARDWARE ARCHITECTURES

We evaluate using a CPU cluster as well as NVIDIA's specialized DGX-1 multi-GPU host system. Each node of the multi-node cluster consists of a multi-core Intel Sandy Bridge CPU connected via InfiniBand. We use Intel MPI 5.1.2 for performance evaluation.
The heuristics are implemented in Caffe (Jia et al., 2014), specifically the intelcaffe branch designed to optimize performance on Intel CPUs.

The DGX-1 system contains 8 Pascal GPUs connected using the high-speed NVLink interconnect. For the DGX-1 evaluations, the latest version of Berkeley's Caffe was modified to use the NCCL communication primitives in addition to our algorithmic changes.

6.2 IMAGENET AND NETWORK ARCHITECTURES

We evaluate on two distinct network architectures trained on the ImageNet dataset. ImageNet refers specifically to the ILSVRC2015 (Russakovsky et al., 2015) dataset. This dataset consists of a training set of just under 1.3 million images of various sizes (as jpg files) divided among 1000 classes, along with a validation set consisting of 50000 images of the same type and classes. Additionally, for the competition, there is a testing set, but it is held separately and not available publicly. It is established as one of the benchmark datasets for machine learning with large datasets, and among the famous architectures that achieved record top-1 and top-5 accuracies on it are AlexNet (Krizhevsky et al., 2012) and GoogLeNet (Szegedy et al., 2015).

We evaluate on AlexNet and GoogLeNet because they are now well-established models with known training regimes and loss curves. They also demonstrate two different regimes for parallelization: AlexNet has approximately 60 million parameters that need to be communicated, whereas GoogLeNet has approximately 4 million. In contrast to the smaller amount of communication for GoogLeNet, it requires roughly twice the amount of time to process each image as AlexNet does when communication is ignored.

6.3 EVALUATION

Figure 1 compares the implemented approaches relative to a communication-less baseline "no comm". The effective batch sizes were 256 and 32 for AlexNet and GoogLeNet, respectively. For example, using 8 compute devices for GoogLeNet uses a mini-batch size of 32/8 = 4. The evaluation on DGX-1 was limited to 8 compute devices, whereas the CPU cluster evaluation eventually hit the strong scaling limit for data parallelism.

These results show that delaying the gradient updates by one or more iterations is the most effective means of hiding the communication latency. The layer-wise approaches did not perform as well as expected. These trends were consistent across both hardware platforms.

The layer-wise approaches, though promising as equivalent to a sequential computation, were not able to complete their gradient averages quickly enough. Compared to the delayed gradient approach, this is perhaps intuitive. The delayed gradient approach is able to hide the communication latency across all three complete phases of the computation, whereas the layer-wise approaches only have as long as it takes to complete the backpropagation phase. This is not enough time to complete the communication, especially as the mini-batch sizes decrease and therefore provide less work to mask the communication.

In addition to looking at the time per batch above, the rates of convergence of these heuristics must be evaluated. All of the heuristics completed training AlexNet to the standard top-1 accuracy of 54% using the default AlexNet settings that come with Caffe.
However, at the beginning of training the heuristics showed different loss curves, indicating a tradeoff between the number of batches per second and the accuracy reached at a given batch, as shown in Table 1.

[Figure 1: Evaluation of SGD and AGD approaches, reporting iterations per second for 1 to 32 compute devices across the variants (no comm; SGD; SGD layer-wise; AGD with 1, 2, or 3 communication threads; SGD task-wise with 1 or 2 communication threads). Panels: (a) AlexNet CPU, (b) AlexNet DGX-1, (c) GoogLeNet CPU, (d) GoogLeNet DGX-1. Effective batch sizes were 256 and 32 for AlexNet and GoogLeNet, respectively.]

Table 1: AlexNet accuracy after every 1000 batches on DGX-1.

batch           1000     2000     3000     4000     5000
serial, 1 GPU   0.0124   0.05164  0.10102  0.13432  0.16454
SGD             0.01116  0.03984  0.07594  0.10622  0.13052
AGD, 1 comm     0.0039   0.01324  0.02632  0.05076  0.07362
AGD, 2 comm     0.00104  0.00356  0.00636  0.01282  0.01688

We also evaluated whether these approaches converge, in addition to improving the number of iterations per second. All approaches evaluated managed to converge within the expected number of iterations. Notably, AlexNet on DGX-1 reached convergence in 11 hours using the delayed gradient approach with two communication threads and the standard AlexNet network from Caffe.

7 CONCLUSIONS

There is a tradeoff between maintaining equivalence to sequential methods and leveraging the vast computational resources available for gradient descent. We find that asynchronous methods can give a 1.7x speedup while not sacrificing accuracy at the end of an otherwise identical training regime. This improvement was achieved without the need for a warm start, contrary to previously published results using parameter servers.

REFERENCES

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.

Amit Agarwal, Eldar Akchurin, Chris Basoglu, Guoguo Chen, Scott Cyphers, Jasha Droppo, Adam Eversole, Brian Guenter, Mark Hillebrand, Ryan Hoens, Xuedong Huang, Zhiheng Huang, Vladimir Ivanov, Alexey Kamenev, Philipp Kranen, Oleksii Kuchaiev, Wolfgang Manousek, Avner May, Bhaskar Mitra, Olivier Nano, Gaizka Navarro, Alexey Orlov, Marko Padmilac, Hari Parthasarathi, Baolin Peng, Alexey Reznichenko, Frank Seide, Michael L. Seltzer, Malcolm Slaney, Andreas Stolcke, Yongqiang Wang, Huaming Wang, Kaisheng Yao, Dong Yu, Yu Zhang, and Geoffrey Zweig. An introduction to computational networks and the computational network toolkit. Technical Report MSR-TR-2014-112, August 2014.
URL http://research.microsoft.com/apps/pubs/default.aspx?id=226641.

Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.

Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. Greedy layer-wise training of deep networks. In B. Schölkopf, J. C. Platt, and T. Hoffman (eds.), Advances in Neural Information Processing Systems 19, pp. 153–160. MIT Press, 2007. URL http://papers.nips.cc/paper/3048-greedy-layer-wise-training-of-deep-networks.pdf.

James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010. Oral presentation.

Monica Bianchini and Franco Scarselli. On the complexity of neural network classifiers: A comparison between shallow and deep architectures. IEEE Transactions on Neural Networks and Learning Systems, 25(8):1553–1565, 2014. doi: 10.1109/TNNLS.2013.2293637.

Jianmin Chen, Rajat Monga, Samy Bengio, and Rafal Józefowicz. Revisiting distributed synchronous SGD. CoRR, abs/1604.00981, 2016. URL http://arxiv.org/abs/1604.00981.

Ronan Collobert, Samy Bengio, and Johnny Marithoz. Torch: A modular machine learning software library, 2002.

Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Marc'aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, Quoc V. Le, and Andrew Y. Ng. Large scale distributed deep networks. In P. Bartlett, F.C.N. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger (eds.), Advances in Neural Information Processing Systems 25, pp. 1232–1240. 2012. URL http://books.nips.cc/papers/files/nips25/NIPS2012_0598.pdf.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res., 12:2121–2159, July 2011. ISSN 1532-4435. URL http://dl.acm.org/citation.cfm?id=1953048.2021068.

Al Geist, William Gropp, Steve Huss-Lederman, Andrew Lumsdaine, Ewing L. Lusk, William Saphir, Tony Skjellum, and Marc Snir. MPI-2: Extending the message-passing interface. In Euro-Par, Vol. I, pp. 128–135, 1996.

W. Gropp, E. Lusk, N. Doss, and A. Skjellum. A high-performance, portable implementation of the MPI Message Passing Interface standard. Parallel Computing, 22(6):789–828, 1996.

G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, July 2006. doi: 10.1126/science.1127647. URL http://www.ncbi.nlm.nih.gov/sites/entrez?db=pubmed&uid=16873662&cmd=showdetailview&indexed=google.

Geoffrey E. Hinton and Simon Osindero. A fast learning algorithm for deep belief nets. Neural Computation, 18:2006, 2006.

Forrest N. Iandola, Khalid Ashraf, Matthew W. Moskewicz, and Kurt Keutzer. FireCaffe: near-linear acceleration of deep neural network training on compute clusters. arXiv preprint arXiv:1511.00175, 2015.

Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks.
In F. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger (eds.), Advances in Neural Information Processing Systems 25, pp. 1097–1105. Curran Associates, Inc., 2012. URL http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf.

Jianwei Li, Wei-keng Liao, Alok Choudhary, Robert Ross, Rajeev Thakur, William Gropp, Robert Latham, Andrew Siegel, Brad Gallagher, and Michael Zingale. Parallel netCDF: A high-performance scientific I/O interface. In Supercomputing, 2003 ACM/IEEE Conference, pp. 39–39. IEEE, 2003.

NVIDIA Corporation. NCCL: Optimized primitives for collective multi-GPU communication. https://github.com/NVIDIA/nccl, 2015.

Ning Qian. On the momentum term in gradient descent learning algorithms. Neural Networks, 12(1):145–151, 1999. ISSN 0893-6080. doi: http://dx.doi.org/10.1016/S0893-6080(98)00116-6. URL http://www.sciencedirect.com/science/article/pii/S0893608098001166.

Russ Rew and Glenn Davis. NetCDF: an interface for scientific data access. IEEE Computer Graphics and Applications, 10(4):76–82, 1990.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR 2015, 2015. URL http://arxiv.org/abs/1409.4842.

Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res., 11:3371–3408, December 2010. ISSN 1532-4435. URL http://dl.acm.org/citation.cfm?id=1756006.1953039.
rJLS7qKel
Published as a conference paper at ICLR 2017

LEARNING TO ACT BY PREDICTING THE FUTURE

Alexey Dosovitskiy, Intel Labs
Vladlen Koltun, Intel Labs

ABSTRACT

We present an approach to sensorimotor control in immersive environments. Our approach utilizes a high-dimensional sensory stream and a lower-dimensional measurement stream. The cotemporal structure of these streams provides a rich supervisory signal, which enables training a sensorimotor control model by interacting with the environment. The model is trained using supervised learning techniques, but without extraneous supervision. It learns to act based on raw sensory input from a complex three-dimensional environment. The presented formulation enables learning without a fixed goal at training time, and pursuing dynamically changing goals at test time. We conduct extensive experiments in three-dimensional simulations based on the classical first-person game Doom. The results demonstrate that the presented approach outperforms sophisticated prior formulations, particularly on challenging tasks. The results also show that trained models successfully generalize across environments and goals. A model trained using the presented approach won the Full Deathmatch track of the Visual Doom AI Competition, which was held in previously unseen environments.

1 INTRODUCTION

Machine learning problems are commonly divided into three classes: supervised, unsupervised, and reinforcement learning. In this view, supervised learning is concerned with learning input-output mappings, unsupervised learning aims to find hidden structure in data, and reinforcement learning deals with goal-directed behavior (Murphy, 2012). Reinforcement learning is compelling because it considers the natural setting of an organism acting in its environment. It is generally taken to comprise a class of problems (learning to act), the mathematical formalization of these problems (maximizing the expected discounted return), and a family of algorithmic approaches (optimizing an objective derived from the Bellman equation) (Kaelbling et al., 1996; Sutton & Barto, 2017).

While reinforcement learning (RL) has achieved significant progress (Mnih et al., 2015), key challenges remain. One is sensorimotor control from raw sensory input in complex and dynamic three-dimensional environments, learned directly from experience. Another is the acquisition of general skills that can be flexibly deployed to accomplish a multitude of dynamically specified goals (Lake et al., 2016).

In this work, we propose an approach to sensorimotor control that aims to assist progress towards overcoming these challenges. Our approach departs from the reward-based formalization commonly used in RL. Instead of a monolithic state and a scalar reward, we consider a stream of sensory input {s_t} and a stream of measurements {m_t}. The sensory stream is typically high-dimensional and may include the raw visual, auditory, and tactile input. The measurement stream has lower dimensionality and constitutes a set of data that pertain to the agent's current state. In a physical system, measurements can include attitude, supply levels, and structural integrity. In a three-dimensional computer game, they can include health, ammunition levels, and the number of adversaries overcome.

Our guiding observation is that the interlocked temporal structure of the sensory and measurement streams provides a rich supervisory signal. Given present sensory input, measurements, and goal, the agent can be trained to predict the effect of different actions on future measurements.
Assuming that the goal can be expressed in terms of future measurements, predicting these provides all the information necessary to support action. This reduces sensorimotor control to supervised learning, while supporting learning from raw experience and without extraneous data. Supervision is provided by experience itself: by acting and observing the effects of different actions in the context of changing sensory inputs and goals.

This approach has two significant benefits. First, in contrast to an occasional scalar reward assumed in traditional RL, the measurement stream provides rich and temporally dense supervision that can stabilize and accelerate training. While a sparse scalar reward may be the only feedback available in a board game (Tesauro, 1994; Silver et al., 2016), a multidimensional stream of sensations is a more appropriate model for an organism that is learning to function in an immersive environment (Adolph & Berger, 2006).

The second advantage of the presented formulation is that it supports training without a fixed goal and pursuing dynamically specified goals at test time. Assuming that the goal can be expressed in terms of future measurements, the model can be trained to take the goal into account in its prediction of the future. At test time, the agent can predict future measurements given its current sensory input, measurements, and goal, and then simply select the action that best suits its present goal.

We evaluate the presented approach in immersive three-dimensional simulations that require visually navigating a complex three-dimensional environment, recognizing objects, and interacting with dynamic adversaries. We use the classical first-person game Doom, which introduced immersive three-dimensional games to popular culture (Kushner, 2003). The presented approach is given only raw visual input and the statistics shown to the player in the game, such as health and ammunition levels. No human gameplay is used; the model trains on raw experience.

Experimental results demonstrate that the presented approach outperforms state-of-the-art deep RL models, particularly on complex tasks. Experiments further demonstrate that models learned by the presented approach generalize across environments and goals, and that the use of vectorial measurements instead of a scalar reward is beneficial. A model trained with the presented approach won the Full Deathmatch track of the Visual Doom AI Competition, which took place in previously unseen environments. The presented approach outperformed the second best submission, which employed a substantially more complex model and additional supervision during training, by more than 50%.

2 BACKGROUND

The supervised learning (SL) perspective on learning to act by interacting with the environment dates back decades. Jordan & Rumelhart (1992) analyze this approach, review early work, and argue that the choice of SL versus RL should be guided by the characteristics of the environment. Their analysis suggests that RL may be more efficient when the environment provides only a sparse scalar reward signal, whereas SL can be advantageous when temporally dense multidimensional feedback is available.

Sutton (1988) analyzed temporal-difference (TD) learning and argued that it is preferable to SL for prediction problems in which the correctness of the prediction is revealed many steps after the prediction is made. Sutton's influential analysis assumes a sparse scalar reward.
TD and policy gradient methods have since come to dominate the study of sensorimotor learning (Kober et al., 2013; Mnih et al., 2015; Sutton & Barto, 2017). While the use of SL is natural in imitation learning (LeCun et al., 2005; Ross et al., 2013) or in conjunction with model-based RL (Levine & Koltun, 2013), the formulation of sensorimotor learning from raw experience as supervised learning is rare (Levine et al., 2016). Our work suggests that when the learner is exposed to dense multidimensional sensory feedback, direct future prediction can support effective sensorimotor coordination in complex dynamic environments.

Our approach has similarities to Monte Carlo methods. The convergence of such methods was analyzed early on, and they were seen as theoretically advantageous, particularly when function approximators are used (Bertsekas, 1995; Sutton, 1995; Singh & Sutton, 1996). The choice of TD learning over Monte Carlo methods was argued on practical grounds, based on empirical performance on canonical examples (Sutton, 1995). While the understanding of the convergence of both types of methods has since improved (Szepesvári & Littman, 1999; Tsitsiklis, 2002; Even-Dar & Mansour, 2003), the argument for TD versus Monte Carlo is to this day empirical (Sutton & Barto, 2017). Sharp negative examples exist (Bertsekas, 2010). Our work deals with the more general setting of vectorial feedback and parameterized goals, and shows that a simple Monte-Carlo-type method performs extremely well in a compelling instantiation of this setting.

Vector-valued feedback has been considered in the context of multi-objective decision-making (Gábor et al., 1998; Roijers et al., 2013). Transfer across related tasks has been analyzed by Konidaris et al. (2012). Parameterized goals have been studied in the context of continuous motor skills such as throwing darts at a target (da Silva et al., 2012; Kober et al., 2012; Deisenroth et al., 2014). A general framework for sharing value function approximators across both states and goals has been described by Schaul et al. (2015). Our work is most closely related to the framework of Schaul et al. (2015), but presents a specific formulation in which goals are defined in terms of intrinsic measurements and control is based on direct future prediction. We provide an architecture that handles realistic sensory and measurement streams and achieves state-of-the-art performance in complex and dynamic three-dimensional environments.

Learning to act in simulated environments has been the focus of significant attention following the successful application of deep RL to Atari games by Mnih et al. (2015). A number of recent efforts applied related ideas to three-dimensional environments. Lillicrap et al. (2016) considered continuous and high-dimensional action spaces and learned control policies in the TORCS simulator. Mnih et al. (2016) described asynchronous variants of deep RL methods and demonstrated navigation in a three-dimensional labyrinth. Oh et al. (2016) augmented deep Q-networks with external memory and evaluated their performance on a set of tasks in Minecraft. In a recent technical report, Kulkarni et al. (2016b) proposed end-to-end training of successor representations and demonstrated navigation in a Doom-based environment. In another recent report, Blundell et al.
(2016) considered a nonparametric approach to control and conducted experiments in a three-dimensional labyrinth. Experiments reported in Section 4 demonstrate that our approach significantly outperforms state-of-the-art deep RL methods.

Prediction of future states in dynamical systems was considered by Littman et al. (2001) and Singh et al. (2003). Predictive representations in the form of generalized value functions were advocated by Sutton et al. (2011). More recently, Oh et al. (2015) learned to predict future frames in Atari games. Prediction of full sensory input in realistic three-dimensional environments remains an open challenge, although significant progress is being made (Mathieu et al., 2016; Finn et al., 2016; Kalchbrenner et al., 2016). Our work considers prediction of future values of meaningful measurements from rich sensory input and shows that such prediction supports effective sensorimotor control.

3 MODEL

Consider an agent that interacts with the environment over discrete time steps. At each time step t, the agent receives an observation o_t and executes an action a_t based on this observation. We assume that the observations have the following structure: o_t = ⟨s_t, m_t⟩, where s_t is raw sensory input and m_t is a set of measurements. In our experiments, s_t is an image: the agent's view of its three-dimensional environment. More generally, s_t can include input from multiple sensory modalities. The measurements m_t can indicate the attitude, supply levels, and structural integrity in a physical system, or health, ammunition, and score in a computer game.

The distinction between sensory input s_t and measurements m_t is somewhat artificial: both s_t and m_t constitute sensory input in different forms. In our model, the measurement vector m_t is distinguished from other sensations in two ways. First, the measurement vector is the part of the observation that the agent will aim to predict. At present, predicting full sensory streams is beyond our capabilities (although see the work of Kalchbrenner et al. (2016) and van den Oord et al. (2016) for impressive recent progress). We therefore designate a subset of sensations as measurements that will be predicted. Second, we assume that the agent's goals can be defined in terms of future measurements. Specifically, let τ_1, ..., τ_n be a set of temporal offsets and let

f = ⟨m_{t+τ_1} − m_t, ..., m_{t+τ_n} − m_t⟩

be the corresponding differences of future and present measurements. We assume that any goal that the agent will pursue can be defined as maximization of a function u(f; g). Any parametric function can be used. Our experiments use goals that are expressed as linear combinations of future measurements:

u(f; g) = g^T f,    (1)

where the vector g parameterizes the goal and has the same dimensionality as f. This model generalizes the standard reinforcement learning formulation: the scalar reward signal can be viewed as a measurement, and exponential decay is one possible configuration of the goal vector.

To predict future measurements, we use a parameterized function approximator, denoted by F:

p_t^a = F(o_t, a, g; θ).    (2)

Here a ∈ A is an action, θ are the learned parameters of F, and p_t^a is the resulting prediction. The dimensionality of p_t^a matches the dimensionality of f and g. Note that the prediction is a function of the current observation, the considered action, and the goal.
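As a concrete reading of Eqs. (1) and (2), the following numpy sketch assembles the vector f of future measurement differences and evaluates the linear goal u(f, g). The (T, 3) measurement log and the offset values are illustrative of the setting used later in the experiments; this is a didactic sketch, not the authors' code.

```python
# Building the prediction target f and the linear goal u(f; g).
import numpy as np

OFFSETS = [1, 2, 4, 8, 16, 32]          # temporal offsets tau_1..tau_n

def future_differences(measurements, t):
    """f = <m_{t+tau_1} - m_t, ..., m_{t+tau_n} - m_t>, flattened.

    Assumes t + max(OFFSETS) is still within the episode."""
    m_t = measurements[t]
    return np.concatenate([measurements[t + tau] - m_t for tau in OFFSETS])

def utility(f, g):
    """u(f; g) = g^T f, the linear goal of Eq. (1)."""
    return g @ f
```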
At test time, given learned parameters θ, the agent can choose the action that yields the best predicted outcome:

a_t = argmax_{a ∈ A} g^T F(o_t, a, g; θ).    (3)

The goal vector used at test time need not be identical to any goal seen during training.

3.1 TRAINING

The predictor F is trained on experiences collected by the agent. Starting with a random policy, the agent begins to interact with its environment. This interaction takes place over episodes that last for a fixed number of time steps or until a terminal event occurs.

Consider a set of experiences collected by the agent, yielding a set D of training examples: D = {⟨o_i, a_i, g_i, f_i⟩}_{i=1}^{N}. Here ⟨o_i, a_i, g_i⟩ is the input and f_i is the output of example i. The predictor is trained using a regression loss:

L(θ) = Σ_{i=1}^{N} ‖F(o_i, a_i, g_i; θ) − f_i‖².    (4)

A classification loss can be used for predicting categorical measurements, but this was not necessary in our experiments.

As the agent collects new experiences, the training set D and the predictor used by the agent change. We maintain an experience memory of the M most recent experiences, out of which a mini-batch of N examples is randomly sampled for every iteration of the solver. The parameters of the predictor used by the agent are updated after every k new experiences. This setup departs from pure on-policy training, and we have not observed any adverse effect of using a small experience memory. Additional details are provided in Appendix A.

We have evaluated two training regimes:

1. Single goal: the goal vector is fixed throughout the training process.
2. Randomized goals: the goal vector for each episode is generated at random.

In both regimes, the agent follows an ε-greedy policy: it acts greedily according to the current goal with probability 1 − ε, and selects a random action with probability ε. The value of ε is initially set to 1 and is decreased during training according to a fixed schedule.
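A minimal sketch of goal-directed action selection (Eq. 3) combined with the ε-greedy exploration described above. The predictor callable, assumed here to return the predictions p_t^a for all actions at once as a (num_actions, dim(f)) array, is a hypothetical placeholder.

```python
# Epsilon-greedy action selection with respect to u = g^T p^a.
import numpy as np

def select_action(predictor, obs, g, epsilon, rng=np.random.default_rng()):
    preds = predictor(obs, g)              # shape (num_actions, dim(f))
    if rng.random() < epsilon:             # explore with probability epsilon
        return int(rng.integers(len(preds)))
    return int(np.argmax(preds @ g))       # greedy action of Eq. (3)
```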
3.2 ARCHITECTURE

The predictor F is a deep network parameterized by θ. The network architecture we use is shown in Figure 1. The network has three input modules: a perception module S(s), a measurement module M(m), and a goal module G(g). In our experiments, s is an image and the perception module S is implemented as a convolutional network. The measurement and goal modules are fully-connected networks. The outputs of the three input modules are concatenated, forming the joint input representation used for subsequent processing:

j = J(s, m, g) = ⟨S(s), M(m), G(g)⟩.    (5)

Future measurements are predicted based on this input representation. The network emits predictions of future measurements for all actions at once. This could be done by a fully-connected module that absorbs the input representation and outputs predictions. However, we found that introducing additional structure into the prediction module enhances its ability to learn the fine differences between the outcomes of different actions. To this end, we build on the ideas of Wang et al. (2016) and split the prediction module into two streams: an expectation stream E(j) and an action stream A(j).

[Figure 1: Network structure. The image s, measurements m, and goal g are first processed separately by three input modules. The outputs of these modules are concatenated into a joint representation j. This joint representation is processed by two parallel streams that predict the expected measurements E(j) and the normalized action-conditional differences {Ā_i(j)}, which are then combined to produce the final prediction for each action.]

The expectation stream predicts the average of the future measurements over all potential actions. The action stream concentrates on the fine differences between actions: A(j) = ⟨A_1(j), ..., A_w(j)⟩, where w = |A| is the number of actions. We add a normalization layer at the end of the action stream that ensures that the average of the predictions of the action stream is zero for each future measurement:

Ā_i(j) = A_i(j) − (1/w) Σ_{k=1}^{w} A_k(j)    (6)

for all i. The normalization layer subtracts the average over all actions from each prediction, forcing the expectation stream E to compensate by predicting these average values. The output of the expectation stream has dimensionality dim(f), where f is the vector of future measurements. The output of the action stream has dimensionality w · dim(f).

The output of the network is a prediction of future measurements for each action, composed by summing the output of the expectation stream and the normalized action-conditional output of the action stream:

p = ⟨p^{a_1}, ..., p^{a_w}⟩ = ⟨Ā_1(j) + E(j), ..., Ā_w(j) + E(j)⟩.    (7)

The output of the network has the same dimensionality as the output of the action stream.
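For concreteness, the following PyTorch sketch mirrors the two-stream predictor of Eqs. (5)-(7) with layer sizes roughly matching the basic architecture of Appendix A. It is a paraphrase under stated assumptions, not the authors' implementation; the default dimensions (3 measurements, 6 offsets, 256 actions) follow the Battle scenarios.

```python
# Two-stream future-measurement predictor (sketch).
import torch
import torch.nn as nn

def mlp(sizes):
    layers = []
    for i, o in zip(sizes[:-1], sizes[1:]):
        layers += [nn.Linear(i, o), nn.LeakyReLU(0.2)]
    return nn.Sequential(*layers)

class DFPNet(nn.Module):
    def __init__(self, num_actions=256, meas_dim=3, goal_dim=18, f_dim=18):
        super().__init__()
        self.perception = nn.Sequential(            # S(s): convnet on the image
            nn.Conv2d(1, 32, 8, stride=4), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 64, 3, stride=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.LazyLinear(512), nn.LeakyReLU(0.2))
        self.measurement = mlp([meas_dim, 128, 128, 128])   # M(m)
        self.goal = mlp([goal_dim, 128, 128, 128])          # G(g)
        joint = 512 + 128 + 128
        self.expectation = nn.Sequential(                   # E(j)
            nn.Linear(joint, 512), nn.LeakyReLU(0.2), nn.Linear(512, f_dim))
        self.action = nn.Sequential(                        # A(j)
            nn.Linear(joint, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, num_actions * f_dim))
        self.num_actions, self.f_dim = num_actions, f_dim

    def forward(self, s, m, g):
        j = torch.cat([self.perception(s), self.measurement(m), self.goal(g)], 1)
        e = self.expectation(j)                                       # Eq. (5) then E(j)
        a = self.action(j).view(-1, self.num_actions, self.f_dim)
        a = a - a.mean(dim=1, keepdim=True)                           # normalization, Eq. (6)
        return e.unsqueeze(1) + a                                     # p^a per action, Eq. (7)
```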
This task is harder: the agentmust learn to traverse irregularly shaped passageways, and to distinguish health kits from poisonvials. In both tasks, the agent has access to three binary sub-actions: move forward, turn left, andturn right. Any combination of these three can be used at any given time, resulting in 8 possibleactions. The only measurement provided to the agent in these scenarios is health.The last two scenarios, D3 and D4, are more challenging and were designed by us using elements ofthe ViZDoom platform. Here the agent is armed and is under attack by alien monsters. The monstersspawn abundantly, move around in the environment, and shoot fireballs at the agent. Health kits andammunition are sporadically distributed throughout the environment and can be collected by theagent. The environment is a simple maze in D3 and a more complex one in D4. In both scenarios,the agent has access to eight sub-actions: move forward, move backward, turn left, turn right, strafeleft, strafe right, run, and shoot. Any combination of these sub-actions can be used, resulting in6Published as a conference paper at ICLR 2017256 possible actions. The agent is provided with three measurements: health, ammunition, and fragcount (number of monsters killed).Model. The future predictor network used in our experiments was configured to be as close aspossible to the DQN model of Mnih et al. (2015), to ensure a fair comparison. Additional details onthe architecture are provided in Appendix A.Training and testing. The agent is trained and tested over episodes. Each episode terminates after525 steps (equivalent to 1 minute of real time) or when the agent’s health drops to zero. Statisticsreported in figures and tables summarize the final values of respective measurements at the end ofepisodes.We set the temporal offsets 1;:::;nof predicted future measurements to 1, 2, 4, 8, 16, and 32steps in all experiments. Only the latest three time steps contribute to the objective function, withcoefficients (0:5;0:5;1). More details are provided in Appendix A.4.2 R ESULTSComparison to prior work. We have compared the presented approach to three deep RL methods:DQN (Mnih et al., 2015), A3C (Mnih et al., 2016), and DSR (Kulkarni et al., 2016b). DQN is astandard baseline for visuomotor control due to its impressive performance on Atari games. A3Cis more recent and is commonly regarded as the state of the art in this area. DSR is described ina recent technical report and we included it because the authors also used the ViZDoom platformin experiments, albeit with a simple task. Further details on the setup of the prior approaches areprovided in Appendix B.The performance of the different approaches during training is shown in Figure 3. In reporting theresults of these experiments, we refer to our approach as DFP (direct future prediction). For thefirst two scenarios, all approaches were trained to maximize health. For these scenarios, Figure3 reports average health at the end of an episode over the course of training. For the last twoscenarios, all approaches were trained to maximize a linear combination of the three normalizedmeasurements (ammo, health, and frags) with coefficients (0:5;0:5;1). For these scenarios, Figure3 reports average frags at the end of an episode. Each presented curve averages information fromthree independent training runs, and each data point is computed from 350;000steps of testing.DQN, A3C, and DFP were trained for 50million steps. 
The training procedure for DSR is much slower and can only process roughly 1 million simulation steps per day. For this reason, we were only able to evaluate DSR on the Basic scenario and were not able to perform extensive hyperparameter tuning. We report results for this technique after 10 days of training. (This time was sufficient to significantly exceed the number of training steps reported in the experiments of Kulkarni et al. (2016b), but not sufficient to approach the number of steps afforded by the other approaches.)

Table 1 reports the performance of the models after training. Each fully trained model was tested over 1 million simulation steps. The table reports average health at the end of an episode for scenarios D1 and D2, and average frags at the end of an episode for D3 and D4. We also report the average training speed for each approach, in millions of simulation steps per day of training. The performance of the different models is additionally illustrated in the supplementary video (http://bit.ly/2f9tacZ).

Table 1: Comparison to prior work. We report average health at the end of an episode for scenarios D1 and D2, and average frags at the end of an episode for scenarios D3 and D4.

       D1 (health)   D2 (health)   D3 (frags)   D4 (frags)   steps/day
DQN    89.1 ± 6.4    25.4 ± 7.8    1.2 ± 0.8    0.4 ± 0.2      7M
A3C    97.5 ± 0.1    59.3 ± 2.0    5.6 ± 0.2    6.7 ± 2.9     80M
DSR     4.6 ± 0.1        –             –            –           1M
DFP    97.7 ± 0.4    84.1 ± 0.6   33.5 ± 0.4   16.5 ± 1.1     70M

[Figure 3: Performance of different approaches during training: average health (D1: Basic, D2: Navigation) and average frags (D3: Battle, D4: Battle 2) at the end of an episode, plotted against millions of training steps for DFP, A3C, DQN, and DSR. DQN, A3C, and DFP achieve similar performance in the Basic scenario. DFP outperforms the prior approaches in the other three scenarios, with a multiplicative gap in performance in the most complex ones (D3 and D4).]

In the Basic scenario, DQN, A3C, and DFP all perform well. As reported in Table 1, the performance of A3C and DFP is virtually identical at 97.5%, while DQN reaches 89%. In the more complex Navigation scenario, a significant gap opens up between DQN and A3C; this is consistent with the experiments of Mnih et al. (2016). DFP achieves the best performance in this scenario, with a 25-percentage-point advantage during testing. Note that in these first two scenarios, DFP was only given a single measurement per time step (health).

In the more complex Battle and Battle 2 scenarios (D3 and D4), DFP dominates the other approaches. It outperforms A3C at test time by a factor of 6 in D3 and by a factor of 2.5 in D4. Note that the advantage of DFP is particularly significant in the scenarios that provide richer measurements: three measurements per time step in D3 and D4. The effect of multiple measurements is further evaluated in controlled experiments reported below.

Generalization across environments. We now evaluate how the behaviors learned by the presented approach generalize across different environments. To this end, we have created 100 randomly textured versions of the mazes from scenarios D3 and D4. We used 90 of these for training and 10 for testing, with disjoint sets of textures in the training and testing environments. We call these scenarios D3-tx and D4-tx.

Table 2 shows the performance of the approach for different combinations of training and testing regimes.
For example, the entry in the D4-tx row of the D3 column shows the performance (in average number of frags at the end of an episode) of a model trained in D3 and tested in D4-tx. Not surprisingly, a model trained in the simple D3 environment does not learn sufficient invariance to surface appearance to generalize well to other environments. Training in the more complex multi-texture environment in D4 yields better generalization: the trained model performs well in D3 and exhibits non-trivial performance in D3-tx and D4-tx. Finally, exposing the model to significant variation in surface appearance in D3-tx or D4-tx during training yields very good generalization.

Table 2: Generalization across environments. Rows are test environments, columns are training environments; entries are average frags at the end of an episode.

Test \ Train    D3     D4    D3-tx   D4-tx   D4-tx-L
D3             33.6   17.8   29.8    20.9     22.0
D4              1.6   17.1    5.4    10.8     12.4
D3-tx           3.9    8.1   22.6    15.6     19.4
D4-tx           1.7    5.1    6.2    10.2     12.7

The last column of Table 2 additionally reports the performance of a higher-capacity model trained in D4-tx. This combination is referred to as D4-tx-L. As shown in the table, this model performs even better. The architecture is detailed in Appendix A.

Visual Doom AI Competition. To further evaluate the presented approach, we participated in the Visual Doom AI Competition, held during September 2016. The competition evaluated sensorimotor control models that act based on raw visual input. The competition had the form of a tournament: the submitted agents play multiple games against each other, their performance measured by aggregate frags. The competition included two tracks. The Limited Deathmatch track was held in a known environment that was given to the participants in advance at training time. The Full Deathmatch track evaluated generalization to previously unseen environments and took place in multiple new environments that were not available to the participating teams at training time. We only enrolled in the Full Deathmatch track. Our model was trained using a variant of the D4-tx-L regime.

Our model won, outperforming the second best submission by more than 50%. That submission, described by Lample & Chaplot (2016), constitutes a strong baseline. It is a deep recurrent Q-network that incorporates an LSTM and was trained using reward shaping and extra supervision from the game engine. Specifically, the authors took advantage of the ability provided by the ViZDoom platform to use the internal configuration of the game, including ground-truth knowledge of the presence of enemies in the field of view, during training. The authors' report shows that this additional supervision improved performance significantly. Our model, which is simpler, achieved even higher performance without such additional supervision.

Goal-agnostic training. We now evaluate the ability of the presented approach to learn without a fixed goal at training time, and to adapt to varying goals at test time. These experiments are performed in the Battle scenario. We use three training regimes: (a) fixed goal vector during training, (b) random goal vector with each value sampled uniformly from [0, 1] for every episode, and (c) random goal vector with each value sampled uniformly from [−1, 1] for every episode. More details are provided in Appendix A. Intuitively, in the second regime the agent is instructed to maximize the different measurements, but has no knowledge of their relative importance. The third regime makes no assumptions as to whether the measured quantities are desirable or not.
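The per-episode goal sampling of the three regimes can be sketched in a few lines; the three-dimensional goal (ammo, health, frags) follows the Battle scenario, and the fixed vector is the one used in the prior experiments.

```python
# Per-episode goal sampling for the three training regimes.
import numpy as np

def sample_goal(regime, rng=np.random.default_rng()):
    if regime == "fixed":                  # (a) fixed goal vector
        return np.array([0.5, 0.5, 1.0])   # weights for (ammo, health, frags)
    if regime == "positive":               # (b) uniform in [0, 1]
        return rng.uniform(0.0, 1.0, size=3)
    return rng.uniform(-1.0, 1.0, size=3)  # (c) uniform in [-1, 1]
```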
The results are shown in Table 3. Each group of columns corresponds to a training regime and each row to a different test-time goal. Goals are given by the weights of the three measurements (ammo, health, and frags) in the objective function. The first test-time goal in Table 3 is the goal vector used in the battle scenarios in the prior experiments, the second seeks to maximize the frag count, the third is a pacifist (maximize ammo and health, minimize frags), the fourth seeks to aimlessly drain ammunition, and the fifth aims to maximize health. For each row, each group of columns reports the average value of each of the three measurements at the end of an episode. Note that the health level at the end of an episode can be negative if the agent suffered major damage in the pre-terminal step.

We draw two main conclusions. First, on the main task (first row), models trained without knowing the goal in advance (b, c) perform nearly as well as a dedicated model trained specifically for the eventual goal (a). Without knowing the eventual goal during training, the agent performs the task almost as well as when it was specifically trained for it. Second, all models generalize to new goals, but not equally well. Models trained with a variety of goals (b, c) generalize much better than a model trained with a fixed goal.

Table 3: Generalization across goals. Each group of three columns corresponds to a training regime, each row corresponds to a test-time goal. The results in the first row indicate that the approach performs well on the main task even without knowing the goal at training time. The results in the other rows indicate that goal-agnostic training supports generalization across goals at test time.

                (a) fixed goal (0.5, 0.5, 1)   (b) random goals [0, 1]   (c) random goals [−1, 1]
test goal        ammo   health   frags          ammo   health   frags     ammo   health   frags
(0.5, 0.5, 1)    83.4    97.0    33.6           92.3    96.9    31.5      49.3    94.3    28.9
(0, 0, 1)         0.3    −3.7    11.5            4.3    30.0    20.6      21.8    70.9    24.6
(1, 1, −1)       28.6    −2.0     0.0           22.1     4.4     0.2      89.4    83.6     0.0
(−1, 0, 0)        1.0    −8.3     1.7            1.9    −7.5     1.2       0.9    −8.6     1.7
(0, −1, 0)        0.7     2.7     2.6            9.0    77.8     6.6       3.0    69.6     7.9

Ablation study. We now perform an ablation study using the D3-tx scenario. Specifically, we evaluate the importance of vectorial feedback versus a scalar reward, and the effect of predicting measurements at multiple temporal offsets. The results are summarized in Table 4. The table reports the performance (in average frags at the end of an episode) of our full model (predicting three measurements at six temporal offsets) and of ablated variants that only predict frags (a scalar reward) and/or only predict at the farthest temporal offset. As the results demonstrate, predicting multiple measurements significantly improves the performance of the learned model, even when it is evaluated by only one of those measurements. Predicting measurements at multiple future times is also beneficial. This supports the intuition that a dense flow of multivariate measurements is a better training signal than a scalar reward.

Table 4: Ablation study. Predicting all measurements at all temporal offsets yields the best results.

                                  frags
all measurements, all offsets     22.6
all measurements, one offset      17.2
frags only, all offsets           10.3
frags only, one offset             5.0

5 DISCUSSION

We presented an approach to sensorimotor control in immersive environments. Our approach is simple and demonstrates that supervised learning techniques can be adapted to learning to act in complex and dynamic three-dimensional environments given raw sensory input and intrinsic measurements.
The model trains on raw experience, by interacting with the environment without extraneous supervision. Natural supervision is provided by the cotemporal structure of the sensory and measurement streams. Our experiments have demonstrated that this simple approach outperforms sophisticated deep reinforcement learning formulations on challenging tasks in immersive environments. Experiments have further demonstrated that the use of multivariate measurements provides a significant advantage over conventional scalar rewards and that the trained model can effectively pursue new goals not specified during training.

The presented work can be extended in multiple ways that are important for broadening the range of behaviors that can be learned. First, the presented model is purely reactive: it acts based on the current frame only, with no explicit facilities for memory and no test-time retention of internal representations. Recent work has explored memory-based models (Oh et al., 2016), and integrating such ideas with the presented approach may yield substantial advances. Second, significant progress in behavioral sophistication will likely require temporal abstraction and hierarchical organization of learned skills (Barto & Mahadevan, 2003; Kulkarni et al., 2016a). Third, the presented model was developed for discrete action spaces; applying the presented ideas to continuous actions would be interesting (Lillicrap et al., 2016). Finally, predicting features learned directly from rich sensory input can blur the distinction between sensory and measurement streams (Mathieu et al., 2016; Finn et al., 2016; Kalchbrenner et al., 2016).

REFERENCES

Karen E. Adolph and Sarah E. Berger. Motor development. In Handbook of Child Psychology, volume 2, pp. 161–213. Wiley, 6th edition, 2006.

Andrew G. Barto and Sridhar Mahadevan. Recent advances in hierarchical reinforcement learning. Discrete Event Dynamic Systems, 13(1-2), 2003.

Dimitri P. Bertsekas. A counterexample to temporal differences learning. Neural Computation, 7(2), 1995.

Dimitri P. Bertsekas. Pathologies of temporal difference methods in approximate dynamic programming. In IEEE Conference on Decision and Control, 2010.

Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z. Leibo, Jack Rae, Daan Wierstra, and Demis Hassabis. Model-free episodic control. arXiv:1606.04460, 2016.

Bruno Castro da Silva, George Konidaris, and Andrew G. Barto. Learning parameterized skills. In ICML, 2012.

Marc Peter Deisenroth, Peter Englert, Jan Peters, and Dieter Fox. Multi-task policy search for robotics. In ICRA, 2014.

Eyal Even-Dar and Yishay Mansour. Learning rates for Q-learning. JMLR, 5, 2003.

Chelsea Finn, Ian J. Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through video prediction. In NIPS, 2016.

Zoltán Gábor, Zsolt Kalmár, and Csaba Szepesvári. Multi-criteria reinforcement learning. In ICML, 1998.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In ICCV, 2015.

Michael I. Jordan and David E. Rumelhart. Forward models: Supervised learning with a distal teacher. Cognitive Science, 16(3), 1992.

Leslie Pack Kaelbling, Michael L. Littman, and Andrew W. Moore. Reinforcement learning: A survey. JAIR, 4, 1996.

Nal Kalchbrenner, Aaron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, Alex Graves, and Koray Kavukcuoglu. Video pixel networks.
arXiv:1610.00527, 2016.

Michał Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Jaśkowski. ViZDoom: A Doom-based AI research platform for visual reinforcement learning. In IEEE Conference on Computational Intelligence and Games, 2016.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.

Jens Kober, Andreas Wilhelm, Erhan Oztop, and Jan Peters. Reinforcement learning to adjust parametrized motor primitives to new situations. Autonomous Robots, 33(4), 2012.

Jens Kober, J. Andrew Bagnell, and Jan Peters. Reinforcement learning in robotics: A survey. IJRR, 32(11), 2013.

George Konidaris, Ilya Scheidwasser, and Andrew G. Barto. Transfer in reinforcement learning via shared features. JMLR, 13, 2012.

Tejas D. Kulkarni, Karthik Narasimhan, Ardavan Saeedi, and Joshua B. Tenenbaum. Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. In NIPS, 2016a.

Tejas D. Kulkarni, Ardavan Saeedi, Simanta Gautam, and Samuel J. Gershman. Deep successor reinforcement learning. arXiv:1606.02396, 2016b.

David Kushner. Masters of Doom: How Two Guys Created an Empire and Transformed Pop Culture. Random House, 2003.

Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. Building machines that learn and think like people. arXiv:1604.00289, 2016.

Guillaume Lample and Devendra Singh Chaplot. Playing FPS games with deep reinforcement learning. arXiv:1609.05521, 2016.

Yann LeCun, Urs Muller, Jan Ben, Eric Cosatto, and Beat Flepp. Off-road obstacle avoidance through end-to-end learning. In NIPS, 2005.

Sergey Levine and Vladlen Koltun. Guided policy search. In ICML, 2013.

Sergey Levine, Peter Pastor, Alex Krizhevsky, and Deirdre Quillen. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. In ISER, 2016.

Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. In ICLR, 2016.

Michael L. Littman, Richard S. Sutton, and Satinder P. Singh. Predictive representations of state. In NIPS, 2001.

Michaël Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. In ICLR, 2016.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, et al. Human-level control through deep reinforcement learning. Nature, 518(7540), 2015.

Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In ICML, 2016.

Kevin P. Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, 2012.

Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L. Lewis, and Satinder P. Singh. Action-conditional video prediction using deep networks in Atari games. In NIPS, 2015.

Junhyuk Oh, Valliappa Chockalingam, Satinder P. Singh, and Honglak Lee. Control of memory, active perception, and action in Minecraft. In ICML, 2016.

Diederik M. Roijers, Peter Vamplew, Shimon Whiteson, and Richard Dazeley. A survey of multi-objective sequential decision-making. JAIR, 48, 2013.

Stéphane Ross, Narek Melik-Barkhudarov, Kumar Shaurya Shankar, Andreas Wendel, Debadeepta Dey, J. Andrew Bagnell, and Martial Hebert.
Learning monocular reactive UAV control in cluttered natural environments. In ICRA, 2013.

Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal value function approximators. In ICML, 2015.

David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 2016.

Satinder P. Singh and Richard S. Sutton. Reinforcement learning with replacing eligibility traces. Machine Learning, 22(1-3), 1996.

Satinder P. Singh, Michael L. Littman, Nicholas K. Jong, David Pardoe, and Peter Stone. Learning predictive state representations. In ICML, 2003.

Richard S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3, 1988.

Richard S. Sutton. Generalization in reinforcement learning: Successful examples using sparse coarse coding. In NIPS, 1995.

Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 2nd edition, 2017.

Richard S. Sutton, Joseph Modayil, Michael Delp, Thomas Degris, Patrick M. Pilarski, Adam White, and Doina Precup. Horde: a scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In AAMAS, 2011.

Csaba Szepesvári and Michael L. Littman. A unified analysis of value-function-based reinforcement learning algorithms. Neural Computation, 11(8), 1999.

Gerald Tesauro. TD-gammon, a self-teaching backgammon program, achieves master-level play. Neural Computation, 6(2), 1994.

John N. Tsitsiklis. On the convergence of optimistic policy iteration. JMLR, 2002.

Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio. arXiv:1609.03499, 2016.

Ziyu Wang, Tom Schaul, Matteo Hessel, Hado van Hasselt, Marc Lanctot, and Nando de Freitas. Dueling network architectures for deep reinforcement learning. In ICML, 2016.

A IMPLEMENTATION DETAILS

A.1 NETWORK ARCHITECTURES

The detailed architectures of two network variants, basic and large, are shown in Tables A1 and A2. The basic network follows the architecture of Mnih et al. (2015) as closely as possible. The large network is similar, but all layers starting from the third are wider by a factor of two. In all networks we use the leaky ReLU nonlinearity LReLU(x) = max(x, 0.2x) after each non-terminal layer. We initialize the weights as proposed by He et al. (2015).

Table A1: The basic architecture.

module        input dimension   output / channels   kernel   stride
Perception    84×84×1           32                  8        4
              21×21×32          64                  4        2
              10×10×64          64                  3        1
              10×10×64          512 (fc)            –        –
Measurement   3                 128 (fc)            –        –
              128               128 (fc)            –        –
              128               128 (fc)            –        –
Goal          3×6               128 (fc)            –        –
              128               128 (fc)            –        –
              128               128 (fc)            –        –
Expectation   512+128+128       512 (fc)            –        –
              512               3×6 (fc)            –        –
Action        512+128+128       512 (fc)            –        –
              512               3×6×256 (fc)        –        –

Table A2: The large architecture.

module        input dimension   output / channels   kernel   stride
Perception    128×128×1         32                  8        4
              32×32×32          64                  4        2
              16×16×64          128                 3        1
              16×16×128         1024 (fc)           –        –
Measurement   3                 128 (fc)            –        –
              128               128 (fc)            –        –
              128               128 (fc)            –        –
Goal          3×6               128 (fc)            –        –
              128               128 (fc)            –        –
              128               128 (fc)            –        –
Expectation   1024+128+128      1024 (fc)           –        –
              1024              3×6 (fc)            –        –
Action        1024+128+128      1024 (fc)           –        –
              1024              3×6×256 (fc)        –        –
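The nonlinearity and initialization just described can be written in PyTorch as follows; kaiming_normal_ is PyTorch's name for the initialization of He et al. (2015), and passing the slope a=0.2 matches the leaky ReLU used here. This is a small illustrative sketch, and the DFPNet usage reference is to the hypothetical module sketched in Section 3.2.

```python
# LReLU(x) = max(x, 0.2x) and He et al. (2015) initialization.
import torch.nn as nn

def init_weights(module):
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        nn.init.kaiming_normal_(module.weight, a=0.2, nonlinearity='leaky_relu')
        if module.bias is not None:
            nn.init.zeros_(module.bias)

lrelu = nn.LeakyReLU(0.2)
# Usage sketch: net = DFPNet(); net.apply(init_weights)
```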
We empirically validate the architectural choices in the D3-tx regime. We compare the full basic architecture to three variants:

- No normalization: normalization at the end of the action stream is not performed.
- No split: no expectation/action split; future measurements are simply predicted with a fully-connected network.
- No input measurements: the input measurement stream is removed, and current measurements are not provided to the network.

The results are reported in Table A3. All modifications of the basic architecture hurt performance, showing that the two-stream formulation is beneficial and that providing the current measurements to the network increases performance, though it is not crucial.

Table A3: Evaluation of different network architectures (average frags at the end of an episode in D3-tx).

        full   no normalization   no split   no input measurements
Score   22.6        21.6            16.5            19.4

A.2 OTHER DETAILS

The raw sensory input to the agent is the observed image, in grayscale, without any additional preprocessing. The resolution is 84×84 pixels for the basic model and 128×128 pixels for the large one. We normalized the measurements by their standard deviations under random exploration. More precisely, we divided ammo count, health level, and frag count by 7.5, 30.0, and 1.0, respectively.

We performed frame skipping during both training and testing. The agent observes the environment and selects an action every 4th frame. The selected action is repeated during the skipped frames. This accelerates training without sacrificing accuracy. In the paper, "step" always refers to steps after frame skipping (equivalent to every 4th step before frame skipping). When played by a human, Doom runs at 35 frames per second, so one step of the agent is equivalent to 114 milliseconds of real time. Therefore, frame skipping has the added benefit of bringing the reaction time of the agent closer to that of a human.

We set the temporal offsets τ_1, ..., τ_n of predicted future measurements to 1, 2, 4, 8, 16, and 32 steps in all experiments. The longest temporal offset corresponds to 3.66 seconds of real time. In all experiments, only the latest three predictions (after 8, 16, and 32 steps) contributed to the objective function, with fixed coefficients (0.5, 0.5, 1.0). Therefore, in scenarios with multiple measurements available to the agent (D3 and D4), the goal vector was specified by three numbers: the relative weights of the three measurements (ammo, health, frags) in the objective function. In goal-directed training, these were fixed to (0.5, 0.5, 1.0), and in goal-agnostic training they were sampled uniformly at random from [0, 1] or [−1, 1].

We used an experience memory of M = 20,000 steps, and sampled a mini-batch of N = 64 examples after every k = 64 new experiences were added. We added the experiences to the memory using 8 copies of the agent running in parallel. The networks in all experiments were trained using the Adam algorithm (Kingma & Ba, 2015) with β_1 = 0.95, β_2 = 0.999, and ε = 10^−4. The initial learning rate is set to 10^−4 and is gradually decreased during training. The basic networks were trained for 800,000 mini-batch iterations (or 51.2 million steps), the large one for 2,000,000 iterations.
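A minimal sketch of the experience memory just described: the M most recent experiences are retained and a mini-batch of N examples is drawn after every k new experiences. The deque-based buffer and the function boundaries are illustrative assumptions, not the actual implementation.

```python
# Experience memory with eviction of the oldest entries (sketch).
import random
from collections import deque

M, N, K = 20_000, 64, 64
memory = deque(maxlen=M)       # old experiences are evicted automatically
new_since_update = 0

def add_experience(o, a, g, f):
    """Store one <o, a, g, f> tuple; return a mini-batch when one is due."""
    global new_since_update
    memory.append((o, a, g, f))
    new_since_update += 1
    if new_since_update >= K and len(memory) >= N:
        new_since_update = 0
        return random.sample(memory, N)   # mini-batch for one solver iteration
    return None
```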
For D3 and D4 we used a linear combination of changes of the three normalized measurements with the same coefficients as for the presented approach: (0.5, 0.5, 1.0). For DQN and DSR we tested three learning rates: the default one (0.00025) and two alternatives (0.00005 and 0.00002). Other hyperparameters were left at their default values. For A3C, which trains faster, we performed a search over a set of learning rates ({2, 4, 8, 16, 32} x 10^-4) for the first two tasks; for the last two tasks we trained 20 models with random learning rates sampled log-uniformly between 10^-4 and 10^-2 and random entropy regularization coefficients sampled log-uniformly between 10^-4 and 10^-1. For all baselines we report the best results we were able to obtain.
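To make the goal-conditioned action selection concrete, here is a minimal sketch of how predictions of future measurement changes could be scored against a goal vector. The tensor layout, the function name, and the per-offset coefficients padded with zeros for unused offsets are illustrative assumptions, not code from the paper.

import numpy as np

# preds: predicted future measurement changes with shape
# (num_actions, num_offsets, num_measurements); layout is assumed.
def select_action(preds, goal=(0.5, 0.5, 1.0),
                  offset_coeffs=(0.0, 0.0, 0.0, 0.5, 0.5, 1.0)):
    goal = np.asarray(goal)             # weights for (ammo, health, frags)
    coeffs = np.asarray(offset_coeffs)  # only the last three offsets contribute
    # Collapse temporal offsets, then score each action against the goal.
    collapsed = (preds * coeffs[None, :, None]).sum(axis=1)  # (actions, measurements)
    scores = collapsed @ goal                                # (actions,)
    return int(np.argmax(scores))

With a goal-agnostic network, the same function can be called with a different goal vector at test time without retraining.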
rJ6DhP5xe
Under review as a conference paper at ICLR 2017

GENERALIZABLE FEATURES FROM UNSUPERVISED LEARNING

Mehdi Mirza & Aaron Courville & Yoshua Bengio
MILA, Université de Montréal
{memirzamo, aaron.courville, yoshua.umontreal}@gmail.com

ABSTRACT

Humans learn a predictive model of the world and use this model to reason about future events and the consequences of actions. In contrast to most machine predictors, we exhibit an impressive ability to generalize to unseen scenarios and reason intelligently in these settings. One important aspect of this ability is physical intuition (Lake et al., 2016). In this work, we explore the potential of unsupervised learning to find features that promote better generalization to settings outside the supervised training distribution. Our task is predicting the stability of towers of square blocks. We demonstrate that an unsupervised model, trained to predict future frames of a video sequence of stable and unstable block configurations, can yield features that support extrapolating stability prediction to block configurations outside the training set distribution.

1 INTRODUCTION

Humans learn a tremendous amount of knowledge about the world with almost no supervision and can construct a predictive model of the world. We use this model of the world to interact with our environment. As also argued by Lake et al. (2016), one of the core ingredients of human intelligence is intuitive physics. Children can learn and predict some of the common physical behaviors of our world just by observing and interacting, without any direct supervision. They form a sophisticated predictive model of the physical environment, expect the world to behave according to this mental model, and have reasonable expectations about unseen situations (Téglás et al., 2011).

Despite impressive progress in the last few years in the training of supervised models, we have not yet been able to achieve similar results in unsupervised learning, which remains one of the challenging research areas in the field. The full potential of unsupervised learning is yet to be realized.

In this work, we leverage unsupervised learning to train a predictive model over sequences. We use the imagined and predicted future sequence data to help a physical environment prediction model generalize better to unseen settings.

More specifically, we focus on the task of predicting whether a tower of square bricks will fall or not, as introduced by Lerer et al. (2016). They showed that a deep convolutional neural network can predict the fall of such towers with super-human accuracy. But despite the strengths of convolutional neural networks, Zhang et al. (2016) show that deep neural networks have a hard time generalizing to novel situations in the way humans or simulation-based models can. In this work, we show that deep neural networks are capable of generalizing to novel situations through a form of unsupervised learning. The core idea is to observe the world without any supervision and build a future predictive model of it, and in a later stage leverage the imagined future to train a better fall prediction model.

2 RELATED WORK

In the beginning, unsupervised learning and generative models emerged as a pre-training method (Hinton & Salakhutdinov, 2006; Hinton et al., 2006; Bengio et al., 2007) to help other tasks such as
supervised learning. But since Krizhevsky et al. (2012), many regularization (Srivastava et al., 2014), weight initialization (Glorot & Bengio, 2010) and normalization (Ioffe & Szegedy, 2015) techniques, as well as architecture designs (He et al., 2015), have been introduced that diminish the effect of pre-training. Although pre-training can still be useful in data-scarce domains, there are many other settings in which unsupervised learning is interesting, and it remains a very active area of research; applications include semi-supervised learning (Kingma et al., 2014; Salimans et al., 2016; Dumoulin et al., 2016) and super-resolution (Sønderby et al., 2016).

Video generation is an active area of research with many applications, and many recent works apply state-of-the-art neural networks to it. Srivastava et al. (2015) use LSTM recurrent neural networks to train an unsupervised future predictive model for video generation; we use a very similar architecture, as described in Section 4.1. Mathieu et al. (2015) combine the common mean-squared-error objective function with an adversarial training cost in order to generate sharper samples. Lotter et al. (2016) introduce another unsupervised video prediction training scheme that manages to predict future events such as the direction in which a car will turn, which could be useful for training self-driving cars.

Model-based reinforcement learning (RL) is an active research area that holds the promise of making RL agents less data-hungry. Learning agents could explore, learn in an unsupervised way about their world, and learn even more by dreaming about future states. We believe that action-conditional video prediction models are an important ingredient for this task. Fragkiadaki et al. (2015) learn the dynamics of billiard balls by supervised training of a neural net. Action-conditional video prediction models have been applied to Atari-playing agents (Oh et al., 2015) as well as robotics (Finn et al., 2016; Finn & Levine, 2016).

3 DATASET

Recent datasets for predicting the stability of block configurations (Lerer et al., 2016; Zhang et al., 2016) only provide binary labels of stability, and exclude the video simulation of the block configuration. We therefore construct a new dataset, with a similar setup to Lerer et al. (2016); Zhang et al. (2016), that includes this video sequence. We use a JavaScript-based physics engine¹ to generate the data.

We construct towers made of 3-5 square blocks. To sample a random tower configuration, we uniformly shift each block in its x and y position such that it touches the block below. Because taller towers are more unstable, this shift is smaller when we add more blocks. To simplify our learning setting, we balance the number of stable and unstable block configurations. For each tower height, we create 8000, 1000 and 3000 video clips for the training, validation, and test set, respectively. The video clips are sub-sampled in time to include more noticeable changes in the block configurations. We keep 39 frames, which at our sub-sampling rate is enough time for unstable towers to collapse. Each video frame is an RGB image of size 64 x 64. In addition to the binary stability label, we include the number of blocks that fell down.
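As a rough illustration of this sampling procedure, the sketch below generates random block offsets for a tower. The exact dependence of the shift range on tower height is our assumption (the paper only states that taller towers use smaller shifts), and the stability label would come from running the physics engine, not from this code.

import random

# Hedged sketch of random tower generation: each block is shifted uniformly
# in x and y while still touching the block below. The height-dependent
# shrinkage of the shift range is an assumption for illustration.
def sample_tower(num_blocks, block_size=1.0):
    max_shift = block_size * 0.6 / num_blocks  # smaller shifts for taller towers (assumed form)
    tower = [(0.0, 0.0)]  # (x, y) offset of the base block
    for _ in range(num_blocks - 1):
        px, py = tower[-1]
        tower.append((px + random.uniform(-max_shift, max_shift),
                      py + random.uniform(-max_shift, max_shift)))
    return tower  # the stability label is obtained by simulating this tower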
4 ARCHITECTURE

The core idea of this paper is to use future state predictions of a generative video model to enhance the performance of a supervised prediction model. Our architecture consists of two separate modules:

Frame predictor: A generative model to predict future frames of a video sequence. This model is trained to either generate the last frame or the complete sequence of frames.

Stability predictor: In the original task, stability is predicted from a static image of a block configuration. We explore whether, in addition to the initial configuration, the last-frame prediction of our unsupervised model improves the performance of stability prediction.

In the following sections, we explore several different architectures for both modules.

¹ https://chandlerprall.github.io/Physijs/

4.1 FUTURE FRAME PREDICTION

We consider two different model architectures for this task. The first one, named ConvDeconv, only takes the first frame as input and predicts the last frame of the video sequence. The architecture consists of a block of convolution and max-pooling layers. To compensate for the dimensionality reduction of the max-pooling layers, we have a fully-connected layer following the last max-pooling layer, and finally a subsequent block of deconvolution layers whose output has the same size as the model input. All activation functions are ReLU (Nair & Hinton, 2010). See Table 1 for more details of the architecture. The objective function is the mean squared error between the generated last frame and the ground-truth frame; as a result, this training does not require any labels. We also experimented with an additional adversarial cost as in Mathieu et al. (2015) but did not observe any improvement on the stability prediction task. We hypothesize that although the adversarial objective function helps to produce sharper images, such improved sample quality does not transfer to better stability prediction. Figure 1 shows a few examples of the generated data on the test set. Mean squared error is minimized using the Adam optimizer (Kingma & Ba, 2014) and we use early stopping when the validation loss does not improve for 100 epochs.

We extend this ConvDeconv model in a second architecture, named ConvLSTMDeconv, to predict the next frame at each timestep. This model is built around an LSTM. The same convolutional and deconvolutional blocks as in ConvDeconv are used, respectively, to feed the current frame into the LSTM transition and to output the next frame from the current LSTM state. The details of the ConvLSTMDeconv model architecture are shown in Table 2, and Figure 3 shows a diagram of both architectures. During training, at each time step the ground-truth frame is fed to the model; at test time only the initial time step receives the first frame from the data, and for subsequent time steps the frames generated at previous time steps are fed back into the model. This setup is similar to recurrent neural network language models (Mikolov, 2012), and it is necessary because at test time we only have access to the first frame. As before, the model is trained to predict the next frame at each time step by minimizing the predictive mean squared error using the Adam optimizer and early stopping. For training, we further subsample in the time dimension and reduce the sequence length to 5 time steps.
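The train/test asymmetry of ConvLSTMDeconv (ground-truth frames fed in during training, the model's own predictions fed back at test time) can be sketched as follows. Here encoder, lstm_cell, and decoder are placeholders for the convolutional block, the LSTM, and the deconvolutional block of Table 2; this is an illustrative sketch, not the authors' code.

import torch

# Hedged sketch of the ConvLSTMDeconv rollout. `encoder(x)` is assumed to
# return a flat vector of size lstm_cell.input_size, and `decoder(h)` to
# map the hidden state back to an image.
def rollout(encoder, lstm_cell, decoder, frames, teacher_forcing=True):
    """frames: list of tensors (B, C, H, W); returns predicted next frames."""
    batch = frames[0].size(0)
    h = torch.zeros(batch, lstm_cell.hidden_size)
    c = torch.zeros(batch, lstm_cell.hidden_size)
    preds, x = [], frames[0]
    for t in range(len(frames) - 1):
        h, c = lstm_cell(encoder(x), (h, c))  # encode frame, update LSTM state
        y = decoder(h)                        # predict the next frame
        preds.append(y)
        # Training: feed the ground-truth next frame (teacher forcing).
        # Test: feed the model's own prediction back in.
        x = frames[t + 1] if teacher_forcing else y
    return preds

The training loss would then be the mean squared error between preds and frames[1:].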
Figure 2 shows some sample generated sequences from the test set.

Layer   Type      Output channels/dimensions   Kernel/Pool size
1       Conv      64                           3 x 3
2       MaxPool   64                           4 x 4
3       Conv      128                          3 x 3
4       MaxPool   64                           3 x 3
5       Conv      64                           3 x 3
6       MaxPool   64                           3 x 3
7       FC        64 x 64 x 16 = 65536         -
8       DeConv    64                           3 x 3
9       DeConv    128                          3 x 3
10      DeConv    64                           3 x 3
11      DeConv    3                            3 x 3

Table 1: ConvDeconv model architecture. FC stands for "Fully Connected".

Layer   Type      Output channels/dimensions   Kernel/Pool size
1       Conv      64                           3 x 3
2       MaxPool   64                           4 x 4
3       Conv      128                          3 x 3
4       MaxPool   64                           3 x 3
5       Conv      64                           3 x 3
6       MaxPool   64                           3 x 3
7       FC LSTM   2000                         -
8       FC        64 x 64 x 3                  -
9       DeConv    64                           3 x 3
10      DeConv    64                           3 x 3
11      DeConv    3                            3 x 3

Table 2: ConvLSTMDeconv model architecture. FC stands for "Fully Connected".

Figure 1: Samples from the ConvDeconv model. First and second rows show the first and last frame, respectively, from the test data. The third row shows generated last-frame samples.

Figure 2: Samples from the ConvLSTMDeconv model. Each row is a different sample. The left sequence is the data and the right sequence is the generated data. Note that during generation the model only sees the first frame and for subsequent time steps uses its own output from the previous timestep.

4.2 STABILITY PREDICTION

We have two supervised models for stability prediction. The first one is a baseline that takes as input the first frame and predicts the fall of the tower. For this model we use the 50-layer ResNet architecture from He et al. (2016). We trained the baseline model on each of the different tower heights 3, 4, 5. We call it the single model and name the experiments 3S, 4S, 5S, respectively, for the number of blocks it was trained on. The second model is the one using the generated data: it takes as input the first frame and the generated last frame. It consists of two 50-layer ResNet blocks in parallel, one for the first frame and one for the last frame; the last hidden layers of both blocks are concatenated together before a logistic regression layer (or softmax in the case of non-binary labels). Both ResNet blocks share parameters. Based on whether the generated data comes from the ConvDeconv or the ConvLSTMDeconv model, we label the experiments 3CD, 4CD, 5CD and 3CLD, 4CLD, 5CLD, respectively.

None of the models are pre-trained and all weights are randomly initialized. As in Section 4.1, we use Adam and stop training when the validation accuracy has not improved for 100 epochs. All images are contrast-normalized independently, and we augment our training set using random horizontal flips of the images and random changes of contrast and brightness.

Figure 3: Different model architectures. The first two on the left are ConvDeconv and ConvLSTMDeconv, described in Section 4.1. The two on the right are the models used for supervised fall prediction, described in Section 4.2. The single-frame predictor is the baseline model. The double-frame predictor is the model that uses the generated data.
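To make the double-frame predictor of Section 4.2 concrete, here is a hedged sketch of the two shared-weight towers with concatenated features. The paper trains its own 50-layer ResNets from scratch; borrowing torchvision's resnet50 here is purely for illustration.

import torch
import torch.nn as nn
import torchvision

class DoubleFramePredictor(nn.Module):
    # Two ResNet-50 towers with shared weights; features are concatenated
    # before a logistic regression head. Feature size (2048) follows
    # torchvision's ResNet-50 and is an assumption about the paper's model.
    def __init__(self):
        super().__init__()
        resnet = torchvision.models.resnet50(weights=None)
        resnet.fc = nn.Identity()          # expose the 2048-d penultimate features
        self.tower = resnet                # shared between both frames
        self.classifier = nn.Linear(2 * 2048, 1)

    def forward(self, first_frame, generated_last_frame):
        f0 = self.tower(first_frame)
        f1 = self.tower(generated_last_frame)
        logit = self.classifier(torch.cat([f0, f1], dim=1))
        return torch.sigmoid(logit)        # probability that the tower falls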
5 RESULTS

Figure 4 shows the classification results for each of the 9 models described in Section 4.2, tested on 3, 4 and 5 blocks. Each test case is shown with a different color. Table 3 gives the numerical values of all 27 test-case results. In almost all cases the generated data improves generalization to test cases with a different number of blocks than the model was trained on. For comparison we have included results from Zhang et al. (2016) in Table 4. Since Zhang et al. (2016) only report results for models trained on towers of 4 blocks, the corresponding results are the second block row in Table 3 (models 4S, 4CD and 4CLD). Even though the datasets are not the same, it can be observed that the range of performance of the baseline 4S model is consistent with the range of performance of the AlexNet model in Table 4. The results of the 4CD model are significantly better than both the IPE and human performance reported in Zhang et al. (2016), while the baselines perform similarly.

One observation is that the improvements are more significant when testing on scenarios with more bricks than seen during training. The generated data also helps in the reverse case, i.e. fewer bricks than during training, but the improvement is not as significant. It is worth mentioning that testing on a lower number of bricks is a much harder problem, as also pointed out in Zhang et al. (2016). In their case, the prediction performance was almost random when going from 4 blocks to 3 blocks, which is not the case in our experiments². One possible explanation for the performance loss is that a balanced tower with fewer blocks corresponds to an unstable configuration for a tower with more blocks, e.g. a tower with 3 blocks is classified as unstable by a prediction model trained on towers of 5 blocks. One solution could be to train these models to predict how many blocks have fallen instead of a binary stability label. Because we have access to this data in our dataset, we explored the same experiments using these labels. Unfortunately, we did not observe any significant improvement. The main reason could be that the distribution of the number of fallen blocks is extremely unbalanced: it is hard to collect data with a balanced number of fallen blocks because some configurations are very unlikely, e.g. a tower of 5 blocks where only two blocks fall (the majority of the time the whole tower collapses).

Another observation is that the models using ConvDeconv-generated data performed slightly better than those using ConvLSTMDeconv. As seen in Figure 2, the samples in the ConvLSTMDeconv case are noisier and less sharp than those in Figure 1. This could be because after the first time step the model's output from the previous time step is used as input for the next time step, so the samples degenerate the longer the sequence is.

Data augmentation was crucial to increase the generalization performance of the stability prediction: e.g. the 5CD model tested on 4 bricks achieved only 50% accuracy without data augmentation, while reaching 74.5% with data augmentation. This significant improvement from data augmentation could be partly because our dataset is relatively small.

Figure 4: Accuracy in percentage for each of the 9 models tested on test sets with a different number of blocks. Each color represents the number of blocks that the model was tested on. 50% is chance.
² We are not using the same dataset as Zhang et al. (2016) and hence direct comparison is not possible.

Model   Train set   Test set   Accuracy
3S      3           3          91.87 %
3S      3           4          66.1 %
3S      3           5          63.7 %
3CD     3           3          95.5 %
3CD     3           4          92.63 %
3CD     3           5          89 %
3CLD    3           3          93.3 %
3CLD    3           4          90.33 %
3CLD    3           5          84.30 %
4S      4           3          52.5 %
4S      4           4          87 %
4S      4           5          75.53 %
4CD     4           3          80.53 %
4CD     4           4          92.5 %
4CD     4           5          89.1 %
4CLD    4           3          65.53 %
4CLD    4           4          91.20 %
4CLD    4           5          84.20 %
5S      5           3          59.26 %
5S      5           4          67.23 %
5S      5           5          86.50 %
5CD     5           3          58.27 %
5CD     5           4          74.50 %
5CD     5           5          88.53 %
5CLD    5           3          58.90 %
5CLD    5           4          74.50 %
5CLD    5           5          88.53 %

Table 3: The results from our experiments.

Model     Train set   Test set   Accuracy
AlexNet   4           3          51 %
AlexNet   4           4          95 %
AlexNet   4           5          78.5 %
IPE       N/A         3          72 %
IPE       N/A         4          64 %
IPE       N/A         5          56 %
Human     N/A         3          76.5 %
Human     N/A         4          68.5 %
Human     N/A         5          59 %

Table 4: The results reported in Zhang et al. (2016). We emphasize that these results are on a different dataset.

6 CONCLUSION

In this paper, we showed that data generated from an unsupervised model can help a supervised learner generalize to unseen scenarios. We argue that this ability to transfer learning and generalize by observing the world could be one of the ingredients for constructing a model of the world, with applications in many tasks such as model-based RL. We aim to extend this work in the future by looking at videos of robots manipulating objects and predicting their failures beforehand, which could help an RL agent explore more intelligently.

ACKNOWLEDGMENTS

We would like to thank Harm de Vries and Laurent Dinh for their help and feedback in writing the paper, and also thank Adam Lerer and Jiajun Wu for sharing their dataset. We thank NSERC, CIFAR, IBM, Canada Research Chairs, Google and Samsung for funding.

REFERENCES

Yoshua Bengio, Pascal Lamblin, Dan Popovici, Hugo Larochelle, et al. Greedy layer-wise training of deep networks. Advances in Neural Information Processing Systems, 19:153, 2007.

Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.

Chelsea Finn and Sergey Levine. Deep visual foresight for planning robot motion. arXiv preprint arXiv:1610.00696, 2016.

Chelsea Finn, Ian Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through video prediction. arXiv preprint arXiv:1605.07157, 2016.

Katerina Fragkiadaki, Pulkit Agrawal, Sergey Levine, and Jitendra Malik. Learning visual predictive models of physics for playing billiards. arXiv preprint arXiv:1511.07404, 2015.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, volume 9, pp. 249–256, 2010.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.

Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.

Geoffrey E Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization.
arXiv preprint arXiv:1412.6980, 2014.

Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pp. 3581–3589, 2014.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.

Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines that learn and think like people. arXiv preprint arXiv:1604.00289, 2016.

Adam Lerer, Sam Gross, and Rob Fergus. Learning physical intuition of block towers by example. arXiv preprint arXiv:1603.01312, 2016.

William Lotter, Gabriel Kreiman, and David Cox. Deep predictive coding networks for video prediction and unsupervised learning. arXiv preprint arXiv:1605.08104, 2016.

Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. arXiv preprint arXiv:1511.05440, 2015.

Tomáš Mikolov. Statistical language models based on neural networks. Presentation at Google, Mountain View, 2nd April, 2012.

Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807–814, 2010.

Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. Action-conditional video prediction using deep networks in Atari games. In Advances in Neural Information Processing Systems, pp. 2863–2871, 2015.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016.

Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised MAP inference for image super-resolution. arXiv preprint arXiv:1610.04490, 2016.

Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.

Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised learning of video representations using LSTMs. CoRR, abs/1502.04681, 2, 2015.

Ernő Téglás, Edward Vul, Vittorio Girotto, Michel Gonzalez, Joshua B Tenenbaum, and Luca L Bonatti. Pure reasoning in 12-month-old infants as probabilistic inference. Science, 332(6033):1054–1059, 2011.

Renqiao Zhang, Jiajun Wu, Chengkai Zhang, William T Freeman, and Joshua B Tenenbaum. A comparative evaluation of approximate probabilistic simulation and deep neural networks as accounts of human physical scene understanding. arXiv preprint arXiv:1605.01138, 2016.
S1Jhfftgx
Under review as a conference paper at ICLR 2017ENFORCING CONSTRAINTS ON OUTPUTSWITH UNCONSTRAINED INFERENCEJay Yoon LeeCarnegie Mellon UniversityPittsburgh, PAjaylee@cs.cmu.eduMichael Wick, Jean-Baptiste TristanOracle LabsBurlington, MAfmichael.wick,jean.baptiste.tristan g@oracle.comABSTRACTIncreasingly, practitioners apply neural networks to complex problems in natu-ral language processing (NLP), such as syntactic parsing, that have rich outputstructures. Many such applications require deterministic constraints on the outputvalues; for example, requiring that the sequential outputs encode a valid tree. Whilehidden units might capture such properties, the network is not always able tolearn them from the training data alone, and practitioners must then resort to post-processing. In this paper, we present an inference method for neural networks thatenforces deterministic constraints on outputs without performing post-processingor expensive discrete search over the feasible space. Instead, for each input, wenudge the continuous weights until the network’s unconstrained inference proce-dure generates an output that satisfies the constraints. We find that our methodreduces the number of violating outputs by up to 81%, while improving accuracy.1 I NTRODUCTIONMany neural networks have discrete-valued output units that correspond to an inference or predictionabout an input. Often, a problem might involve multiple discrete outputs. Unlike multiclass classi-fication, which associates a single discrete output with each input, so called structured predictionproblems associate multiple outputs with each input. For example, in multi-label classification,instead of predicting a single relevant class pertaining to the image or sentence, we must predict allrelevant classes: the image contains a dog, a tree, and a sky. In sequence prediction problems, thediscrete outputs might be a sequence of words or symbols that must form a coherent translation of asource language sentence (Cho et al., 2014; Sutskever et al., 2014), description of an image (Vinyalset al., 2015b), answer to a question (Kumar et al., 2016), or a parse-tree for an input sentence (Vinyalset al., 2015a). Crucially, in structured prediction, the output values are interdependent. Even thoughneural networks usually predict outputs independently or sequentially (one output at a time), thehidden units allow them to successfully capture many dependencies.Sometimes, the outputs must obey hard constraints. For example, in sequence labeling with BILOUencoding, a ‘begin’ marker Bcannot immediately follow an ‘inside’ marker I(Ratinov & Roth,2009). In clustering, pairwise binary decisions must obey transitivity so that they yield a validequivalence class relation over the data points (McCallum & Wellner, 2005; Wick et al., 2006; 2008).In syntactic/dependency parsing, the output sequence must encode a valid parse tree (McDonald& Pereira, 2006; Vinyals et al., 2015a; Dyer et al., 2016). In formal language generation or neuralcompilers the output must belong to a context free language or compile (Reed & de Freitas, 2016). Indual decomposition approaches to joint inference, copies of variables must satisfy equality constraints(Koo et al., 2010; Rush et al., 2010; Rush & Collins, 2012). Finally, in some ensemble methods,the outputs of multiple conditionally independent classifiers must reach a consensus on the outputclass. 
Indeed, there are a tremendous number of problems that require hard constraints on the outputs. Unlike softer dependencies, violating a hard constraint is often unacceptable because the output of the network would not "type-check", causing problems for downstream components. Unfortunately, in practice, networks are not always able to exactly learn constraints from the training data alone.

As a motivating example, consider a sequence-to-sequence network that inputs a sentence and outputs a sequence of "shift-reduce" commands that describe the sentence's parse tree. Briefly, the shift-reduce commands control a parsing algorithm by indicating how and when to use its stack. Each command controls whether to shift (s) a token onto the stack, reduce (r) the top of the stack into a parent tree node, or push (!) the current reduction back onto the stack.

To be successful, the network must generate commands that imply a valid tree over the entire input sentence. However, the decoder outputs just a single command at a time, producing some outputs that are not globally-consistent, valid shift-reduce programs. Indeed, the output may not have enough shifts to include every input token in the tree or may attempt to reduce when the stack is empty. For example, the following input sentence " So it 's a very mixed bag . " comprises ten space-delimited tokens (the quotations are part of the input), but our unconstrained sequence-to-sequence network outputs an invalid sequence with only nine shifts: ssr!sr!ssssrrr!rr!ssrrrrrr!. We must introduce another shift so the last token is pushed onto the stack and issue another reduce so it is inserted into the tree.

We could attempt to fix the output with post-processing, but where is the right place to insert these commands in the sequence? There are 406 = (29 choose 2) candidate locations. Further complicating our post-processing dilemma is the fact that the output contains several other errors that are seemingly unrelated to the constraint. Instead, we could attempt to fix the problem with a more sophisticated decoder, but this is difficult because the decoder outputs a single character at each time-step and our constraints are global, limiting corrections to the end of the sequence when it is too late to rectify an earlier decision. A beam search is less myopic, but in practice most of the network's output mass is peaked on the best output token, resulting in little improvement.

In this paper, we propose an inference method for neural networks that enforces output constraints without employing combinatorial discrete search. The idea is to modify some (or all) of the weights for each instance at test-time, iteratively nudging them, until the network's unconstrained inference procedure produces a valid output. We achieve this by expressing the hard constraints as an optimization problem over the continuous weights and employing back-propagation to change them. Prima facie, back-propagation is doomed because the constraint loss is necessarily a function of the argmax that produced the discrete values. However, we circumvent this problem by optimizing over the energy of the violating outputs instead. Since the weights directly determine the output through the energy, we are able to manipulate the unconstrained inference procedure to produce the desired result.
Much like scoped-learning, the algorithm customizes the weights for each example at test-time (Blei et al., 2002), but does so in a way to satisfy the constraints.

When applied to the above example, our method removes enough energy mass from the invalid output space in only twelve steps, allowing unconstrained decoding to produce a valid output sequence:

    ssr!sr!ssssrrr!rr!ssrrrrrr!      (initial output)
    sssr!ssssrr!srrr!rr!ssrrrrrr!    (rectified output after 12 steps)

Interestingly, the network generates an additional s command at the beginning of the sequence while also producing a cascade of error correction in later time steps: the new output now satisfies the constraints and is a perfectly correct parse. Of course, enforcing constraints does not always lead to an improvement in accuracy, but we find that often it does in practice, especially for a well-trained network. We find that our method is able to completely satisfy constraints in up to 81% of the outputs.

2 BACKGROUND

Consider a neural network that generates a variable-length output vector y = {y_i}_1^{n_y} from a variable-length input vector x = {x_i}_1^{m_x}. For example, in image classification, the input vector encodes a fixed multi-dimensional tensor of pixel intensities and the output vector comprises just a single element corresponding to the discrete class label. In sequence-to-sequence, the input might be a variable-length vector of French tokens, and the output would be a variable-length vector of its English translation. It is sometimes convenient to think of the network as a function from input to output

    f(x, W) ↦ y    (1)

However, for the purpose of exposition, we separate the neural network into a real-valued model (negative energy function) that scores the compatibility of the outputs (given the weights and input) and an inference procedure that searches for high scoring outputs.

For the model, let y_i be a discrete output from an output unit and let ψ(y_i, x, W) be its corresponding real-valued log-space activation score (e.g., the log of the softmax for locally normalized models or simply a linear activation value for globally normalized models). Define the negative energy Ψ over a collection of output values y as an exponentiated sum of log-space activation scores

    Ψ(y, x, W) = exp( Σ_i ψ(y_i, x, W) )    (2)

Then, inference is the problem of finding the values of the outputs y that maximize the negative energy given fixed inputs x and weights W. Thus, we can rewrite the neural network as the function:

    f(x, W) ↦ argmax_y Ψ(y, x, W)    (3)

The purpose of separating the model from the inference procedure is so we can later formalize our optimization problem. We emphasize that this formulation is consistent with existing neural networks. Indeed, inference in feed-forward networks is a single feed-forward pass from inputs to outputs. When the outputs only depend on each other through hidden states that only depend on earlier layers of the network, feed-forward inference is exact in the sense that it finds the optimum of Equation 3. For recurrent neural networks (RNNs), each output depends on hidden states that are functions of previous output values. However, we can still think of the usual procedure that produces the highest scoring output at each time step as a local greedy approximation to global inference; of course, the procedure can optionally be improved with a beam.
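To ground Equations 2 and 3, here is a hedged sketch of the negative energy and the greedy approximation to inference for a locally normalized sequence model; the step_logits interface is a hypothetical stand-in for the network's per-step scoring function.

import torch

# Greedy decoding as a local approximation to argmax_y Psi(y, x, W).
# `step_logits(x, prefix)` is assumed to return logits over the next symbol.
def greedy_decode(step_logits, x, max_len):
    y, log_psi_sum = [], 0.0
    for _ in range(max_len):
        logits = step_logits(x, y)
        log_probs = torch.log_softmax(logits, dim=-1)  # psi(y_i, x, W), Eq. 2 terms
        y_i = int(torch.argmax(log_probs))             # greedy local argmax
        log_psi_sum += float(log_probs[y_i])
        y.append(y_i)
    energy = torch.exp(torch.tensor(log_psi_sum))      # Psi(y, x, W), Eq. 2
    return y, energy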
3 CONSTRAINED INFERENCE FOR NEURAL NETWORKS

A major advantage of neural networks is that once trained, inference is extremely efficient. However, constraints can render inference intractable due to discrete search. Our goal is to take advantage of the fact that unconstrained inference is inexpensive and design a constrained inference algorithm that exploits such a procedure as a black box. Our method iteratively adjusts the weights for each test-time input, concentrating the probability mass on the feasible region so that unconstrained inference becomes increasingly likely to generate an output that satisfies the constraints.

In this work, we focus on constraints that require the outputs to belong to an input-dependent context-free language L_x (CFL). The idea is to treat the output space of the neural network as the terminal symbols, and devise the appropriate production rules and non-terminals to express constraints on them. An advantage of employing CFLs over other formalisms such as first order logic (FOL) is that CFLs are intuitive for expressing constraints on the outputs, especially for language models and sequence-to-sequence networks. For example, when modeling Python or Java code, it is easy to express many of the desired programming language's constraints using a CFL, but cumbersome in FOL. Indeed, CFLs are an expressive class of languages.

To motivate our algorithm, we begin with the ideal optimization problem and argue that unlike for linear models with local constraints, the resulting Lagrangian is not well suited for globally constrained inference in neural networks. We ultimately settle on an alternative objective function that reasonably models our constrained inference problem. Although our algorithm lacks the theoretical guarantees enjoyed by classic relaxation algorithms, we nevertheless find it works well in practice.

Consider the following constrained inference problem for neural networks

    max_y Ψ(x, y, W)    s.t.  y ∈ L_x    (4)

Naively enforcing the constraint requires combinatorial discrete search, which is intractable in general. Instead, we prefer a smooth optimization problem with meaningful gradients to guide the search. With this in mind, let g(y, L) ↦ r for r ∈ R+ be a function that measures a loss between a sentence y and a grammar L such that g(y, L) = 0 if and only if there are no grammatical errors in y. That is, g(y, L) = 0 for the feasible region and is strictly positive everywhere else. For a large class of CFLs, g could be the least-errors count function (Lyon, 1974) or a weighted version thereof. We could then express CFL membership as an equality constraint and minimize the Lagrangian

    min_λ max_y Ψ(x, y, W) + λ g(y, L)    (5)

However, this dual optimization problem has a major flaw. Our constraints are global and do not necessarily factorize over the individual outputs. Consequently, there is just a single dual variable λ. Optimizing λ does little more than eliminate a single contour of output configurations at a time, resulting in a brute-force trial-and-error search.

Instead, observe that the network's weights control the negative energy of the output configurations. By properly adjusting the weights, we can affect the outcome of inference by removing mass from invalid outputs. The weights are likely to generalize much better than the single dual variable because in most neural networks, the weights are tied across space (e.g., CNNs) or time (e.g., RNNs). As a result, lowering the negative energy for a single invalid output has the effect of lowering the negative energy for an entire family of invalid outputs, enabling faster search.
With this in mind, we introduce an independent copy W_λ of the network's weights W and minimize with respect to these "dual weights" instead of the dual variable. This is powerful because we have effectively introduced an exponential number of "dual variables" (via the energy, which scores each output) that we can easily control via the weights; although similar, the new optimization is no longer equivalent to the original:

    min_{W_λ} max_y Ψ(x, y, W) + Ψ(x, y, W_λ) g(y, L)    (6)

While a step in the right direction, the objective still requires combinatorial search because (1) the maximization involves two non-linear neural networks and (2) a greedy decoding algorithm is unable to cope with the global loss g(·) because the constraints do not factorize over the individual outputs. In contrast, the functions involved in classic Lagrangian relaxation methods for NLP have multipliers for each output variable that can be combined with linear models to form a single unified decoding problem for which efficient inference exists (Koo et al., 2010; Rush et al., 2010; Rush & Collins, 2012). Since our non-linear functions and global constraints do not afford us the same ability, we must modify the optimization problem for a final time so that we can employ the network's efficient inference procedure as a black box. In particular, we (1) remove the negative-energy term that involves the original weights W and compensate with a regularizer that attempts to keep the dual weights W_λ as close to these weights as possible and (2) maximize exclusively over the network parameterized by W_λ. The result is a different optimization problem on which our algorithm is based:

    min_{W_λ} Ψ(x, y, W_λ) g(y, L_x) + ||W − W_λ||²    where    y = argmax_y Ψ(x, y, W_λ)    (7)

Informally, our algorithm alternates the maximization (by running efficient unconstrained inference) and minimization (by performing SGD) until it produces a feasible output or it exceeds a maximum number of iterations. For each test example, we re-initialize the dual weights to the trained weights to ensure the network does not deviate too far from the trained network. More precisely, see Algorithm 1.

Algorithm 1 Constrained inference for neural nets
Inputs: test instance x, input-specific CFL L_x, pretrained weights W
    W_λ ← W                                                        # reset instance-specific weights
    while not converged do
        y ← f(x; W_λ)                                              # perform inference using weights W_λ
        ∇ ← ∂/∂W_λ [ Ψ(x, y, W_λ) g(y, L_x) + ||W − W_λ||² ]       # compute constraint loss gradient
        W_λ ← W_λ − η∇                                             # update instance-specific weights with SGD or a variant thereof
    end while
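A compact code rendering of Algorithm 1 follows. Here model, energy, and constraint_loss are placeholders for the network's unconstrained inference procedure, Ψ, and g, and details such as the convergence test and optimizer are assumptions rather than the authors' implementation.

import copy
import torch

# Hedged sketch of Algorithm 1: nudge instance-specific weights W_lambda
# until unconstrained inference satisfies the constraints. `model(x)` runs
# unconstrained inference, `energy(x, y, model)` computes Psi differentiably
# with respect to the model's parameters, and `constraint_loss(y)` is g.
def constrained_inference(model, energy, constraint_loss, x,
                          lr=0.05, max_iters=100):
    trained = copy.deepcopy(model)          # keep W for the regularizer
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(max_iters):
        y = model(x)                        # unconstrained inference with W_lambda
        g = constraint_loss(y)
        if g == 0:                          # output already satisfies constraints
            break
        reg = sum(((p - q.detach()) ** 2).sum()
                  for p, q in zip(model.parameters(), trained.parameters()))
        loss = energy(x, y, model) * g + reg
        opt.zero_grad()
        loss.backward()                     # backprop into W_lambda only
        opt.step()
    return y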
4 APPLICATION TO PARSING

Consider the structured prediction problem of syntactic parsing in which the goal is to input a sentence comprising a sequence of tokens and output a tree describing the grammatical parse of the sentence. One way to model the problem with neural networks is to linearize the representation of the parse tree and then employ the familiar sequence-to-sequence model (Vinyals et al., 2015a).

Let us suppose we linearize the tree using a sequence of shift (s) and reduce (r, r!) commands that control an implicit shift-reduce parser. Intuitively, these commands describe the exact instructions for converting the input sentence into a complete parse tree: the interpretation of the symbol s is that we shift an input token onto the stack, the interpretation of the symbol r is that we start (or continue) reducing (popping) the top elements of the stack, and the interpretation of a third symbol ! is that we stop reducing and push the reduced result back onto the stack. Thus, given an input sentence and an output sequence of shift-reduce commands, we can deterministically recover the tree by simulating a shift-reduce parser. For example, the sequence ssrr!ssr!rr!rr! encodes a type-free version of the parse tree (S (NP the ball) (VP is (NP red))) for the input sentence "the ball is red". It is easy to recover the tree structure from the input sentence and the output commands by simulating a shift-reduce parser, performing one command at a time as prescribed by the classic algorithm.

Note that for output sequences to form a valid tree over the input, the sequence must satisfy a number of constraints. First, the number of shifts must equal the number of input tokens m_x, otherwise either the tree would not cover the entire input sentence or the tree would contain spurious terminal symbols. Second, the parser cannot issue a reduce command if there are no items left on the stack. Third, the number of reduces must be sufficient to leave just a single item, the root node, on the stack.

We can express most of these constraints with a CFL

    L = { G → sRr!,  R → sRr,  R → Rr!,  R → RR,  R → ε }    (8)

Intuitively, Rule 1 states that a valid shift-reduce command set must begin with a shift (since the stack is initially empty, there is nothing to reduce) and end with a reduce that places the final result on the stack. Rule 2 states that if we do a shift, then we need to reduce the shifted token at some point in the future. Rule 3 states that if we do not shift then we are allowed to reduce only if we also push the result on the stack. Rule 4 allows for multiple subtrees. Rule 5 is the base case.

Note, however, that this grammar is for a general-purpose shift-reduce language, but we need to constrain the number of shifts to equal the number of input tokens m_x. Since the constraint is a bit verbose to express with production rules, we can instead write the regular language (s(r!)∗)^{m_x}(r!)∗, where m is the number of elements in x, and intersect it with our CFL.

    L_x = L ∩ (s(r!)∗)^{m_x}(r!)∗    (9)

Rather than relying on a general-purpose algorithm to compute g(y, L_x) that measures the number of grammatical errors, we instead implement it specifically for our language. Let ct_{i=1}^{n}(b(i)) be the function that counts the number of times proposition b(i) is true. Now, define the following loss

    g(y, L_x) = (m − ct_i(y_i = s))²
              + ( Σ_i [ ct_{j>i}(y_j = r) − ct_{j>i}(y_j ∈ {s, !}) ] )²
              + ( ct_i(y_i = r) − ct_i(y_i ∈ {s, !}) )²    (10)

The first term measures the amount of violation due to the regular language and the second and third terms measure the amount of violation according to the CFL.
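One possible implementation of this counting loss is sketched below; the grouping of the sums follows our reading of Equation 10, so treat it as illustrative rather than the authors' exact code.

# Hedged sketch of the counting loss of Equation 10 for a shift-reduce
# command sequence y (symbols 's', 'r', '!') and an input of m tokens.
def shift_reduce_loss(y, m):
    n = len(y)
    shifts = sum(1 for c in y if c == 's')
    term1 = (m - shifts) ** 2                       # wrong number of shifts
    # Reduces after each position must be covered by enough s/! symbols.
    term2 = sum(
        sum(1 for c in y[i + 1:] if c == 'r') -
        sum(1 for c in y[i + 1:] if c in ('s', '!'))
        for i in range(n)) ** 2
    # Overall, reduces should balance the shifts and pushes.
    term3 = (sum(1 for c in y if c == 'r') -
             sum(1 for c in y if c in ('s', '!'))) ** 2
    return term1 + term2 + term3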
5 RELATED WORK

There has been recent work in applying neural networks to structured prediction problems. For example, the recent structured prediction energy networks (SPENs) combine graphical models and neural networks via an energy function defined over the output variables (Belanger & McCallum, 2016). SPENs focus on soft constraints (via the energy function) and perform inference by relaxing the binary output variables to be continuous and then backpropagating into them. In contrast, our method focuses on hard constraints and we backpropagate into the weights rather than into the outputs directly. We could combine our method with SPENs to handle soft constraints; for example, by back-propagating the output energy into the weights instead of the relaxed outputs themselves.

There has been recent work on applying neural networks to parsing problems that require the ability to handle hard constraints. For example, by employing a sequence-to-sequence network (Vinyals et al., 2015a) or a custom network designed for shift-reduce parsing (Dyer et al., 2016). The former requires the output to form a valid parse tree and hence employs post-processing to ensure this property. The latter satisfies constraints as part of the decoding process by sampling over a combinatorial space. Our approach does not rely on post-processing or discrete search.

task              inference       weights changed (W_λ)         conversion rate   accuracy
azbz              unconstrained   none                          0.0%              75.6%
                  constrained     all                           65.2%             82.4%
                  constrained     output only                   20.9%             77.8%
                  constrained     encoder only                  58.2%             82.5%
                  constrained     decoder only                  57.4%             82.3%
sr (no types)     unconstrained   none                          0.0%              84.0%
                  constrained     all                           81.8%             84.4%
sr (with types)   unconstrained   none                          0.0%              87.8%
                  constrained     all                           79.2%             88.3%
                  constrained     output only                   5.0%              88.1%
                  constrained     decoder (top layer)           36.2%             88.2%
                  constrained     decoder (all layers)          54.7%             88.3%
                  constrained     decoder (top) + attention     38.0%             88.1%
                  constrained     decoder (all) + attention     56.5%             88.2%

Table 1: Conversion rates on all three tasks with 100 steps of SGD. Note that satisfying the constraints has no negative effect on accuracy and often has a positive effect.

bzazbzazbzazazbzbzbzbzbz → zbaaazbaaazbaaaaaazbzbzbzbzb

iteration   output                            loss    accuracy
0           zbaaazbaaazbaaaaaazbzbzbaaazbzb   0.260   75.0
39          zbaaazbaaazbaaaaaazbzbzbaaazbzb   0.259   75.0
40          zbaaazbaaazbaaaaaazbzbzbaaazb     0.250   80.0
72          zbaaazbaaazbaaaaaazbzbzbaaazb     0.249   80.0
73          zbaaazbaaazbaaaaaazbzbzbzbzb      0.0     100.0

Table 2: An example for which enforcing the constraints improves accuracy. Red indicates errors. The output changes more than once before the constraints are finally enforced. Greedy decoding with constraints might correct this example because the spurious a's are at the end of the sequence.

Another intriguing approach is to distill the hard constraints into the weights at training time using a teacher network (Hu et al., 2016). The method is appealing because it does not require constrained inference or combinatorial search. However, the method must achieve a difficult balance between the loss due to the training data and the loss due to the constraint violations. Further, it would crucially rely on the network's ability to generalize the constraints learned on the training data to the testing data.

Finally, our method highly resembles dual decomposition and more generally Lagrangian relaxation for structured prediction (Koo et al., 2010; Rush et al., 2010; Rush & Collins, 2012). In such techniques, it is assumed that a computationally efficient inference algorithm can maximize over a superset of the feasible region (indeed this assumption parallels our exploitation of the fact that unconstrained inference in the neural network is efficient). Then, the method employs gradient descent to gradually concentrate this superset onto the feasible region until the constraints are satisfied. However, for computational reasons, these techniques assume that the constraints factorize over the output and that the functions are linear so that they can be combined into a single model. In contrast, we have a single dual variable, so we instead minimize with respect to the weights, which generalize better over the output. Further, we are unable to combine the dual into a single model over which we can do inference because the network is highly non-linear.

6 EXPERIMENTS

In this section we empirically evaluate our constrained inference procedure on two sequence-to-sequence tasks.
The first is a transduction task between two simple languages, which we describe next. The second is the sequence-to-sequence shift-reduce parsing task described in Section 4.

azazbzazbzbzazbzbzbzbzbz → aaaaaazbaaazbzbaaazbzbzbzbzb

iteration   output                          loss     accuracy
0           aaaaaazbaaazbaaazbzbzbzbaaazb   0.2472   66.7
1           aaaaaazbaaazbaaazbzbzbzbaaazb   0.2467   66.7
2           aaaaaazbaaazbaaazbzbzbzbaaazb   0.2462   66.7
3           aaaaaazbaaazbzbaaazbzbzbzbzb    0.0      100.0

Table 3: An example for which enforcing the constraints improves accuracy. Red indicates errors. Note that greedy decoding with constraints would not fix the errors in the middle since errors are made before constraints are violated. In contrast, the proposed method takes the constraints into account in a global manner, allowing earlier errors to be corrected by future constraint violations.

bzbzbzbzazbzbzazazazazbz → zbzbzbzbaaazbzbaaaaaaaaaaaazb

iteration   output                           loss     accuracy
0           zbzbzbzbaaazbaaaaaaaaaaaazbaaa   0.2954   74.2
4           zbzbzbzbzbaaaaaaaaazbzbaaaaaa    0.0      60.0

Table 4: An example for which enforcing the constraints degrades accuracy. Errors in red.

A transducer T: L1 → L2 is a function from a source language to a target language. For the purpose of the experiments, T is known and our goal is to learn it from data. We choose a transducer similar to those studied in recent work (Grefenstette et al., 2015). The source language L1 is (az|bz)∗ and the target language L2 is (aaa|zb)∗. The transducer is defined to map az to aaa and bz to zb. For example, T(bzazbz) ↦ zbaaazb. The training set comprises 1934 sequences of length 2-20 and the test set contains sentences of lengths 21-24. As is common practice, we employ shorter sentences for training to require generalization to longer sentences at test time.

We employ a thirty-two hidden unit single-layered, attentionless, sequence-to-sequence long short-term memory (LSTM) in which the decoder LSTM inputs the final encoder state at each time-step. The encoder and decoder LSTMs each have their own set of weights. We train the network for 1000 epochs using RMSProp to maximize the likelihood of the output (decoder) sequences in the training set. The network achieves perfect train accuracy while learning the rules of the output grammar nearly perfectly, even on the test set. However, despite learning the training set perfectly, the network fails to learn the input-specific constraint that the number of a's in the output should be three times the number in the input. We implement a loss for this constraint and evaluate how well our method enforces the constraint at test time (a short code sketch of this loss follows below):

    g(y, L_{x1}) = (n + m)^{-1} ( 3 Σ_{x_i} I(x_i = a) − Σ_{y_i} I(y_i = a) )²

where n + m, the combined input/output length, normalizes between 0 and 1. For constrained inference we run Algorithm 1 and employ vanilla stochastic gradient descent with a learning rate of 0.05 and no weight decay. We cap the number of iterations at a maximum of 100.

The top section of Table 1 contains the results for this azbz task. We use the term converted to refer to a sentence that initially had a constraint violation, but was later fixed by the constrained-inference procedure. The conversion rate is the percentage of such sentences that we convert: on this task, up to two-thirds. We experiment with which subset of the weights is best for satisfying the constraints, finding that it is best to modify them all.
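The constraint loss for this task is simple enough to state directly in code; the following is a sketch of the formula above, not the authors' implementation.

# Hedged sketch of the azbz constraint loss: the output should contain
# exactly three a's for every a in the input, normalized by the combined
# input/output length.
def azbz_loss(x, y):
    n, m = len(x), len(y)
    a_in = sum(1 for c in x if c == 'a')
    a_out = sum(1 for c in y if c == 'a')
    return (3 * a_in - a_out) ** 2 / (n + m)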
We also report accuracy to study an initial concern. Specifically, we had to omit the negative energy of the original weights W from our optimization problem, Equation 7, potentially allowing the network to find a set of dual weights W_λ that happen to satisfy the constraints, but that have poor performance. However, we found this not to be the case. In fact, we report the token-wise accuracy over the examples for which the unconstrained neural network violated constraints and find that, on the contrary, accuracy improves. Further, we find the regularizer is unnecessary since the initialization W_λ = W ensures the network never drifts too far.

In order to gain a better understanding of the algorithm's behavior, we provide data-cases that highlight both success and failure (Tables 2, 3, 4). The title of these tables is the input and the desired ground-truth output. The rows of the table show the network's output at each iteration (as indicated). The loss column is the constraint loss weighted by the output's energy, Ψ(x, y, W_λ) g(y, L_{x1}), and the final column is the token-wise accuracy between the output and the ground truth.

⟨" So it 's a very mixed bag . "⟩ → sssr!ssssrr!srrr!rr!ssrrrrrr!

iteration   output                          loss     accuracy
0           ssr!sr!ssssrrr!rr!ssrrrrrr!     0.0857   33.3%
11          ssr!sr!ssssrrr!rr!ssrrrrrr!     0.0855   33.3%
12          sssr!ssssrr!srrr!rr!ssrrrrrr!   0.0000   100.0%

Table 5: A shift-reduce example for which the method successfully enforces constraints. The initial output has only nine shifts, but there are ten tokens in the input. Enforcing the constraint not only corrects the number of shifts to ten, but changes the implied tree structure to the correct tree.

Table 2 contains an example for which our method successfully satisfies the constraints, resulting in perfect accuracy. However, because the constraint violation appears at the end of the string, a greedy decoder that opportunistically enforces constraints on the fly could potentially correct this error. In Table 3 we show a more interesting example for which such a greedy decoder would not be as successful. In particular, the unconstrained network outputs the final aaa too early in the sequence, but the constraint that controls the number of a's in the output is not violated until the end of the sequence. In contrast, our method takes the constraint into account globally, allowing the network to not only rectify the constraint, but to achieve perfect accuracy on the sentence (in just four gradient updates). Finally, in Table 4, we show an example for which enforcing the constraints hurts the accuracy. The updates cause the network to erroneously change outputs that were actually correct. This can happen if (a) the underlying network is sometimes inaccurate in its outputs or the confidence/probabilities thereon or (b) the gradient steps are too large, causing the network to completely leapfrog over the correct solution in a single step. The latter can be avoided by normalizing the constraint loss so it does not grow unbounded with the number of outputs and by erring on the side of a smaller learning rate.

We repeat the same experiment (middle section of Table 1), but on the shift-reduce parsing task described in Section 4. We convert the Wall Street Journal portion of the Penn Tree Bank (PTB) into shift-reduce commands and randomly split into 30k train and 9.2k test examples. We increase the number of hidden units to sixty-four to accommodate the larger input space (50k words) and employ Equation 10 (normalized by sequence length) for the constraint loss.
We measure the sequence-aligned token accuracy. Otherwise, we employ the exact same experimental parameters as the azbz task, both for training the LSTM and for our algorithm. We find that our algorithm performs even better on the real-world task, converting over 80% of the violated outputs. We again find that our procedure has no negative impact on accuracy, which in fact improves, but not as substantially as for the azbz task. Table 5 contains a successful example that we had previously highlighted in Section 1. The algorithm satisfies the constraints, and also corrects the remaining output errors.

Finally, we conduct a version of the shift-reduce experiment that includes the phrase types (e.g., noun-phrase (NP)). To accommodate the larger output space (the output alphabet size increases to 479), we employ a larger network with 128 hidden units, attention and three layers. Note that even this more sophisticated network fails to learn the constraints from data, and adding layers does not help. The larger network affords us the opportunity to experiment with modifying different subsets of weights for enforcing constraints. As seen in the last section of Table 1, modifying all the weights works best, converting 79.2% of the violating sentences; again without negatively affecting accuracy.

7 CONCLUSION

We presented an algorithm for satisfying constraints in neural networks that avoids combinatorial search, but employs the network's efficient unconstrained procedure as a black box. We evaluated the algorithm on two sequence-to-sequence tasks, a toy transducer problem and a real-world shift-reduce parsing problem. We found that the method was able to completely rectify up to 80% of violated outputs when capping the number of iterations at 100. Often, enforcing constraints caused the accuracy to improve, dispelling initial concerns that adjusting the weights at test-time would be treacherous. Our method currently lacks the same theoretical guarantees as classic Lagrangian relaxation methods, so in future work we want to focus on supplemental theory and additional objective functions. We also hope to extend the work to handle soft constraints, for example, as imposed by an external language model.

REFERENCES

David Belanger and Andrew McCallum. Structured prediction energy networks. In International Conference on Machine Learning, 2016.

David M. Blei, Andrew Bagnell, and Andrew K. McCallum. Learning with scope, with application to information extraction and classification. In Uncertainty in Artificial Intelligence (UAI), 2002.

Kyunghyun Cho, Bart Van Merriënboer, Çağlar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1724–1734. Association for Computational Linguistics, October 2014.

Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. Recurrent neural network grammars. In NAACL-HLT, pp. 199–209, 2016.

Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Neural Information Processing Systems (NIPS), 2015.

Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, and Eric P. Xing. Harnessing deep neural networks with logical rules. In Association for Computational Linguistics (ACL), 2016.

Terry Koo, Alexander M Rush, Michael Collins, Tommi Jaakkola, and David Sontag.
BysvGP5ee
Published as a conference paper at ICLR 2017VARIATIONAL LOSSY AUTOENCODERXi Chenyz, Diederik P. Kingmaz, Tim Salimansz, Yan Duanyz, Prafulla Dhariwalz,John Schulmanyz, Ilya Sutskeverz, Pieter AbbeelyzyUC Berkeley, Department of Electrical Engineering and Computer SciencezOpenAIfpeter,dpkingma,tim,rocky,prafulla,joschu,ilyasu,pieter g@openai.comABSTRACTRepresentation learning seeks to expose certain aspects of observed data in alearned representation that’s amenable to downstream tasks like classification. Forinstance, a good representation for 2D images might be one that describes onlyglobal structure and discards information about detailed texture. In this paper,we present a simple but principled method to learn such global representationsby combining Variational Autoencoder (V AE) with neural autoregressive modelssuch as RNN, MADE and PixelRNN/CNN. Our proposed V AE model allows usto have control over what the global latent code can learn and by designing thearchitecture accordingly, we can force the global latent code to discard irrelevantinformation such as texture in 2D images, and hence the V AE only “autoencodes”data in a lossy fashion. In addition, by leveraging autoregressive models as bothprior distribution p(z)and decoding distribution p(xjz), we can greatly improvegenerative modeling performance of V AEs, achieving new state-of-the-art resultson MNIST, OMNIGLOT and Caltech-101 Silhouettes density estimation tasks aswell as competitive results on CIFAR10.1 I NTRODUCTIONA key goal of representation learning is to identify and disentangle the underlying causal factors ofthe data, so that it becomes easier to understand the data, to classify it, or to perform other tasks(Bengio et al., 2013). For image data this often means that we are interested in uncovering the“global structure” that captures the content of an image (for example, the identity of objects presentin the image) and its “style”, but that we are typically less interested in the local and high frequencysources of variation such as the specific textures or white noise patterns.A popular approach for learning representations is to fit a probabilistic latent variable model, an ap-proach also known as analysis-by-synthesis (Yuille & Kersten, 2006; Nair et al., 2008). By learninga generative model of the data with the appropriate hierarchical structure of latent variables, it ishoped that the model will somehow uncover and untangle those causal sources of variations thatwe happen to be interested in. However, without further assumptions, representation learning viagenerative modeling is ill-posed: there are many different possible generative models with different(or no) kinds of latent variables that all encode the same probability density function on our ob-served data. Thus, the results we empirically get using this approach are highly dependent on thespecific architectural and modeling choices that are made. Moreover, the objective that we optimizeis often completely disconnected from the goal of learning a good representation: An autoregressivemodel of the data may achieve the same log-likelihood as a variational autoencoder (V AE) (Kingma& Welling, 2013), but the structure learned by the two models is completely different: the lattertypically has a clear hierarchy of latent variables, while the autoregressive model has no stochasticlatent variables at all (although it is conceivable that the deterministic hidden units of the autore-gressive models will have meaningful and useful representations). 
For this reason, autoregressive models have thus far not been popular for the purpose of learning representations, even though they are extremely powerful as generative models (see e.g. van den Oord et al., 2016a).
A natural question becomes: is it possible to have a model that is a powerful density estimator and at the same time has the right hierarchical structure for representation learning? A potential solution would be to use a hybrid model that has both the latent variable structure of a VAE as well as the powerful recurrence of an autoregressive model. However, earlier attempts at combining these two kinds of models have run into the problem that the autoregressive part of the model ends up explaining all structure in the data, while the latent variables are not used (Fabius & van Amersfoort, 2014; Chung et al., 2015; Bowman et al., 2015; Serban et al., 2016; Fraccaro et al., 2016; Xu & Sun, 2016). Bowman et al. (2015) noted that weakening the autoregressive part of the model by, for example, dropout can encourage the latent variables to be used. We analyze why weakening is necessary, and we propose a principled solution that takes advantage of this property to control what kind of information goes into the latent variables. The model we propose performs well as a density estimator, as evidenced by state-of-the-art log-likelihood results on MNIST, OMNIGLOT and Caltech-101, and it also has a structure that is uniquely suited for learning interesting global representations of data.
2 VAES DO NOT AUTOENCODE IN GENERAL
A VAE is frequently interpreted as a regularized autoencoder (Kingma & Welling, 2013; Zhang et al., 2016), but the conditions under which it is guaranteed to autoencode (reconstruction being close to the original datapoint) are not discussed. In this section, we discuss the often-neglected fact that VAEs do not always autoencode, and we give explicit reasons why previous attempts to apply VAE to sequence modeling found that the latent code is generally not used unless the decoder is weakened (Bowman et al., 2015; Serban et al., 2016; Fraccaro et al., 2016). Understanding when a VAE does autoencode will be an essential building block for VLAE.
2.1 TECHNICAL BACKGROUND
Let x be the observed variables, z the latent variables, and let p(x, z) be the parametric model of their joint distribution, called the generative model, defined over these variables. Given a dataset X = {x_1, ..., x_N} we wish to perform maximum likelihood learning of its parameters:
$$\log p(X) = \sum_{i=1}^{N} \log p(x^{(i)}), \tag{1}$$
but in general this marginal likelihood is intractable to compute or differentiate directly for flexible generative models that have high-dimensional latent variables and flexible priors and likelihoods. A solution is to introduce q(z|x), a parametric inference model defined over the latent variables, and optimize the variational lower bound on the marginal log-likelihood of each observation x:
$$\log p(x) \ge \mathbb{E}_{q(z|x)}[\log p(x, z) - \log q(z|x)] = \mathcal{L}(x; \theta), \tag{2}$$
where theta indicates the parameters of the p and q models.
There are various ways to optimize the lower bound L(x; theta); for continuous z it can be done efficiently through a re-parameterization of q(z|x) (Kingma & Welling, 2013; Rezende et al., 2014). This way of optimizing the variational lower bound with a parametric inference network and re-parameterization of continuous latent variables is usually called VAE. A minimal sketch of this estimator is given below.
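For concreteness, here is a minimal sketch (not the paper's code) of a one-sample Monte Carlo estimate of the bound in Eq. 2, for a Gaussian q(z|x), a standard-normal prior, and a factorized Bernoulli decoder; `encoder` and `decoder` are assumed modules producing the posterior parameters and the likelihood logits.

```python
import torch

def elbo(x, encoder, decoder):
    """One-sample estimate of L(x; theta) =
    E_q[log p(x|z)] - KL(q(z|x) || p(z)), with p(z) = N(0, I)."""
    mu, log_var = encoder(x)                     # parameters of q(z|x)
    eps = torch.randn_like(mu)
    z = mu + torch.exp(0.5 * log_var) * eps      # reparameterized sample
    logits = decoder(z)                          # Bernoulli p(x|z) logits
    log_px_z = -torch.nn.functional.binary_cross_entropy_with_logits(
        logits, x, reduction="none").sum(dim=-1) # log p(x|z) per datapoint
    # KL(N(mu, sigma^2) || N(0, I)) in closed form, summed over dimensions
    kl = 0.5 * (mu**2 + log_var.exp() - 1.0 - log_var).sum(dim=-1)
    return log_px_z - kl                         # lower bound per datapoint
```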
The "autoencoding" terminology comes from the fact that the lower bound L(x; theta) can be re-arranged:
$$\mathcal{L}(x;\theta) = \mathbb{E}_{q(z|x)}[\log p(x,z) - \log q(z|x)] \tag{3}$$
$$= \mathbb{E}_{q(z|x)}[\log p(x|z)] - D_{KL}(q(z|x)\,\|\,p(z)), \tag{4}$$
where the first term can be seen as the expectation of negative reconstruction error and the KL divergence term can be seen as a regularizer; as a whole this can be read as a regularized autoencoder loss, with q(z|x) the encoder and p(x|z) the decoder. In the context of 2D image modeling, the decoding distribution p(x|z) is usually chosen to be a simple factorized distribution, i.e. p(x|z) = prod_i p(x_i|z), and this setup often yields a sharp decoding distribution p(x|z) that tends to reconstruct the original datapoint x exactly.
2.2 BITS-BACK CODING AND INFORMATION PREFERENCE
It's straightforward to see that having a more powerful p(x|z) will make VAE's marginal generative distribution p(x) = int_z p(z) p(x|z) dz more expressive. This idea has been explored extensively in previous work applying VAE to sequence modeling (Fabius & van Amersfoort, 2014; Chung et al., 2015; Bowman et al., 2015; Serban et al., 2016; Fraccaro et al., 2016; Xu & Sun, 2016), where the decoding distribution is a powerful RNN with autoregressive dependency, i.e., p(x|z) = prod_i p(x_i | z, x_{<i}). Since RNNs are universal function approximators and any joint distribution over x admits an autoregressive factorization, the RNN autoregressive decoding distribution can in theory represent any probability distribution even without dependence on z.
However, previous attempts have found it hard to benefit from VAE when using an expressive decoding distribution p(x|z). Indeed it's documented in detail by Bowman et al. (2015) that in most cases when an RNN autoregressive decoding distribution is used, the latent code z is completely ignored and the model regresses to a standard unconditional RNN autoregressive distribution that doesn't depend on the latent code. This phenomenon is commonly attributed to "optimization challenges" of VAE in the literature (Bowman et al., 2015; Serban et al., 2016; Kaae Sonderby et al., 2016), because early in training the approximate posterior q(z|x) carries little information about the datapoint x, and hence it's easy for the model to set the approximate posterior to the prior to avoid paying any regularization cost D_KL(q(z|x) || p(z)).
Here we present a simple but often-neglected observation: this phenomenon arises not just from optimization challenges. Even if we could solve the optimization problem exactly, the latent code would still be ignored at the optimum for most practical instances of VAE that have intractable true posterior distributions and sufficiently powerful decoders. It is easiest to understand this observation from a Bits-Back Coding perspective of VAE.
It is well-known that Bits-Back Coding is an information-theoretic view of variational inference (Hinton & Van Camp, 1993; Honkela & Valpola, 2004), and specific links have been established between Bits-Back Coding and the Helmholtz Machine/VAE (Hinton & Zemel, 1994; Gregor et al., 2013). Here we briefly relate VAE to Bits-Back Coding to keep the discussion self-contained.
First recall that the goal of designing an efficient coding protocol is to minimize the expected code length of communicating x. To explain Bits-Back Coding, let's first consider a more naive coding scheme.
VAE can be seen as a way to encode data in a two-part code: p(z) and p(x|z), where z can be seen as the essence/structure of a datum and is encoded first, and the modeling error (the deviation from z's structure) is encoded next. The expected code length under this naive coding scheme for a given data distribution is hence:
$$C_{\text{naive}}(x) = \mathbb{E}_{x \sim \text{data},\, z \sim q(z|x)}\left[-\log p(z) - \log p(x|z)\right] \tag{5}$$
This coding scheme is, however, inefficient. Bits-Back Coding improves on it by noticing that the encoder distribution q(z|x) can be used to transmit additional information, up to H(q(z|x)) expected nats, as long as the receiver also has access to q(z|x). The decoding scheme works as follows: a receiver first decodes z from p(z), then decodes x from p(x|z), and, by running the same approximate posterior that the sender is using, decodes a secondary message from q(z|x). Hence, to properly measure the code length of VAE's two-part code, we need to subtract the extra information from q(z|x). Using Bits-Back Coding, the expected code length equals the negative variational lower bound, or the so-called Helmholtz variational free energy, which means minimizing code length is equivalent to maximizing the variational lower bound:
$$C_{\text{BitsBack}}(x) = \mathbb{E}_{x \sim \text{data},\, z \sim q(z|x)}\left[\log q(z|x) - \log p(z) - \log p(x|z)\right] \tag{6}$$
$$= \mathbb{E}_{x \sim \text{data}}\left[-\mathcal{L}(x)\right] \tag{7}$$
Casting the problem of optimizing VAE as designing an efficient coding scheme allows us to reason about when the latent code z will be used: the latent code z will be used when the two-part code is an efficient code. Recalling that the lower bound on the expected code length for data is given by the Shannon entropy of the data-generating distribution, H(data) = E_{x~data}[-log p_data(x)], we can analyze VAE's coding efficiency:
$$C_{\text{BitsBack}}(x) = \mathbb{E}_{x \sim \text{data},\, z \sim q(z|x)}\left[\log q(z|x) - \log p(z) - \log p(x|z)\right] \tag{8}$$
$$= \mathbb{E}_{x \sim \text{data}}\left[-\log p(x) + D_{KL}(q(z|x)\,\|\,p(z|x))\right] \tag{9}$$
$$\ge \mathbb{E}_{x \sim \text{data}}\left[-\log p_{\text{data}}(x) + D_{KL}(q(z|x)\,\|\,p(z|x))\right] \tag{10}$$
$$= H(\text{data}) + \mathbb{E}_{x \sim \text{data}}\left[D_{KL}(q(z|x)\,\|\,p(z|x))\right] \tag{11}$$
Since the Kullback-Leibler divergence is always non-negative, we know that the two-part code derived from VAE suffers an extra code length of at least D_KL(q(z|x) || p(z|x)) nats for using an imprecise posterior. Many previous works in variational inference have designed flexible approximate posteriors to better approximate the true posterior (Salimans et al., 2014; Rezende & Mohamed, 2015; Tran et al., 2015; Kingma et al., 2016). Improved posterior approximations have been shown to be effective in improving variational inference, but none of the existing methods can completely close the gap between the approximate and true posteriors. This leads us to believe that for most practical models, at least in the near future, the extra coding cost D_KL(q(z|x) || p(z|x)) will exist and will not be negligible.
Once we understand the inefficiency of the Bits-Back Coding mechanism, it's simple to see why the latent code z is sometimes not used: if p(x|z) can model p_data(x) without using information from z, then it will not use z; in that case the true posterior p(z|x) is simply the prior p(z), and it's easy to set q(z|x) to p(z) to avoid incurring the extra cost D_KL(q(z|x) || p(z|x)). This is exactly the case when a powerful decoding distribution such as an RNN autoregressive distribution is used, which given enough capacity can model arbitrarily complex distributions. The decomposition in Eqs. 8-11 can be checked numerically on a toy model, as sketched below.
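A toy numeric check, not from the paper (the distributions below are arbitrary illustrative choices): when the data distribution equals the model marginal, the bits-back code length equals H(data) plus the posterior-approximation gap.

```python
import numpy as np

# A tiny discrete "VAE": binary latent z, binary observation x.
p_z = np.array([0.5, 0.5])                      # prior p(z)
p_x_given_z = np.array([[0.9, 0.1],             # rows: z, cols: x
                        [0.2, 0.8]])
p_x = p_z @ p_x_given_z                         # marginal p(x)
# True posterior p(z|x) by Bayes' rule; q(z|x) is a deliberately
# imprecise approximation.
post = (p_z[:, None] * p_x_given_z) / p_x[None, :]   # post[z, x]
q = np.array([[0.7, 0.4],
              [0.3, 0.6]])                      # q[z, x], columns sum to 1

# Expected bits-back code length: E_x E_{z~q}[log q - log p(z) - log p(x|z)]
C = sum(p_x[x] * sum(q[z, x] * (np.log(q[z, x]) - np.log(p_z[z])
                                - np.log(p_x_given_z[z, x]))
                     for z in range(2))
        for x in range(2))
# Decomposition of Eq. 11: H(data) + E_x[KL(q(z|x) || p(z|x))]
H = -(p_x * np.log(p_x)).sum()
KL = sum(p_x[x] * sum(q[z, x] * np.log(q[z, x] / post[z, x])
                      for z in range(2))
         for x in range(2))
print(C, H + KL)   # the two quantities agree (in nats)
```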
Hence there is an information preference when a VAE is optimized: information that can be modeled locally by the decoding distribution p(x|z) without access to z will be encoded locally, and only the remainder will be encoded in z.
We note that one common way to encourage putting information into the code is to use a factorized decoder p(x|z) = prod_i p(x_i|z); but so long as there is one dimension x_j that is independent of all other dimensions under the true data distribution, p_data(x) = p_data(x_j) p_data(x_{!=j}), the latent code does not contain all the information about x, since at least x_j will be modeled locally by the factorized p(x|z). This kind of independence structure rarely exists in images, so common VAEs with a factorized decoder autoencode almost exactly. Other techniques to encourage usage of the latent code include annealing the relative weight of D_KL(q(z|x) || p(z)) in the variational lower bound (Bowman et al., 2015; Kaae Sonderby et al., 2016) or the use of free bits (Kingma et al., 2016), which can serve the dual purpose of smoothing the optimization landscape and canceling out part of the Bits-Back Coding inefficiency D_KL(q(z|x) || p(z|x)).
3 VARIATIONAL LOSSY AUTOENCODER
The discussion in Section 2.2 suggests that autoregressive models cannot naively be combined with VAE, since information will preferentially be modeled by the autoregressive part. Nevertheless, in this section we present two complementary classes of improvements to VAE that utilize autoregressive models fruitfully to explicitly control representation learning and improve density estimation.
3.1 LOSSY CODE VIA EXPLICIT INFORMATION PLACEMENT
Even though the information preference property of VAE might suggest that one should always use a full autoregressive model to achieve a better code length/log-likelihood, especially when slow data generation is not a concern, we argue that this property can instead be exploited to turn the VAE into a powerful representation learning method that gives us fine-grained control over the kind of information that gets included in the learned representation.
When we try to learn a lossy compression/representation of data, we can simply construct a decoding distribution that is capable of modeling the part of the information that we don't want the lossy representation to capture, but, critically, that is incapable of modeling the information that we do want the lossy representation to capture.
For instance, if we are interested in learning a global representation for 2D images that doesn't encode information about detailed texture, we can construct a specific factorization of the autoregressive distribution such that it has a small local receptive field as decoding distribution, e.g., p_local(x|z) = prod_i p(x_i | z, x_WindowAround(i)). Notice that, as long as x_WindowAround(i) is smaller than x_{<i}, p_local(x|z) won't be able to represent an arbitrarily complex distribution over x without dependence on z, since the receptive field is limited such that not all distributions over x admit such factorizations. In particular, the receptive-field window can be a small rectangle adjacent to a pixel x_i, in which case long-range dependency will be encoded in the latent code z. A sketch of such a masked local window is given below.
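The following is an illustrative numpy sketch (not the paper's code) of such a local causal window, using the AxB-plus-left-pixels convention that Section 4.3 later makes precise. It verifies that every window contains only pixels preceding i in raster order, so the product over i is a valid factorization.

```python
import numpy as np

def local_window_mask(H, W, A=5, B=3):
    """Binary masks M[i, j] = True iff pixel j (raster order) lies in the
    local causal window of pixel i: an A-wide, B-tall block directly above
    i, plus the ceil(A/2)-1 pixels to i's left on the same row."""
    left = -(-A // 2) - 1                      # ceil(A/2) - 1
    M = np.zeros((H * W, H * W), dtype=bool)
    for r in range(H):
        for c in range(W):
            i = r * W + c
            for dr in range(1, B + 1):         # rows above pixel i
                for dc in range(-(A // 2), A - A // 2):
                    rr, cc = r - dr, c + dc
                    if 0 <= rr < H and 0 <= cc < W:
                        M[i, rr * W + cc] = True
            for dc in range(1, left + 1):      # same-row pixels to the left
                if c - dc >= 0:
                    M[i, r * W + (c - dc)] = True
    return M

M = local_window_mask(8, 8)
# Every window only contains pixels preceding i in raster order, so
# prod_i p(x_i | z, window(i)) is a valid (restricted) factorization.
assert all(not M[i, i:].any() for i in range(64))
```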
On the other hand, if the true data distribution admits such a factorization for a given datum x and dimension i, i.e. p_data(x_i | x_WindowAround(i)) = p_data(x_i | x_{<i}), then the information preference property discussed in Section 2.2 applies, meaning that all the information will be encoded in the local autoregressive distribution for x_i. Local statistics of 2D images like texture will likely be modeled completely by a small local window, whereas global structural information of an image, such as the shapes of objects, is long-range dependency that can only be communicated through the latent code z. Therefore we have given an example VAE that will produce a lossy compression of 2D images carrying exclusively global information that can't be modeled locally.
Notice that a global representation is only one of many possible lossy representations that we can construct using this information preference property. For instance, the conditional of an autoregressive distribution might depend on a heavily down-sampled receptive field, so that it can only model long-range patterns, whereas local high-frequency statistics need to be encoded into the latent code. Hence we have demonstrated that we can achieve explicit placement of information by constraining the receptive field/factorization of the autoregressive distribution used as the decoding distribution.
We want to additionally emphasize that the information preference property is an asymptotic view, in the sense that it only holds when the variational lower bound can be optimized well. Thus we are not proposing an alternative to techniques like free bits (Kingma et al., 2016) or KL annealing; indeed, they are still useful methods for smoothing the optimization problem and are used in this paper's experiments.
3.2 LEARNED PRIOR WITH AUTOREGRESSIVE FLOW
Inefficiency in Bits-Back Coding, i.e., the mismatch between the approximate posterior and the true posterior, can be exploited to construct a lossy code, but it is still important to minimize such inefficiency to improve overall modeling performance/coding efficiency. We propose to parametrize the prior distribution p(z; theta) with an autoregressive model and show that a type of autoregressive latent code can in theory reduce the inefficiency of Bits-Back Coding.
It is well-known that limited approximate posteriors impede learning, and therefore various expressive posterior approximations have been proposed to improve VAE's density estimation performance (Turner et al., 2008; Mnih & Gregor, 2014; Salimans et al., 2014; Rezende & Mohamed, 2015; Kingma et al., 2016). One class of approximate posteriors that has been shown to attain good empirical performance is based on the idea of Normalizing Flow: apply an invertible mapping to a simple random variable, for example the factorized Gaussian commonly used for q(z|x), in order to obtain a more complicated random variable. For an invertible transformation between a simple distribution over y and a more flexible z, we know from the change-of-variable technique that log q(z|x) = log q(y|x) - log |det dz/dy|, and using this q(z|x) as the approximate posterior will decrease the coding efficiency gap D_KL(q(z|x) || p(z|x)), provided the transformation is sufficiently expressive. Kingma et al.
(2016) introduced Inverse Autoregressive Flow, a powerful class of such invertible mappings with a simple determinant:
$$z_i = \frac{y_i - \mu_i(y_{1:i-1})}{\sigma_i(y_{1:i-1})},$$
where mu_i(.) in R and sigma_i(.) in R+ are general functions that can be parametrized by expressive neural networks, such as MADE and PixelCNN variants (Germain et al., 2015; van den Oord et al., 2016a). Inverse autoregressive flow is the inverse/whitening of autoregressive flow:
$$y_i = z_i\,\sigma_i(y_{1:i-1}) + \mu_i(y_{1:i-1}).$$
We refer interested readers to (Rezende & Mohamed, 2015; Kingma et al., 2016) for in-depth discussions of related topics.
In this paper, we propose to parametrize our learnable prior as an autoregressive flow from some simple noise source such as a spherical Gaussian. Next, we show that using a latent code transformed by an autoregressive flow (AF) is equivalent to using an inverse autoregressive flow (IAF) approximate posterior, which explains why it can similarly improve Bits-Back Coding efficiency. Moreover, compared with an IAF posterior, an AF prior has a more expressive generative model that essentially "comes for free".
For an autoregressive flow f, some continuous noise source epsilon is transformed into the latent code z: z = f(epsilon). Assuming the density function of the noise source is u(epsilon), we similarly know that log p(z) = log u(epsilon) + log |det d(epsilon)/dz|.
Simply re-arranging the variational lower bound for an AF prior reveals that having an AF latent code z is equivalent to using an IAF posterior over epsilon, which we can interpret as the new latent code:
$$\mathcal{L}(x;\theta) = \mathbb{E}_{z \sim q(z|x)}\left[\log p(x|z) + \log p(z) - \log q(z|x)\right] \tag{12}$$
$$= \mathbb{E}_{z \sim q(z|x),\, \epsilon = f^{-1}(z)}\left[\log p(x|f(\epsilon)) + \log u(\epsilon) + \log\left|\det \tfrac{d\epsilon}{dz}\right| - \log q(z|x)\right] \tag{13}$$
$$= \mathbb{E}_{z \sim q(z|x),\, \epsilon = f^{-1}(z)}\Big[\log p(x|f(\epsilon)) + \log u(\epsilon) - \underbrace{\big(\log q(z|x) - \log\left|\det \tfrac{d\epsilon}{dz}\right|\big)}_{\text{IAF posterior}}\Big] \tag{14}$$
The AF prior is the same as an IAF posterior along the encoder path, f^{-1}(q(z|x)), but differs along the decoder/generator path: the IAF posterior has the shorter decoder path p(x|z), whereas the AF prior has the deeper decoder path p(x|f(epsilon)). The crucial observation is that the AF prior and the IAF posterior have the same computational cost under the expectation of z ~ q(z|x), so using an AF prior makes the model more expressive at no extra training-time cost. A minimal sketch of a mean-only AF prior is given below.
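This sketch is a simplification for illustration, not the paper's implementation (which stacks 4 flow steps, each a 3-layer MADE): a single strictly-masked linear map plays the role of mu, and the flow is mean-only, so the log-determinant vanishes.

```python
import math
import torch

class MeanOnlyAFPrior(torch.nn.Module):
    """Mean-only autoregressive flow prior: z_i = eps_i + mu_i(z_{1:i-1}),
    with eps ~ N(0, I)."""
    def __init__(self, dim):
        super().__init__()
        self.weight = torch.nn.Parameter(0.01 * torch.randn(dim, dim))
        # Strictly lower-triangular mask: mu_i sees only z_{<i}.
        self.register_buffer("mask", torch.tril(torch.ones(dim, dim), -1))

    def mu(self, z):
        return z @ (self.weight * self.mask).t()

    def log_prob(self, z):
        # Inverse pass is parallel; mean-only => |det| = 1, so
        # log p(z) = log u(eps) with u a standard Gaussian (cf. Sect. 3.2).
        eps = z - self.mu(z)
        return (-0.5 * (eps ** 2 + math.log(2 * math.pi))).sum(-1)

    def sample(self, n):
        # Forward pass is sequential in i: the extra cost of an AF prior
        # is paid at generation time, not at training time.
        dim = self.weight.shape[0]
        eps = torch.randn(n, dim)
        z = torch.zeros(n, dim)
        for i in range(dim):
            z[:, i] = eps[:, i] + self.mu(z)[:, i]
        return z
```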
4 EXPERIMENTS
In this paper, we evaluate VLAE on 2D images and leave extensions to other forms of data to future work. For the rest of the section, we define a VLAE model as a VAE that uses an AF prior and an autoregressive decoder. We implement the conditional distribution p(x|z) with a small-receptive-field PixelCNN (van den Oord et al., 2016a), which has proven to be a scalable autoregressive model.
For evaluation, we use binary image datasets that are commonly used for density estimation tasks: MNIST (LeCun et al., 1998) (both the statically binarized(1) and the dynamically binarized version (Burda et al., 2015a)), OMNIGLOT (Lake et al., 2013; Burda et al., 2015a) and Caltech-101 Silhouettes (Marlin et al., 2010). All datasets uniformly consist of 28x28 binary images, which allows us to use a unified architecture. The VAE networks used on the binary image datasets are simple variants of the ResNet VAEs described in (Salimans et al., 2014; Kingma et al., 2016). For the decoder, we use a variant of PixelCNN with 6 layers of masked convolution with filter size 3, which means the window of dependency, x_WindowAround(i), is limited to a small local patch. During training, "free bits" (Kingma et al., 2016) is used to improve optimization stability. The experimental setup and hyperparameters are detailed in the appendix. Reported marginal NLL is estimated using importance sampling with 4096 samples.
(1) We use the version provided by Hugo Larochelle.
We designed experiments to answer the following questions:
- Can VLAE learn lossy codes that encode global statistics?
- Does using an AF prior improve upon using an IAF posterior, as predicted by theory?
- Do autoregressive decoding distributions improve density estimation performance?
4.1 LOSSY COMPRESSION
First we are interested in whether VLAE can learn a lossy representation/compression of data by using the PixelCNN decoder to model local statistics. We trained a VLAE model on statically binarized MNIST; the converged model has E[D_KL(q(z|x) || p(z))] = 13.3 nats = 19.2 bits, which is the number of bits it uses on average to encode/compress one MNIST image. By comparison, an identical VAE model with a factorized decoding distribution uses on average 37.3 bits in the latent code; this indicates that VLAE learns a lossier compression than a VAE with a regular factorized conditional distribution.
The next question is whether VLAE's lossy compression encodes global statistics and discards local statistics. In Fig. 1a, we visualize original images x_data and one random "decompression" x_decompressed from VLAE: z ~ q(z|x_data), x_decompressed ~ p(x|z). We observe that none of the decompressions is an exact reconstruction of the original image; instead, the global structure of the image was encoded in the lossy code z and regenerated. Also worth noting is that local statistics are not preserved, but a new set of likely local statistics is generated in the decompressed images: the binary masks are usually different, and local styles like stroke width sometimes differ slightly.
Figure 1: Statically binarized MNIST. (a) Original test-set images (left) and "decompressed" versions from VLAE's lossy code (right). (b) Samples from VLAE.
However, we remark that the lossy code z doesn't always capture the kind of global information that we care about; this depends on the type of constraint we put on the decoder. For instance, in Fig. 2a, we show decompressions for the OMNIGLOT dataset, which has more meaningful variation in small patches than MNIST, and we can observe that semantics are not preserved in some cases. This highlights the need to specify the type of statistics we care about in a representation, which will differ across tasks and datasets, and to design the decoding distribution accordingly.
Figure 2: OMNIGLOT. (a) Original test-set images (left) and "decompressed" versions from VLAE's lossy code (right). (b) Samples from VLAE.
4.2 DENSITY ESTIMATION
Next we investigate whether leveraging autoregressive models as the latent distribution p(z) and as the decoding distribution p(x|z) improves density estimation performance.
To verify whether the AF prior improves upon an IAF posterior alone, it is desirable to test this model without an autoregressive decoder, instead using the conventional independent Bernoulli distribution for p(x|z). Hence we use the best-performing model from Kingma et al. (2016) on statically binarized MNIST and make the single modification of replacing the original IAF posterior with an equivalent AF prior, removing the context.
Table 1: Statically binarized MNIST
Model                                          NLL Test
Normalizing flows (Rezende & Mohamed, 2015)    85.10
DRAW (Gregor et al., 2015)                     < 80.97
Discrete VAE (Rolfe, 2016)                     81.01
PixelRNN (van den Oord et al., 2016a)          79.20
IAF VAE (Kingma et al., 2016)                  79.88
AF VAE                                         79.30
VLAE                                           79.03
As seen in Table 1, the VAE with an AF prior outperforms the VAE with an equivalent IAF posterior, indicating that the deeper generative model from the AF prior is beneficial. A similar gain carries over when an autoregressive decoder is used: on statically binarized MNIST, using an AF prior instead of an IAF posterior reduces train NLL by 0.8 nats and test NLL by 0.6 nats.
Next we evaluate whether using an autoregressive decoding distribution improves performance, and we show in Table 1 that a VLAE model, with AF prior and PixelCNN conditional, outperforms a VAE with just an AF prior and achieves new state-of-the-art results on statically binarized MNIST.
In addition, we hypothesize that the separation of different types of information (global structure in the latent code, local statistics in the PixelCNN) likely provides a good inductive bias for 2D images. In order to evaluate whether VLAE is an expressive density estimator with good inductive biases, we test a single VLAE model, with the same network architecture, on all binary datasets. We choose hyperparameters manually on statically binarized MNIST and use the same hyperparameters to evaluate on dynamically binarized MNIST, OMNIGLOT and Caltech-101 Silhouettes. We note that better performance can be obtained by tuning hyperparameters for each dataset individually; as a concrete demonstration, we report the performance of a fine-tuned VLAE on the OMNIGLOT dataset in Table 3.
Table 2: Dynamically binarized MNIST
Model                                              NLL Test
Convolutional VAE + HVI (Salimans et al., 2014)    81.94
DLGM 2hl + IWAE (Burda et al., 2015a)              82.90
Discrete VAE (Rolfe, 2016)                         80.04
LVAE (Kaae Sonderby et al., 2016)                  81.74
DRAW + VGP (Tran et al., 2015)                     < 79.88
IAF VAE (Kingma et al., 2016)                      79.10
Unconditional Decoder                              87.55
VLAE                                               78.53
Table 3: OMNIGLOT. [1] (Burda et al., 2015a), [2] (Burda et al., 2015b), [3] (Gregor et al., 2015), [4] (Gregor et al., 2016)
Model                       NLL Test
VAE [1]                     106.31
IWAE [1]                    103.38
RBM (500 hidden) [2]        100.46
DRAW [3]                    < 96.50
Conv DRAW [4]               < 91.00
Unconditional Decoder       95.02
VLAE                        90.98
VLAE (fine-tuned)           89.83
Table 4: Caltech-101 Silhouettes. [1] (Bornschein & Bengio, 2014), [2] (Cho et al., 2011), [3] (Du et al., 2015), [4] (Rolfe, 2016), [5] (Goessling & Amit, 2015)
Model                       NLL Test
RWS SBN [1]                 113.3
RBM [2]                     107.8
NAIS NADE [3]               100.0
Discrete VAE [4]            97.6
SpARN [5]                   88.48
Unconditional Decoder       89.26
VLAE                        77.36
As seen in Tables 2, 3 and 4, with the same set of hyperparameters tuned on statically binarized MNIST, VLAE performs well on the remaining datasets, significantly exceeding previous state-of-the-art results on dynamically binarized MNIST and Caltech-101 Silhouettes and tying statistically with the best previous result on OMNIGLOT. In order to isolate the effect of the expressive PixelCNN decoder, we also report the performance of the same PixelCNN trained without the VAE part, under the name "Unconditional Decoder".
4.3 NATURAL IMAGES: CIFAR10
In addition to the binary image datasets, we have applied VLAE to the CIFAR10 dataset of natural images. Density estimation of CIFAR10 images has been a challenging benchmark used by many recent generative models, and it is hence a good task on which to position VLAE among existing methods.
We investigated using ResNet (He et al., 2016) and DenseNet (Huang et al., 2016) as building blocks for the VAE networks and observed that DenseNet reduces overfitting.
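Table 5 below reports results in bits per dimension. As a quick reference (not from the paper), the standard conversion from a per-image negative log-likelihood in nats to bits/dim for 32x32x3 CIFAR10 images is:

```python
import math

def bits_per_dim(nll_nats, dims=32 * 32 * 3):
    """Convert a per-image NLL in nats to the bits-per-dimension metric
    commonly used for CIFAR10 density estimation."""
    return nll_nats / (dims * math.log(2))

# e.g., a model assigning ~6283 nats per image (illustrative number)
# scores about 2.95 bits/dim
print(round(bits_per_dim(6283.0), 2))
```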
We also propose a new optimization technique that blends the advantages of KL annealing (Serban et al., 2016) and "free bits" (Kingma et al., 2016) to stabilize learning on this challenging dataset. The detailed experimental setup is described in the Appendix.
VLAE is compared to other methods on CIFAR10 in Table 5. We show that VLAE models attain new state-of-the-art performance among variationally trained latent-variable models. The DenseNet VLAE model also outperforms most other tractable-likelihood models, including Gated PixelCNN and PixelRNN, with results only slightly worse than the currently unarchived state-of-the-art PixelCNN++.
Table 5: CIFAR10. Likelihood for VLAE is approximated with 512 importance samples. [1] (van den Oord et al., 2016a), [2] (Dinh et al., 2014), [3] (van den Oord & Schrauwen, 2014), [4] (Dinh et al., 2016), [5] (van den Oord et al., 2016b), [6] (Salimans et al., 2017), [7] (Sohl-Dickstein et al., 2015), [8] (Gregor et al., 2016), [9] (Kingma et al., 2016)
Method                                    bits/dim
Results with tractable likelihood models:
Uniform distribution [1]                  8.00
Multivariate Gaussian [1]                 4.70
NICE [2]                                  4.48
Deep GMMs [3]                             4.00
Real NVP [4]                              3.49
PixelCNN [1]                              3.14
Gated PixelCNN [5]                        3.03
PixelRNN [1]                              3.00
PixelCNN++ [6]                            2.92
Results with variationally trained latent-variable models:
Deep Diffusion [7]                        5.40
Convolutional DRAW [8]                    3.58
ResNet VAE with IAF [9]                   3.11
ResNet VLAE                               3.04
DenseNet VLAE                             2.95
We also investigate learning lossy codes on CIFAR10 images. To illustrate how the receptive field size of the PixelCNN decoder influences the properties of the learned latent codes, we show visualizations of similar VLAE models with receptive fields of different sizes. Specifically, we say a receptive field, x_WindowAround(i), has size AxB when a pixel x_i can depend on the rectangular block of size AxB immediately on top of x_i, as well as the ceil(A/2)-1 pixels immediately to the left of x_i. We use this notation to refer to the different types of PixelCNN decoders in Figure 3.
From (a)-(c) in Figure 3, we can see that larger receptive fields progressively make the autoregressive decoder capture more structural information. In (a), a smaller receptive field tends to preserve rather detailed shape information in the lossy code, whereas with the larger receptive field in (c) the latent code retains only a rough shape.
Figure 3: CIFAR10: original test-set images (left) and "decompressed" versions from VLAE's lossy code (right) with different types of receptive fields: (a) 4x2, (b) 5x3, (c) 7x4, (d) 7x4 Grayscale.
It is interesting to note that in (a)-(c), color information is often partially omitted from the latent codes; one explanation is that color is very predictable locally. However, color information can be important to preserve if our task is, for example, object classification. To demonstrate how we can encode color information in the lossy code, we can make the PixelCNN decoder depend only on the images' grayscale versions. In other words, instead of choosing the decoder to be p_local(x|z) = prod_i p(x_i | z, x_WindowAround(i)), we use a decoder of the form p_local(x|z) = prod_i p(x_i | z, Grayscale(x_WindowAround(i))), as sketched below.
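An illustrative numpy sketch (not the paper's code) of this "grayscale receptive field": the decoder's local context for a pixel is first collapsed to luminance, so any color detail must be communicated through the latent code z.

```python
import numpy as np

def grayscale(window_rgb):
    """window_rgb: (..., 3) array of RGB values in [0, 1].
    Transform from Appendix B: 0.299R + 0.587G + 0.114B."""
    r, g, b = window_rgb[..., 0], window_rgb[..., 1], window_rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

x = np.random.rand(32, 32, 3)          # stand-in for a CIFAR10 image
r0, c0, A, B = 10, 10, 7, 4            # pixel of interest, 7x4 field
# AxB block above the pixel (the ceil(A/2)-1 left pixels are handled
# analogously and omitted here for brevity).
window = x[max(r0 - B, 0):r0, c0 - A // 2:c0 + A - A // 2]
context = grayscale(window)            # all the decoder is allowed to see
# p(x_i | z, context): color at pixel (r0, c0) can only come from z.
```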
In (d) of Figure 3, we visualize lossy codes for a VLAE that has the same receptive field size as (c) but uses a "grayscale receptive field". We note that the lossy codes in (d) encode roughly the same structural information as those in (c), but generally generate objects that are more recognizable due to the preservation of color information. This serves as one example of how we can design the lossy latent code carefully to encode what is important and what is not.
5 RELATED WORK
We investigate a fusion between variational autoencoders with continuous latent variables (Kingma & Welling, 2013; Rezende et al., 2014) and neural autoregressive models. For autoregression, we specifically apply a novel type of architecture where autoregression is realised through a carefully constructed deep convolutional network, introduced in the PixelCNN model for images (van den Oord et al., 2016a,b). This family of convolutional autoregressive models was further explored and extended: for audio in WaveNet (Oord et al., 2016), for video in Video Pixel Networks (Kalchbrenner et al., 2016b) and for language in ByteNet (Kalchbrenner et al., 2016a).
The combination of latent variables with an expressive decoder was previously explored using recurrent networks, mainly in the context of language modeling (Chung et al., 2015; Bowman et al., 2015; Serban et al., 2016; Fraccaro et al., 2016; Xu & Sun, 2016). Bowman et al. (2015) also proposed to weaken an otherwise too expressive decoder by dropout to force some information into the latent codes.
Concurrently with our work, PixelVAE (Gulrajani et al., 2016) also explored using a conditional PixelCNN as a VAE's decoder and has obtained impressive density modeling results through the use of multiple levels of stochastic units.
Using an autoregressive model on the latent code was explored in the context of discrete latent variables in DARN (Gregor et al., 2013). Kingma et al. (2016), Kaae Sonderby et al. (2016), Gregor et al. (2016) and Salimans (2016) explored VAE architectures with an explicitly deep autoregressive prior for continuous latent variables, but the autoregressive data likelihood is intractable in those architectures and needs to be inferred variationally. In contrast, we use multiple steps of autoregressive flows that have exact likelihood, and we analyze the effect of using an expressive latent code.
Optimization challenges in using (all levels of) continuous latent codes were discussed before, and practical solutions have been proposed (Bowman et al., 2015; Kaae Sonderby et al., 2016; Kingma et al., 2016). In this paper, we present a complementary perspective on when/how the latent code should be used by appealing to a Bits-Back interpretation of VAE.
Learning a lossy compressor with a latent variable model has been investigated with ConvDRAW (Gregor et al., 2016). It learns a hierarchy of latent variables, and using only the high-level latent variables results in a lossy compression that performs similarly to JPEG. Our model similarly learns a lossy compressor, but it uses an autoregressive model to explicitly control what kind of information should be lost in compression.
6 CONCLUSION
In this paper, we analyze the condition under which the latent code in a VAE should be used, i.e. when a VAE autoencodes, and we use this observation to design a VAE model that is a lossy compressor of observed data.
At modeling level, we propose two complementary improvements to V AE that areshown to have good empirical performance.VLAE has the appealing properties of controllable representation learning and improved densityestimation performance but these properties come at a cost: compared with V AE models that havesimple prior and decoder, VLAE is slower at generation due to the sequential nature of autoregres-sive model.Moving forward, we believe it’s exciting to extend this principle of learning lossy codes to otherforms of data, in particular those that have a temporal aspect like audio and video. Another promis-ing direction is to design representations that contain only information for downstream tasks andutilize those representations to improve semi-supervised learning.REFERENCESYoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and newperspectives. IEEE transactions on pattern analysis and machine intelligence , 35(8):1798–1828,2013.J ̈org Bornschein and Yoshua Bengio. Reweighted wake-sleep. arXiv preprint arXiv:1406.2751 ,2014.Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Ben-gio. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349 , 2015.11Published as a conference paper at ICLR 2017Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. arXivpreprint arXiv:1509.00519 , 2015a.Yuri Burda, Roger B Grosse, and Ruslan Salakhutdinov. Accurate and conservative estimates of mrflog-likelihood using reverse annealing. In AISTATS , 2015b.KyungHyun Cho, Tapani Raiko, and Alexander T Ihler. Enhanced gradient and adaptive learningrate for training restricted boltzmann machines. In Proceedings of the 28th International Confer-ence on Machine Learning (ICML-11) , pp. 105–112, 2011.Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Ben-gio. A recurrent latent variable model for sequential data. In Advances in neural informationprocessing systems , pp. 2980–2988, 2015.Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: non-linear independent components esti-mation. arXiv preprint arXiv:1410.8516 , 2014.Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP. arXivpreprint arXiv:1605.08803 , 2016.Chao Du, Jun Zhu, and Bo Zhang. Learning deep generative models with doubly stochastic mcmc.arXiv preprint arXiv:1506.04557 , 2015.Otto Fabius and Joost R van Amersfoort. Variational recurrent auto-encoders. arXiv preprintarXiv:1412.6581 , 2014.Marco Fraccaro, Søren Kaae Sønderby, Ulrich Paquet, and Ole Winther. Sequential neural modelswith stochastic layers. arXiv preprint arXiv:1605.07571 , 2016.Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. Made: Masked autoencoderfor distribution estimation. arXiv preprint arXiv:1502.03509 , 2015.Marc Goessling and Yali Amit. Sparse autoregressive networks. arXiv preprint arXiv:1511.04776 ,2015.Karol Gregor, Andriy Mnih, and Daan Wierstra. Deep AutoRegressive Networks. arXiv preprintarXiv:1310.8499 , 2013.Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW: A recurrent neural networkfor image generation. arXiv preprint arXiv:1502.04623 , 2015.Karol Gregor, Frederic Besse, Danilo Jimenez Rezende, Ivo Danihelka, and Daan Wierstra. Towardsconceptual compression. arXiv preprint arXiv:1604.08772 , 2016.Ishaan Gulrajani, Kundan Kumar, Faruk Ahmed, Adrien Ali Taiga, Francesco Visin, David Vazquez,and Aaron Courville. 
Pixelvae: A latent variable model for natural images. arXiv preprintarXiv:1611.05013 , 2016.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residualnetworks. arXiv preprint arXiv:1603.05027 , 2016.Geoffrey E Hinton and Drew Van Camp. Keeping the neural networks simple by minimizing thedescription length of the weights. In Proceedings of the sixth annual conference on Computationallearning theory , pp. 5–13. ACM, 1993.Geoffrey E Hinton and Richard S Zemel. Autoencoders, minimum description length, andHelmholtz free energy. Advances in neural information processing systems , pp. 3–3, 1994.Antti Honkela and Harri Valpola. Variational learning and bits-back coding: an information-theoretic view to bayesian learning. IEEE Transactions on Neural Networks , 15(4):800–810,2004.Gao Huang, Zhuang Liu, Kilian Q Weinberger, and Laurens van der Maaten. Densely connectedconvolutional networks. arXiv preprint arXiv:1608.06993 , 2016.12Published as a conference paper at ICLR 2017Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther.How to train deep variational autoencoders and probabilistic ladder networks. arXiv preprintarXiv:1602.02282 , 2016.Nal Kalchbrenner, Lasse Espheholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and KorayKavukcuoglu. eural machine translation in linear time. arXiv preprint arXiv:1610.00527 , 2016a.Nal Kalchbrenner, Aaron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, AlexGraves, and Koray Kavukcuoglu. Video pixel networks. arXiv preprint arXiv:1610.00527 , 2016b.Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. Proceedings of the 2ndInternational Conference on Learning Representations , 2013.Diederik P Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverseautoregressive flow. arXiv preprint arXiv:1606.04934 , 2016.Brenden M Lake, Ruslan R Salakhutdinov, and Josh Tenenbaum. One-shot learning by inverting acompositional causal process. In Advances in neural information processing systems , pp. 2526–2534, 2013.Yann LeCun, L ́eon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied todocument recognition. Proceedings of the IEEE , 86(11):2278–2324, 1998.Benjamin M Marlin, Kevin Swersky, Bo Chen, and Nando de Freitas. Inductive principles forrestricted boltzmann machine learning. In AISTATS , pp. 509–516, 2010.Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXivpreprint arXiv:1402.0030 , 2014.Vinod Nair, Josh Susskind, and Geoffrey E Hinton. Analysis-by-synthesis by learning to invertgenerative black boxes. In International Conference on Artificial Neural Networks , pp. 971–981.Springer, 2008.Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves,Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model forraw audio. arXiv preprint arXiv:1609.03499 , 2016.Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In Proceedingsof The 32nd International Conference on Machine Learning , pp. 1530–1538, 2015.Danilo J Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approx-imate inference in deep generative models. In Proceedings of the 31st International Conferenceon Machine Learning (ICML-14) , pp. 1278–1286, 2014.Jason Tyler Rolfe. Discrete variational autoencoders. arXiv preprint arXiv:1609.02200 , 2016.Tim Salimans. 
A structured variational auto-encoder for learning deep hierarchies of sparse features.arXiv preprint arXiv:1602.08734 , 2016.Tim Salimans, Diederip P. Kingma, and Max Welling. Markov chain Monte Carlo and variationalinference: Bridging the gap. arXiv preprint arXiv:1410.6460 , 2014.Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. Pixelcnn++: Improving thepixelcnn with discretized logistic mixture likelihood and other modifications. arXiv preprintarXiv:1701.05517 , 2017.Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, AaronCourville, and Yoshua Bengio. A hierarchical latent variable encoder-decoder model for gen-erating dialogues. arXiv preprint arXiv:1605.06069 , 2016.Jascha Sohl-Dickstein, Eric A Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsuper-vised learning using nonequilibrium thermodynamics. arXiv preprint arXiv:1503.03585 , 2015.Dustin Tran, Rajesh Ranganath, and David M Blei. Variational gaussian process. arXiv preprintarXiv:1511.06499 , 2015.13Published as a conference paper at ICLR 2017Richard E Turner, Pietro Berkes, and Maneesh Sahani. Two problems with variational expectationmaximisation for time-series models. In Proceedings of the Workshop on Inference and Estima-tion in Probabilistic Time-Series Models , pp. 107–115, 2008.Aaron van den Oord and Benjamin Schrauwen. Factoring variations in natural images with deepgaussian mixture models. In Advances in Neural Information Processing Systems , pp. 3518–3526, 2014.Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks.arXiv preprint arXiv:1601.06759 , 2016a.Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Ko-ray Kavukcuoglu. Conditional image generation with pixelcnn decoders. arXiv preprintarXiv:1606.05328 , 2016b.Weidi Xu and Haoze Sun. Semi-supervised variational autoencoders for sequence classification.arXiv preprint arXiv:1603.02514 , 2016.Alan Yuille and Daniel Kersten. Vision as bayesian inference: analysis by synthesis? Trends incognitive sciences , 10(7):301–308, 2006.Biao Zhang, Deyi Xiong, and Jinsong Su. Variational neural machine translation. arXiv preprintarXiv:1605.07869 , 2016.14Published as a conference paper at ICLR 2017APPENDIXA D ETAILED EXPERIMENT SETUP FOR BINARY IMAGESFor V AE’s encoder and decoder, we use the same ResNet (He et al., 2015) V AE architecture as theone used in IAF MNIST experiment (Kingma et al., 2016). The only difference is that the decodernetwork now, instead of outputing a 28x28x1 spatial feature map to specify the mean of a factorizedbernoulli distribution, outputs a 28x28x4 spatial feature map that’s concatenated with the originalbinary image channel-wise, forming a 28x28x5 feature map that’s then fed through a typical maskedPixelCNN (van den Oord et al., 2016a). As such even though the PixelCNN conditions on the latentcode, we don’t call it a Conditional PixelCNN because it doesn’t use the specific architecture thatwas proposed in van den Oord et al. (2016b). For the PixelCNN, it has 6 masked convolution layerswith 12 3x3 filters organized in ResNet blocks, and it has 4 additional 1x1 convolution ResNet blockbetween every other masked convolution layer to increase processing capacity since it employs fewermasked convolutions than usual. All the masked convolution layer have their weights tied to reduceoverfitting on statically binarized MNIST, and untying the weights will increase performance forother datasets. 
Experiments are tuned on the validation set and then final experiment was run withtrain and validation set, with performance evaluated with test set. Exponential Linear Units (Clevertet al., 2015) are used as activation functions in both V AE network and PixelCNN network. Weightnormalization is everywhere with data-dependent initialization (Salimans & Kingma, 2016).A latent code of dimension 64was used. For AF prior, it’s implemented with MADE (Germainet al., 2015) as detailed in Kingma et al. (2016). We used 4steps of autoregressive flow and eachflow is implemented by a 3-layer MADE that has 640 hidden units and uses Relu (Nair & Hinton,2010) as activation functions. Differing from the practice of Kingma et al. (2016), we use mean-onlyautoregressive flow, which we found to be more numerically stable.In terms of training, Adamax (Kingma & Ba, 2014) was used with a learning rate of 0:002.0:01nats/data-dim free bits (Kingma et al., 2016) was found to be effective in dealing with the problemof all the latent code being ignored early in training. Polyak averaging (Polyak & Juditsky, 1992)was used to compute the final parameters, with = 0:998.All experiments are implemented using TensorFlow (Abadi et al., 2016).B A DDITIONAL EXPERIMENT SETUP FOR CIFAR10Latent codes are represented by 16feature maps of size 8x8, and this choice of spatial stochas-tic units are inspired by ResNet IAF V AE (Kingma et al., 2016). Prior distribution is factorizedGaussian noise transformed by 6autoregressive flows, each of which is implemented by a Pixel-CNN (van den Oord et al., 2016a) with 2hidden layers and 128feature maps. Between every otherautoregressive flow, the ordering of stochastic units is reversed.ResNet VLAE has the following structure for encoder: 2 ResNet blocks, Conv w/ stride=2, 2 ResNetblocks, Conv w/ stride=2, 3 ResNet blocks, 1x1 convolution and has a symmetric decoder. Channelsize = 48 for 32x32 feature maps and 96 for other feature maps. DenseNet VLAE follows a similarstructure: replacing 2 ResNet blocks with one DenseNet block of 3 steps and each step producesa certain number of feature maps such that at the end of a block, the concatenated feature maps isslightly more than the ResNet VLAE at the same stage.Conditional PixelCNN++ (Salimans et al., 2017) is used as the decoder. Specifically the channel-autoregressive variant is used to ensure there is sufficient capacity even when the receptive field issmall. Specifically, the decoder PixelCNN has 4 blocks of 64 feature maps where each block isconditioned on previous blocks with Gated ResNet connections and hence the PixelCNN decoderswe use are shallow but very wide. 
For the 4x2 receptive field experiment, we use 1 layer of vertical stack convolutions and 2 layers of horizontal stack convolutions; for the 5x3 receptive field experiment, we use 2 layers of vertical stack convolutions and 2 layers of horizontal stack convolutions; for the 7x4 receptive field experiment, we use 3 layers of vertical stack convolutions and 3 layers of horizontal stack convolutions; for the 7x4 Grayscale experiment, we transform RGB images into grayscale images via this specific transformation: (0.299R) + (0.587G) + (0.114B). The best density estimation result is obtained with the 7x4 receptive field experiments.
C SOFT FREE BITS
"Free bits" is a technique proposed in (Kingma et al., 2016) where K groups of stochastic units are encouraged to be used through the following surrogate objective:
$$\tilde{\mathcal{L}} = \mathbb{E}_{x \sim M}\left[\mathbb{E}_{q(z|x)}[\log p(x|z)]\right] - \sum_{j=1}^{K} \max\left(\lambda,\; \mathbb{E}_{x \sim M}\left[D_{KL}(q(z_j|x)\,\|\,p(z_j))\right]\right)$$
This technique is easy to use, since it is usually easy to determine the minimum number of bits/nats, lambda, that the stochastic units need to encode. Choosing lambda is hence easier than setting a fixed KL annealing schedule (Serban et al., 2016).
On the other hand, KL annealing has the benefit that the surrogate objective smoothly becomes the true objective, the variational lower bound, whereas "free bits" has a sharp transition at the boundary. Therefore, we propose to still use lambda as a hyperparameter specifying that at least lambda nats should be used, but to change the optimization objective as slowly as possible:
$$\mathcal{L}_{\text{SoftFreeBits}}(x; \beta) = \mathbb{E}_{q(z|x)}[\log p(x|z)] - \beta\, D_{KL}(q(z|x)\,\|\,p(z)),$$
where 0 < beta <= 1.
We make the optimization smoother by changing beta slowly online to make sure at least lambda nats are used: when the KL is much higher than lambda (we experimented with a wide range of thresholds from 3% to 30%, all of which yield improved results; we tend to use 5% as the threshold), beta is increased, and when the KL is lower than lambda, beta is decreased to encourage information flow. We found it sufficient to increase/decrease beta in 10% increments and did not tune this parameter further.
D AUTOREGRESSIVE DECODER WITHOUT AUTOREGRESSIVE PRIOR
In this section, we investigate the scenario of using an autoregressive decoder without an autoregressive prior. We compare the exact same model in three configurations: 1) using the small-receptive-field PixelCNN as an unconditional density estimator; 2) using the small-receptive-field PixelCNN as the decoder in a VAE with Gaussian latent variables; 3) replacing the Gaussian latent variables in 2) with autoregressive flow latent variables.
Table 1: Ablation on dynamically binarized MNIST
Model                                   NLL Test   KL
Unconditional PixelCNN                  87.55      0
PixelCNN Decoder + Gaussian Prior       79.48      10.60
PixelCNN Decoder + AF Prior             78.94      11.73
In Table 1, we observe that each modification improves density estimation performance. In addition, using an autoregressive latent code makes the latent code transmit more information, as shown by the difference in E[D_KL(q(z|x) || p(z))].
E CIFAR10 GENERATED SAMPLES
Figure 4: CIFAR10: generated samples for different models. (a) 4x2 @ 3.12 bits/dim; (b) 7x4 @ 2.95 bits/dim.
REFERENCES
Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter.
Fast and accurate deep networklearning by Exponential Linear Units (ELUs). arXiv preprint arXiv:1511.07289 , 2015.Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. Made: Masked autoencoderfor distribution estimation. arXiv preprint arXiv:1502.03509 , 2015.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog-nition. arXiv preprint arXiv:1512.03385 , 2015.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprintarXiv:1412.6980 , 2014.Diederik P Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverseautoregressive flow. arXiv preprint arXiv:1606.04934 , 2016.Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. InProceedings of the 27th International Conference on Machine Learning (ICML-10) , pp. 807–814,2010.Boris T Polyak and Anatoli B Juditsky. Acceleration of stochastic approximation by averaging.SIAM Journal on Control and Optimization , 30(4):838–855, 1992.Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to ac-celerate training of deep neural networks. arXiv preprint arXiv:1602.07868 , 2016.Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. Pixelcnn++: Improving thepixelcnn with discretized logistic mixture likelihood and other modifications. arXiv preprintarXiv:1701.05517 , 2017.Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, AaronCourville, and Yoshua Bengio. A hierarchical latent variable encoder-decoder model for gen-erating dialogues. arXiv preprint arXiv:1605.06069 , 2016.Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks.arXiv preprint arXiv:1601.06759 , 2016a.Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Ko-ray Kavukcuoglu. Conditional image generation with pixelcnn decoders. arXiv preprintarXiv:1606.05328 , 2016b.17
rkFBJv9gg
Published as a conference paper at ICLR 2017

LEARNING FEATURES OF MUSIC FROM SCRATCH

John Thickstun¹, Zaid Harchaoui² & Sham M. Kakade¹,²
¹Department of Computer Science and Engineering, ²Department of Statistics
University of Washington, Seattle, WA 98195, USA
{thickstn,sham}@cs.washington.edu, name@uw.edu

ABSTRACT

This paper introduces a new large-scale music dataset, MusicNet, to serve as a source of supervision and evaluation of machine learning methods for music research. MusicNet consists of hundreds of freely-licensed classical music recordings by 10 composers, written for 11 instruments, together with instrument/note annotations resulting in over 1 million temporal labels on 34 hours of chamber music performances under various studio and microphone conditions.

The paper defines a multi-label classification task to predict notes in musical recordings, along with an evaluation protocol, and benchmarks several machine learning architectures for this task: i) learning from spectrogram features; ii) end-to-end learning with a neural net; iii) end-to-end learning with a convolutional neural net. These experiments show that end-to-end models trained for note prediction learn frequency selective filters as a low-level representation of audio.

1 INTRODUCTION

Music research has benefited recently from the effectiveness of machine learning methods on a wide range of problems, from music recommendation (van den Oord et al., 2013; McFee & Lanckriet, 2011) to music generation (Hadjeres & Pachet, 2016); see also the recent demos of the Google Magenta project.¹ As of today, there is no large publicly available labeled dataset for the simple yet challenging task of note prediction for classical music. The MIREX MultiF0 Development Set (Benetos & Dixon, 2011) and the Bach10 dataset (Duan et al., 2011) together contain less than 7 minutes of labeled music. These datasets were designed for method evaluation, not for training supervised learning methods.

This situation stands in contrast to other application domains of machine learning. For instance, in computer vision large labeled datasets such as ImageNet (Russakovsky et al., 2015) are fruitfully used to train end-to-end learning architectures. Learned feature representations have outperformed traditional hand-crafted low-level visual features and led to tremendous progress in image classification. In Humphrey et al. (2012), Humphrey, Bello, and LeCun issued a call to action: "Deep architectures often require a large amount of labeled data for supervised training, a luxury music informatics has never really enjoyed. Given the proven success of supervised methods, MIR would likely benefit a good deal from a concentrated effort in the curation of sharable data in a sustainable manner."

This paper introduces a new large labeled dataset, MusicNet, which is publicly available² as a resource for learning feature representations of music. MusicNet is a corpus of aligned labels on freely-licensed classical music recordings, made possible by licensing initiatives of the European Archive, the Isabella Stewart Gardner Museum, Musopen, and various individual artists. The dataset consists of 34 hours of human-verified aligned recordings, containing a total of 1,299,329 individual labels on segments of these recordings. Table 1 summarizes statistics of MusicNet.

The focus of this paper's experiments is to learn low-level features of music from raw audio data.
In Sect. 4, we construct a multi-label classification task to predict notes in musical recordings, along with an evaluation protocol. We consider a variety of machine learning architectures for this task: i) learning from spectrogram features; ii) end-to-end learning with a neural net; iii) end-to-end learning with a convolutional neural net. Each of the proposed end-to-end models learns a set of frequency selective filters as low-level features of musical audio, which are similar in spirit to a spectrogram. The learned low-level features are visualized in Figure 1. The learned features modestly outperform spectrogram features; we explore possible reasons for this in Sect. 5.

¹ https://magenta.tensorflow.org/
² http://homes.cs.washington.edu/~thickstn/musicnet.html

Table 1: Summary statistics of the MusicNet dataset. See Sect. 2 for further discussion of MusicNet and Sect. 3 for a description of the labelling process. Appendix A discusses the methodology for computing the error rate of this process.

Figure 1: (Left) Bottom-level weights learned by a two-layer ReLU network trained on 16,384-sample windows (1/3 seconds) of raw audio with ℓ2-regularized (λ = 1) square loss for multi-label note classification on raw audio recordings. (Middle) Magnified view of the center of each set of weights. (Right) The truncated frequency spectrum of each set of weights.

2 MUSICNET

Related Works. The experiments in this paper suggest that large amounts of data are necessary for recovering useful features from music; see Sect. 4.5 for details. The Lakh dataset, released this summer based on the work of Raffel & Ellis (2015), offers note-level annotations for many 30-second clips of pop music in the Million Song Dataset (McFee et al., 2012). The syncRWC dataset is a subset of the RWC dataset (Goto et al., 2003) consisting of 61 recordings aligned to scores using the protocol described in Ewert et al. (2009). The MAPS dataset (Emiya et al., 2010) is a mixture of acoustic and synthesized data, which expressive models could overfit. The Mazurka project³ consists of commercial music. Access to the RWC and Mazurka datasets comes at both a cost and inconvenience. Both the MAPS and Mazurka datasets are comprised entirely of piano music.

The MusicNet Dataset. MusicNet is a public collection of labels (exemplified in Table 2) for 330 freely-licensed classical music recordings of a variety of instruments arranged in small chamber ensembles under various studio and microphone conditions. The recordings average 6 minutes in length. The shortest recording in the dataset is 55 seconds and the longest is almost 18 minutes. Table 1 summarizes the statistics of MusicNet with breakdowns into various types of labels. Table 2 demonstrates examples of labels from the MusicNet dataset.

Start   End     Instrument   Note   Measure   Beat   Note Value
45.29   45.49   Violin       G5     21        3      Eighth
48.99   50.13   Cello        A#3    24        2      Dotted Half
82.91   83.12   Viola        C5     51        2.5    Eighth

Table 2: MusicNet labels on the Pascal String Quartet's recording of Beethoven's Opus 127, String Quartet No. 12 in E-flat major, I - Maestoso - Allegro. Creative commons use of this recording is made possible by the work of the European Archive.

MusicNet labels come from 513 label classes using the most naive definition of a class: distinct instrument/note combinations. The breakdowns reported in Table 1 indicate the number of distinct notes that appear for each instrument in our dataset.
For example, while a piano has 88 keys, only 83 of them are performed in MusicNet. For many tasks a note's value will be a part of its label, in which case the number of classes will expand by approximately an order of magnitude after taking the cartesian product of the set of classes with the set of values: quarter-note, eighth-note, triplet, etc. Labels regularly overlap in the time series, creating polyphonic multi-labels.

MusicNet is skewed towards Beethoven, thanks to the composer's popularity among performing ensembles. The dataset is also skewed towards solo piano due to an abundance of digital scores available for piano works. For training purposes, researchers may want to augment this dataset to increase coverage of instruments such as flute and oboe that are under-represented in MusicNet. Commercial recordings could be used for this purpose and labeled using the alignment protocol described in Sect. 3.

3 DATASET CONSTRUCTION

MusicNet recordings are freely-licensed classical music collected from the European Archive, the Isabella Stewart Gardner Museum, Musopen, and various artists' collections. The MusicNet labels are retrieved from digital MIDI scores, collected from various archives including the Classical Archives (classicalarchives.com), Suzuchan's Classic MIDI (suzumidi.com) and HarfeSoft (harfesoft.de). The methods in this section produce an alignment between a digital score and a corresponding freely-licensed recording. A recording is labeled with events in the score, associated to times in the performance via the alignment. Scores containing 6,550,760 additional labels are available on request to researchers who wish to augment MusicNet with commercial recordings.

Music-to-score alignment is a long-standing problem in the music research and signal processing communities (Raphael, 1999). Dynamic time warping (DTW) is a classical approach to this problem. An early use of DTW for music alignment is Orio & Schwarz (2001), where a recording is aligned to a crude synthesis of its score, designed to capture some of the structure of an overtone series. The method described in this paper aligns recordings to synthesized performances of scores, using side information from a commercial synthesizer. To the best of our knowledge, commercial synthesis was first used for the purpose of alignment in Turetsky & Ellis (2003).

³ http://www.mazurka.org.uk/

The majority of previous work on alignment focuses on pop music. This is more challenging than aligning classical music because commercial synthesizers do a poor job reproducing the wide variety of vocal and instrumental timbres that appear in modern pop. Furthermore, pop features inharmonic instruments such as drums, for which natural metrics on frequency representations, including ℓ2, are not meaningful. For classical music-to-score alignment, a variant of the techniques described in Turetsky & Ellis (2003) works robustly. This method is described below; we discuss the evaluation of this procedure and its error rate on MusicNet in the appendix.

Figure 2: (Left) Heatmap visualization of local alignment costs between the synthesized and recorded spectrograms, with the optimal alignment path in red. The block from x = 0 to x = 100 frames corresponds to silence at the beginning of the recorded performance. The slope of the alignment can be interpreted as an instantaneous tempo ratio between the recorded and synthesized performances. The curvature in the alignment between x = 100 and x = 175 corresponds to an extension of the first notes by the performer. (Right) Annotation of note onsets on the spectrogram of the recorded performance, determined by the alignment shown on the left.
In order to align the performance with a score, we need to define a metric that compares short segments of the score with segments of a performance. Musical scores can be expressed as binary vectors in E^K, where E = {1,...,n} and K is a dictionary of notes. Performances reside in R^{T×p}, where T ∈ {1,...,m} indexes a sequence of time steps and p is the dimensionality of the spectrogram at time T. Given some local cost function C : (R^p, K) → R, a score Y ∈ E^K, and a performance X ∈ R^{T×p}, the alignment problem is to

minimize_{t ∈ Z^n}  Σ_{i=1}^{n} C(X_{t_i}, Y_i)
subject to  t_0 = 0,  t_n = m,  t_i ≤ t_j if i < j.    (1)

Dynamic time warping gives an exact solution to this problem in O(mn) time and space.

The success of dynamic time warping depends on the metric used to compare the score and the performance. Previous works can be broadly categorized into three groups that define an alignment cost C between a segment of music x and a segment of score y by injecting them into a common normed space via maps Φ and Ψ:

C(x, y) = ||Φ(x) − Ψ(y)||.    (2)

The most popular approach, and the one adopted by this paper, maps the score into the space of the performance (Orio & Schwarz, 2001; Turetsky & Ellis, 2003; Soulez et al., 2003). An alternative approach maps both the score and performance into some third space, commonly a chromagram space (Hu et al., 2003; Izmirli & Dannenberg, 2010; Joder et al., 2013). Finally, some recent methods consider alignment in score space, taking Φ = Id and learning Ψ (Garreau et al., 2014; Lajugie et al., 2016).

With reference to the general cost (2), we must specify the maps Φ, Ψ, and the norm ||·||. We compute the cost in the performance feature space R^p, hence we take Φ = Id. For the features, we use the log-spectrogram with a window size of 2048 samples. We use a stride of 512 samples between features. Hence adjacent feature frames are computed with 75% overlap. For audio sampled at 44.1kHz, this results in a feature representation with 44,100/512 ≈ 86 frames per second. A discussion of these parameter choices can be found in the appendix. The map Ψ is computed by a synthesizer: we used Plogue's Sforzando sampler together with Garritan's Personal Orchestra 4 sample library.

For a (pseudo)-metric on R^p, we take the ℓ2 norm on the low 50 dimensions of R^p. Recall that R^p represents Fourier components, so we can roughly interpret the k'th coordinate of R^p as the energy associated with the frequency k(22,050/1024) ≈ k · 22.5Hz, where 22,050Hz is the Nyquist frequency of a signal sampled at 44.1kHz. The 50-dimension cutoff is chosen empirically: we observe that the resulting alignments are more accurate using a small number of low-frequency bins rather than the full space R^p. Synthesizers do not accurately reproduce the high-frequency features of a musical instrument; by ignoring the high frequencies, we align on a part of the spectrum where the synthesis is most accurate. The proposed choice of cutoff is aggressive compared to usual settings; for instance, Turetsky & Ellis (2003) propose cutoffs in the 2.5kHz range. The fundamental frequencies of many notes in MusicNet are higher than the 50 · 22.5Hz ≈ 1kHz cutoff. Nevertheless, we find that all notes align well using only the low-frequency information; a sketch of this alignment step appears below.
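The following is a rough sketch of the alignment step just described, assuming log-spectrogram frames have already been computed (2048-sample windows, 512-sample stride). The function names are ours, and this is a simplified stand-in for the actual pipeline.

```python
import numpy as np

def alignment_cost(synth_spec, perf_spec, n_bins=50):
    """Pairwise l2 cost between log-spectrogram frames, restricted to the
    lowest `n_bins` frequency bins (50 in the text), where the synthesis
    is most faithful to the recording."""
    a = synth_spec[:, :n_bins]   # (n_synth_frames, n_bins)
    b = perf_spec[:, :n_bins]    # (n_perf_frames, n_bins)
    d2 = (np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :]
          - 2.0 * a @ b.T)       # squared distances between all frame pairs
    return np.sqrt(np.maximum(d2, 0.0))

def dtw_path(cost):
    """Standard O(mn) dynamic-time-warping alignment over a cost matrix."""
    n, m = cost.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i-1, j-1] + min(acc[i-1, j-1],
                                             acc[i-1, j], acc[i, j-1])
    # backtrack from the end of both sequences to recover the path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([acc[i-1, j-1], acc[i-1, j], acc[i, j-1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]
```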
4 METHODS

We consider identification of notes in a segment of audio x ∈ X as a multi-label classification problem, modeled as follows. Assign each audio segment a binary label vector y ∈ {0,1}^128. The 128 dimensions correspond to frequency codes for notes, and y_n = 1 if note n is present at the midpoint of x. Let f : X → H indicate a feature map. We train a multivariate linear regression to predict ŷ given f(x), which we optimize for square loss. The vector ŷ can be interpreted as a multi-label estimate of notes in x by choosing a threshold c and predicting label n iff ŷ_n > c. We search for the value c that maximizes F1-score on a sampled subset of MusicNet.

4.1 RELATED WORK

Learning on raw audio is studied in both the music and speech communities. Supervised learning on music has been driven by access to labeled datasets. Pop music labeled with chords (Harte, 2010) has led to a long line of work on chord recognition, most recently Korzeniowsk & Widmer (2016). Genre labels and other metadata have also attracted work on representation learning, for example Dieleman & Schrauwen (2014). There is also substantial work modeling raw audio representations of speech; a current example is Tokuda & Zen (2016). Recent work from Google DeepMind explores generative models of raw audio, applied to both speech and music (van den Oord et al., 2016).

The music community has worked extensively on a problem closely related to note prediction: fundamental frequency estimation. This is the analysis of fundamental (in contrast to overtone) frequencies in short audio segments; these frequencies are typically considered as proxies for notes. Because access to large labeled datasets was historically limited, most of these works are unsupervised. A good overview of this literature can be found in Benetos et al. (2013). Variants of non-negative matrix factorization are popular for this task; a recent example is Khlif & Sethu (2015). A different line of work models audio probabilistically, for example Berg-Kirkpatrick et al. (2014). Recent work by Kelz et al. (2016) explores supervised models, trained using the MAPS piano dataset.

4.2 MULTI-LAYER PERCEPTRONS

We build a two-layer network with features f_i(x) = log(1 + max(0, w_i^T x)). We find that the compression introduced by a logarithm improves performance versus a standard ReLU network (see Table 3). Figure 1 illustrates a selection of weights w_i learned by the bottom layer of this network; a sketch of these features and the threshold search appears below.

The weights learned by the network are modulated sinusoids. This explains the effectiveness of spectrograms as a low-level representation of musical audio. The weights decay at the boundaries, analogous to Gabor filters in vision. This behavior is explained by the labeling methodology: the audio segments used here are approximately 1/3 of a second long, and a segment is given a note label if that note is on in the center of the segment. Therefore information at the boundaries of the segment is less useful for prediction than information nearer to the center.
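The following is a minimal sketch of the log-ReLU features of Sect. 4.2 and the threshold search over c described at the start of Sect. 4. The regression fit itself is omitted (the scores ŷ are assumed given), and all names are ours.

```python
import numpy as np

def log_relu_features(X, W):
    """Bottom-layer features f_i(x) = log(1 + max(0, w_i^T x)).

    X: (n_examples, n_samples) raw audio segments.
    W: (n_hidden, n_samples) learned weight vectors.
    """
    return np.log1p(np.maximum(0.0, X @ W.T))

def best_threshold(scores, Y, grid=np.linspace(0.0, 1.0, 101)):
    """Pick the prediction threshold c maximizing F1 on validation data.

    scores: (n_examples, 128) real-valued regression outputs y_hat.
    Y:      (n_examples, 128) binary ground-truth note labels.
    """
    best_c, best_f1 = grid[0], -1.0
    for c in grid:
        pred = scores > c
        tp = np.logical_and(pred, Y == 1).sum()
        precision = tp / max(pred.sum(), 1)
        recall = tp / max((Y == 1).sum(), 1)
        f1 = 2 * precision * recall / max(precision + recall, 1e-12)
        if f1 > best_f1:
            best_c, best_f1 = c, f1
    return best_c
```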
4.3 (LOG-)SPECTROGRAMS

Spectrograms are an engineered feature representation for musical audio signals, available in popular software packages such as librosa (McFee et al., 2015). Spectrograms (resp. log-spectrograms) are closely related to a two-layer ReLU network (resp. the log-ReLU network described above). If x = (x_1, ..., x_t) denotes a segment of an audio signal of length t, then we can define

Spec_k(x) ≡ | Σ_{s=0}^{t−1} e^{2πiks/t} x_s |² = ( Σ_{s=0}^{t−1} cos(2πks/t) x_s )² + ( Σ_{s=0}^{t−1} sin(2πks/t) x_s )².

These features are not precisely learnable by a two-layer ReLU network. But recall that |x| = max(0, x) + max(0, −x), and if we take weight vectors u, v ∈ R^t with u_s = cos(2πks/t) and v_s = sin(2πks/t), then the ReLU network can learn

f_{k,cos}(x) + f_{k,sin}(x) ≡ |u^T x| + |v^T x| = | Σ_{s=0}^{t−1} cos(2πks/t) x_s | + | Σ_{s=0}^{t−1} sin(2πks/t) x_s |.

We call this family of features a ReLUgram and observe that it has a similar form to the spectrogram; we merely replace the x ↦ x² non-linearity of the spectrogram with x ↦ |x|. These features achieve similar performance to spectrograms on the classification task (see Table 3); a sketch comparing the two appears after Figure 3.

4.4 WINDOW SIZE

When we parameterize a network, we must choose the width of the set of weights in the bottom layer. This width is called the receptive field in the vision community; in the music community it is called the window size. Traditional frequency analyses, including spectrograms, are highly sensitive to the window size. Windows must be long enough to capture relevant information, but not so long that they lose temporal resolution; this is the classical time-frequency tradeoff. Furthermore, windowed frequency analysis is subject to boundary effects, known as spectral leakage. Classical signal processing attempts to dampen these effects with predefined window functions, which apply a mask that attenuates the signal at the boundaries (Rabiner & Schafer, 2007).

The proposed end-to-end models learn window functions. If we parameterize these models with a large window size then the model will learn that distant information is irrelevant to local prediction, so the magnitude of the learned weights will attenuate at the boundaries. We therefore focus on two window sizes: 2048 samples, which captures the local content of the signal, and 16,384 samples, which is sufficient to capture almost all relevant context (again see Figure 1).

4.5 REGULARIZATION

The size of MusicNet is essential to achieving the results in Figure 1. In Figure 3 (Left) we optimize a two-layer ReLU network on a small subset of MusicNet consisting of 65,000 monophonic data points. While these features do exhibit dominant frequencies, the signal is quite noisy. Comparably noisy frequency-selective features were recovered by Dieleman & Schrauwen (2014); see their Figure 3. We can recover clean features on a small dataset using heavy regularization, but this destroys classification performance; regularizing with dropout poses a similar tradeoff. By contrast, Figure 3 (Right) shows weights learned by an unregularized two-layer network trained on the full MusicNet dataset. The models described in this paper do not overfit to MusicNet, and optimal performance (reported in Table 3) is achieved without regularization.

4.6 CONVOLUTIONAL NETWORKS

Previously, we estimated ŷ by regressing against f(x). We now consider a convolutional model that regresses against features of a collection of shifted segments x_ℓ near to the original segment x. The learned features of this network are visually comparable to those learned by the fully connected network (Figure 1). The parameters of this network are the receptive field, stride, and pooling regions.

Figure 3: (Left) Features learned by a 2-layer ReLU network trained on a small monophonic subset of MusicNet. (Right) Features learned by the same network, trained on the full MusicNet dataset.
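The correspondence of Sect. 4.3 can be made concrete with a small numpy sketch contrasting a spectrogram bin with its ReLUgram variant. This is illustrative only; the names are ours.

```python
import numpy as np

def spectrogram_bin(x, k):
    """Spec_k(x): squared magnitude of the k-th Fourier component."""
    t = len(x)
    s = np.arange(t)
    c = np.sum(np.cos(2 * np.pi * k * s / t) * x)
    d = np.sum(np.sin(2 * np.pi * k * s / t) * x)
    return c**2 + d**2

def relugram_bin(x, k):
    """ReLUgram feature: the same sinusoidal weights, but with the
    x -> |x| non-linearity (expressible as a sum of two ReLU units)
    in place of x -> x^2."""
    t = len(x)
    s = np.arange(t)
    u = np.cos(2 * np.pi * k * s / t)   # cosine weight vector
    v = np.sin(2 * np.pi * k * s / t)   # sine weight vector
    return abs(u @ x) + abs(v @ x)
```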
The results reported in Table 3 are achieved with 500 hidden units using a receptive field of 2,048 samples with an 8-sample stride across a window of 16,384 samples. These features are grouped into average pools of width 16, with a stride of 8 features between pools. A max-pooling operation yields similar results. The learned features are consistent across different parameterizations. In all cases the learned features are comparable to those of a fully connected network.

5 RESULTS

We hold out a test set of 3 recordings for all the results reported in this section:

- Bach's Prelude in D major for Solo Piano. WTK Book 1, No 5. Performed by Kimiko Ishizaka. MusicNet recording id 2303.
- Mozart's Serenade in E-flat major. K375, Movement 4 - Menuetto. Performed by the Soni Ventorum Wind Quintet. MusicNet recording id 1819.
- Beethoven's String Quartet No. 13 in B-flat major. Opus 130, Movement 2 - Presto. Released by the European Archive. MusicNet recording id 2382.

The test set is a representative sampling of MusicNet: it covers most of the instruments in the dataset in small, medium, and large ensembles. The test data points are evenly spaced segments separated by 512 samples, between the 1st and 91st seconds of each recording. For the wider features, there is substantial overlap between adjacent segments. Each segment is labeled with the notes that are on in the middle of the segment.

Figure 4: Precision-recall curves for the convolutional network on the test set. Curves are evaluated on subsets of the test set consisting of all data points (blue); points with exactly one label (monophonic; green); and points with exactly three labels (red).

We evaluate our models on three scores: precision, recall, and average precision. The precision score is the count of correct predictions by the model (across all data points) divided by the total number of predictions by the model. The recall score is the count of correct predictions by the model divided by the total number of (ground truth) labels in the test set. Precision and recall are parameterized by the note prediction threshold c (see Sect. 4). By varying c, we construct precision-recall curves (see Figure 4). The average precision score is the area under the precision-recall curve; a sketch of these computations follows Table 3.

Representation       Window Size   Precision   Recall   Average Precision
log-spectrograms     1,024         49.0%       40.5%    39.8%
spectrograms         2,048         28.9%       52.5%    32.9%
log-spectrograms     2,048         61.9%       42.0%    48.8%
log-ReLUgrams        2,048         58.9%       47.9%    49.3%
MLP, 500 nodes       2,048         50.1%       58.0%    52.1%
MLP, 2500 nodes      2,048         53.6%       62.3%    56.2%
AvgPool, 2 stride    2,148         53.4%       62.5%    56.4%
log-spectrograms     8,192         64.2%       28.6%    52.1%
log-spectrograms     16,384        58.4%       18.1%    45.5%
MLP, 500 nodes       16,384        54.4%       64.8%    60.0%
CNN, 64 stride       16,384        60.5%       71.9%    67.8%

Table 3: Benchmark results on MusicNet for models discussed in this paper. The learned representations are optimized for square loss with SGD using the Tensorflow library (Abadi et al.). We report the precision and recall corresponding to the best F1-score on validation data.
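The following is a sketch of the three evaluation scores, computed by sweeping the note prediction threshold c. It is a simplified rendering of the protocol above; the names are ours.

```python
import numpy as np

def pr_at_threshold(scores, Y, c):
    """Precision/recall over all (data point, note) predictions at threshold c."""
    pred = scores > c
    correct = np.logical_and(pred, Y == 1).sum()
    precision = correct / max(pred.sum(), 1)
    recall = correct / max((Y == 1).sum(), 1)
    return precision, recall

def average_precision(scores, Y, grid=np.linspace(0.0, 1.0, 201)):
    """Area under the precision-recall curve, swept over thresholds."""
    pts = [pr_at_threshold(scores, Y, c) for c in grid]
    pts = sorted((r, p) for p, r in pts)        # order points by recall
    recalls = [r for r, _ in pts]
    precisions = [p for _, p in pts]
    return float(np.trapz(precisions, recalls))
```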
A spectrogram of length n is computed from 2n samples, so the linear 1024-point spectrogram model is directly comparable to the MLP runs with 2048 raw samples. Learned features⁴ modestly outperform spectrograms for comparable window sizes. The discussion of windowing in Sect. 4.4 partially explains this. Figure 5 suggests a second reason. Recall (Sect. 4.3) that the spectrogram features can be interpreted as the magnitude of the signal's inner product with sine waves of linearly spaced frequencies. In contrast, the proposed networks learn weights with frequencies distributed similarly to the distribution of notes in MusicNet (Figure 5). This gives the network higher resolution in the most critical frequency regions.

Figure 5: (Left) The frequency distribution of notes in MusicNet. (Right) The frequency distribution of learned nodes in a 500-node, two-layer ReLU network.

ACKNOWLEDGMENTS

We thank Bob L. Sturm for his detailed feedback on an earlier version of the paper. We also thank Brian McFee and Colin Raffel for fruitful discussions. Sham Kakade acknowledges funding from the Washington Research Foundation for innovation in Data-intensive Discovery. Zaid Harchaoui acknowledges funding from the program "Learning in Machines and Brains" of CIFAR.

⁴ A demonstration using learned MLP features to synthesize a musical performance is available on the dataset webpage: http://homes.cs.washington.edu/~thickstn/demos.html

REFERENCES

M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mane, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viegas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems. URL http://tensorflow.org/.

E. Benetos and S. Dixon. Joint multi-pitch detection using harmonic envelope estimation for polyphonic music transcription. IEEE Selected Topics in Signal Processing, 2011.

E. Benetos, S. Dixon, D. Giannoulis, H. Kirchoff, and A. Klapuri. Automatic music transcription: challenges and future directions. Journal of Intelligent Information Systems, 2013.

T. Berg-Kirkpatrick, J. Andreas, and D. Klein. Unsupervised transcription of piano music. NIPS, 2014.

S. Dieleman and B. Schrauwen. End-to-end learning for music audio. ICASSP, 2014.

Z. Duan, B. Pardo, and C. Zhang. Multiple fundamental frequency estimation by modeling spectral peaks and non-peak regions. TASLP, 2011.

V. Emiya, R. Badeau, and B. David. Multipitch estimation of piano sounds using a new probabilistic spectral smoothness principle. TASLP, 2010.

S. Ewert, M. Müller, and P. Grosche. High resolution audio synchronization using chroma features. ICASSP, 2009.

D. Garreau, R. Lajugie, S. Arlot, and F. Bach. Metric learning for temporal sequence alignment. NIPS, 2014.

M. Goto, H. Hashiguchi, T. Nishimura, and R. Oka. RWC music database: Music genre database and musical instrument sound database. ISMIR, 2003.

Gaëtan Hadjeres and François Pachet. DeepBach: a steerable model for Bach chorales generation. arXiv preprint, 2016.

C. Harte. Towards Automatic Extraction of Harmony Information from Music Signals. PhD thesis, Department of Electrical Engineering, Queen Mary, University of London, 2010.

N. Hu, R. B. Dannenberg, and G. Tzanetakis. Polyphonic audio matching and alignment for music retrieval. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 2003.

E. J. Humphrey, J. P. Bello, and Y. LeCun. Moving beyond feature design: Deep architectures and automatic feature learning in music informatics. ISMIR, 2012.
O. Izmirli and R. B. Dannenberg. Understanding features and distance functions for music sequence alignment. ISMIR, 2010.

C. Joder, S. Essid, and G. Richard. Learning optimal features for polyphonic audio-to-score alignment. TASLP, 2013.

R. Kelz, M. Dorfer, F. Korzeniowski, S. Böck, A. Arzt, and G. Widmer. On the potential of simple framewise approaches to piano transcription. ISMIR, 2016.

A. Khlif and V. Sethu. An iterative multi range non-negative matrix factorization algorithm for polyphonic music transcription. ISMIR, 2015.

F. Korzeniowsk and G. Widmer. Feature learning for chord recognition: the deep chroma extractor. ISMIR, 2016.

R. Lajugie, P. Bojanowski, P. Cuvillier, S. Arlot, and F. Bach. A weakly-supervised discriminative model for audio-to-score alignment. ICASSP, 2016.

B. McFee and G. Lanckriet. Learning multi-modal similarity. JMLR, 2011.

B. McFee, T. Bertin-Mahieux, D. P. W. Ellis, and G. Lanckriet. The million song dataset challenge. Proceedings of the 21st International Conference on World Wide Web, 2012.

B. McFee, C. Raffel, D. Liang, D. P. W. Ellis, M. McVicar, E. Battenberg, and O. Nieto. librosa: Audio and music signal analysis in python. SCIPY, 2015.

N. Orio and D. Schwarz. Alignment of monophonic and polyphonic music to a score. International Computer Music Conference, 2001.

G. Poliner and D. P. W. Ellis. A discriminative model for polyphonic piano transcription. EURASIP Journal on Applied Signal Processing, 2007.

L. Rabiner and R. Schafer. Introduction to digital speech processing. Foundations and Trends in Signal Processing, 2007.

C. Raffel and D. P. W. Ellis. Large-scale content-based matching of MIDI and audio files. ISMIR, 2015.

C. Raffel, B. McFee, E. J. Humphrey, J. Salamon, O. Nieto, D. Liang, and D. P. W. Ellis. mir_eval: A transparent implementation of common MIR metrics. ISMIR, 2014.

C. Raphael. Automatic segmentation of acoustic musical signals using hidden Markov models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1999.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet large scale visual recognition challenge. IJCV, 2015.

F. Soulez, X. Rodet, and D. Schwarz. Improving polyphonic and poly-instrumental music to score alignment. ISMIR, 2003.

K. Tokuda and H. Zen. Directly modeling voiced and unvoiced components in speech waveforms by neural networks. ICASSP, 2016.

R. J. Turetsky and D. P. W. Ellis. Ground-truth transcriptions of real music from force-aligned MIDI syntheses. ISMIR, 2003.

A. van den Oord, S. Dieleman, and B. Schrauwen. Deep content-based music recommendation. NIPS, 2013.

A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu. WaveNet: A generative model for raw audio. arXiv preprint, 2016.

A VALIDATING THE MUSICNET LABELS

We validate the aligned MusicNet labels with a listening test. We create an aural representation of an aligned score-performance pair by mixing a short sine wave into the performance, with the frequency indicated by the score at the time indicated by the alignment.
We can listen to this mix and, if the alignment is correct, the sine tones will exactly overlay the original performance; if the alignment is incorrect, the mix will sound dissonant (a sketch of this mixing procedure appears below).

We have listened to sections of each recording in the aligned dataset: the beginning, several random samples of the middle, and the end. Mixes with substantially incorrect alignments were rejected from the dataset. Failed alignments are mostly attributable to mismatches between the MIDI and the recording. The most common reason for rejection is musical repeats. Classical music often contains sections with indications that they be repeated a second time; in classical music performance culture, it is often acceptable to ignore these directions. If the score and performance make different choices regarding repeats, a mismatch arises. When the score omits a repeat that occurs in the performance, the alignment typically warps over the entire repeated section, with correct alignments before and after. When the score includes an extra repeat, the alignment typically compresses it into a very short segment, with correct alignments on either side. We rejected alignments exhibiting either of these issues from the dataset.

From the aligned performances that we deemed sufficiently accurate to admit to the dataset, we randomly sampled 30 clips for more careful annotation and analysis. We weighted the sample to cover a wide range of recordings with various instruments, ensemble sizes, and durations. For each sampled performance, we randomly selected a 30 second clip. Using software transforms, it is possible to slow a recording down to approximately 1/4 speed. Two of the clips were too richly structured and fast to precisely analyze (slowing the signal down any further introduces artifacts that make the signal difficult to interpret). Even in these two rejected samples, the alignments sound substantially correct.

For the other 28 clips, we carefully analyzed the aligned performance mix and annotated every alignment error. Two of the authors are classically trained musicians: we independently checked for errors, and our analyses were nearly identical. Where there was disagreement, we used the more pessimistic author's analysis. Over our entire set of clips we averaged a 4.0% error rate.

Note that we do not catch every type of error. Mistaken note onsets are more easily identified than mistaken offsets. Typically the release of one note coincides with the onset of a new note, which implicitly verifies the release. However, release times at the ends of phrases may be less accurate; these inaccuracies would not be covered by our error analysis. We were also likely to miss performance mistakes that maintain the meter of the performance, but for professional recordings such mistakes are rare.

For stringed instruments, chords consisting of more than two notes are "rolled"; i.e., they are performed serially from the lowest to the highest note. Our alignment protocol cannot separate notes that are notated simultaneously in the score; a rolled chord is labeled with a single starting time, usually the beginning of the first note in the roll. Therefore, there is some time period at the beginning of a roll where the top notes of the chord are labeled but have not yet occurred in the performance. There are reasonable interpretations of labeling under which these labels would be judged incorrect. On the other hand, if the labels are used to supervise transcription, then ours is likely the desired labeling.
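The following is a sketch of the aural validation mix described at the start of this appendix. We assume labels are given as (start second, end second, MIDI note) triples and the audio is a mono float array; the function name and tone gain are ours.

```python
import numpy as np

def validation_mix(audio, sr, labels, tone_gain=0.2):
    """Mix a short sine tone into `audio` at each label's onset, at the
    labeled pitch, so a listener can hear whether the alignment is correct.

    audio:  mono float waveform.
    sr:     sample rate in Hz.
    labels: iterable of (start_sec, end_sec, midi_note) triples.
    """
    mix = audio.copy()
    for start, end, midi_note in labels:
        freq = 440.0 * 2.0 ** ((midi_note - 69) / 12.0)  # MIDI note -> Hz
        i0, i1 = int(start * sr), min(int(end * sr), len(mix))
        t = np.arange(i1 - i0) / sr
        mix[i0:i1] += tone_gain * np.sin(2 * np.pi * freq * t)
    return mix
```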
We can also qualitatively characterize the types of errors we observed. The most common types of errors are anticipations and delays: a single label, or a small sequence of labels, is aligned to a slightly early or late location in the time series. Another common source of error is missing ornaments and trills: these short flourishes in a performance are sometimes not annotated in our score data, which results in a missing annotation in the alignment. Finally, there are rare performance errors in the recordings and transcription errors in the score.

B ALIGNMENT PARAMETER ROBUSTNESS

The definitions of the audio featurization and the alignment cost function were contingent on several parameter choices. These choices were optimized by systematic exploration of the parameter space. We investigated what happens as we vary each parameter and made the choices that gave the best results in our listening tests. Fine-tuning of the parameters yields marginal gains.

The quality of alignments improves uniformly with the quality of synthesis. The time-resolution of labels improves uniformly as the stride parameter decreases; minimization of stride is limited by system memory constraints. We find that the precise phase-invariant feature specification has little effect on alignment quality. We experimented with spectrograms and log-spectrograms using windowed and un-windowed signals. Alignment quality seemed to be largely unaffected.

The other parameters are governed by a tradeoff curve; the optimal choice is determined by balancing desirable outcomes. The Fourier window size is a classic tradeoff between time and frequency resolution. The ℓ2 norm can be understood as a tradeoff between the extremes of ℓ1 and ℓ∞. The ℓ1 norm is too egalitarian: the preponderance of errors due to synthesis quality add up and overwhelm the signal. On the other hand, the ℓ∞ norm ignores too much of the signal in the spectrogram. The spectrogram cutoff, discussed in Sect. 3, is also a tradeoff between synthesis quality and maximal use of information.

C ADDITIONAL ERROR ANALYSIS

For each model, using the test set described in Sect. 5, we report accuracy and error scores used by the MIR community to evaluate Multi-F0 systems. Definitions and a discussion of these metrics are presented in Poliner & Ellis (2007).

Representation                       Acc     Etot   Esub   Emiss   Efa
512-point log-spectrogram            28.5%   .819   .198   .397    .224
1024-point log-spectrogram           33.4%   .715   .123   .457    .135
1024-point log-ReLUgram              35.9%   .711   .144   .377    .190
4096-point log-spectrogram           24.7%   .788   .085   .628    .074
8192-point log-spectrogram           16.1%   .866   .082   .737    .047
MLP, 500 nodes, 2048 raw samples     36.8%   .790   .206   .214    .370
MLP, 2500 nodes, 2048 samples        40.4%   .740   .177   .200    .363
AvgPool, 5 stride, 2048 samples      40.5%   .744   .176   .200    .369
MLP, 500 nodes, 16384 samples        42.0%   .735   .160   .191    .383
CNN, 64 stride, 16384 samples        48.9%   .634   .117   .164    .352

Table 4: MIREX-style statistics, evaluated using the mir_eval library (Raffel et al., 2014).
D PRECISION & RECALL CURVES

Figure 6: The linear spectrogram model.
Figure 7: The 500 node, 2048 raw sample MLP.
Figure 8: The 2500 node, 2048 raw sample MLP.
Figure 9: The average pooling model.
Figure 10: The 500 node, 16384 raw sample MLP.
Figure 11: The convolutional model.

E ADDITIONAL RESULTS

We report additional results on splits of the test set described in Sect. 5.

Model              Features             Precision   Recall   Average Precision
MLP, 500 nodes     2048 raw samples     56.1%       62.7%    59.2%
MLP, 2500 nodes    2048 raw samples     59.1%       67.8%    63.1%
AvgPool, 5 stride  2048 raw samples     59.1%       68.2%    64.5%
MLP, 500 nodes     16384 raw samples    60.2%       65.2%    65.8%
CNN, 64 stride     16384 raw samples    65.9%       75.2%    74.4%

Table 5: The Soni Ventorum recording of Mozart's Wind Quintet K375 (MusicNet id 1819).

Model              Features             Precision   Recall   Average Precision
MLP, 500 nodes     2048 raw samples     35.4%       40.7%    28.0%
MLP, 2500 nodes    2048 raw samples     38.3%       44.3%    30.9%
AvgPool, 5 stride  2048 raw samples     38.6%       45.2%    31.7%
MLP, 500 nodes     16384 raw samples    43.4%       51.3%    41.0%
CNN, 64 stride     16384 raw samples    51.0%       57.9%    49.3%

Table 6: The European Archive recording of Beethoven's String Quartet No. 13 (MusicNet id 2382).

Model              Features             Precision   Recall   Average Precision
MLP, 500 nodes     2048 raw samples     55.6%       67.4%    64.1%
MLP, 2500 nodes    2048 raw samples     60.1%       71.3%    68.6%
AvgPool, 5 stride  2048 raw samples     59.6%       70.7%    68.1%
MLP, 500 nodes     16384 raw samples    57.1%       76.3%    68.4%
CNN, 64 stride     16384 raw samples    61.9%       80.1%    73.9%

Table 7: The Kimiko Ishizaka recording of Bach's Prelude in D major (MusicNet id 2303).
B1ZXuTolx
Under review as a conference paper at ICLR 2017

REVISITING DENOISING AUTO-ENCODERS

Luis Gonzalo Sanchez Giraldo
Department of Computer Science
University of Miami
Coral Gables, FL 33124, USA
lgsanchez@cs.miami.edu

ABSTRACT

Denoising auto-encoders (DAEs) were proposed as a simple yet powerful way to obtain representations in an unsupervised manner, by learning a map that approximates the clean inputs from their corrupted versions. However, the original objective function proposed for DAEs does not guarantee that denoising happens only at the encoding stages. We argue that a better representation can be obtained if the encoder is forced to carry out most of the denoising effort. Here, we propose a simple modification to the DAE's objective function that accomplishes the above goal.

1 INTRODUCTION

Auto-encoders (AEs) are unsupervised learning algorithms that capture structure in data by finding a representation of the inputs (encoding) from which they can be recovered, at least approximately. By learning a transformation G (encoding) from the input x ∈ X to z = G(x) ∈ Z, the auto-encoder tries to capture the structure of the input. To guarantee that the transformation G preserves the information about x, a decoder G̃⁻¹ (an approximate inverse map) is also learned such that a measure of fidelity E[D(X, X̃)] between the input and its reconstruction is optimized.

Learning non-linear encoding and decoding mappings has proved to be a non-trivial problem. For example, there is no consensus on what the dimensionality of the encoding space should be. On one side of the spectrum, AE architectures such as bottleneck networks map input data x ∈ X ⊆ R^d to a lower dimensional space Z ⊆ R^p, and then map it back to the input space X through dimensionality expansion. The intuition behind bottleneck networks is that the lower dimensional space forces the encoder to capture meaningful relations between the variables in the input space. While this is clear when one is restricted to linear mappings, the problem becomes less well-understood if non-linear mappings are allowed. The idea of using a low dimensional latent space has recently been employed in the context of modern generative models such as variational auto-encoders (VAEs) (Kingma & Welling, 2013) and generative adversarial networks (GANs) (Goodfellow et al., 2014).

On the other hand, AE architectures can contain over-complete representations, where inputs are mapped to a high dimensional encoding space, much higher than the dimensionality of the input space (dim(X) < dim(Z)). Over-complete representations have proved very useful in supervised learning. However, effective learning of these high dimensional mappings relies on specialized mechanisms that attempt to avoid trivial solutions, or on the large number of constraints imposed by the supervised task. Approaches such as sparse encoding, which can be applied to over-complete scenarios, use the concept of effective dimensionality. A sparsity constraint induces an active set of variables with an expected L0 norm smaller than the input space dimensionality. Many sparse coding procedures require solving an optimization problem at inference time, since the encoding mapping is not explicit. However, it has been shown that efficient inference can be made possible by training a nonlinear feed-forward network to mimic the output of a sparse encoding algorithm (Ranzato et al., 2006)
and, more recently, with the proposal of techniques that impose very strong sparsity constraints, such as winner-take-all auto-encoders (Makhzani & Frey, 2015).

An important set of techniques that do not necessarily fall into either extreme in terms of dimensionality are based on the concept of robustness of the representation. A representation is said to be robust if it is able to retain information about the input even if the input or the representation undergo perturbations. For example, contractive auto-encoders (CAEs) (Rifai et al., 2011a;b) penalize the sensitivity of the encoding map G(x) to perturbations of the input x by minimizing the expected Frobenius norm of the Jacobian E[J_G(X)] of G while maximizing fidelity of the reconstruction. Within this category, learning representations by local denoising, better known as denoising auto-encoders, provides a very general way to carry out unsupervised learning. Denoising auto-encoders (Vincent et al., 2008) were conceived as a very intuitive way to capture robustness in the encoding-decoding mapping, by simply minimizing the error between the uncorrupted input and the output reconstruction of the auto-encoder when a corrupted version of the input is fed to the map.

However, as already pointed out in Rifai et al. (2011a), the DAE objective minimizes the reconstruction error; therefore DAEs lack explicit robustness in the encoding phase. A workaround to the above problem can make DAEs an excellent choice for unsupervised learning. DAEs can be a very appealing alternative due to their simplicity for training and elegant interpretation. Due to the simplicity of their training criterion, DAEs have the potential to adapt to different kinds of architectures and to scale with the application. We argue that the current limitation of DAEs can be overcome by making a very simple modification to the training objective, leading to learning better representations in terms of robustness. The modified objective for the DAE and its justification are the main contributions of our work.

2 DENOISING AUTO-ENCODERS

Let X be the random variable representing the input, and X̂ be a corrupted version obtained by a stochastic operator q_C(X̂|X). The goal of the DAE is to learn a deterministic encoder-decoder composition f(·) = G̃⁻¹(G(·)) such that, for Y = f(X̂), the quantity E[D(Y, X)] is minimized. The corruption process leads to learning non-trivial transformations of the input, in particular for the over-complete cases, where the identity mapping can be learned if there is no appropriate capacity control mechanism in place. Let G and G̃⁻¹ be parametrized by θ = {θ_enc, θ_dec}. Parameter learning of the DAE can be formulated as the following optimization problem:

minimize_{θ∈Θ}  (1/N) Σ_{i=1}^{N} E_{X̂_i}[ D(f_θ(X̂_i), x_i) ],    (1)

where X̂_i ∼ q_C(X̂|X = x_i). As we already mentioned, a solution to (1) does not impose explicit restrictions on the encoder. In this approach, only the dimensionality and the range of the encoder can be controlled by modifying its architecture. Therefore, there exists the possibility that the encoder could be learning a feature map for which some dimensions still carry the effects of perturbing the original input.
In many cases, the actual denoising could be taking place in the decoding phase. Below, we present a simple way to overcome the above limitation, based on a modification to the DAE objective (1).

2.1 MODIFIED DENOISING OBJECTIVE

In order to explicitly enforce robustness in the encoding phase, we can measure the amount of distortion in the encoder by comparing the feature values resulting from the original uncorrupted input against the feature values obtained from encoding its corrupted counterpart. Like D in (1), this difference can be measured by a function D_enc(·,·); for instance, cross-entropy can be used for encoders that map input values to [0,1]. This leads to the following modified optimization problem:

minimize_{θ∈Θ}  (1/N) Σ_{i=1}^{N} E_{X̂_i}[ D(f_θ(X̂_i), x_i) ] + (1/λ) E_{X̂_i}[ D_enc(G(X̂_i), G(x_i)) ],    (2)

where λ is a tradeoff parameter. The original DAE objective can simply be obtained by setting λ to infinity. (A numerical sketch of this objective appears at the end of Sect. 2.3.)

2.2 ANALYSIS OF THE MODIFIED DAE

The modified objective (2) appears as an intuitive way to explicitly enforce robustness in the encoding phase. Unlike the plain objective where Y = f(X), the stochastic operator q_C in conjunction with the encoder function G(·) leads to a random map from an instance x of X to the random variable Z|X = x. In this case, the minimization of the reconstruction error leads to maximization of a lower bound on the mutual information between Z and X.¹ However, the information maximization perspective does not provide insights about how the noise acts as capacity control on the encoder. Since the only constraint on the entropy H(Z) is the encoder architecture, the mutual information I(X;Z) = H(X) − H(X|Z) = H(Z) − H(Z|X) does not tell us much about the conditional entropy H(Z|X), which is directly related to the encoder. Bear in mind that the corruption process is still under consideration; thus the map from X to Z is stochastic. The conditional entropy H(Z|X) can be upper bounded by H(Z|G(X)):²

H(Z|X) ≤ H(Z|G(X)) = E_{X̂,G(X)}[ −log p(G(X̂) | G(X)) ]    (3)
                    ∝ E_{X̂,X}[ D_enc(G(X̂), G(X)) ],    (4)

which corresponds to the population version of the second term of the objective (2).

¹ Note that the information maximization view applies to the conventional auto-encoder as well, the main difference being the stochastic map.
² This follows from the data processing inequality.

2.3 ARCHITECTURAL CONSIDERATIONS AND THE ENCODING LOSS

The above formulation assumes that the encoding loss term matches the conditional log-likelihood function. Thus, the encoding loss should also match the encoding architecture. For instance, if we use sigmoidal units, we can assume a multivariate Bernoulli distribution for the code and use cross-entropy as the encoding loss. Architectures inducing a continuous code space may require further considerations in the choice of the loss or architectural constraints. For instance, a linear encoder with the euclidean distance as the encoding loss may lead to a trivial minimization of the modified objective: by reducing the norm of the encoder weights, the distance between the codes of the clean and corrupted samples can be made arbitrarily small, compensated only by a scaling of the decoder weights. In this case, a simple way to circumvent this issue is to use tied weights in the decoder. In the linear case, the role of tied weights is easy to understand; however, a similar interpretation may not extend to non-linear cases. To avoid trivial shrinkage of the encoding space, and without resorting to tied weights, we can employ a normalized distance function, such as the squared euclidean distance divided by the total variance of the encoded data, or add a batch normalization layer to the top layer of the encoder.
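The following is a minimal numerical sketch of objective (2) for a single sigmoidal layer, with cross-entropy for both terms as suggested in Sect. 2.3. It is a forward pass only; the names are ours, and this is illustrative rather than the author's implementation.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def modified_dae_loss(x, x_corrupt, W_enc, b_enc, W_dec, b_dec, lam=1.0):
    """Per-batch value of the modified DAE objective (Eq. 2)."""
    g_clean = sigmoid(x @ W_enc + b_enc)           # G(x)
    g_noisy = sigmoid(x_corrupt @ W_enc + b_enc)   # G(x_hat)
    recon = sigmoid(g_noisy @ W_dec + b_dec)       # f(x_hat)

    eps = 1e-8
    # decoding loss D: cross-entropy between clean input and reconstruction
    d_dec = -np.mean(np.sum(x * np.log(recon + eps)
                            + (1 - x) * np.log(1 - recon + eps), axis=1))
    # encoding loss D_enc: cross-entropy between the codes of the clean
    # input and of its corrupted counterpart
    d_enc = -np.mean(np.sum(g_clean * np.log(g_noisy + eps)
                            + (1 - g_clean) * np.log(1 - g_noisy + eps), axis=1))
    return d_dec + d_enc / lam   # lambda = infinity recovers the plain DAE
```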
2.4 RELATION TO OTHER APPROACHES

In a similar spirit to the proposed modified denoising auto-encoder objective, contractive auto-encoders achieve robustness in the encoder by explicitly computing a regularization term based on the L2 norm of the Jacobian of the encoder. In the regime of small Gaussian perturbations, the modified objective can be approximated by the Jacobian of the encoding transformation. Nevertheless, the setting of the modified denoising auto-encoder is more general in terms of distances and forms of input corruption, which can lead to properties different from those obtained by manipulating the Jacobian of the transformation. Another interpretation of the learning algorithm is as a Siamese network of the encoders, where the goal is to map the clean and corrupted input to the "same" code.

3 EXPERIMENTS

This section describes some of the experiments we have carried out with the modified DAE. We qualitatively illustrate how, for over-complete representations, the DAE benefits from adding the penalty on the encoding space.

3.1 SYNTHETIC DATA

Gaussian distributed data: The first example corresponds to a set of data points drawn from a bivariate Gaussian distribution with zero mean and covariance matrix

Σ_X = [ 1     0.95
        0.95  1    ].    (5)

We use a linear encoding, that is g(x) = x, which is also over-complete since the encoder projects the 2-dimensional data points onto 10 different directions. The conventional auto-encoder would overfit, being able to achieve zero reconstruction error, but it won't be able to implicitly retain what is thought to be the structure in the data. We also compare this output to the outputs of two nonlinear auto-encoders, one using logsig units g(x) = 1/(1 + exp(−x)), and the other rectified linear units (ReLU) g(x) = max{0, x}.

Figure 1: Outputs for different activation functions of the modified DAE with an over-complete representation when the inputs are Gaussian distributed: (a) linear g(x) = x; (b) ReLU g(x) = max{0, x}; (c) logsig g(x) = 1/(1 + exp(−x)).

Figure 1 shows the outputs of the three auto-encoders on the Gaussian distributed data. It can be seen that the outputs approximately align with what corresponds roughly to the first principal component of the data. Notice that neither a bottleneck nor shrinkage of parameters was explicitly defined. The parameters of our cost function are λ = 1 for the distortion trade-off, and σ = 0.5 for the noise level.
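For reproducibility of the first synthetic experiment, here is a short sketch of the data and its corruption per Eq. (5) and the stated noise level. The sample size of 1000 is our assumption; the paper does not state it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bivariate Gaussian training data for the first synthetic experiment (Eq. 5)
cov = np.array([[1.0, 0.95],
                [0.95, 1.0]])
X = rng.multivariate_normal(mean=np.zeros(2), cov=cov, size=1000)

# Corrupted inputs for the DAE: additive Gaussian noise with sigma = 0.5
sigma = 0.5
X_corrupt = X + sigma * rng.normal(size=X.shape)
```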
Mixture of Gaussians: The second example employs a mixture of three Gaussian distributions to show the output of the modified DAE in a nonlinear scenario, where an over-complete representation followed by a nonlinearity can be advantageous. The means and covariances of the mixture components are

μ_1 = (2, −2),  μ_2 = (−2, −2),  μ_3 = (6, −2);  and

Σ_1 = [ 1      −0.95
        −0.95   1    ],   Σ_2 = Σ_3 = [ 1     0.95
                                         0.95  1    ],    (6)

respectively, and the mixing weights are p_1 = 0.5 and p_2 = p_3 = 0.25.

Figure 2: Outputs for different activation functions of the modified DAE with an over-complete representation when the inputs are a mixture of Gaussian distributions: (a) linear g(x) = x; (b) ReLU g(x) = max{0, x}; (c) logsig g(x) = 1/(1 + exp(−x)); (d) SatLU g(x) = max{0, x} − max{0, x − 1}.

Figure 2 shows the outputs of the four auto-encoders on the mixture of Gaussian distributions. The auto-encoders employ linear, rectified linear, sigmoidal, and saturated linear units. It can be seen that the outputs approximately align with what can be thought of as the principal curves of the data. Again, we want to stress that neither a bottleneck nor shrinkage was explicitly defined. In this case each of the auto-encoders has 20 units for encoding, which would easily overfit the data in the absence of any regularization or ad-hoc constraints such as tied weights. The parameters of our cost function are λ = 1 and distortion level σ = 0.5. The linear units seem to fit the data, but as we previously mentioned they favor the principal components. Increasing the noise would collapse the reconstructed points onto a line. This is not necessarily the case when nonlinear units are considered.

Finally, in Figure 3 we show the resulting energy ||x − x̂||² landscapes for the over-complete auto-encoder with rectified linear units after being trained with the original DAE objective and with the proposed modified DAE objective. The modified objective makes the AE carve well-defined ravines in the energy landscape at the points where the majority of the data lies.

Figure 3: Energy ||x − x̂||² landscapes for different AE algorithms: (a) non-regularized AE; (b) DAE with noise σ = 0.5; (c) modified DAE with noise σ = 0.5 and λ = 1.

3.2 MNIST

Here, we observe the influence of the extra term in the objective function of the DAE. We train a single-hidden-layer DAE with logsig activations and cross-entropy loss on both the encoding and decoding layers. The number of fully connected hidden units is 2048. Input images are corrupted with zero-mask noise at a 30% corruption level. Once the DAE is trained for 100 epochs, we use the encoding layer as a feature extractor for a multi-class logistic regression classifier. Figure 4(a) displays the average test error over 30 runs of the DAE training using random initial conditions, for different values of λ. We remind the reader that there is no fine-tuning of the weights of the encoder layer; after unsupervised pre-training, only the weights of the logistic regression layer are trained using label information. Moreover, since the main goal here is to observe the influence of the encoding cost term, we focus on comparisons at the single-hidden-layer level rather than trying to increase performance by stacking multiple layers.

Figure 4: (a) Test errors for different values of λ.

4 CONCLUSIONS

We presented an algorithm for learning auto-encoders based on a modified denoising objective that explicitly enforces the denoising to be carried out during the encoding phase.
Moreover, we described how the modified objective can be understood as minimizing an upper bound on the conditional entropy of the code given the inputs, assuming the corruption process to be part of the encoding. By simply minimizing the distance between the encoding of the uncorrupted inputs and the encoding of their corrupted counterparts, we show that robustness of the auto-encoder can be guaranteed at the encoding phase. Experiments using over-complete bases showed that the modified DAE was able to learn a useful encoding mapping (representation) and can also learn a regularized input-output map in an implicit manner.

REFERENCES

Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pp. 2672–2680, 2014. URL http://papers.nips.cc/paper/5423-generative-adversarial-nets.

Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. CoRR, abs/1312.6114, 2013. URL http://arxiv.org/abs/1312.6114.

Alireza Makhzani and Brendan J. Frey. Winner-take-all autoencoders. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pp. 2791–2799, 2015. URL http://papers.nips.cc/paper/5783-winner-take-all-autoencoders.

Marc'Aurelio Ranzato, Christopher Poultney, Sumit Chopra, and Yann LeCun. Efficient learning of sparse representations with an energy-based model. In Neural Information Processing Systems, 2006.

Salah Rifai, Grégoire Mesnil, Pascal Vincent, Xavier Muller, Yoshua Bengio, Yann Dauphin, and Xavier Glorot. Higher order contractive auto-encoder. In European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), 2011a.

Salah Rifai, Pascal Vincent, Xavier Muller, Xavier Glorot, and Yoshua Bengio. Contractive auto-encoders: Explicit invariance during feature extraction. In 28th International Conference on Machine Learning, 2011b.

Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, ICML '08, pp. 1096–1103, New York, NY, USA, 2008. ACM. ISBN 978-1-60558-205-4. doi: 10.1145/1390156.1390294. URL http://doi.acm.org/10.1145/1390156.1390294.
BkbY4psgg
Published as a conference paper at ICLR 2017

MAKING NEURAL PROGRAMMING ARCHITECTURES GENERALIZE VIA RECURSION

Jonathon Cai, Richard Shin, Dawn Song
Department of Computer Science
University of California, Berkeley
Berkeley, CA 94720, USA
{jonathon,ricshin,dawnsong}@cs.berkeley.edu

ABSTRACT

Empirically, neural networks that attempt to learn programs from data have exhibited poor generalizability. Moreover, it has traditionally been difficult to reason about the behavior of these models beyond a certain level of input complexity. In order to address these issues, we propose augmenting neural architectures with a key abstraction: recursion. As an application, we implement recursion in the Neural Programmer-Interpreter framework on four tasks: grade-school addition, bubble sort, topological sort, and quicksort. We demonstrate superior generalizability and interpretability with small amounts of training data. Recursion divides the problem into smaller pieces and drastically reduces the domain of each neural network component, making it tractable to prove guarantees about the overall system's behavior. Our experience suggests that in order for neural architectures to robustly learn program semantics, it is necessary to incorporate a concept like recursion.

1 INTRODUCTION

Training neural networks to synthesize robust programs from a small number of examples is a challenging task. The space of possible programs is extremely large, and composing a program that performs robustly on the infinite space of possible inputs is difficult, in part because it is impractical to obtain enough training examples to easily disambiguate amongst all possible programs. Nevertheless, we would like the model to quickly learn to represent the right semantics of the underlying program from a small number of training examples, not an exhaustive number of them.

Thus far, to evaluate the efficacy of neural models on programming tasks, the only metric that has been used is generalization of expected behavior to inputs of greater complexity (Vinyals et al. (2015), Kaiser & Sutskever (2015), Reed & de Freitas (2016), Graves et al. (2016), Zaremba et al. (2016)). For example, for the addition task, the model is trained on short inputs and then tested on its ability to sum inputs with much longer numbers of digits. Empirically, existing models suffer from a common limitation: generalization becomes poor beyond a threshold level of complexity. Errors arise due to undesirable and uninterpretable dependencies and associations the architecture learns to store in some high-dimensional hidden state. This makes it difficult to reason about what the model will do when given complex inputs.

One common strategy to improve generalization is to use curriculum learning, where the model is trained on inputs of gradually increasing complexity. However, models that make use of this strategy eventually fail after a certain level of complexity (e.g. the single-digit multiplication task in Zaremba et al. (2016), the bubble sort task in Reed & de Freitas (2016), and the graph tasks in Graves et al. (2016)). In this version of curriculum learning, even though the inputs are gradually becoming more complex, the semantics of the program is succinct and does not change. Although the model is exposed to more and more data, it might learn spurious and overly complex representations of the program, as suggested in Zaremba et al. (2016).
That is to say, the network does not learn the true program semantics.

In this paper, we propose to resolve these issues by explicitly incorporating recursion into neural architectures. Recursion is an important concept in programming languages and a critical tool to reduce the complexity of programs. We find that recursion makes it easier for the network to learn the right program and generalize to unknown situations. Recursion enables provable guarantees on neural programs' behavior without needing to exhaustively enumerate all possible inputs to the programs. This paper is the first (to our knowledge) to investigate the important problem of provable generalization properties of neural programs. As an application, we incorporate recursion into the Neural Programmer-Interpreter architecture and consider four sample tasks: grade-school addition, bubble sort, topological sort, and quicksort. Empirically, we observe that the learned recursive programs solve all valid inputs with 100% accuracy after training on a very small number of examples, out-performing previous generalization results. Given verification sets that cover all the base cases and reduction rules, we can provide proofs that these learned programs generalize perfectly. This is the first time one can provide provable guarantees of perfect generalization for neural programs.

2 THE PROBLEM AND OUR APPROACH

2.1 THE PROBLEM OF GENERALIZATION

When constructing a neural network for the purpose of learning a program, there are two orthogonal aspects to consider. The first is the actual model architecture. Numerous models have been proposed for learning programs; to name a few, this includes the Differentiable Neural Computer (Graves et al., 2016), Neural Turing Machine (Graves et al., 2014), Neural GPU (Kaiser & Sutskever, 2015), Neural Programmer (Neelakantan et al., 2015), Pointer Network (Vinyals et al., 2015), Hierarchical Attentive Memory (Andrychowicz & Kurach, 2016), and Neural Random Access Machine (Kurach et al., 2016). The architecture usually possesses some form of memory, which could be internal (such as the hidden state of a recurrent neural network) or external (such as a discrete "scratch pad" or a memory block with differentiable access). The second is the training procedure, which consists of the form of the training data and the optimization process. Almost all architectures train on program input/output pairs. The only model, to our knowledge, that does not train on input-output pairs is the Neural Programmer-Interpreter (Reed & de Freitas, 2016), which trains on synthetic execution traces.

To evaluate a neural network that learns a neural program to accomplish a certain task, one common evaluation metric is how well the learned model M generalizes. More specifically, when M is trained on simpler inputs, such as inputs of a small length, the generalization metric evaluates how well M will do on more complex inputs, such as inputs of much longer length. M is considered to have perfect generalization if M can give the right answer for any input, such as inputs of arbitrary length.

As mentioned in Section 1, all approaches to neural programming today fare poorly on this generalization issue.
We hypothesize that the reason for this is that the neural network learns to spuriously depend on specific characteristics of the training examples that are irrelevant to the true program semantics, such as the length of the training inputs, and thus fails to generalize to more complex inputs.

In addition, none of the current approaches to neural programming provide a method or even aim to enable provable guarantees about generalization. The memory updates of these neural programs are so complex and interdependent that it is difficult to reason about the behaviors of the learned neural program under previously unseen situations (such as problems with longer inputs). This is highly undesirable, since being able to provide the correct answer in all possible settings is one of the most important aspects of any learned neural program.

2.2 OUR APPROACH USING RECURSION

In this paper, we propose that the key abstraction of recursion is necessary for neural programs to generalize. The general notion of recursion has been an important concept in many domains, including mathematics and computer science. In computer science, recursion (as opposed to iteration) involves solving a larger problem by combining solutions to smaller instances of the same problem. Formally, a function exhibits recursive behavior when it possesses two properties: (1) base cases, terminating scenarios that do not use recursion to produce answers; and (2) a set of rules that reduces all other problems toward the base cases. Some functional programming languages go so far as not to define any looping constructs but rely solely on recursion to enable repeated execution of the same code.

In this paper, we propose that recursion is an important concept for neural programs as well. In fact, we argue that recursion is an essential element for neural programs to generalize, and that it makes it tractable to prove the generalization of neural programs. Recursion can be implemented differently for different neural programming models. Here, as a concrete and general example, we consider a general Neural Programming Architecture (NPA), similar to the Neural Programmer-Interpreter (NPI) of Reed & de Freitas (2016). In this architecture, we consider a core controller, e.g., an LSTM in NPI's case, but possibly other networks in different cases. There is a (changing) list of neural programs used to accomplish a given task. The core controller acts as a dispatcher for the programs. At each time step, the core controller can decide to select one of the programs to call with certain arguments. When the program is called, the current context, including the caller's memory state, is stored on a stack; when the program returns, the stored context is popped off the stack to resume execution in the previous caller's context.

In this general Neural Programming Architecture, it is easy to support recursion. In particular, recursion can be implemented as a program calling itself. Because the context of the caller is stored on a stack when it calls another program and the callee starts in a fresh context, this enables recursion simply by allowing a program to call itself. In practice, we can additionally use tail recursion optimization to avoid problems with the call stack growing too deep. A sketch of this calling convention is given below.
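The following is a minimal, framework-free Python sketch of this dispatch-and-stack behavior. The `controller` and `env` interfaces are assumptions of ours, not the paper's code; the point is only to make the caller-context bookkeeping explicit.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str            # "return", "primitive", or "call"
    name: str = ""
    args: tuple = ()

def run(controller, program, args, env, stack):
    """Dispatch loop of a generic Neural Programming Architecture (sketch).

    The caller's controller state is pushed onto `stack` before a sub-call
    and popped afterwards, so a program may call itself without clobbering
    its caller's context: the callee always starts in a fresh context.
    """
    state = controller.initial_state()           # fresh context for the callee
    while True:
        state, action = controller.step(state, program, args, env)
        if action.kind == "return":
            return
        elif action.kind == "primitive":
            env.apply(action.name, action.args)  # primitives mutate the environment
        else:                                    # sub-program call, possibly recursive
            stack.append(state)                  # save caller context
            run(controller, action.name, action.args, env, stack)
            state = stack.pop()                  # restore caller context on return
```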
Thus, any general Neural Programming Architecture supporting such a call structure can be made to support recursion. In particular, this condition is satisfied by NPI, and thus the NPI model naturally supports recursion (even though the authors of NPI did not consider this aspect explicitly).

By nature, recursion reduces the complexity of a problem to simpler instances. Thus, recursion helps decompose a problem and makes it easier to reason about a program's behavior for previously unseen situations such as longer inputs. In particular, given that a recursion is defined by the two properties mentioned before, the base cases and the set of reduction rules, we can prove that a recursive neural program generalizes perfectly if we can prove that (1) it performs correctly on the base cases and (2) it learns the reduction rules correctly. For many problems, the base cases and reduction rules usually consist of a finite (often small) number of cases. For problems where the base cases may be extremely large or infinite, such as certain forms of motor control, recursion can still help reduce the problem of generalization to these two aspects and make the generalization problem significantly simpler to handle and reason about.

As a concrete instantiation, we show in this paper that we can enable recursive neural programs in the NPI model, and thus enable perfectly generalizable neural programs for tasks such as sorting, where the original, non-recursive NPI program fails. As aforementioned, the NPI model naturally supports recursion. However, the authors of NPI did not consider explicitly the notion of recursion and, as a consequence, did not learn recursive programs. We show that by modifying the training procedure, we enable the NPI model to learn recursive neural programs. As a consequence, our learned neural programs empirically achieve perfect generalization from a very small number of training examples. Furthermore, given a verification input set that covers all base cases and reduction rules, we can formally prove that the learned neural programs achieve perfect generalization after verifying their behavior on the verification input set. This is the first time one can provide provable guarantees of perfect generalization for neural programs.

We would also like to point out that in this paper, we provide as an example one way to train a recursive neural program, by providing a certain training execution trace to the NPI model. However, our concept of recursion for neural programs is general. In fact, it is one of our future directions to explore new ways to train a recursive neural program without providing explicit training execution traces, or with only partial or non-recursive traces.

3 APPLICATION TO LEARNING RECURSIVE NEURAL PROGRAMS WITH NPI

3.1 BACKGROUND: NPI ARCHITECTURE

As discussed in Section 2, the Neural Programmer-Interpreter (NPI) is an instance of a Neural Programming Architecture and hence it naturally supports recursion. In this section, we give a brief review of the NPI architecture from Reed & de Freitas (2016) as background.

We describe the details of the NPI model relevant to our contributions. We adapt machinery from the original paper slightly to fit our needs. The NPI model has three learnable components: a task-agnostic core, a program-key embedding, and domain-specific encoders that allow the NPI to operate in diverse environments.

The NPI accesses an external environment, Q, which varies according to the task.
The core module of the NPI is an LSTM controller that takes as input a slice of the current external environment, via a set of pointers, and a program and arguments to execute. NPI then outputs the return probability and the next program and arguments to execute. Formally, the NPI is represented by the following set of equations:

s_t = f_enc(e_t, a_t),   h_t = f_lstm(s_t, p_t, h_{t−1}),
r_t = f_end(h_t),   p_{t+1} = f_prog(h_t),   a_{t+1} = f_arg(h_t).

Here t is a subscript denoting the time-step; f_enc is a domain-specific encoder (to be described later) that takes in the environment slice e_t and arguments a_t; f_lstm represents the core module, which takes in the state s_t generated by f_enc, a program embedding p_t ∈ R^P, and the hidden LSTM state h_{t−1}; f_end decodes the return probability r_t; f_prog decodes a program key embedding p_{t+1} (the original NPI paper decodes to a program key embedding k_t ∈ R^K and then computes a program embedding p_{t+1}, which we also did in our implementation, but we omit this for brevity); and f_arg decodes arguments a_{t+1}. The outputs r_t, p_{t+1}, a_{t+1} are used to determine the next action, as described in Algorithm 1. If the program is primitive, the next environmental state e_{t+1} will be affected by p_t and a_t, i.e., e_{t+1} ← f_env(e_t, p_t, a_t). As with the original NPI architecture, the experiments for this paper always used a 3-tuple of integers a_t = (a_t(1), a_t(2), a_t(3)).

Algorithm 1 Neural programming inference
1: Inputs: Environment observation e, program p, arguments a, stop threshold α
2: function RUN(e, p, a)
3:   h ← 0, r ← 0
4:   while r < α do
5:     s ← f_enc(e, a); h ← f_lstm(s, p, h)
6:     r ← f_end(h); p2 ← f_prog(h); a2 ← f_arg(h)
7:     if p2 is a primitive function then
8:       e ← f_env(e, p2, a2)
9:     else
10:      RUN(e, p2, a2)

A description of the inference procedure is given in Algorithm 1. Each step during an execution of the program does one of three things: (1) another subprogram along with associated arguments is called, as in Line 10, (2) the program writes to the environment if it is primitive, as in Line 8, or (3) the loop is terminated if the return probability exceeds a threshold α, after which the stack frame is popped and control is returned to the caller. In all experiments, α is set to 0.5. Each time a subprogram is called, the stack depth increases.

The training data for the Neural Programmer-Interpreter consists of full execution traces for the program of interest. A single element of an execution trace consists of a step input-step output pair, which can be synthesized from Algorithm 1: this corresponds to, for a given time-step, the step input tuple (e, p, a) and the step output tuple (r, p2, a2). An example of part of an addition task trace, written in shorthand, is given in Figure 1. For example, a step input-step output pair in Lines 2 and 3 of the left-hand side of Figure 1 is (ADD1, WRITE OUT 1). In this pair, the step input runs a subprogram ADD1 that has no arguments, and the step output contains a program WRITE that has arguments of OUT and 1. The environment and return probability are omitted for readability. Indentation indicates the stack is one level deeper than before.

It is important to emphasize that at inference time in the NPI, the hidden state of the LSTM controller is reset (to zero) at each subprogram call, as in Line 3 of Algorithm 1 (h ← 0). This functionality is critical for implementing recursion, since it permits us to restrict our attention to the currently relevant recursive call, ignoring irrelevant details about other contexts.
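Read as code, Algorithm 1 looks roughly like the sketch below. The `npi` object bundling the learned networks and the `env` interface are our assumptions; the fresh hidden state allocated on every call (Line 3 of Algorithm 1) is what makes self-calls safe.

```python
import numpy as np

ALPHA = 0.5  # stop threshold used in all of the paper's experiments

def run(npi, env, prog, args):
    """Python transcription of Algorithm 1 (a sketch). `npi` is assumed to
    expose the learned networks f_enc, f_lstm, f_end, f_prog, f_arg, and
    `env` implements the primitive transition f_env."""
    h = np.zeros(npi.hidden_size)    # fresh hidden state per call: enables recursion
    r = 0.0
    while r < ALPHA:
        s = npi.f_enc(env.observe(), args)
        h = npi.f_lstm(s, prog, h)
        r, prog2, args2 = npi.f_end(h), npi.f_prog(h), npi.f_arg(h)
        if npi.is_primitive(prog2):
            env.step(prog2, args2)       # primitive: write to the environment
        else:
            run(npi, env, prog2, args2)  # subprogram call; Python's call stack
                                         # plays the role of the NPI stack
```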
Non-Recursive:
1  ADD
2    ADD1
3      WRITE OUT 1
4      CARRY
5        PTR CARRY LEFT
6        WRITE CARRY 1
7        PTR CARRY RIGHT
8    LSHIFT
9      PTR INP1 LEFT
10     PTR INP2 LEFT
11     PTR CARRY LEFT
12     PTR OUT LEFT
13   ADD1
14   ...

Recursive:
1  ADD
2    ADD1
3      WRITE OUT 1
4      CARRY
5        PTR CARRY LEFT
6        WRITE CARRY 1
7        PTR CARRY RIGHT
8    LSHIFT
9      PTR INP1 LEFT
10     PTR INP2 LEFT
11     PTR CARRY LEFT
12     PTR OUT LEFT
13   ADD
14   ...

Figure 1: Addition Task. The non-recursive trace loops on cycles of ADD1 and LSHIFT, whereas in the recursive version, the ADD function calls itself (Line 13, right-hand side).

3.2 RECURSIVE FORMULATIONS FOR NPI PROGRAMS

We emphasize that the overall goal of this work is to enable the learning of a recursive program. The learned recursive program differs from neural programs learned in all previous work in an important aspect: previous approaches do not explicitly incorporate this abstraction, and hence generalize poorly, whereas our learned neural programs incorporate recursion and achieve perfect generalization.

Since NPI naturally supports the notion of recursion, a key question is how to enable NPI to learn recursive programs. We found that changing the NPI training traces is a simple way to enable this. In particular, we construct new training traces which explicitly contain recursive elements and show that, with this type of trace, NPI easily learns recursive programs. In future work, we would like to decrease supervision and construct models that are capable of coming up with recursive abstractions themselves.

In what follows, we describe the way in which we constructed NPI training traces so as to make them contain recursive elements and thus enable NPI to learn recursive programs. We describe the recursive re-formulation of traces for two tasks from the original NPI paper: grade-school addition and bubble sort. For these programs, we re-use the appropriate program sets (the associated subprograms), and we refer the reader to the appendix of Reed & de Freitas (2016) for further details on the subprograms used in addition and bubble sort. Finally, we implement recursive traces for our own topological sort and quicksort tasks.

Grade School Addition. For grade-school addition, the domain-specific encoder is

f_enc(Q, i1, i2, i3, i4, a_t) = MLP([Q(1, i1), Q(2, i2), Q(3, i3), Q(4, i4), a_t(1), a_t(2), a_t(3)]),

where the environment Q ∈ R^{4×N×K} is a scratch-pad that contains four rows (the first input number, the second input number, the carry bits, and the output) and N columns. K is set to 11, to represent the range of 10 possible digits, along with a token representing the end of input. (The original paper uses K = 10, but we found it necessary to augment the range with an end token, in order to terminate properly.) At any given time, the NPI has access to values pointed to by four pointers, one in each of the four rows, represented by Q(1, i1), Q(2, i2), Q(3, i3), and Q(4, i4).

The non-recursive trace loops on cycles of ADD1 and LSHIFT. ADD1 is a subprogram that adds the current column (writing the appropriate digit to the output row and carrying a bit to the next column if needed). LSHIFT moves the four pointers to the left, to move to the next column. The program terminates when seeing no numbers in the current column.

Figure 1 shows examples of non-recursive and recursive addition traces. We make the trace recursive by adding a tail recursive call into the trace for the ADD program after calling ADD1 and LSHIFT, as in Line 13 of the right-hand side of Figure 1. Via the recursive call, we effectively forget that the column just added exists, since the recursive call to ADD starts with a new hidden state for the LSTM controller. Consequently, there is no concept of length relevant to the problem, which has traditionally been an important focus of length-based curriculum learning.
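In ordinary code, the control flow the recursive trace teaches corresponds to something like the following sketch. This is our own illustration, not the paper's trace format; the helper semantics mirror the subprograms above, and the concrete digit-list representation is our assumption.

```python
def add(inp1, inp2, carry, out, col):
    """Tail-recursive column addition in the spirit of the recursive ADD
    trace: process one column (ADD1), shift left (LSHIFT, here col - 1),
    then call ADD itself, forgetting the columns already processed.
    `col` starts at the least-significant digit (the last index) and
    moves left; index 0 holds the most-significant digit."""
    if col < 0:                        # no digits left: terminate
        return
    s = inp1[col] + inp2[col] + carry[col]
    out[col] = s % 10                  # WRITE OUT (s % 10)
    if s >= 10 and col > 0:
        carry[col - 1] = 1             # CARRY: write a carry bit one column left
                                       # (a carry out of the leftmost column is
                                       # ignored in this toy example)
    add(inp1, inp2, carry, out, col - 1)   # LSHIFT + tail-recursive ADD

# Example: 109 + 101
a, b = [1, 0, 9], [1, 0, 1]
carry, out = [0, 0, 0], [0, 0, 0]
add(a, b, carry, out, col=2)
print(out)   # [2, 1, 0], i.e. 210
```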
Non-Recursive:
1  BUBBLESORT
2    BUBBLE
3      PTR 2 RIGHT
4      BSTEP
5        COMPSWAP
6
7        RSHIFT
8          PTR 1 RIGHT
9          PTR 2 RIGHT
10     BSTEP
11       COMPSWAP
12         SWAP 1 2
13       RSHIFT
14         PTR 1 RIGHT
15         PTR 2 RIGHT
16   RESET
17     LSHIFT
18       PTR 1 LEFT
19       PTR 2 LEFT
20     LSHIFT
21       PTR 1 LEFT
22       PTR 2 LEFT
23     PTR 3 RIGHT
24   BUBBLE
25   ...

Partial Recursive:
1  BUBBLESORT
2    BUBBLE
3      PTR 2 RIGHT
4      BSTEP
5        COMPSWAP
6
7        RSHIFT
8          PTR 1 RIGHT
9          PTR 2 RIGHT
10     BSTEP
11       COMPSWAP
12         SWAP 1 2
13       RSHIFT
14         PTR 1 RIGHT
15         PTR 2 RIGHT
16   RESET
17     LSHIFT
18       PTR 1 LEFT
19       PTR 2 LEFT
20     LSHIFT
21       PTR 1 LEFT
22       PTR 2 LEFT
23     PTR 3 RIGHT
24   BUBBLESORT
25     BUBBLE
26     ...

Full Recursive:
1  BUBBLESORT
2    BUBBLE
3      PTR 2 RIGHT
4      BSTEP
5        COMPSWAP
6
7        RSHIFT
8          PTR 1 RIGHT
9          PTR 2 RIGHT
10       BSTEP
11         COMPSWAP
12           SWAP 1 2
13         RSHIFT
14           PTR 1 RIGHT
15           PTR 2 RIGHT
16         BSTEP
17   RESET
18     LSHIFT
19       PTR 1 LEFT
20       PTR 2 LEFT
21       LSHIFT
22         PTR 1 LEFT
23         PTR 2 LEFT
24         LSHIFT
25     PTR 3 RIGHT
26   BUBBLESORT
27     BUBBLE
28     ...

Figure 2: Bubble Sort Task. The non-recursive trace loops on cycles of BUBBLE and RESET. The difference between the partial recursive and full recursive versions is in the indentation of Lines 10-15 and 20-22, since in the full recursive version, BSTEP and LSHIFT are made tail recursive; the final calls to BSTEP and LSHIFT return immediately, as they occur after the pointer reaches the end of the array. Also note that COMPSWAP conditionally swaps numbers under the bubble pointers.

Bubble Sort. For bubble sort, the domain-specific encoder is

f_enc(Q, i1, i2, i3, a_t) = MLP([Q(1, i1), Q(1, i2), i3 == length, a_t(1), a_t(2), a_t(3)]),

where the environment Q ∈ R^{1×N×K} is a scratch-pad that contains 1 row, to represent the state of the array as sorting proceeds in place, and N columns. K is set to 11, to denote the range of possible numbers (0 through 9), along with the start/end token (represented with the same encoding), which is observed when a pointer reaches beyond the bounds of the input. At any given time, the NPI has access to the values referred to by two pointers, represented by Q(1, i1) and Q(1, i2). The pointers at indices i1 and i2 are used to compare the pair of numbers considered during the bubble sweep, swapping them if the number at i1 is greater than that at i2. These pointers are referred to as bubble pointers. The pointer at index i3 represents a counter internal to the environment that is incremented once after each pass of the algorithm (one cycle of BUBBLE and RESET); when it has been incremented a number of times equal to the length of the array, the flag i3 == length becomes true and terminates the entire algorithm.

The non-recursive trace loops on cycles of BUBBLE and RESET, which logically represent one bubble sweep through the array and a reset of the two bubble pointers to the very beginning of the array, respectively.
In this version, there is a dependence on length: BSTEP and LSHIFT are called a number of times equal to one less than the length of the input array, in BUBBLE and RESET respectively.

Inside BUBBLE and RESET, there are two operations that can be made recursive. BSTEP, used in BUBBLE, compares pairs of numbers, continuously moving the bubble pointers once to the right each time until reaching the end of the array. LSHIFT, used in RESET, shifts the pointers left until reaching the start token.

We experiment with two levels of recursion: partial and full. Partial recursion only adds a tail recursive call to BUBBLESORT after BUBBLE and RESET, similar to the tail recursive call described previously for addition. The partial recursion is not enough for perfect generalization, as will be presented later in Section 4. Full recursion, in addition to making the aforementioned tail recursive call, adds two additional recursive calls: BSTEP and LSHIFT are made tail recursive. Figure 2 shows examples of traces for the different versions of bubble sort. Training on the full recursive trace leads to perfect generalization, as shown in Section 4. We performed experiments on the partially recursive version in order to examine what happens when only one recursive call is implemented, when in reality three are required for perfect generalization.

Topological Sort. We choose to implement a topological sort task for graphs. A topological sort is a linear ordering of vertices such that for every directed edge (u, v) from u to v, u comes before v in the ordering. This is possible if and only if the graph has no directed cycles; that is to say, it must be a directed acyclic graph (DAG). In our experiments, we only present DAGs as inputs and represent the vertices as values ranging from 1, ..., n, where the DAG contains n vertices.

Directed acyclic graphs are structurally more diverse than the inputs in the two tasks of grade-school addition and bubble sort. The degree of any vertex in the DAG is variable. Also, the DAG can potentially have more than one connected component, meaning it is necessary to transition between these components appropriately.

Algorithm 2 shows the topological sort task of interest. This algorithm is a variant of depth-first search. We created a program set that reflects the semantics of Algorithm 2. For brevity, we refer the reader to the appendix for further details on the program set and the non-recursive and recursive trace-generating functions used for topological sort.

Algorithm 2 Depth First Search Topological Sort
1: Color all vertices white.
2: Initialize an empty stack S and a directed acyclic graph DAG to traverse.
3: Begin traversing from Vertex 1 in the DAG.
4: function TOPOSORT(DAG)
5:   while there is still a white vertex u do
6:     color[u] = grey
7:     v_active = u
8:     do
9:       if v_active has a white child v then
10:        color[v] = grey
11:        push v_active onto S
12:        v_active = v
13:      else
14:        color[v_active] = black
15:        write v_active to result
16:        if S is empty then pass
17:        else pop the top vertex off S and set v_active to it
18:    while S is not empty
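For reference, a direct Python transcription of Algorithm 2 (our own, using an adjacency-list dict). Algorithm 2 emits a vertex when it turns black, i.e. in DFS finishing order; we reverse at the end to obtain a conventional topological order, since the layout of the paper's result scratch-pad is not fully specified in this excerpt.

```python
def toposort(dag, n):
    """DFS-based topological sort over a DAG given as an adjacency list
    {vertex: [children]} with vertices numbered 1..n, following Algorithm 2."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {v: WHITE for v in range(1, n + 1)}
    result, stack = [], []
    for u in range(1, n + 1):            # "while there is still a white vertex u"
        if color[u] != WHITE:
            continue
        color[u] = GREY
        v_active = u
        while True:
            white_children = [v for v in dag.get(v_active, []) if color[v] == WHITE]
            if white_children:           # descend into the first white child
                v = white_children[0]
                color[v] = GREY
                stack.append(v_active)
                v_active = v
            else:                        # blacken, record, and backtrack
                color[v_active] = BLACK
                result.append(v_active)
                if not stack:
                    break
                v_active = stack.pop()
    return list(reversed(result))        # finishing order reversed = topological order

# Example: the 5-vertex DAG used for the single-trace experiment in Section 4.1.
print(toposort({1: [2, 5], 2: [4, 5], 3: [5]}, 5))   # e.g. [3, 1, 2, 5, 4]
```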
For topological sort, the domain-specific encoder is

f_enc(DAG, Q_color, p_stack, p_start, v_active, childList, a_t)
  = MLP([Q_color(p_start), Q_color(DAG[v_active][childList[v_active]]), p_stack == 1, a_t(1), a_t(2), a_t(3)]),

where Q_color ∈ R^{U×4} is a scratch-pad that contains U rows, each containing one of four colors (white, grey, black, invalid) in a one-hot encoding. U varies with the number of vertices in the graph. We further have Q_result ∈ N^U, a scratch-pad which contains the sorted list of vertices at the end of execution, and Q_stack ∈ N^U, which serves the role of the stack S in Algorithm 2. The contents of Q_result and Q_stack are not exposed directly through the domain-specific encoder; rather, we define primitive functions which manipulate these scratch-pads.

The DAG is represented as an adjacency list where DAG[i][j] refers to the j-th child of vertex i. There are 3 pointers (p_result, p_stack, p_start): p_result points to the next empty location in Q_result, p_stack points to the top of the stack in Q_stack, and p_start points to the candidate starting node for a connected component. There are 2 variables (v_active and v_save): v_active holds the active vertex (as in Algorithm 2) and v_save holds the value of v_active before executing Line 12 of Algorithm 2. childList ∈ N^U is a vector of pointers, where childList[i] points to the next child under consideration for vertex i.

The three environment observations aid with control flow in Algorithm 2. Q_color(p_start) contains the color of the current start vertex, used in the evaluation of the condition in the WHILE loop in Line 5 of Algorithm 2. Q_color(DAG[v_active][childList[v_active]]) refers to the color of the next child of v_active, used in the evaluation of the condition in the IF branch in Line 9 of Algorithm 2. Finally, the boolean p_stack == 1 is used to check whether the stack is empty in Line 18 of Algorithm 2.

An alternative way of representing the environment slice is to expose the values of the absolute vertices to the model; however, this makes it difficult to scale the model to larger graphs, since large vertex values are not seen during training time.

We refer the reader to the appendix for the non-recursive trace-generating functions. In the non-recursive trace, there are four functions that can be made recursive (TOPOSORT, CHECK_CHILD, EXPLORE, and NEXT_START), and we add a tail recursive call to each of these functions in order to make the recursive trace. In particular, in the EXPLORE function, adding a tail recursive call resets and stores the hidden states associated with vertices in a stack-like fashion. This makes it so that we only need to consider the vertices in the subgraph that are currently relevant for computing the sort, allowing simpler reasoning about behavior for large graphs. The sequence of primitive operations (MOVE and WRITE operations) for the non-recursive and recursive versions is exactly the same.

Quicksort. We implement a quicksort task in order to demonstrate that recursion helps with learning divide-and-conquer algorithms. We use the Lomuto partition scheme; the logic for the recursive trace is shown in Algorithm 3. For brevity, we refer the reader to the appendix for information about the program set and the non-recursive and recursive trace-generating functions for quicksort.
The logic for the non-recursive trace is shown in Algorithm 4 in the appendix.

Algorithm 3 Recursive Quicksort
1: Initialize an array A to sort.
2: Initialize lo and hi to be 1 and n, where n is the length of A.
3:
4: function QUICKSORT(A, lo, hi)
5:   if lo < hi then
6:     p = PARTITION(A, lo, hi)
7:     QUICKSORT(A, lo, p − 1)
8:     QUICKSORT(A, p + 1, hi)
9:
10: function PARTITION(A, lo, hi)
11:   pivot = lo
12:   for j ∈ [lo, hi − 1] do
13:     if A[j] ≤ A[hi] then
14:       swap A[pivot] with A[j]
15:       pivot = pivot + 1
16:   swap A[pivot] with A[hi]
17:   return pivot

For quicksort, the domain-specific encoder is

f_enc(Q_array, Q_stackLo, Q_stackHi, p_lo, p_hi, p_stackLo, p_stackHi, p_pivot, p_j, a_t)
  = MLP([Q_array(p_j) ≤ Q_array(p_hi), p_j == p_hi, Q_stackLo(p_stackLo − 1) < Q_stackHi(p_stackHi − 1), p_stackLo == 1, a_t(1), a_t(2), a_t(3)]),

where Q_array ∈ R^{U×11} is a scratch-pad that contains U rows, each containing one of 11 values (one of the numbers 0 through 9, or an invalid state). Our implementation uses two stacks Q_stackLo and Q_stackHi, each in R^U, that store the arguments to the recursive QUICKSORT calls in Algorithm 3; before each recursive call, the appropriate arguments are popped off the stack and written to p_lo and p_hi.

There are 6 pointers (p_lo, p_hi, p_stackLo, p_stackHi, p_pivot, p_j). p_lo and p_hi point to the lo and hi indices of the array, as in Algorithm 3. p_stackLo and p_stackHi point to the top (empty) positions in Q_stackLo and Q_stackHi. p_pivot and p_j point to the pivot and j indices of the array, used in the PARTITION function in Algorithm 3. The 4 environment observations aid with control flow: Q_stackLo(p_stackLo − 1) < Q_stackHi(p_stackHi − 1) implements the lo < hi comparison in Line 5 of Algorithm 3, p_stackLo == 1 checks if the stacks are empty in Line 18 of Algorithm 4, and the other observations (all involving p_pivot or p_j) deal with logic in the PARTITION function.

Note that the recursion for quicksort is not purely tail recursive and therefore represents a more complex kind of recursion that is harder to learn than in the previous tasks. Also, compared to the bubble pointers in bubble sort, the pointers that perform the comparison for quicksort (the COMPSWAP function) are usually not adjacent to each other, making quicksort less local than bubble sort. In order to compensate for this, p_pivot and p_j require special functions (MOVE_PIVOT_LO and MOVE_J_LO) to properly set them to lo in Lines 11 and 12 of the PARTITION function in Algorithm 3.
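As a cross-check of Algorithm 3's logic, here is a direct Python transcription of the Lomuto scheme above (our own, 0-indexed rather than the paper's 1-indexed arrays):

```python
def partition(A, lo, hi):
    """Lomuto partition: place A[hi] at its sorted position, return that index."""
    pivot = lo
    for j in range(lo, hi):          # j in [lo, hi - 1]
        if A[j] <= A[hi]:
            A[pivot], A[j] = A[j], A[pivot]
            pivot += 1
    A[pivot], A[hi] = A[hi], A[pivot]
    return pivot

def quicksort(A, lo, hi):
    """Recursive quicksort as in Algorithm 3: two non-tail recursive calls."""
    if lo < hi:
        p = partition(A, lo, hi)
        quicksort(A, lo, p - 1)
        quicksort(A, p + 1, hi)

arr = [8, 2, 1, 2, 0, 8, 5, 8, 3, 7]   # the 10-element verification array from A.6
quicksort(arr, 0, len(arr) - 1)
print(arr)   # [0, 1, 2, 2, 3, 5, 7, 8, 8, 8]
```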
3.3 PROVABLY PERFECT GENERALIZATION

We show that if we incorporate recursion, the learned NPI programs can achieve provably perfect generalization for different tasks. Provably perfect generalization implies that the model will behave correctly given any valid input. In order to claim a proof, we must verify that the model produces correct behavior over all base cases and reductions, as described in Section 2.

We propose and describe our verification procedure. This procedure verifies that all base cases and reductions are handled properly by the model via explicit tests. Note that recursion helps make this process tractable, because we only need to test a finite number of inputs to show that the model will work correctly on inputs of unbounded complexity. This verification phase only needs to be performed once, after training.

Formally, verification consists of proving the following theorem:

∀ i ∈ V : M(i) ⇓ P(i),

where i denotes a sequence of step inputs (within one function call), V denotes the set of valid sequences of step inputs, M denotes the neural network model, P denotes the correct program, and P(i) denotes the next step output from the correct program. The arrow (⇓) in the theorem refers to evaluation, as in big-step semantics. The theorem states that for the same sequence of step inputs, the model produces exactly the same step output as the target program it aims to learn. M, as described in Algorithm 1, processes the sequence of step inputs by using an LSTM.

Recursion drastically reduces the number of configurations we need to consider during the verification phase and makes the proof tractable, because it introduces structure that eliminates infinitely long sequences of step inputs that would otherwise need to be considered. For instance, for recursive addition, consider the family F of addition problems a_n a_{n−1} ... a_1 a_0 + b_n b_{n−1} ... b_1 b_0 in which no CARRY operations occur. We prove that every member of F is added properly, given that the subproblems S = {a_n a_{n−1} + b_n b_{n−1}, a_{n−1} a_{n−2} + b_{n−1} b_{n−2}, ..., a_1 a_0 + b_1 b_0} are added properly.

Without a recursive program, such a proof is not possible, because the non-recursive program runs on an arbitrarily long addition problem that creates correspondingly long sequences of step inputs; in the non-recursive formulation of addition, ADD calls ADD1 a number of times that depends on the length of the input. The core LSTM module's hidden state is preserved over all these ADD1 calls, and it is difficult to interpret with certainty what happens over longer timesteps without concretely evaluating the LSTM with an input of that length. In contrast, each call to the recursive ADD always runs for a fixed number of steps, even on arbitrarily long problems in F, so we can test that it performs correctly on a small, fixed number of step input sequences. This guarantees that the step input sequences considered during verification contain all step input sequences which arise during execution of an unseen problem in F, leading to generalization to any problem in F. Hence, if all subproblems in S are added correctly, we have proven that any member of F will be added correctly, thus eliminating an infinite family of inputs that would otherwise need to be tested.

To perform the verification as described here, it is critical to construct V correctly. If it is too small, then execution of the program on some input might require evaluation of M(i) on some i ∉ V, and so the behavior of M(i) might deviate from P(i). If it is too large, then the semantics of P might not be well-defined on some elements of V, or the spurious step input sequences may not be reachable from any valid problem input (e.g., an array for quicksort or a DAG for topological sort).

To construct this set, using the reference implementation of each subprogram, we construct a mapping between two sets of environment observations: the first set consists of all observations that can occur at the beginning of a particular subprogram's invocation, and the second set contains the observations at the end of that subprogram. We can obtain this mapping by first considering the possible observations that can arise at the beginning of the entry function (ADD, BUBBLESORT, TOPOSORT, and QUICKSORT) for some valid program input, and iteratively applying the observation-to-observation mapping implied by the reference implementation's step output at that point in the execution. If the step output specifies a primitive function call, we need to reason about how it can affect the environment so as to change the observation in the next step input.
For non-primitive subprograms, we can update the observation-to-observation mapping currently associated with the subprogram and then apply that mapping to the current set. By iterating with this procedure, and then running P on the input observation set that we obtain for the entry point function, we can obtain V precisely. To make an analogy to MDPs, this procedure is analogous to how value iteration obtains the correct value for each state starting from any initialization.

An alternative method is to run P on many different program inputs and then observe the step input sequences which occur, to create V. However, to be sure that the generated V is complete (covers all the cases needed), we need to check all pairs of observations seen in adjacent step inputs (in particular, those before and after a primitive function call), in a similar way as if we were constructing V from scratch. Given a precise definition of P, it may be possible to automate the generation of V from P in future work.

Note that V should also contain the necessary reductions, which corresponds to making the recursive calls at the correct time, as indicated by P.

After finding V, we construct a set of problem inputs which, when executed on P, create exactly the step input sequences which make up V. We call this set of inputs the verification set, S_V.

Given a verification set, we can then run the model on the verification set to check whether the produced traces and results are correct. If yes, then this indicates that the learned neural program achieves provably perfect generalization.

We note that for tasks with very large input domains, such as ones involving MNIST digits or speech samples, the state space of base cases and reduction rules could be prohibitively large, possibly infinite. Consequently, it is infeasible to construct a verification set that covers all cases, and the verification procedure we have described is inadequate. We leave it as future work to devise a verification procedure more appropriate to this setting.

4 EXPERIMENTS

As there is no public implementation of NPI, we implemented a version of it in Keras that is as faithful to the paper as possible. Our experiments use a small number of training examples.

Training Setup. The training set for addition contains 200 traces. The maximum problem length in this training set is 3 (e.g., the trace corresponding to the problem "109 + 101").

The training set for bubble sort contains 100 traces, with a maximum problem length of 2 (e.g., the trace corresponding to the array [3, 2]).

The training set for topological sort contains 6 traces, with one synthesized from a graph of size 5 and the rest synthesized from graphs of size 7.

The training set for quicksort contains 4 traces, synthesized from arrays of length 5.

The same set of problems was used to generate the training traces for all formulations of the task, for the non-recursive and recursive versions.

We train using the Adam optimizer and use a 2-layer LSTM and task-specific state encoders for the external environments, as described in Reed & de Freitas (2016).

4.1 RESULTS ON GENERALIZATION OF RECURSIVE NEURAL PROGRAMS

We now report on generalization for the varying tasks.
Grade-School Addition. Both the non-recursive and recursive learned programs generalize on all input lengths we tried, up to 5000 digits. This agrees with the generalization of non-recursive addition in Reed & de Freitas (2016), where they reported generalization up to 3000 digits. However, note that there is no provable guarantee that the non-recursive learned program will generalize to all inputs, whereas we show later that the recursive learned program has a provable guarantee of perfect generalization.

In order to demonstrate that recursion can help learn and generalize better, for addition, we trained only on traces for 5 arbitrarily chosen 1-digit addition examples. The recursive version can generalize perfectly to long problems constructed from these components (such as the sum "822 + 233", where "8+2" and "2+3" are in the training set), but the non-recursive version fails to sum these long problems properly.

Bubble Sort. Table 1 presents results on randomly generated arrays of varying length for the learned non-recursive, partially recursive, and fully recursive programs. For each length, we test each program on 30 randomly generated problems. Observe that the partially recursive version does slightly better than the non-recursive one for the setting in which the length of the array is 3, and that the fully recursive version is able to sort every array given to it. The non-recursive and partially recursive versions are unable to sort long arrays, beyond length 8.

Table 1: Accuracy on Randomly Generated Problems for Bubble Sort

Length of Array | Non-Recursive | Partially Recursive | Full Recursive
2  | 100%  | 100%  | 100%
3  | 6.7%  | 23%   | 100%
4  | 10%   | 10%   | 100%
8  | 0%    | 0%    | 100%
20 | 0%    | 0%    | 100%
90 | 0%    | 0%    | 100%

Topological Sort. Both the non-recursive and recursive learned programs generalize on all graphs we tried, up to 120 vertices. As before, the non-recursive learned program lacks a provable guarantee of generalization, whereas we show later that the recursive learned program has one.

In order to demonstrate that recursion can help learn and generalize better, we trained a non-recursive and a recursive model on just a single execution trace generated from a graph containing 5 nodes (the corresponding edge list is [(1, 2), (1, 5), (2, 4), (2, 5), (3, 5)]) for the topological sort task. For these models, Table 2 presents results on randomly generated DAGs of varying sizes (varying in the number of vertices). For each graph size, we test the learned programs on 30 randomly generated DAGs. The recursive version of topological sort solves all graph instances we tried, from graphs of size 5 through 70. On the other hand, the non-recursive version has low accuracy beginning from size 5, and fails completely for graphs of size 8 and beyond.

Table 2: Accuracy on Randomly Generated Problems for Topological Sort

Number of Vertices | Non-Recursive | Recursive
5  | 6.7% | 100%
6  | 6.7% | 100%
7  | 3.3% | 100%
8  | 0%   | 100%
70 | 0%   | 100%

Quicksort. Table 3 presents results on randomly generated arrays of varying length for the learned non-recursive and recursive programs. For each length, we test each program on 30 randomly generated problems. Observe that the non-recursive program's correctness degrades from length 11 onward, while the recursive program can sort any given array.

Table 3: Accuracy on Randomly Generated Problems for Quicksort

Length of Array | Non-Recursive | Recursive
3  | 100%   | 100%
5  | 100%   | 100%
7  | 100%   | 100%
11 | 73.3%  | 100%
15 | 60%    | 100%
20 | 30%    | 100%
22 | 20%    | 100%
25 | 3.33%  | 100%
30 | 3.33%  | 100%
70 | 0%     | 100%

As mentioned in Section 2.1, we hypothesize that the non-recursive programs do not generalize well because they have learned spurious dependencies specific to the training set, such as the length of the input problems. On the other hand, the recursive programs have learned the true program semantics.
4.2 VERIFICATION OF PROVABLY PERFECT GENERALIZATION

We describe how models trained with recursive traces can be proven to generalize, using the verification procedure described in Section 3.3. As described there, it is possible to prove that our learned recursive program generalizes perfectly by testing it on an appropriate set of problem inputs, i.e., the verification set. Recall that this verification procedure cannot be performed for the non-recursive versions, since the propagation of the hidden state in the core LSTM module makes reasoning difficult, and so we would need to check an unbounded number of examples.

We describe the base cases, reduction rules, and the verification set for each task in Appendix A.6. For each task, given the verification set, we check the traces and results of the learned, to-be-verified neural program (described in Section 4.1, and for bubble sort, Appendix A.6) on the verification set, and ensure they match the traces produced by the true program P. Our results show that all learned, to-be-verified neural programs produced the same traces as those produced by P on the verification set. Thus, we demonstrate that recursion enables provably perfect generalization for different tasks, including addition, topological sort, quicksort, and a variant of bubble sort.

Note that the training set can often be considerably smaller than the verification set, and despite this, the learned model can still pass the entire verification set. Our result shows that the training procedure and the NPI architecture are capable of generalizing from the step input-output pairs seen in the training data to the unseen ones present in the verification set.

5 CONCLUSION

We emphasize that the notion of a neural recursive program has not been presented in the literature before: this is our main contribution. Recursion enables provably perfect generalization. To the best of our knowledge, this is the first time verification has been applied to a neural program, providing provable guarantees about its behavior. We instantiated recursion for the Neural Programmer-Interpreter by changing the training traces. In future work, we seek to enable more tasks with recursive structure. We also hope to decrease supervision, for example by training with only partial or non-recursive traces, and to develop novel Neural Programming Architectures integrated directly with a notion of recursion.

ACKNOWLEDGMENTS

This material is in part based upon work supported by the National Science Foundation under Grant No. TWC-1409915, DARPA under Grant No. FA8750-15-2-0104, and Berkeley Deep Drive. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation and DARPA.

REFERENCES

Marcin Andrychowicz and Karol Kurach. Learning efficient algorithms with hierarchical attentive memory. CoRR, abs/1602.03218, 2016. URL http://arxiv.org/abs/1602.03218.

Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. CoRR, abs/1410.5401, 2014. URL http://arxiv.org/abs/1410.5401.
Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adrià Puigdomènech Badia, Karl Moritz Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, and Demis Hassabis. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471–476, October 2016. doi: 10.1038/nature20101. URL http://www.nature.com/doifinder/10.1038/nature20101.

Lukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. CoRR, abs/1511.08228, 2015. URL http://arxiv.org/abs/1511.08228.

Karol Kurach, Marcin Andrychowicz, and Ilya Sutskever. Neural random access machines. ERCIM News, 2016(107), 2016. URL http://ercim-news.ercim.eu/en107/special/neural-random-access-machines.

Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent, 2015.

Scott Reed and Nando de Freitas. Neural programmer-interpreters. ICLR, 2016.

Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Advances in Neural Information Processing Systems 28 (NIPS 2015), pp. 2692–2700, 2015. URL http://papers.nips.cc/paper/5866-pointer-networks.

Wojciech Zaremba, Tomas Mikolov, Armand Joulin, and Rob Fergus. Learning simple algorithms from examples. In Proceedings of the 33rd International Conference on Machine Learning (ICML 2016), pp. 421–429, 2016. URL http://jmlr.org/proceedings/papers/v48/zaremba16.html.

A APPENDIX

A.1 PROGRAM SET FOR NON-RECURSIVE TOPOLOGICAL SORT

Program | Description | Calls | Arguments
TOPOSORT | Perform topological sort on graph | TRAVERSE, NEXT_START, WRITE, MOVE | NONE
TRAVERSE | Traverse graph until stack is empty | CHECK_CHILD, EXPLORE | NONE
CHECK_CHILD | Check if a white child exists; if so, set childList[v_active] to point to it | MOVE | NONE
EXPLORE | Repeatedly traverse subgraphs until stack is empty | STACK, CHECK_CHILD, WRITE, MOVE | NONE
STACK | Interact with stack, either pushing or popping | WRITE, MOVE | PUSH, POP
NEXT_START | Move p_start until reaching a white vertex. If a white vertex is found, set p_start to point to it; this signifies the start of a traversal of a new connected component. If no white vertex is found, the entire execution is terminated | MOVE | NONE
WRITE | Write a value either to the environment (e.g., to color a vertex) or to a variable (e.g., to change the value of v_active) | NONE | Described below
MOVE | Move a pointer (e.g., p_start or childList[v_active]) up or down | NONE | Described below

Argument Sets for WRITE and MOVE.
WRITE. The WRITE operation has the following arguments:

ARG1 (Main Action): COLOR_CURR, COLOR_NEXT, ACTIVE_START, ACTIVE_NEIGHB, ACTIVE_STACK, SAVE, STACK_PUSH, STACK_POP, RESULT

COLOR_CURR colors v_active; COLOR_NEXT colors vertex DAG[v_active][childList[v_active]]; ACTIVE_START writes p_start to v_active; ACTIVE_NEIGHB writes DAG[v_active][childList[v_active]] to v_active; ACTIVE_STACK writes Q_stack(p_stack) to v_active; SAVE writes v_active to v_save; STACK_PUSH pushes v_active onto the top of the stack; STACK_POP writes a null value to the top of the stack; and RESULT writes v_active to Q_result(p_result).

ARG2 (Auxiliary Variable): COLOR_GREY, COLOR_BLACK

COLOR_GREY and COLOR_BLACK color the given vertex grey and black, respectively.

MOVE. The MOVE operation has the following arguments:

ARG1 (Pointer): p_result, p_stack, p_start, childList[v_active], childList[v_save]

Note that the argument is the identity of the pointer, not what the pointer points to; in other words, ARG1 can only take one of 5 values.

ARG2 (Increment or Decrement): UP, DOWN

A.2 TRACE-GENERATING FUNCTIONS FOR TOPOLOGICAL SORT

A.2.1 NON-RECURSIVE TRACE-GENERATING FUNCTIONS

// Top level topological sort call
TOPOSORT() {
  while (Q_color(p_start) is a valid color):   // color invalid when all vertices explored
    WRITE(ACTIVE_START)
    WRITE(COLOR_CURR, COLOR_GREY)
    TRAVERSE()
    MOVE(p_start, UP)
    NEXT_START()
}

TRAVERSE() {
  CHECK_CHILD()
  EXPLORE()
}

CHECK_CHILD() {
  while (Q_color(DAG[v_active][childList[v_active]]) is not white and is not invalid):   // color invalid when all children explored
    MOVE(childList[v_active], UP)
}

EXPLORE() {
  do
    if (Q_color(DAG[v_active][childList[v_active]]) is white):
      WRITE(COLOR_NEXT, COLOR_GREY)
      STACK(PUSH)
      WRITE(SAVE)
      WRITE(ACTIVE_NEIGHB)
      MOVE(childList[v_save], UP)
    else:
      WRITE(COLOR_CURR, COLOR_BLACK)
      WRITE(RESULT)
      MOVE(p_result, UP)
      if (p_stack == 1):
        break
      else:
        STACK(POP)
        CHECK_CHILD()
  while (true)
}

STACK(op) {
  if (op == PUSH):
    WRITE(STACK_PUSH)
    MOVE(p_stack, UP)
  if (op == POP):
    WRITE(ACTIVE_STACK)
    WRITE(STACK_POP)
    MOVE(p_stack, DOWN)
}

NEXT_START() {
  while (Q_color(p_start) is not white and is not invalid):   // color invalid when all vertices explored
    MOVE(p_start, UP)
}

A.2.2 RECURSIVE TRACE-GENERATING FUNCTIONS

Altered Recursive Functions

// Top level topological sort call
TOPOSORT() {
  if (Q_color(p_start) is a valid color):   // color invalid when all vertices explored
    WRITE(ACTIVE_START)
    WRITE(COLOR_CURR, COLOR_GREY)
    TRAVERSE()
    MOVE(p_start, UP)
    NEXT_START()
    TOPOSORT()   // Recursive Call
}

CHECK_CHILD() {
  if (Q_color(DAG[v_active][childList[v_active]]) is not white and is not invalid):   // color invalid when all children explored
    MOVE(childList[v_active], UP)
    CHECK_CHILD()   // Recursive Call
}

EXPLORE() {
  if (Q_color(DAG[v_active][childList[v_active]]) is white):
    WRITE(COLOR_NEXT, COLOR_GREY)
    STACK(PUSH)
    WRITE(SAVE)
    WRITE(ACTIVE_NEIGHB)
    MOVE(childList[v_save], UP)
  else:
    WRITE(COLOR_CURR, COLOR_BLACK)
    WRITE(RESULT)
    MOVE(p_result, UP)
    if (p_stack == 1):
      return
    else:
      STACK(POP)
      CHECK_CHILD()
  EXPLORE()   // Recursive Call
}

NEXT_START() {
  if (Q_color(p_start) is not white and is not invalid):   // color invalid when all vertices explored
    MOVE(p_start, UP)
    NEXT_START()   // Recursive Call
}
A.3 NON-RECURSIVE QUICKSORT

Algorithm 4 Iterative Quicksort
1: Initialize an array A to sort and two empty stacks S_lo and S_hi.
2: Initialize lo and hi to be 1 and n, where n is the length of A.
3:
4: function PARTITION(A, lo, hi)
5:   pivot = lo
6:   for j ∈ [lo, hi − 1] do
7:     if A[j] ≤ A[hi] then
8:       swap A[pivot] with A[j]
9:       pivot = pivot + 1
10:  swap A[pivot] with A[hi]
11:  return pivot
12:
13: function QUICKSORT(A, lo, hi)
14:   while S_lo and S_hi are not empty do
15:     Pop states off S_lo and S_hi, writing them to lo and hi.
16:     p = PARTITION(A, lo, hi)
17:     Push p + 1 and hi to S_lo and S_hi.
18:     Push lo and p − 1 to S_lo and S_hi.

A.4 PROGRAM SET FOR QUICKSORT

Program | Description | Calls | Arguments
QUICKSORT | Run the quicksort routine in place for the array A, for indices from lo to hi | Non-Recursive: PARTITION, STACK, WRITE. Recursive: same as non-recursive version, along with QUICKSORT | Implicitly: array A to sort, lo, hi
PARTITION | Runs the partition function. At the end, pointer p_pivot is moved to the pivot | COMPSWAP_LOOP, MOVE_PIVOT_LO, MOVE_J_LO, SWAP | NONE
COMPSWAP_LOOP | Runs the FOR loop inside the partition function | COMPSWAP, MOVE | NONE
COMPSWAP | Compares A[p_j] ≤ A[p_hi]; if so, performs a swap and increments p_pivot | SWAP, MOVE | NONE
SET_PIVOT_LO | Sets p_pivot to the lo index | NONE | NONE
SET_J_LO | Sets p_j to the lo index | NONE | NONE
SET_J_NULL | Sets p_j to −1 | NONE | NONE
STACK | Pushes lo/hi states onto stacks S_lo and S_hi according to argument (described below) | WRITE, MOVE | Described below
MOVE | Moves a pointer one unit up or down | NONE | Described below
SWAP | Swaps elements at given array indices | NONE | Described below
WRITE | Writes a value either to a stack (e.g., Q_stackLo or Q_stackHi) or to a pointer (e.g., to change the value of p_hi) | NONE | Described below

Argument Sets for STACK, MOVE, SWAP, WRITE.

STACK. The STACK operation has the following arguments:

ARG1 (Operation): STACK_PUSH_CALL1, STACK_PUSH_CALL2, STACK_POP

STACK_PUSH_CALL1 pushes lo and pivot − 1 to Q_stackLo and Q_stackHi. STACK_PUSH_CALL2 pushes pivot + 1 and hi to Q_stackLo and Q_stackHi. STACK_POP writes −1 values to Q_stackLo and Q_stackHi.

MOVE. The MOVE operation has the following arguments:

ARG1 (Pointer): p_stackLo, p_stackHi, p_j, p_pivot

Note that the argument is the identity of the pointer, not what the pointer points to; in other words, ARG1 can only take one of 4 values.

ARG2 (Increment or Decrement): UP, DOWN

SWAP. The SWAP operation has the following arguments:

ARG1 (Swap Object 1): p_pivot
ARG2 (Swap Object 2): p_hi, p_j

WRITE. The WRITE operation has the following arguments:

ARG1 (Object to Write): ENV_STACK_LO, ENV_STACK_HI, p_hi, p_lo

ENV_STACK_LO and ENV_STACK_HI represent Q_stackLo(p_stackLo) and Q_stackHi(p_stackHi), respectively.

ARG2 (Object to Copy): ENV_STACK_LO_PEEK, ENV_STACK_HI_PEEK, p_hi, p_lo, p_pivot − 1, p_pivot + 1, RESET

ENV_STACK_LO_PEEK and ENV_STACK_HI_PEEK represent Q_stackLo(p_stackLo − 1) and Q_stackHi(p_stackHi − 1), respectively.
RESET represents a −1 value.

Note that the argument is the identity of the pointer, not what the pointer points to; in other words, ARG1 can only take one of 4 values, and ARG2 can only take one of 7 values.

A.5 TRACE-GENERATING FUNCTIONS FOR QUICKSORT

A.5.1 NON-RECURSIVE TRACE-GENERATING FUNCTIONS

Initialize p_lo to 1 and p_hi to n (the length of the array)
Initialize p_j to −1

QUICKSORT() {
  while (p_stackLo ≠ 1):
    if (Q_stackLo(p_stackLo − 1) < Q_stackHi(p_stackHi − 1)):   // lo < hi: work to do
      WRITE(p_hi, ENV_STACK_HI_PEEK)
      WRITE(p_lo, ENV_STACK_LO_PEEK)
      STACK(STACK_POP)
      PARTITION()
      STACK(STACK_PUSH_CALL2)
      STACK(STACK_PUSH_CALL1)
    else:                                                        // lo ≥ hi: discard
      STACK(STACK_POP)
}

PARTITION() {
  SET_PIVOT_LO()
  SET_J_LO()
  COMPSWAP_LOOP()
  SWAP(p_pivot, p_hi)
  SET_J_NULL()
}

COMPSWAP_LOOP() {
  while (p_j ≠ p_hi):
    COMPSWAP()
    MOVE(p_j, UP)
}

COMPSWAP() {
  if (A[p_j] ≤ A[p_hi]):
    SWAP(p_pivot, p_j)
    MOVE(p_pivot, UP)
}

STACK(op) {
  if (op == STACK_PUSH_CALL1):
    WRITE(ENV_STACK_LO, p_lo)
    WRITE(ENV_STACK_HI, p_pivot − 1)
    MOVE(p_stackLo, UP)
    MOVE(p_stackHi, UP)
  if (op == STACK_PUSH_CALL2):
    WRITE(ENV_STACK_LO, p_pivot + 1)
    WRITE(ENV_STACK_HI, p_hi)
    MOVE(p_stackLo, UP)
    MOVE(p_stackHi, UP)
  if (op == STACK_POP):
    WRITE(ENV_STACK_LO, RESET)
    WRITE(ENV_STACK_HI, RESET)
    MOVE(p_stackLo, DOWN)
    MOVE(p_stackHi, DOWN)
}

A.5.2 RECURSIVE TRACE-GENERATING FUNCTIONS

Altered Recursive Functions

Initialize p_lo to 1 and p_hi to n (the length of the array)
Initialize p_j to −1

QUICKSORT() {
  if (Q_stackLo(p_stackLo − 1) < Q_stackHi(p_stackHi − 1)):
    PARTITION()
    STACK(STACK_PUSH_CALL2)
    STACK(STACK_PUSH_CALL1)
    WRITE(p_hi, ENV_STACK_HI_PEEK)
    WRITE(p_lo, ENV_STACK_LO_PEEK)
    QUICKSORT()   // Recursive Call
    STACK(STACK_POP)
    WRITE(p_hi, ENV_STACK_HI_PEEK)
    WRITE(p_lo, ENV_STACK_LO_PEEK)
    QUICKSORT()   // Recursive Call
    STACK(STACK_POP)
}

COMPSWAP_LOOP() {
  if (p_j ≠ p_hi):
    COMPSWAP()
    MOVE(p_j, UP)
    COMPSWAP_LOOP()   // Recursive Call
}

A.6 BASE CASES, REDUCTION RULES, AND VERIFICATION SETS

In this section, we describe the space of base cases and reduction rules that must be covered for each of the four sample tasks, in order to create the verification set.

For addition, we analytically determine the verification set. For tasks other than addition, it is difficult to analytically determine the verification set, so instead, we randomly generate input candidates until they completely cover the base cases and reduction rules.

Base Cases and Reduction Rules for Addition. For the recursive formulation of addition, we analytically construct the set of input problems that cover all base cases and reduction rules. We outline how to construct this set.

It is sufficient to construct problems where every transition between two adjacent columns is covered. The ADD reduction rule ensures that each call to ADD only covers two adjacent columns, and so the LSTM only ever runs for the fixed number of steps necessary to process these two columns.

We construct input problems by splitting into two cases: one case in which the left column contains a null value and another in which the left column does not contain any null values. We then construct problem configurations that span all possible valid environment states (for instance, in order to force the carry bit in a column to be 1, one can place the sum "1+9" in the column to the right).

The operations we need to be concerned most about are CARRY and LSHIFT, which induce partial environment states spanning two columns.
A.6 BASE CASES, REDUCTION RULES, AND VERIFICATION SETS

In this section, we describe the space of base cases and reduction rules that must be covered for each of the four sample tasks in order to create the verification set.

For addition, we analytically determine the verification set. For tasks other than addition, it is difficult to analytically determine the verification set, so instead we randomly generate input candidates until they completely cover the base cases and reduction rules.

Base Cases and Reduction Rules for Addition. For the recursive formulation of addition, we analytically construct the set of input problems that cover all base cases and reduction rules. We outline how to construct this set.

It is sufficient to construct problems where every transition between two adjacent columns is covered. The ADD reduction rule ensures that each call to ADD only covers two adjacent columns, and so the LSTM only ever runs for the fixed number of steps necessary to process these two columns.

We construct input problems by splitting into two cases: one case in which the left column contains a null value and another in which the left column does not contain any null values. We then construct problem configurations that span all possible valid environment states (for instance, in order to force the carry bit in a column to be 1, one can add the sum "1+9" in the column to the right).

The operations we need to be most concerned about are CARRY and LSHIFT, which induce partial environment states spanning two columns. It is straightforward to deal with all other operations, which do not induce partial environment states.

Under the assumption that there are no leading 0's (except in the case of single digits) and that the two numbers to be added have the same number of digits, the verification set for addition contains 20,181 input problems. The no-leading-0's assumption can easily be removed, at the cost of slightly increasing the size of the verification set. We made the equal-lengths assumption in order to parametrize the input format with respect to length, but this assumption can be removed as well.

Base Cases and Reduction Rules for Bubble Sort. The original version of the bubble sort implementation exposes the values within the array. While this matches the description from Reed & de Freitas (2016), we found that this causes an unnecessary blowup in the size of V and makes it much more difficult to construct the verification set. For purposes of verification, we replace the domain-specific encoder with the following:

f_enc(Q, i_1, i_2, i_3, a_t) = MLP([Q(1, i_1) ≤ Q(1, i_2), 1 ≤ i_1 ≤ length, 1 ≤ i_2 ≤ length, i_3 == length, a_t(1), a_t(2), a_t(3)]),

which directly exposes which of the two values pointed to is larger. This modification also enables us to sort arrays containing arbitrary comparable elements.

By reasoning about the possible set of environment observations created by all valid inputs, we construct V using the procedure described in Section 3.3. Using this modification, we constructed a verification set consisting of one array of size 10.

We also report generalization results for the non-recursive and recursive versions of this variant of bubble sort in Table 4. The accuracy of the non-recursive program degrades sharply when moving from arrays of length 7 to arrays of length 8. This is due to the properties of the training set: we trained on 2 traces synthesized from arrays of length 7 and 1 trace synthesized from an array of length 6. Table 4 also demonstrates that the (verified) recursive program generalizes perfectly.

Table 4: Accuracy on randomly generated problems for the variant of bubble sort.

Length of Array    Non-Recursive    Recursive
2                  100%             100%
3                  100%             100%
4                  100%             100%
5                  100%             100%
6                  90%              100%
7                  86.7%            100%
8                  6.7%             100%
9                  0%               100%
10                 0%               100%
12                 0%               100%
15                 0%               100%
70                 0%               100%
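Assuming the observation layout of the modified bubble-sort encoder above, its input vector could be assembled roughly as follows; the function name and 1-indexed conventions are our own illustration, not the paper's code.

# A hypothetical sketch of building the modified encoder input: only the
# comparison between the two pointed-to values is exposed, never the raw
# array contents themselves.

def encoder_features(Q, i1, i2, i3, length, a_prev):
    return [
        float(Q[i1 - 1] <= Q[i2 - 1]),  # which pointed-to value is larger
        float(1 <= i1 <= length),       # pointer 1 in bounds
        float(1 <= i2 <= length),       # pointer 2 in bounds
        float(i3 == length),            # pointer 3 at end of array
    ] + list(a_prev)                    # previous action a_t (3 components)

print(encoder_features(Q=[4, 1, 3], i1=1, i2=2, i3=3, length=3, a_prev=(0, 0, 0)))
# -> [0.0, 1.0, 1.0, 1.0, 0, 0, 0], the vector fed into the MLP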
Base Cases and Reduction Rules for Topological Sort. For each function we use to implement the recursive version of topological sort, we need to consider the set of possible environment observation sequences we can create from all valid inputs and test that the learned program produces the correct behavior on each of these inputs. We have three observations: the color of the start node, the color of the active node's next child to be considered, and whether the stack is empty. Naïvely, we might expect to synthesize and test an input for any sequence created by combining the four possible colors in two variables and another boolean variable for whether the stack is empty (so 32 possible observations at any point), but for various reasons, most of these combinations cannot occur at any given point in the execution trace.

Through careful reasoning about the possible set of environment observations created by all valid inputs, and about how each of the operations in the execution trace affects the environment, we can construct V using the procedure described in Section 3.3. We then construct a verification set of size 73 by ensuring that randomly generated graphs cover the analytically derived V. The model described in the training setup of Section 4 (trained on 6 traces) was verified to be correct via the matching procedure described in Section 4.2.

Base Cases and Reduction Rules for Quicksort. As with the other tasks, we apply the procedure described in Section 3.3 to construct V and then empirically create a verification set which covers V. The verification set can be very small: we found that a single 10-element array ([8, 2, 1, 2, 0, 8, 5, 8, 3, 7]) is sufficient to cover all of V. We note that an earlier version of quicksort we tried lacked primitive operations to directly move one pointer to another, and therefore needed more functions and observations. As this complexity interfered with determining the base cases and reductions, we changed the algorithm to its current form. Even though the earlier version also generalized just as well in practice, relatively small differences in the formulation of the traces and the environment observations can drastically change the difficulty of verification.
ryTYxh5ll
Under review as a conference paper at ICLR 2017CONTENT 2VEC: SPECIALIZING JOINTREPRESENTATIONS OF PRODUCT IMAGES AND TEXTFOR THE TASK OF PRODUCT RECOMMENDATIONThomas Nedelec, Elena Smirnova & Flavian VasileCriteo ResearchParis, 32 Blanche, Franceft.nedelec,e.smirnova,f.vasile g@criteo.comABSTRACTWe propose a unified product embedded representation that is optimized for thetask of retrieval-based product recommendation. We generate this representationusing Content2Vec, a new deep architecture that merges product content infor-mation such as text and image, and we analyze its performance on hard recom-mendation setups such as cold-start and cross-category recommendations. In thecase of a normal recommendation regime where collaborative information signalis available, we merge the product co-occurrence information and propose a sec-ond architecture Content2vec+ and show its lift in performance versus non-hybridapproaches in both cold start and normal recommendation regimes.1 I NTRODUCTIONOnline product recommendation is now a key driver of demand, not only in E-commerce businessesthat recommend physical products, such as Amazon (Marshall, 2006), TaoBao (Xiang, 2013) andEbay (Academy, 2013), but also in online websites that recommend digital content such as news(Yahoo! - Agarwal et al. (2013), Google - Liu et al. (2010)), movies (Netflix - Bell & Koren (2007)),music (Spotify - Johnson (2015)), videos (YouTube - Covington et al. (2016)) and games (Xbox -Koenigstein et al. (2012)).Two of the most challenging aspects of recommendation in general and of product recommendationin particular, are scalability and freshness. The first one addresses the problem of making fast rec-ommendations in parallel, the second addresses the problem of updating recommendations based onreal-time user interaction. One of the most encountered architecture solutions for recommendationat scale divides the recommendation process in two stages: a candidate generation stage that prunesthe number of recommendable items from billions to a couple of hundreds, followed by a seconditem selection stage that decides the final set of items to be displayed to the user, as shown in Figure1 (see Mazare (2016), Cheng et al. (2016), Covington et al. (2016)).The first stage generally implies the pre-generation of an inverted index over the set of recommend-able products, paired with a real-time retrieval module, similarly to a search engine architecture.In our current paper we focus on the cases where the system supports vectorial product queries.The sources of the vectorial representations range from the set of co-occurring products, like in thecase of neighborhood-based collaborative filtering, to a low-dimensional representation producedvia matrix factorization or to an embedded representation produced via a deep neural network.The second stage takes the candidate set and decides the final list of recommendations, usually byoptimizing a ranking metric. This stage has in general a lot more constraints in terms of latency, dueto its use of real-time signal that makes its predictions not cacheable. Therefore, in terms of modelchoice, the first stage can be a lot more complex than the second. 
In terms of impact, the quality ofthe candidate set coming from the first stage is crucial, since this constitutes a hard threshold on theperformance of the second stage and of the overall system.Because of the feasibility of using a more complex model and the potential impact on the finalrecommendation performance, we choose to concentrate our efforts on the task of optimal candi-1Under review as a conference paper at ICLR 2017Figure 1: 2-Stage Recommender System Architecture.date generation. We formalize the problem as a link prediction task, where given a set of pastco-purchased products we try to predict unseen pairs of products. Related work in representationlearning for recommendation investigated the use of collaborative filtering (CF), text and productimages, but to our knowledge, there has been no attempt to unify all of these signals in a single rep-resentation. We see this as an opportunity to investigate the leveraging effect of generating a UnifiedProduct Representation via a deep-learning approach. In the following, we formally define the setof associated requirements we would like to satisfy:Relevance : the representation should be optimized for product recommendation relevance,as measured by the associated target metrics (in this case, modeling it as a link predictiontask and optimizing for the AUC of product pair prediction).Coverage : the representation should leverage all available product information (in ourcase, all product information available in the product catalog together with observed prod-uct co-occurrences).Cross-modality expressiveness : the representation should be able to account for interac-tions between various information sources such as text and image (can take into accountthe fact that the word ”red” and the ”red” color detector are correlated).Pair-wise expressiveness : the representation should be able to account for interactionsbetween the two products.Robustness : the representation should operate well (recommendation performance will notdegrade dramatically) in hard recommendation situations such as product cold-start (newproducts, new product pairs) and cross-category recommendation. These are importantuse-cases in product recommendation, when the product catalog has high churn (as in thecase of flash sales websites or classifieds) or the recommendation needs to leverage cross-advertiser signal (as in the case of new users and user acquisition advertising campaigns).This is a different goal from simply trying to optimize for relevance metrics, due to theinherent limitations of offline metrics in predicting future online performance.Retrieval-optimized : the representation should be adapted to a content-retrieval setup,both on the query and on the indexing side, meaning that the vectors should be eithersmall, sparse or both.2Under review as a conference paper at ICLR 2017We propose a modular deep architecture that leverages state-of-the-art architectures for generatingembedded representations for image, text and CF input, re-specializes the resulting product em-beddings and combines them into a single product vector. This is a very general architecture thatcan plugin any networks in the image and text domain and re-use them for the problem of productrecommendation, along with their gains in representation learning for the two domains. 
We investi-gate multiple ways of merging the modality-specific product information and propose a new type ofresidual-inspired unit, which we name Pairwise Residual Unit , that can model the joint aspects ofthe different product embeddings and show that it leads to good improvements.We analyze our proposed architecture on an Amazon dataset (McAuley et al., 2015) containinginformation on co-purchased products. We report our improvements versus a text and an image-based baseline, that was introduced in previous work by (cite Julian) and show improvements bothon normal and hard recommendation regimes such as cold-start and cross-category setups.Our approach is similar to the recent work by (Covington et al., 2016), that propose a solution forvideo recommendation at YouTube. Unlike their proposed solution, where, in order to support uservector queries, the candidate generation step co-embeds users and items, we are interested to co-embed just the product pairs, which generally has a much smaller dimension. In our approach, thepersonalization step can happen after the per-item candidates are retrieved.Our main contributions are the following:We propose a novel way of integrating deep-learning item representation in the context oflarge scale recommender system with a 2-stage serving architecture and introduce the newtask of Unified Product Representation for optimal candidate selection in both cold startand normal recommendation setups.We introduce a new deep architecture that merges content and CF signal for the task ofproduct recommendation and propose the Pairwise Residual Unit , a new learning compo-nent that models the joint product representations.We introduce two novel experimental setups (hard cold start, cross-category) and test thatthe proposed Content2Vec architecture satisfies the requirements we defined.Though the focus of our work is on improving product recommendation through representationlearning, we believe that simple extensions of our approach can be applied to many other recom-mendation scenarios.The rest of the paper goes as follows: In Section 2 we cover previous related work and the rela-tionship with our method. In Section 3 we present the Content2Vec model, followed by a detaileddescription of the resulting architecture in Section 4. In Section 5 we present the experimental setupand go over the results on Section 5.2. In Section 6 we summarize our findings and conclude withfuture directions of research.2 R ELATED WORKOur work fits in the new wave of deep learning based recommendation solutions, that similarly toclassical approaches can fall into 3 categories, namely collaborative filtering based, content based orhybrid approaches.Several approaches use neural networks to build better item representations based on the co-occurrence matrix. The Prod2Vec algorithm (see (Grbovic et al., 2015)) implements Word2Vec((Mikolov et al., 2013a), (Shazeer et al., 2016)), an algorithm that is at origin a shallow neurallanguage model, on sequences of product ids, to reach a low-dimensional representation of eachproduct. Among other embedding solutions that use the item relationship graph are the more recentextensions to Word2Vec algorithm such as Glove (Pennington et al., 2014) and SWIVEL (Shazeeret al., 2016) and the graph embedding solutions proposed in Node2Vec (Grover & Leskovec, 2016)and SDNE (Wang et al., 2016).Content-based methods recommend an item to a user based upon an item description and a userprofile ((Pazzani & Billsus, 2007)). 
This idea was deeply investigated in the information retrievalliterature: in the context of web search, DSSM (Huang et al., 2013) and its extensions (Shen et al.,2014)(C-DSSM) and (Shan et al., 2016) are some of the most successful methods that specialize3Under review as a conference paper at ICLR 2017query and document text embedding in order to predict implicit feedback signal such as documentclick-through rate. In the context of product recommendation, in (McAuley et al., 2015) the authorsfeed a pre-trained CNN (CNN trained on the ImageNet dataset, which is an image classification taskthat is very different from the task of image-based product recommendation) with products imagesand use the last layer of the network as the product embedding. This representation is subsequentlyused to compute similarities between products. Similarly, the authors in (Van den Oord et al., 2013)use CNNs to compute similarities between songs. Yosinski et al. (2014) show that the low layersof DNNs trained on different tasks are often similar and that good performance can be reached byfine-tuning a network previously trained on another task. In the case of recommendation systems,this fine tuning was implemented in Veit et al. (2015), where the authors specialize a GoogLeNetarchitecture to the task of predicting co-view events based on product pictures.The performance of Collaborative Filtering (CF) models is often higher than that of content-basedones but it suffers from the cold-start problem. To take advantage of the best of both worlds, hybridmodels use both sources of information in order to make recommendations. One possible way toincorporate product information is using it as side information in the product sequence model, asproposed in Meta-Prod2Vec (Vasile et al., 2016), leading to better product embeddings for productswith low signal (low number of co-occurrences). In this work we continue the investigation of usingboth types of signal, this time both at training and product recommendation time.3 C ONTENT 2VECMODELOur proposed approach takes the idea of specializing the input representations to the recommenda-tion task and generalizes it for multi-modality inputs, in order to leverage all product informationand in particular, product images and product title and description text.The main criteria for the Content2Vec architecture is to allow us to easily plugin new sources ofsignal and to replace existing embedding solutions with new versions. We are also interested inseparating product-level embeddings from pair-level embeddings, such that the network can generateproduct vectors that are readily indexable. As a result, the Content2Vec architecture has three typesof modules, as shown in Figure 2:Content-specific embedding modules that take raw product information and generate theassociated vectors. In this paper we cover embedding modules for text, image, categoricalattributes and product co-occurrences (for an example, see Figure 3).Overall product embedding modules that merge all the product information into a unifiedproduct representation.Pair embedding module that merges the product-to-product interactions and computes thefinal similarity score. In the case of retrieval-optimized product embeddings, this modulebecomes the inner-product between the two items and all interactions between them are tobe approximated within the product-level embedding modules.Content2Vec training follows the architecture, learning module-by-module. 
In the first stage, we initialize the content-specific modules with embeddings from proxy tasks (classification for image, language modeling for text) and re-specialize them to the task of product recommendation. For the specialization task, as mentioned in Section 1, we frame the objective as a link prediction task where we try to predict the pairs of products purchased together. We describe the loss function in Section 3.1.

In the second stage, we stack the modality-specific embeddings generated in the first stage into a general product vector and learn an additional residual vector using the same learning objective as in the specialization step. This will be described in depth in Section 4.2.

Finally, in the third stage, given the updated product vectors from stage two, we learn the linear combination between the similarities of the product vectors and make the final prediction.

3.1 LOSS FUNCTION

The previous work on learning pair-wise item distances concentrated on using ranking (McFee & Lanckriet, 2010), siamese (Hadsell et al., 2006) or logistic loss (Zheng et al., 2015). For optimizing the link prediction objective we choose the logistic similarity loss (Eq. 1), which has the advantage of having a fast approximation via the Negative Sampling loss (Mikolov et al., 2013b) shown in Eq. 2. By using Negative Sampling, the prediction step can scale up to a large number of items, by using all positive pairs and sampling the negatives on the fly.

Figure 2: Content2Vec architecture combines content-specific modules with a residual vector to produce an embedding vector for each product, then uses these vectors to compute similarities between products.

L(θ) = Σ_ij −X^POS_ij log σ(sim(a_i, b_j)) − X^NEG_ij log σ(−sim(a_i, b_j))    (1)

L_NS(θ) = −Σ_ij X^POS_ij ( log σ(sim(a_i, b_j)) + Σ_{l=1..k} E_{n_l ∼ P_D} log σ(−sim(a_i, n_l)) )    (2)

where:
θ = (a_i, b_j) is the set of model parameters, where a_i and b_j are the embedding vectors for the products A and B,
sim(a_i, b_j) = α⟨a_i, b_j⟩ + β is the similarity function between a_i and b_j, where α and β are scalar values,
X^POS_ij is the frequency of the observed item pair ij (i.e., the frequency of the positive pair ij),
X^NEG_ij = X_i − X^POS_ij is the frequency of the unobserved item pair ij (we assume that all unobserved pairs are negatives),
P_D is the probability distribution used to sample negative context examples n_l, and
k is a hyperparameter specifying the number of negative examples per positive example.
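As a concrete illustration, a minimal PyTorch sketch of the Negative Sampling loss of Eq. 2 might look as follows; the batching scheme and the assumption that negatives are pre-sampled from P_D are ours, not the paper's.

import torch
import torch.nn.functional as F

# A minimal sketch (not the authors' code) of the Negative Sampling loss:
# positive product pairs are pushed together, k sampled negatives per
# positive are pushed apart; alpha/beta are the scalar similarity parameters.

def ns_loss(a, b, negatives, alpha, beta):
    """a, b: (batch, d) embeddings of co-purchased pairs;
       negatives: (batch, k, d) embeddings assumed sampled from P_D."""
    sim_pos = alpha * (a * b).sum(-1) + beta                     # sim(a_i, b_j)
    sim_neg = alpha * (a.unsqueeze(1) * negatives).sum(-1) + beta
    loss = -F.logsigmoid(sim_pos) - F.logsigmoid(-sim_neg).sum(-1)
    return loss.mean()

a, b = torch.randn(32, 128), torch.randn(32, 128)
neg = torch.randn(32, 5, 128)                                    # k = 5 negatives
print(ns_loss(a, b, neg, alpha=torch.tensor(1.0), beta=torch.tensor(0.0)))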
4 CONTENT2VEC MODULES

4.1 CONTENT-SPECIFIC EMBEDDING MODULES

Content-specific modules can have various architectures and are meant to be used separately in order to increase modularity. Their role is to map all types of item signal into embedded representations. Figure 3 gives an illustrative example of mapping a pair of products to their vectorial representations.

Figure 3: An example of using the content-specific modules to create embedded representations of two products with images, text and CF signal.

In the following we analyze four types of input signal and an embedding solution for each one of them. For all of the modules, we use the L_NS loss (see Eq. 2) as the specialization loss.

4.1.1 EMBEDDING PRODUCT IMAGES: ALEXNET

Model and proxy task: CNN for Image Classification. For generating the image embeddings we propose reusing a model trained for image classification, as in previous work by Krizhevsky et al. (2012) and He & McAuley (2015). In He & McAuley (2015), the authors have shown how to take the Inception architecture (Szegedy et al., 2015) and specialize it for the product recommendation task. However, the Inception architecture is very deep and requires extensive training time. For ease of experimentation we use AlexNet, a simpler architecture that also won the ImageNet challenge (Krizhevsky et al., 2012), prior to the Inception network. In Section 5.2 we will show that, even if simpler, when combined with additional product text information, the AlexNet-based solution can perform very well on the recommendation task.

For our experiments, we use the pretrained version of AlexNet available on the University of Toronto website. We experimented with two different ways to specialize the representation in order to compute product similarities. In the first one, we learn a weighted inner product between the two representations (the fc7 layer of AlexNet). In the second one, we specialize the fc7 layer to detect product similarities. The second approach led to much better performance and is the one for which we report final results.

4.1.2 EMBEDDING PRODUCT TEXT: WORD2VEC AND CNN ON SENTENCES

Model and proxy task: Word2Vec for Product Language Modeling. For generating word embeddings, we propose reusing Word2Vec (Mikolov et al., 2013b), a language-modeling approach that has been employed in a variety of text understanding tasks. More recently, it has been shown in (Pennington et al., 2014) that Word2Vec is closely linked with matrix factorization techniques applied to the word co-occurrence matrix. For Content2Vec, we chose to pretrain Word2Vec on the entire product catalog text rather than use an available set of word embeddings such as the ones created on the Google corpus. The main reason is that the text distribution within product descriptions is quite different from the general distribution. For example, the word 'jersey' has a very different conditional distribution within the product description corpus than in general online text.

Text CNN (Kim, 2014) offers a simple solution for sentence-level embeddings using convolutions. The convolutions act as a form of n-gram filters, allowing the network to embed sentence-level information and specialize word embeddings to higher-order tasks such as text classification or sentiment analysis. To the best of our knowledge, this is the first attempt to employ them for the task of product recommendation. For our task, we generate sentences based on the product titles and descriptions.
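A rough sketch of a Kim (2014)-style Text CNN sentence encoder is shown below; filter counts and n-gram sizes are illustrative assumptions, with only the 256-dimensional output matching the hyper-parameters reported in Section 5.

import torch
import torch.nn as nn

# Illustrative sketch: 1-D convolutions act as n-gram filters over pretrained
# word vectors, and max-pooling over time yields a fixed-size sentence vector.

class TextCNN(nn.Module):
    def __init__(self, emb_dim=100, n_filters=64, ngram_sizes=(2, 3, 4), out_dim=256):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, k) for k in ngram_sizes])
        self.proj = nn.Linear(n_filters * len(ngram_sizes), out_dim)

    def forward(self, words):                 # words: (batch, seq_len, emb_dim)
        x = words.transpose(1, 2)             # Conv1d expects (batch, emb, seq)
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.proj(torch.cat(pooled, dim=1))

emb = TextCNN()(torch.randn(8, 40, 100))      # 8 titles of 40 words each
print(emb.shape)                              # torch.Size([8, 256])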
4.1.3 EMBEDDING PRODUCT CO-OCCURRENCES: PROD2VEC

Prod2Vec (Grbovic et al., 2015) is an extension of the Word2Vec algorithm to product shopping sequences. As a result, Prod2Vec can be seen as a matrix factorization technique on the product co-occurrence matrix. In Content2Vec, the Prod2Vec-based similarity contains all of the information that can be derived from the sequential aspect of the user behavior, without taking into account the per-product meta-data.

4.1.4 EMBEDDING CATEGORICAL PRODUCT META-DATA: META-PROD2VEC

Meta-Prod2Vec (Vasile et al., 2016) improves upon Prod2Vec by using the product meta-data side information to regularize the final product embeddings. In Content2Vec, we can use a similar technique of co-embedding product categorical information with product ids to generate the embedding values for the categorical features.

4.2 JOINT PRODUCT EMBEDDING: PAIRWISE RESIDUAL UNIT

As stated in Section 1, the function of the product embedding module is two-fold: first, to model all interactions that exist between the modality-specific embeddings with respect to the final optimization objective, and second, to approximate interaction terms between the products that cannot be explained by a linear combination of the modality-specific similarities. With this in mind, we introduce a new type of learning unit, the Pairwise Residual Unit (Eq. 4), which, similarly to the original residual unit introduced in He et al. (2015) (Eq. 3), allows the layers to learn incremental, i.e. residual, representations (see Figure 4).

In Hardt & Ma (2016) the authors motivate the use of residual units as helping to preserve the representations learned in the previous layers. In our case we are interested in preserving the specialized image and text representations and learning an additional representation for their interactions. Though most residual units in previous work use at least two ReLU layers, we observe good results using just one. In order to model interactions between modalities, we could also learn a fully connected layer initialized with the identity that takes as input the concatenated modality-specific vectors. However, in order to have a smaller number of parameters and increase model comprehensibility, we would like to keep the modality-specific representations separate and model the final prediction as an ensemble.

y = F(x) + x    (3)
y = sim(F(x_1), F(x_2)) + sim(x_1, x_2)    (4)

where:
x_1 and x_2 are the two product embedding vectors (obtained by stacking the modality-specific vectors),
sim(·, ·) is a similarity function over two embedding vectors x_1, x_2, and
F(x) is a Rectified Linear Unit.

Figure 4: Pairwise Residual Unit.

To be able to measure the incremental value of introducing a residual vector, we introduce a baseline architecture that computes the final prediction based on the linear combination of the modality-specific similarities, denoted Content2Vec-linear, with the associated similarity function defined in Eq. 5.

sim_c2v(a_i, b_j) = Σ_{m ∈ Modalities} w_m σ(sim_m(a_i, b_j))    (5)

Under this notation, the residual-based architecture, denoted Content2Vec-res, minimizes L_NS with the similarity function defined in Eq. 6.

sim_c2v-res(a_i, b_j) = Σ_{m ∈ (Modalities + Residual)} w_m σ(sim_m(a_i, b_j))    (6)

In order to learn the residual vector, we keep the modality-specific similarities fixed and co-train the final weights of each of the modalities together with the product-specific residual layers. For example, in the case of using only image and text signals, our final predictor can be defined as in Eq. 7, where P_txt and P_img are pre-set and w_txt, w_img, w_res and P_res are learned together:

P(pos|a, b) = σ(w_txt P_txt(pos|a_txt, b_txt) + w_img P_img(pos|a_img, b_img) + w_res P_res(pos|a_res, b_res))    (7)

where:
pos is the positive outcome of products A and B being bought together, and
P_res(pos|a, b) = σ(⟨F([a_txt, a_img]), F([b_txt, b_img])⟩ + β).

In Section 5.2 we compare the performance of Content2Vec-res and Content2Vec-linear and show that, as expected, the proposed architecture surpasses the performance of the linear model, while allowing for a retrieval-based candidate scoring solution.
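A minimal sketch of the Pairwise Residual Unit of Eq. 4, with a single ReLU layer as F and an inner-product similarity, could read as follows; the class name is ours, and the dimensions follow the hyper-parameters reported in Section 5 (4096-d image, 256-d text, 128-d residual).

import torch
import torch.nn as nn

# Sketch of Eq. 4: the pair score combines the similarity of the stacked
# modality vectors x1, x2 with the similarity of their shared transform F.

class PairwiseResidualUnit(nn.Module):
    def __init__(self, dim, res_dim=128):
        super().__init__()
        self.F = nn.Sequential(nn.Linear(dim, res_dim), nn.ReLU())  # residual branch

    def forward(self, x1, x2):
        sim = lambda u, v: (u * v).sum(-1)     # inner-product similarity
        return sim(self.F(x1), self.F(x2)) + sim(x1, x2)

x1 = torch.randn(16, 4096 + 256)               # stacked image + text vectors
x2 = torch.randn(16, 4096 + 256)
print(PairwiseResidualUnit(4096 + 256)(x1, x2).shape)   # torch.Size([16])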
4.3 PAIR EMBEDDING MODULE

In a retrieval-based architecture, the pair embedding module cannot support more than a simple linear combination of the product embedding vectors, so that the final score can be computed via inner product. However, we are still interested in the trade-off in performance between inner-product-based candidate scoring and a model that allows for explicit interaction terms between the items. To this end, we introduce two explicit interaction models: Content2Vec-crossfeat, a model where we discretize the text- and image-specific similarity scores and create explicit feature conjunctions between them, and Content2Vec-embedpairs, a model where we use a technique similar to the Pairwise Residual Unit, in this case modeling the residual of the linear similarity directly as a vector in the pair embedding layer, as shown in Figure 5. In Section 5.2 we show that the two models have, as expected, better performance than the linear model, and that the pair embedding is slightly better.

Figure 5: The two types of Pairwise Residual Units. By comparison with the first version, which outputs a scalar, the second one outputs a vector that goes directly into the final prediction layer.

5 EXPERIMENTAL RESULTS

5.1 DATASET

We perform our evaluation on the publicly available Amazon dataset (McAuley et al., 2015), which represents a collection of products that were co-bought on the Amazon website. Each item has a rich description containing product image, text and category (any of the modalities can be missing). In terms of dimensionality, the dataset contains around 10M pairs of products. We concentrate on the subgraph of Book and Movie product pairs, because both categories are large and have a reasonably sized intersection. This allows us to look at recommendation performance on cross-category pairs (evaluating a model trained only on Book pairs on predicting Movie co-bought items) and mixed-category pairs (evaluating the models on Book-Movie product pairs).

Based on the full Book & Movies data we generate three datasets with different characteristics:

The first dataset simulates a hard cold-start regime, where all product pairs used in validation and testing are over products unseen in training. This tests the hardest recommendation setup, where all testing data is new. We decided to benchmark all of our hyperparameters in this regime and use the best setup on all datasets, since tuning on the harder dataset ensures the best generalization error (results shown in Table 1).

The second dataset simulates a non-cold-start regime, where the vast majority of the products in the test set are available at training time.
The dataset is generated by taking the top 100k most connected products in the original dataset and keeping the links between them (results shown in Table 2).

The third dataset simulates a soft cold-start regime, where some of the products in the test set are available at training time. The dataset is generated by taking the top 200k most connected products in the original dataset and sampling 10% of the links between them (results shown in Table 3).

Hyper-parameters. We fixed the sizes of the embedding vectors to 4096 hidden units for the image CNN module, 256 for the text CNN module, 50 for the Prod2Vec module, and 128 for the residual representation. For optimization we use the Adam algorithm and we manually set the initial learning rate based on validation set performance. The batch sizes vary for the different datasets. We train all the models until validation set performance stops increasing.

Evaluation task. We evaluate the recommendation methods on the product link prediction task, similar to (He & McAuley, 2015). We consider the observed product pairs as positive examples and all unknown pairs as negatives. We generate negative pairs according to the popularity of the products in the positive pairs (negative examples between popular products are more likely to be generated), with a positive-to-negative ratio of 1:2.

Evaluation metrics. For the link prediction task, we use the Area Under the Curve (AUC) of the Precision/Recall curve as our evaluation metric.

Competing methods:
- ImageCNN: prediction based on specialized image embedding similarity
- TextCNN: prediction based on specialized text embedding similarity
- Content2Vec-linear: prediction based on the linear combination of text and image similarities
- Content2Vec-crossfeat: prediction based on the linear combination of discretized image and text similarities and their conjunctions
- Content2Vec-res: prediction based on the linear combination of text and image similarities plus product-level residual vector similarities
- Content2Vec-embedpairs: prediction based on the linear combination of text and image similarities and a pair-level residual component
- Prod2Vec: prediction based on the product vectors coming from the decomposition of the co-purchase matrix
- Content2Vec+: prediction based on the ensemble of the Prod2Vec and Content2Vec models

5.2 RESULTS

The results on the hard and soft cold-start datasets (Tables 1, 3) show that our main proposed method, Content2Vec-res, can leverage the additional signal provided by each of the input modalities in a joint manner, leading to significant gains in AUC versus the single-signal baselines (ImageCNN, TextCNN) and their linear combination (Content2Vec-linear).

From the point of view of robustness, Content2Vec-res learns product representations that perform better than the baseline methods on out-of-sample recommendations such as cross-category pairs and mixed-category pairs (Table 1).

We observe that adding an additional layer that represents pair-level interactions does not lead to big improvements in either of the two models we investigated (Content2Vec-crossfeat, Content2Vec-embedpairs), confirming that a product retrieval-based recommender system can achieve state-of-the-art results.

Finally, Content2Vec-res+, our proposed hybrid architecture that combines content and CF signal, achieves better performance than the content- and CF-only models, with bigger lifts in the case of the third dataset (Table 3), where the CF signal is weaker due to higher sparsity.
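A hedged sketch of the evaluation protocol described above — popularity-weighted negative sampling at a 1:2 ratio followed by Precision/Recall AUC — is given below; the toy pairs and stand-in model scores are illustrative only.

import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
pos_pairs = [(0, 1), (1, 2), (0, 3)]                       # observed co-purchases
items, counts = np.unique([i for p in pos_pairs for i in p], return_counts=True)
popularity = counts / counts.sum()

neg_pairs = set()
while len(neg_pairs) < 2 * len(pos_pairs):                 # 1:2 positive/negative
    a, b = rng.choice(items, size=2, p=popularity)         # popular items more likely
    if a != b and (a, b) not in pos_pairs:
        neg_pairs.add((a, b))

pairs = pos_pairs + list(neg_pairs)
labels = [1] * len(pos_pairs) + [0] * len(neg_pairs)
scores = rng.random(len(pairs))                            # stand-in model scores
print("PR-AUC:", average_precision_score(labels, scores))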
Table 1: AUC results of image- and text-based embeddings on the hard cold-start dataset, on Book, Movie and Mixed category test product pairs.

Recommendation Model              Books   Movies   Mixed
Models trained on Books dataset:
  Book ImageCNN specialized       81%     78%      64%
  Book TextCNN                    72%     79%      76%
  Book Content2Vec-linear         83%     83%      76%
  Book Content2Vec-crossfeat      86%     83%      83%
  Book Content2Vec-res            89%     83%      77%
  Book Content2Vec-embedpairs     90%     82%      77%
Models trained on Movies dataset:
  Movie ImageCNN specialized      59%     92%      60%
  Movie TextCNN                   63%     90%      65%
  Movie Content2Vec-linear        64%     94%      65%
  Movie Content2Vec-crossfeat     62%     94%      63%
  Movie Content2Vec-res           60%     95%      66%
  Movie Content2Vec-embedpairs    64%     94%      65%

Table 2: AUC results on the non-cold-start dataset.

Recommendation Model       Test
Content2Vec-linear         84%
Content2Vec-res            87%
Prod2Vec                   96%
Content2Vec-linear+        97%
Content2Vec-res+           97%

Table 3: AUC results on the soft cold-start dataset.

Recommendation Model       Test
ImageCNN                   80%
TextCNN                    78%
Content2Vec-linear         88%
Content2Vec-res            89%
Content2Vec-embedpairs     90%
Prod2Vec                   86%
Content2Vec-linear+        89%
Content2Vec-res+           92%
Content2Vec-embedpairs+    92%

6 CONCLUSIONS

This work has several key contributions. We show how to use all product signals for the task of product recommendation using a modular architecture that can leverage fast-evolving solutions for each type of input modality. We define a set of requirements for evaluating the resulting product embeddings and show that our method leads to significant improvements over the single-signal approaches in hard recommendation situations such as cold-start and cross-category evaluation. Finally, in order to model the joint aspects of the product embeddings, we introduce a new type of learning unit, named the Pairwise Residual Unit, and show the resulting gains on a real product co-purchases dataset.

In the current work we have addressed all but one of the desired requirements, namely generating retrieval-optimized embeddings. For the next steps, we want to pursue sparse and compressed product representations, in order to help the performance of the final product retrieval system.

REFERENCES

DataStax Academy. Slideshare presentation. http://www.slideshare.net/planetcassandra/e-bay-nyc, March 2013. Accessed: 2016-04-08.

Deepak Agarwal, Bee-Chung Chen, Pradheep Elango, and Raghu Ramakrishnan. Content recommendation on web portals. Communications of the ACM, 56(6):92–101, 2013.

Robert M Bell and Yehuda Koren. Lessons from the netflix prize challenge. ACM SIGKDD Explorations Newsletter, 9(2):75–79, 2007.

Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, et al. Wide & deep learning for recommender systems. arXiv preprint arXiv:1606.07792, 2016.

Paul Covington, Jay Adams, and Emre Sargin. Deep neural networks for youtube recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems, pp. 191–198. ACM, 2016.

Mihajlo Grbovic, Vladan Radosavljevic, Nemanja Djuric, Narayan Bhamidipati, Jaikit Savla, Varun Bhagwan, and Doug Sharp. E-commerce in your inbox: Product recommendations at scale. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '15, pp. 1809–1818, New York, NY, USA, 2015. ACM. ISBN 978-1-4503-3664-2. doi: 10.1145/2783258.2788627. URL http://doi.acm.org/10.1145/2783258.2788627.

Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. 2016.

Raia Hadsell, Sumit Chopra, and Yann LeCun.
Dimensionality reduction by learning an invariantmapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recogni-tion (CVPR’06) , volume 2, pp. 1735–1742. IEEE, 2006.Moritz Hardt and Tengyu Ma. Identity matters in deep learning. arXiv preprint arXiv:1611.04231 ,2016.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog-nition. arXiv preprint arXiv:1512.03385 , 2015.Ruining He and Julian McAuley. Vbpr: visual bayesian personalized ranking from implicit feed-back. arXiv preprint arXiv:1510.01784 , 2015.Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. Learningdeep structured semantic models for web search using clickthrough data. In Proceedings of the22nd ACM international conference on Conference on information & knowledge management ,pp. 2333–2338. ACM, 2013.Chris Johnson. algorithmic music recommendations at spotify, 2015.Yoon Kim. Convolutional neural networks for sentence classification. arXiv preprintarXiv:1408.5882 , 2014.Noam Koenigstein, Nir Nice, Ulrich Paquet, and Nir Schleyen. The xbox recommender system. InProceedings of the sixth ACM conference on Recommender systems , pp. 281–284. ACM, 2012.Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convo-lutional neural networks. In Advances in neural information processing systems , pp. 1097–1105,2012.Jiahui Liu, Peter Dolan, and Elin Rønby Pedersen. Personalized news recommendation based onclick behavior. In Proceedings of the 15th international conference on Intelligent user interfaces ,pp. 31–40. ACM, 2010.Matt Marshall. Venture beat article. http://venturebeat.com/2006/12/10/aggregate-knowledge-raises-5m-from-kleiner-on-a-roll/ , December2006. Accessed: 2016-04-08.PierreEmmanuel Mazare. Product recommendation at criteo. http://labs.criteo.com/2016/09/product-recommendation-criteo/ , September 2016. Accessed: 2016-10-26.Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton van den Hengel. Image-based rec-ommendations on styles and substitutes. In Proceedings of the 38th International ACM SIGIRConference on Research and Development in Information Retrieval , pp. 43–52. ACM, 2015.Brian McFee and Gert R Lanckriet. Metric learning to rank. In Proceedings of the 27th InternationalConference on Machine Learning (ICML-10) , pp. 775–782, 2010.12Under review as a conference paper at ICLR 2017Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word represen-tations in vector space. arXiv preprint arXiv:1301.3781 , 2013a.Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed represen-tations of words and phrases and their compositionality. In Advances in neural information pro-cessing systems , pp. 3111–3119, 2013b.Michael J Pazzani and Daniel Billsus. Content-based recommendation systems. In The adaptiveweb, pp. 325–341. Springer, 2007.Jeffrey Pennington, Richard Socher, and Christopher Manning. Glove: Global Vectors for WordRepresentation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan-guage Processing (EMNLP) , pp. 1532–1543, Doha, Qatar, October 2014. Association for Com-putational Linguistics. URL http://www.aclweb.org/anthology/D14-1162 .Ying Shan, T Ryan Hoens, Jian Jiao, Haijing Wang, Dong Yu, and JC Mao. Deep crossing: Web-scale modeling without manually crafted combinatorial features. 2016.Noam Shazeer, Ryan Doherty, Colin Evans, and Chris Waterson. Swivel: Improving embeddingsby noticing what’s missing. 
arXiv preprint arXiv:1602.02215 , 2016.Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Gr ́egoire Mesnil. Learning semantic rep-resentations using convolutional neural networks for web search. In Proceedings of the 23rdInternational Conference on World Wide Web , pp. 373–374. ACM, 2014.Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Re-thinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567 , 2015.Aaron Van den Oord, Sander Dieleman, and Benjamin Schrauwen. Deep content-based music rec-ommendation. In Advances in Neural Information Processing Systems , pp. 2643–2651, 2013.Flavian Vasile, Elena Smirnova, and Alexis Conneau. Meta-prod2vec-product embeddings usingside-information for recommendation. arXiv preprint arXiv:1607.07326 , 2016.Andreas Veit, Balazs Kovacs, Sean Bell, Julian McAuley, Kavita Bala, and Serge Belongie. Learningvisual clothing style with heterogeneous dyadic co-occurrences. In Proceedings of the IEEEInternational Conference on Computer Vision , pp. 4642–4650, 2015.Daixin Wang, Peng Cui, and Wenwu Zhu. Structural deep network embedding. 2016.Tracey Xiang. Technode article. http://technode.com/2013/06/14/how-does-taobao-uses-user-data/ , June 2013. Accessed: 2016-04-08.Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deepneural networks? In Advances in neural information processing systems , pp. 3320–3328, 2014.Lilei Zheng, Khalid Idrissi, Christophe Garcia, Stefan Duffner, and Atilla Baskurt. Logistic simi-larity metric learning for face verification. In 2015 IEEE International Conference on Acoustics,Speech and Signal Processing (ICASSP) , pp. 1951–1955. IEEE, 2015.13
BJ8fyHceg
Under review as a conference paper at ICLR 2017TUNING RECURRENT NEURAL NETWORKS WITH RE-INFORCEMENT LEARNINGNatasha Jaques12, Shixiang Gu134, Richard E. Turner3, Douglas Eck11Google Brain, USA2Massachusetts Institute of Technology, USA3University of Cambridge, UK4Max Planck Institute for Intelligent Systems, Germanyjaquesn@mit.edu, sg717@cam.ac.uk, ret26@cam.ac.uk, deck@google.comABSTRACTThe approach of training sequence models using supervised learning and next-stepprediction suffers from known failure modes. For example, it is notoriously diffi-cult to ensure multi-step generated sequences have coherent global structure. Wepropose a novel sequence-learning approach in which we use a pre-trained Recur-rent Neural Network (RNN) to supply part of the reward value in a ReinforcementLearning (RL) model. Thus, we can refine a sequence predictor by optimizingfor some imposed reward functions, while maintaining good predictive propertieslearned from data. We propose efficient ways to solve this by augmenting deepQ-learning with a cross-entropy reward and deriving novel off-policy methods forRNNs from KL control. We explore the usefulness of our approach in the contextof music generation. An LSTM is trained on a large corpus of songs to predictthe next note in a musical sequence. This Note-RNN is then refined using ourmethod and rules of music theory. We show that by combining maximum likeli-hood (ML) and RL in this way, we can not only produce more pleasing melodies,but significantly reduce unwanted behaviors and failure modes of the RNN, whilemaintaining information learned from data.1 I NTRODUCTIONGenerative modeling of music with deep neural networks is typically accomplished by training aRNN such as a Long Short-Term Memory (LSTM) network to predict the next note in a musicalsequence (e.g. Eck & Schmidhuber (2002)). Similar to a Character RNN (Mikolov et al., 2010),these Note RNNs can be used to generate novel melodies by initializing them with a short sequenceof notes, then repeatedly sampling from the model’s output distribution generated to obtain thenext note. While melodies and text generated in this way have recently garnered attention1, thistype of model tends to suffer from common failure modes, such as excessively repeating tokens, orproducing sequences that lack a consistent theme or structure. Such sequences can appear wanderingand random (see Graves (2013) for a text example).Music compositions adhere to relatively well-defined structural rules, making music an interestingsequence generation challenge. For example, music theory tells that groups of notes belong to keys,chords follow progressions, and songs have consistent structures made up of musical phrases. Ourresearch question is therefore whether such music-theory-based constraints can be learned by anRNN, while still allowing it to maintain note probabilities learned from data.To approach this problem we propose RL Tuner , a novel sequence learning approach in which RLis used to impose structure on an RNN trained on data. The reward function in our framework com-bines task-related rewards with the probability of a given action originally learned by the pre-trainedRNN. 
Thus, our model directly preserves information about the original probability distributions learned from data, while allowing us to explicitly control the trade-off between the influence of data and heuristic rewards. This is an important novel direction of research, because in many tasks the available reward functions are not a perfect metric that alone will lead to the best task performance in the real world (e.g. BLEU score). Unlike previous work (e.g. (Ranzato et al., 2015), (Bahdanau et al., 2016), (Norouzi et al., 2016), (Li et al., 2016)) we do not use ML training as a way to simply bootstrap the training of an RL model; rather, we rely mainly on information learned from data, and use RL only as a way to refine characteristics of the output by imposing structural rules.

1 http://www.theverge.com/2016/6/1/11829678/google-magenta-melody-art-generative-artificial-intelligence

This paper contributes to the sequence training and RL literature by a) proposing a novel method for combining ML and RL training; b) showing the connection between this approach and Stochastic Optimal Control (SOC)/KL-control with a pre-trained RNN as a prior policy; c) showing the explicit relationships among a generalized variant of Ψ-learning (Rawlik et al., 2012), G-learning (Fox et al.), and Q-learning with log prior augmentation; d) being the first work to explore generalized Ψ-learning and G-learning with deep neural networks, serving as a reference for exploring KL-regularized RL objectives with deep Q-learning; e) empirically comparing generalized Ψ-learning, G-learning, and Q-learning with log prior augmentation for the first time; and f) applying this new technique to the problem of music generation, and showing through an empirical study that this method produces melodies that are more melodic, harmonious, and interesting, and rated as significantly more subjectively pleasing, than those of the original Note RNN. We suggest that the RL Tuner method could have potential applications in a number of areas as a general way to refine existing recurrent models trained on data by imposing constraints on their behavior.

2 BACKGROUND

2.1 DEEP Q-LEARNING

In RL, an agent interacts with an environment. Given the state of the environment at time t, s_t, the agent takes an action a_t according to its policy π(a_t|s_t), receives a reward r(s_t, a_t), and the environment transitions to a new state, s_{t+1}. The agent's goal is to maximize reward over a sequence of actions, with a discount factor of γ applied to future rewards. The optimal deterministic policy π* is known to satisfy the following Bellman optimality equation,

Q(s_t, a_t; π*) = r(s_t, a_t) + γ E_{p(s_{t+1}|s_t, a_t)}[max_{a_{t+1}} Q(s_{t+1}, a_{t+1}; π*)]    (1)

where Q^π(s_t, a_t) = E_π[Σ_{t'=t}^∞ γ^{t'−t} r(s_{t'}, a_{t'})] is the Q function of a policy π. Q-learning techniques (Watkins & Dayan, 1992; Sutton et al., 1999) learn this optimal Q function by iteratively minimizing the Bellman residual. The optimal policy is given by π*(a|s) = arg max_a Q*(s, a). Deep Q-learning (Mnih et al., 2013) uses a neural network called the deep Q-network (DQN) to approximate the Q function Q(s, a; θ). The network parameters θ are learned by applying stochastic gradient descent (SGD) updates with respect to the following loss function,

L(θ) = E_β[(r(s, a) + γ max_{a'} Q(s', a'; θ^−) − Q(s, a; θ))^2]    (2)

where β is the exploration policy and θ^− is the set of parameters of the Target Q-network (Mnih et al., 2013) that is held fixed during the gradient computation. The moving average of θ is used as θ^−, as proposed in (Lillicrap et al., 2016). Exploration can be performed with either the ε-greedy method or Boltzmann sampling. Additional standard techniques such as replay memory (Mnih et al., 2013) and Deep Double Q-learning (Hasselt et al., 2015) are used to stabilize and improve learning.
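For reference, a minimal PyTorch sketch of the loss in Eq. 2 is given below; the toy linear Q-networks and random batch are placeholders, not the paper's architecture (which is a recurrent LSTM).

import torch

# Sketch of the deep Q-learning loss: the online network Q(s, a; theta)
# regresses onto the bootstrapped target built from the held-fixed target
# network Q(s', a'; theta^-).

def dqn_loss(q_net, target_net, s, a, r, s_next, gamma=0.99):
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)      # Q(s, a; theta)
    with torch.no_grad():                                     # theta^- is fixed
        target = r + gamma * target_net(s_next).max(dim=1).values
    return ((target - q_sa) ** 2).mean()

q_net = torch.nn.Linear(4, 3)                                 # toy Q over 3 actions
target_net = torch.nn.Linear(4, 3)
s, s_next = torch.randn(8, 4), torch.randn(8, 4)
a, r = torch.randint(0, 3, (8,)), torch.randn(8)
print(dqn_loss(q_net, target_net, s, a, r, s_next))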
2.2 MUSIC GENERATION WITH LSTM

Previous work on music generation with deep learning (e.g. (Eck & Schmidhuber, 2002), (Sturm et al., 2016)) has involved training an RNN to learn to predict the next note in a monophonic melody; we call this type of model a Note RNN. Often, the Note RNN is implemented using a Long Short-Term Memory (LSTM) network (Gers et al., 2000). LSTMs are networks in which each recurrent cell learns to control the storage of information through the use of an input gate, output gate, and forget gate. The first two gates control whether information is able to flow into and out of the cell, and the latter controls whether or not the contents of the cell should be reset. Due to these properties, LSTMs are better at learning long-term dependencies in the data, and can adapt more rapidly to new data (Graves, 2013). A softmax function can be applied to the final outputs of the network to obtain the probability the network places on each note, and softmax cross-entropy loss can be used to train the model via back propagation through time (BPTT) (Graves & Schmidhuber, 2005). However, as previously described, the melodies generated by this model tend to wander and lack musical structure; we will show that they are also perceived as less musically pleasing by listeners. In the next section, we will show how to improve this model with RL.

3 RL TUNER DESIGN

Given a trained Note RNN, the goal is to teach it concepts about music theory, while still maintaining the information about typical melodies originally learned from data. To accomplish this task, we propose RL Tuner, a novel sequence training method incorporating RL. We use an LSTM trained on data (the Note RNN) to supply the initial weights for three networks in RL Tuner: the Q-network and Target Q-network in the DQN algorithm as described in Section 2.1, and a Reward RNN. Therefore, the Q-network is a recurrent LSTM model with architecture identical to that of the original Note RNN. The Reward RNN is used to supply part of the reward value used to train the model, and is held fixed during training.

In order to formulate music generation as an RL problem, we treat placing the next note in the melody as taking an action. The state of the environment s consists of the previous note and the internal state of the LSTM cells of both the Q-network and the Reward RNN. Thus, Q(a, s) can be calculated by initializing the recurrent Q-network with the appropriate memory cell contents, running it for one time step using the previous note, and evaluating the output value for the action a. The next action can be selected with either a Boltzmann sampling or ε-greedy exploration strategy.

Given action a, the reward can be computed by combining probabilities learned from the training data with knowledge of music theory. We define a set of music-theory based rules (described in Section 3.2) to impose constraints on the melody that the model is composing through a reward signal r_MT(a, s). For example, if a note is in the wrong key, then the model receives a negative reward. However, it is necessary that the model still be "creative," rather than learning a simple melody that can easily exploit these rewards. Therefore, we use the Reward RNN — or equivalently the trained Note RNN — to compute log p(a|s), the log probability of a note a given a melody s, and incorporate this into the reward function. Figure 1 illustrates these ideas.

Figure 1: A Note RNN is trained on MIDI files and supplies the initial weights for the Q-network and Target-Q-network, and the final weights for the Reward RNN.

The total reward given at time t is therefore:

r(s, a) = log p(a|s) + r_MT(a, s)/c    (3)

where c is a constant controlling the emphasis placed on the music theory reward. Given the DQN loss function in Eq. 2 and the modified reward function in Eq. 3, the new loss function and learned policy for RL Tuner are,

L(θ) = E_β[(log p(a|s) + r_MT(a, s)/c + γ max_{a'} Q(s', a'; θ^−) − Q(s, a; θ))^2]    (4)

π_θ(a|s) = δ(a = arg max_a Q(s, a; θ)).    (5)

Thus, the modified loss function forces the model to learn that the most valuable actions are those that conform to the music theory rules, but still have high probability in the original data.
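A small sketch of the combined reward of Eq. 3, which would replace r in the DQN target above, is shown below; the inputs are placeholders standing in for the Reward RNN's log-probability and the music-theory reward.

import torch

# Sketch of Eq. 3: the data-derived log-probability of the chosen note is
# added to the scaled music-theory reward r_MT.

def rl_tuner_reward(reward_rnn_logp, r_mt, c=0.5):
    """reward_rnn_logp: log p(a|s) from the fixed Reward RNN, shape (batch,);
       r_mt: music-theory reward r_MT(a, s), shape (batch,);
       c: constant trading off data-derived vs. music-theory reward."""
    return reward_rnn_logp + r_mt / c

logp = torch.log(torch.tensor([0.20, 0.05]))   # note probabilities from data
r_mt = torch.tensor([1.0, -1.0])               # e.g. in-key note vs. off-key note
print(rl_tuner_reward(logp, r_mt))             # combined reward fed into Eq. 4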
3.1 RELATIONSHIP TO KL CONTROL

The technique described in Section 3 has a close connection to stochastic optimal control (SOC) (Stengel, 1986), and in particular to KL control (Todorov, 2006; Kappen et al., 2012; Rawlik et al., 2012). SOC casts optimal planning in stochastic environments as inference in graphical models, and enables the direct application of probabilistic inference techniques such as Expectation-Maximization (EM) and message passing for solving the control problem (Attias, 2003; Toussaint & Storkey, 2006; Toussaint, 2009). Rawlik et al. (2012) and Kappen et al. (2012) then introduced KL control, a generic formulation of SOC as Kullback-Leibler (KL) divergence minimization, and connected it to prior work on RL with an additional KL cost (Todorov, 2006). Since our primary focus is to connect with DQNs, we specifically build on the work by Rawlik et al. (2012), as they derive a temporal-difference-based approach on which we base our methods.

The KL control formulation defines a prior dynamics or policy, and derives a variant of the control or RL problem as performing approximate inference in a graphical model. Let τ be a trajectory of state and action sequences, p(τ) be a prior dynamics, and r(τ) be the reward of the trajectory. Then an additional binary variable b is introduced, and a graphical model is defined as p(τ, b) = p(τ)p(b|τ), where p(b = 1|τ) = e^{r(τ)/c} and c is the temperature variable. An approximation to p(τ|b = 1) can be derived using the variational free-energy method, and this leads to a cost with a similar form to the RL problem previously defined, but with an additional penalty based on the KL divergence from the prior trajectory,

log p(τ|b = 1) = log ∫ p(τ)p(b|τ) dτ    (6)
  ≥ E_{q(τ)}[log p(τ)p(b|τ) − log q(τ)]    (7)
  = E_{q(τ)}[r(τ)/c − KL[q(τ) || p(τ)]] = L_v(q)    (8)

where q(τ) is the variational distribution. Rewriting the variational objective L_v(q) in Eq. 6 in terms of policy π, we get the following RL objective with KL regularization, also known as KL control,

L_v(π) = E_π[Σ_t r(s_t, a_t)/c − KL[π(·|s_t) || p(·|s_t)]].    (9)

In contrast, the objective in Section 3 is,

L_v(π) = E_π[Σ_t r(s_t, a_t)/c + log p(a_t|s_t)].    (10)

The difference is that Eq. 9 includes an entropy regularizer, and thus a different off-policy method from Q-learning is required. Generalized Ψ-learning (Rawlik et al., 2012) and G-learning (Fox et al.)² are two off-policy methods for solving the KL-regularized RL problem, in which additional generalized-Ψ and G functions are defined and learned instead of Q. We implement both of these algorithms as well, treating the prior policy as the conditional distribution p(a|s) defined by the trained Note RNN. To the best of our knowledge, this is the first application of KL-regularized off-policy methods with deep neural networks to sequence modeling tasks. The two methods are given below, respectively,

L(θ) = E_β[(log p(a|s) + r_MT(s, a)/c + γ log Σ_{a'} e^{Ψ(s', a'; θ^−)} − Ψ(s, a; θ))^2]    (11)
π_θ(a|s) ∝ e^{Ψ(s, a; θ)}    (12)

L(θ) = E_β[(r_MT(s, a)/c + γ log Σ_{a'} e^{log p(a'|s') + G(s', a'; θ^−)} − G(s, a; θ))^2]    (13)
π_θ(a|s) ∝ p(a|s) e^{G(s, a; θ)}.    (14)

² The methods in the original papers are derived from different motivations and presented in different forms, as described in Section 4, but we refer to them by their names as the derivations follow closely from the papers.

Both methods can be seen as instances of KL-regularized deep Q-learning, and they also subsume entropy-regularized deep Q-learning by removing the log p(a|s) term. The main difference between the two methods is the definition of the action-value functions generalized-Ψ and G. In fact, G-learning can be directly derived from generalized Ψ-learning by reparametrizing Ψ(s, a) = log p(a|s) + G(s, a). The G-function does not give the policy directly but instead needs to be dynamically mixed with the prior policy probabilities. While this computation is straightforward for discrete action domains as here, extensions to continuous action domains require additional considerations, such as the normalizability of advantage function parametrizations (Gu et al., 2016). The KL control-based derivation also has another benefit in that the stochastic policies can be directly used as an exploration strategy, instead of heuristics such as ε-greedy or additive noise (Mnih et al., 2013; Lillicrap et al., 2016). The derivations for both methods are included in the appendix for completeness.
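The three backups can be contrasted in a few lines; the following hedged sketch shows the Q-learning target with log-prior-augmented reward next to the soft log-sum-exp targets of Eqs. 11 and 13 (the batch size and toy action space are arbitrary placeholders).

import torch

# Q-learning with log-prior-augmented reward uses a hard max over the next
# state's values; generalized Psi-learning and G-learning replace it with a
# soft log-sum-exp backup, with the prior mixed in differently for G.

def q_target(r_mt, logp, q_next, c=0.5, gamma=0.99):
    return logp + r_mt / c + gamma * q_next.max(dim=1).values

def psi_target(r_mt, logp, psi_next, c=0.5, gamma=0.99):
    return logp + r_mt / c + gamma * torch.logsumexp(psi_next, dim=1)

def g_target(r_mt, logp_next_all, g_next, c=0.5, gamma=0.99):
    # prior log-probs enter the backup itself instead of the reward (Eq. 13)
    return r_mt / c + gamma * torch.logsumexp(logp_next_all + g_next, dim=1)

r_mt, logp = torch.randn(8), torch.randn(8)
vals = torch.randn(8, 38)                          # toy next-state action values
logp_next = torch.log_softmax(torch.randn(8, 38), -1)
print(q_target(r_mt, logp, vals), psi_target(r_mt, logp, vals),
      g_target(r_mt, logp_next, vals), sep="\n")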
3.2 MUSIC-THEORY BASED REWARD

A central question of this paper is whether RL can be used to constrain a sequence learner such that the sequences it generates adhere to a desired structure. To test this hypothesis, we developed several rules that we believe describe more pleasant-sounding melodies, taking inspiration from a text on melodic composition (Gauldin, 1995). We do not claim these characteristics are exhaustive, strictly necessary for good composition, or even particularly interesting. They simply serve the purpose of guiding the model towards traditional composition structure. It is therefore crucial to apply the RL Tuner framework to retain the knowledge learned from real songs in the training data.

Following the principles set out on page 42 of Gauldin's book (Gauldin, 1995), we define the reward function r_MT(a, s) to encourage melodies to have the following characteristics. All notes should belong to the same key, and the melody should begin and end with the tonic note of the key; e.g., if the key is C-major, this note would be middle C. This note should occur in the first beat and last 4 beats of the melody. Unless a rest is introduced or a note is held, a single tone should not be repeated more than four³ times in a row. To encourage variety, we penalize the model if the melody is highly correlated with itself at a lag of 1, 2, or 3 beats. The penalty is applied when the auto-correlation coefficient is greater than .15. The melody should avoid awkward intervals like augmented 7ths, or large jumps of more than an octave.
Gauldin also indicates that good compositions should move by a mixture of small steps and larger harmonic intervals, with emphasis on the former; the reward values for intervals reflect these requirements. When the melody moves with a large interval (a 5th or more) in one direction, it should eventually be resolved by a leap back or gradual movement in the opposite direction. Leaping twice in the same direction is negatively rewarded. The highest note of the melody should be unique, as should the lowest note. Finally, the model is rewarded for playing motifs, which are defined as a succession of notes representing a short musical "idea"; in our implementation, a bar of music with three or more unique notes. Since repetition has been shown to be key to emotional engagement with music (Livingstone et al., 2012), we also sought to train the model to repeat the same motif within a melody.

4 RELATED WORK

Generative modeling of music with RNNs has been explored in a variety of contexts, including generating Celtic folk music (Sturm et al., 2016) and performing Blues improvisation (Eck & Schmidhuber, 2002). Other approaches have examined RNNs with richer expressivity, latent variables for notes, or raw audio synthesis (Boulanger-Lewandowski et al., 2012; Gu et al., 2015; Chung et al., 2015). Recently, impressive performance in generating music from raw audio has been attained with convolutional neural networks with receptive fields at various time scales (Dieleman et al., 2016).

Although the application of RL to RNNs is a relatively new area, recent work has attempted to combine the two approaches. MIXER (Mixed Incremental Cross-Entropy Reinforce) (Ranzato et al., 2015) uses BLEU score as a reward signal to gradually introduce an RL loss to a text translation model. After initially training the model using cross-entropy, the training process is repeated using cross-entropy loss for the first tokens of a sequence of length T, and using RL for the remainder of the sequence. Another approach (Bahdanau et al., 2016) applies an actor-critic method and uses BLEU score directly to train a critic network to output the value of each word, where the actor is again initialized with the policy of an RNN trained with next-step prediction. Reward-augmented maximum likelihood (Norouzi et al., 2016) augments standard ML with a sequence-level reward function and connects it with the above RL training methods. These approaches assume that the complete task reward specification is available. They pre-train a good policy with supervised learning so that RL can be used to learn with the true task objective, since training with RL from scratch is difficult. RL Tuner instead only uses rewards to correct certain properties of the generated data, while learning most information from data. This is important since in many sequence modeling applications, such as music or language generation, the true reward function is not available or is imperfect, and ultimately the model should rely on learning from data.

^3 While the number four can be considered a rough heuristic, avoiding excessively repeated notes and static melodic contours is Gauldin's first rule of melodic composition (Gauldin, 1995).
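To illustrate the flavor of r_{MT}, here is a hedged sketch of three of the rules above (key membership, excessive repetition, and the autocorrelation penalty). The reward magnitudes are placeholders except for the -100 repetition penalty mentioned in Section 6, the key is hard-coded to C-major, and autocorrelation lags are measured in raw time steps rather than beats for brevity.

```python
import numpy as np

C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of the assumed key

def music_theory_reward(melody, note, midi_offset=48):
    """Illustrative fragment of r_MT covering three rules; melody is a
    list of past events, note is the proposed next event."""
    reward = 0.0
    if note >= 2:  # events 0 and 1 are the note-off / no-event tokens
        if (note - 2 + midi_offset) % 12 in C_MAJOR:
            reward += 1.0   # placeholder in-key reward
        else:
            reward -= 1.0   # placeholder out-of-key penalty
    # Penalize a single tone repeated more than four times in a row.
    run = 1
    for prev in reversed(melody):
        if prev != note:
            break
        run += 1
    if run > 4:
        reward -= 100.0  # strong penalty, as reported in Section 6
    # Penalize |autocorrelation| > .15 at short lags (steps, not beats).
    x = np.asarray(melody + [note], dtype=float)
    for lag in (1, 2, 3):
        if len(x) > lag + 1:
            a, b = x[:-lag], x[lag:]
            if a.std() > 0 and b.std() > 0:
                if abs(np.corrcoef(a, b)[0, 1]) > 0.15:
                    reward -= 1.0
    return reward
```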
The RL Tuner method provides an elegant and flexible framework for correcting undesirable behaviors of RNNs that can arise from limited training data or imperfect training algorithms.

SeqGAN (Yu et al., 2016) applies RL to an RNN by using a discriminator network — similar to those used in Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) — to classify the realism of a complete sequence, and this classifier-based reward is used as a reward signal to the RNN. The approach is applied to a number of generation problems, including music generation. Although the model obtained improved MSE and BLEU scores on the Nottingham music dataset, it is not clear how these scores map to the subjective quality of the samples (Huszár, 2015), and no samples are provided with the paper. In contrast, we provide both samples and quantitative results demonstrating that our approach improves the metrics defined by the reward function. Further, we show that RL Tuner can be used to explicitly correct undesirable behaviors of an RNN, which could be useful in a broad range of applications.

Also related to our work is that of Li et al. (2016), in which the authors pre-train a model with MLE and then use RL to impose heuristic rules designed to improve the dialog generated by the model. However, after pre-training, only the heuristic rewards are used for further training, which alters the model to optimize only for the heuristic rewards, whereas our approach allows the model to retain information learned from data, while explicitly controlling the trade-off between the influence of data and heuristic reward with the c parameter. While Li et al. do use the outputs of the pre-trained model as part of one of the heuristic reward functions, it is only to teach the model to choose dialog turns that minimize the probability the pre-trained model places on "dull" responses, such as "I don't know". Our approach instead directly penalizes divergence from the probability distribution learned by the MLE model for every response, allowing the model to retain information about the full space of sequences originally learned from data.

Finally, as discussed in Section 3.1, our approach is related to stochastic optimal control (SOC) (Stengel, 1986) and KL control (Todorov, 2006; Kappen et al., 2012; Rawlik et al., 2012), in particular the two off-policy, model-free methods, \Psi-learning (Rawlik et al., 2012) and G-learning (Fox et al.). Both approaches solve a KL-regularized RL problem, in which a term is introduced to the reward objective to penalize KL divergence from some prior policy. While our methods rely on derivations similar to those presented in these papers, there are some key differences. First, these techniques have not previously been applied to DQNs or RNNs, or as a way to fine-tune a pre-trained RNN with additional desired characteristics. Second, our methods have different motivations and forms from the original papers: the original \Psi-learning (Rawlik et al., 2012) restricts the prior policy to be the policy at the previous iteration and solves the original RL objective with conservative, KL-regularized policy updates, similar to conservative policy gradient methods (Kakade, 2001; Peters et al., 2010; Schulman et al., 2015). The original G-learning (Fox et al.) penalizes divergence from a simple uniform prior policy in order to cope with over-estimation of target Q values, and includes a schedule for the temperature parameter c.
Lastly, our work includes the Q-learning objective with an additional cross-entropy reward as a comparable alternative, and provides for the first time comparisons among the three methods for incorporating prior knowledge in RL.

5 EXPERIMENTS

To train the Note RNN, we extract monophonic melodies from a corpus of 30,000 MIDI songs. Melodies are quantized at the granularity of a sixteenth note, so each time step corresponds to one sixteenth of a bar of music. We encode a melody using two special events plus three octaves of notes. The special events are used to introduce rests and notes with longer durations, and are encoded as 0 = note off, 1 = no event. Three octaves of pitches, starting from MIDI pitch 48, are then encoded as 2 = C3, 3 = C#3, 4 = D3, ..., 37 = B5. For example, the sequence {4, 1, 0, 1} encodes an eighth note with pitch D3, followed by an eighth note rest. As the melodies are monophonic, playing another note implicitly ends the last note that was played, without requiring an explicit note off event. Thus the sequence {2, 4, 6, 7} encodes a melody of four sixteenth notes: C3, D3, E3, F3. A length-38 one-hot encoding of these values is used for both network input and network output.

The Note RNN consists of one LSTM layer of 100 cells, and was trained for 30,000 iterations with a batch size of 128. Optimization was performed with Adam (Kingma & Ba, 2014), and gradients were clipped to ensure the L2 norm was less than 5. The learning rate was initially set to .5, and a momentum of 0.85 was used to exponentially decay the learning rate every 1000 steps. To regularize the network, a penalty of \beta = 2.5 \times 10^{-5} was applied to the L2 norm of the network weights. Finally, the losses for the first 8 notes of each sequence were not used to train the model, since it cannot reasonably be expected to accurately predict them with no context. The trained Note RNN eventually obtained a validation accuracy of 92% and a log perplexity score of .2536.

The learned weights of the Note RNN were used to initialize the three sub-networks in the RL Tuner model. Each RL Tuner model was trained for 1,000,000 iterations, using the Adam optimizer, a batch size of 32, and clipping gradients in the same way. The reward discount factor was \gamma = .5. The Target-Q-network's weights \theta^- were gradually updated to be similar to those of the Q-network (\theta) according to the formula (1 - \eta)\theta^- + \eta\theta, where \eta = .01 is the Target-Q-network update rate. We replicated our results for a number of settings of the weight placed on the music-theory rewards, c; we present results for c = .5 below because we believe them to be most musically pleasing. Similarly, we replicated the results using both \epsilon-greedy and Boltzmann exploration, and present the results using \epsilon-greedy exploration below.

We compare three methods for implementing RL Tuner: Q-learning, generalized \Psi-learning, and G-learning, where the policy defined by the trained Note RNN is used as the cross-entropy reward in Q-learning and as the prior policy in G- and generalized \Psi-learning. These approaches are compared to both the original performance of the Note RNN, and a model trained using only RL and no prior policy.
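A small sketch of the encoding and the target-network update described above, assuming NumPy; the names one_hot and soft_update are ours, not the released code's.

```python
import numpy as np

NUM_ACTIONS = 38  # 2 special events + 3 octaves of pitches (MIDI 48-83)

def one_hot(event):
    """Encode a melody event (0 = note off, 1 = no event, 2..37 = pitches
    C3..B5) as the length-38 vector used for network input and output."""
    v = np.zeros(NUM_ACTIONS)
    v[event] = 1.0
    return v

def soft_update(target_weights, q_weights, eta=0.01):
    """Target-Q-network update: theta_minus <- (1 - eta) * theta_minus
    + eta * theta, applied per weight array."""
    return [(1 - eta) * t + eta * q
            for t, q in zip(target_weights, q_weights)]

# Example: {4, 1, 0, 1} is an eighth note D3 followed by an eighth rest.
melody = [4, 1, 0, 1]
inputs = np.stack([one_hot(e) for e in melody])
```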
Model evaluation is performed every 100,000 training epochs by generating 100 melodies and assessing the average r_MT and log p(a|s). All of the code for RL Tuner, including a checkpointed version of the trained Note RNN, is available at https://github.com/natashamjaques/magenta/tree/rl-tuner.

6 RESULTS

Table 1 provides quantitative results in the form of performance on the music theory rules to which we trained the model to adhere; for example, we can assess the fraction of notes played by the model which belonged to the correct key, or the fraction of melodic leaps that were resolved. The statistics were computed by randomly generating 100,000 melodies from each model.

Metric                          Note RNN   Q        Psi      G
Notes excessively repeated      63.3%      0.0%     0.02%    0.03%
Mean autocorrelation - lag 1    -.16       -.11     -.10     .55
Mean autocorrelation - lag 2    .14        .03      -.01     .31
Mean autocorrelation - lag 3    -.13       .03      .01      .17
Notes not in key                0.1%       1.00%    0.60%    28.7%
Melodies starting with tonic    0.9%       28.8%    28.7%    0.0%
Leaps resolved                  77.2%      91.1%    90.0%    52.2%
Melodies with unique max note   64.7%      56.4%    59.4%    37.1%
Melodies with unique min note   49.4%      51.9%    58.3%    56.5%
Notes in motif                  5.9%       75.7%    73.8%    69.3%
Notes in repeated motif         0.007%     0.11%    0.09%    0.01%

Table 1: Statistics of music theory rule adherence based on 100,000 randomly initialized melodies generated by each model. The top half of the table contains metrics that should be near zero, while the bottom half contains metrics that should increase. Bolded entries in the original represent significant improvements over the Note RNN baseline.

The results above demonstrate that the application of RL is able to correct almost all of the targeted "bad behaviors" of the Note RNN, while improving performance on the desired metrics. For example, the original LSTM model was extremely prone to repeating the same note; after applying RL, we see that the number of notes belonging to some excessively repeated segment has dropped from 63% to nearly 0% in all of the RL Tuner models. While the metrics for the G model did not improve as consistently, the Q and \Psi models successfully learned to play in key, resolve melodic leaps, and play motifs. The number of melodies that start with the tonic note has also increased, melody autocorrelation has decreased, and repeated motifs have increased slightly. The degree of improvement on these metrics is related to the magnitude of the reward given for the behavior. For example, a strong penalty of -100 was applied each time a note was excessively repeated, while a reward of only 3 was applied at the end of a melody for unique extrema notes (which most likely explains the lack of improvement on this metric). The reward values could be adjusted to improve the metrics further; however, we found that these values produced the most pleasant melodies.

While the metrics indicate that the targeted behaviors of the RNN have improved, it is not clear whether the models have retained information about the training data. Figure 2a plots the average log p(a|s) as produced by the Reward RNN for melodies generated by the models every 100,000 training epochs; Figure 2b plots the average r_MT. Included in the plots is an RL only model trained using only the music theory rewards, with no information about log p(a|s). Since each model is initialized with the weights of the trained Note RNN, we see that as the models quickly learn to adhere to the music theory constraints, log p(a|s) falls from its initial point.
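As an example of how such statistics can be computed, the following sketch estimates the "Notes not in key" entry of Table 1 from a batch of generated melodies. It assumes the integer encoding of Section 5 and is our illustration, not the released evaluation code.

```python
def fraction_notes_not_in_key(melodies, key_pitch_classes, midi_offset=48):
    """Fraction of pitched events (>= 2) whose pitch class falls outside
    the given key -- the 'Notes not in key' statistic of Table 1."""
    total, out_of_key = 0, 0
    for melody in melodies:
        for event in melody:
            if event >= 2:
                total += 1
                if (event - 2 + midi_offset) % 12 not in key_pitch_classes:
                    out_of_key += 1
    return out_of_key / max(total, 1)

# e.g. fraction_notes_not_in_key([[2, 4, 1, 0]], {0, 2, 4, 5, 7, 9, 11})
```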
For the RL only model, log p(a|s) reaches an average of -3.65, which is equivalent to an average p(a|s) of approximately 0.026. Since there are 38 actions, this represents an essentially random policy with respect to the distribution defined by the Note RNN. Figure 2a shows that each of our models (Q, \Psi, and G) attains higher log p(a|s) values than this baseline, indicating that they have maintained information about the data probabilities. The G-learning implementation scores highest on this metric, at the cost of a slightly lower average r_MT. This compromise between data probability and adherence to music theory could explain the difference in the G model's performance on the music theory metrics in Table 1. Finally, while c = 0.5 produced melodies that sounded better subjectively, we found that by increasing the c parameter it is possible to train all the models to have an even higher average log p(a|s).

Figure 2: Average reward obtained by sampling 100 melodies every 100,000 training epochs. The three models (Q, \Psi, G) are compared to a model trained using only the music theory rewards r_MT. (a) Note RNN reward: log p(a|s); (b) music theory reward.

The question remains whether the RL-tuned models actually produce more pleasing melodies. To answer it, we conducted a user study via Amazon Mechanical Turk in which participants were asked to rate which of two randomly selected melodies they preferred on a Likert scale. A total of 192 ratings were collected; each model was involved in 92 of these comparisons. Figure 3 plots the number of comparisons in which a melody from each model was selected as the most musically pleasing. A Kruskal-Wallis H test of the ratings showed that there was a statistically significant difference between the models, \chi^2(3) = 109.480, p < 0.001. Mann-Whitney U post-hoc tests revealed that the melodies from all three RL Tuner models (Q, \Psi, and G) had significantly higher ratings than the melodies of the Note RNN, p < .001. The Q and \Psi melodies were also rated as significantly more pleasing than those of the G model, but did not differ significantly from each other. The sample melodies used for the study are available here: goo.gl/XIYt9m; we encourage readers to judge their quality for themselves.

Figure 3: The number of times a melody from each model was selected as most musically pleasing. Error bars reflect the standard deviation of a binomial distribution fit to the binary win/loss data from each model.

Listening to the samples produced by the Note RNN reveals that they are sometimes discordant and usually dull; the model tends to place rests frequently, repeat the same note, and produce melodies with little variation. In contrast, the melodies produced by the RL Tuner models are more varied and interesting. The G model tends to produce energetic and chaotic melodies, which include sequences of repeated notes. This repetition is likely because the G policy as defined in Eq. 14 directly mixes p(a|s) with the output of the G network, and the Note RNN strongly favours repeating notes. The most pleasant-sounding melodies are generated by the Q and \Psi models. These melodies stay firmly in key and frequently choose more harmonious interval steps, leading to pleasing, melodic compositions.
However, it is clear they have retained information about the training data; for example, the sample q2.wav in the sample directory ends with a seemingly familiar riff.

7 DISCUSSION AND FUTURE WORK

We have derived a novel sequence learning framework which uses RL rewards to correct properties of sequences generated by an RNN, while keeping much of the information learned from supervised training on data. We proposed and evaluated three alternative techniques for achieving this, and showed promising results on music generation tasks.

While we acknowledge that the simple monophonic melodies generated by these models — which are based on overly simplistic rules of melodic composition — do not approach the level of artistic merit of human composers, we believe this study provides a proof-of-concept that encoding domain knowledge using our method can help the outputs of an LSTM adhere to a more consistent structure. The musical complexity of the songs is limited not just by the heuristic rules, but also by the numerical encoding, which cannot represent the dynamics and expressivity of a musical performance. However, although these simple melodies cannot surpass those of human musicians, attempting to train a model to generate aesthetically pleasing outputs in the absence of a better metric of human taste than log-likelihood is a problem of broader interest to the artificial intelligence community.

In addition to the ability to train models to generate pleasant-sounding melodies, we believe our approach of using RL to refine RNN models could be promising for a number of applications. For example, it is well known that a common failure mode of RNNs is to repeatedly generate the same token. In text generation and automatic question answering, this can take the form of repeatedly generating the same response (e.g. "How are you?" -> "How are you?" -> "How are you?" ...). We have demonstrated that with our approach we can correct for this unwanted behavior, while still maintaining information that the model learned from data. Although manually writing a reward function may seem unappealing to those who believe in training models end-to-end based only on data, that approach is limited by the quality of the data that can be collected. If the data contains hidden biases, this can lead to highly undesirable consequences. Recent research has shown that the word2vec embeddings in popular language models trained on standard corpora consistently contain the same harmful biases with respect to race and gender that are revealed by implicit association tests on humans (Caliskan-Islam et al., 2016). In contrast to relying solely on possibly biased data, our approach allows for encoding high-level domain knowledge into the RNN, providing a general, alternative tool for training sequence models.

ACKNOWLEDGMENTS

This work was supported by Google Brain, the MIT Media Lab Consortium, and Canada's Natural Sciences and Engineering Research Council (NSERC). We thank Dzmitry Bahdanau, Greg Wayne, Sergey Levine, and Timothy Lillicrap for helpful discussions on RL and stochastic optimal control.

REFERENCES

Hagai Attias. Planning by probabilistic inference. In AISTATS, 2003.

Bahdanau et al. An actor-critic algorithm for sequence prediction. arXiv preprint:1607.07086, 2016.

Boulanger-Lewandowski, Bengio, and Vincent. Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription. arXiv preprint:1206.6392, 2012.

Caliskan-Islam, Bryson, and Narayanan.
Semantics derived automatically from language corpora necessarily contain human biases. arXiv preprint:1608.07187, 2016.

Chung, Kastner, Dinh, Goel, Courville, and Bengio. A recurrent latent variable model for sequential data. In NIPS, pp. 2980–2988, 2015.

Dieleman et al. WaveNet: A generative model for raw audio. arXiv preprint:1609.03499, 2016.

Eck and Schmidhuber. Finding temporal structure in music: Blues improvisation with LSTM recurrent networks. In Neural Networks for Signal Processing, pp. 747–756. IEEE, 2002.

Fox, Pakman, and Tishby. Taming the noise in reinforcement learning via soft updates.

Gauldin. A practical approach to eighteenth-century counterpoint. Waveland Pr Inc, 1995.

Gers, Schmidhuber, and Cummins. Learning to forget: Continual prediction with LSTM. Neural Computation, 12(10):2451–2471, 2000.

Goodfellow et al. Generative adversarial nets. In NIPS, pp. 2672–2680, 2014.

Graves. Generating sequences with recurrent neural networks. arXiv preprint:1308.0850, 2013.

Graves and Schmidhuber. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5):602–610, 2005.

Gu, Ghahramani, and Turner. Neural adaptive sequential Monte Carlo. In NIPS, pp. 2629–2637, 2015.

Gu, Lillicrap, Sutskever, and Levine. Continuous deep Q-learning with model-based acceleration. In ICML, 2016.

Van Hasselt, Guez, and Silver. Deep reinforcement learning with double Q-learning. CoRR, abs/1509.06461, 2015.

Huszár. How (not) to train your generative model: Scheduled sampling, likelihood, adversary? arXiv preprint:1511.05101, 2015.

Kakade. A natural policy gradient. In NIPS, volume 14, pp. 1531–1538, 2001.

Kappen, Gómez, and Opper. Optimal control as a graphical model inference problem. Machine Learning, 87(2):159–182, 2012.

Kingma and Ba. Adam: A method for stochastic optimization. arXiv preprint:1412.6980, 2014.

Jiwei Li, Will Monroe, Alan Ritter, and Dan Jurafsky. Deep reinforcement learning for dialogue generation. arXiv preprint arXiv:1606.01541, 2016.

Lillicrap et al. Continuous control with deep reinforcement learning. ICLR, 2016.

Livingstone, Palmer, and Schubert. Emotional response to musical repetition. Emotion, 12(3):552, 2012.

Mikolov et al. Recurrent neural network based language model. In Interspeech, volume 2, pp. 3, 2010.

Mnih et al. Playing Atari with deep reinforcement learning. arXiv preprint:1312.5602, 2013.

Norouzi et al. Reward augmented maximum likelihood for neural structured prediction. arXiv preprint:1609.00150, 2016.

Peters, Mülling, and Altun. Relative entropy policy search. In AAAI, Atlanta, 2010.

Ranzato, Chopra, Auli, and Zaremba. Sequence level training with recurrent neural networks. arXiv preprint:1511.06732, 2015.

Rawlik, Toussaint, and Vijayakumar. On stochastic optimal control and reinforcement learning by approximate inference. Proceedings of Robotics: Science and Systems VIII, 2012.

Schulman, Levine, Moritz, Jordan, and Abbeel. Trust region policy optimization. In ICML, 2015.

Robert F. Stengel. Stochastic optimal control. John Wiley and Sons, New York, 1986.

Sturm, Santos, Ben-Tal, and Korshunova. Music transcription modelling and composition using deep learning. arXiv preprint:1604.08723, 2016.

Sutton et al. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pp. 1057–1063, 1999.

Todorov. Linearly-solvable Markov decision problems. In NIPS, pp. 1369–1376, 2006.

Marc Toussaint. Robot trajectory optimization using approximate inference. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 1049–1056. ACM, 2009.
Marc Toussaint and Amos Storkey. Probabilistic inference for solving discrete and continuous state Markov decision processes. In Proceedings of the 23rd International Conference on Machine Learning, pp. 945–952. ACM, 2006.

Watkins and Dayan. Q-learning. Machine Learning, 8(3-4):279–292, 1992.

Yu, Zhang, Wang, and Yu. SeqGAN: Sequence generative adversarial nets with policy gradient. arXiv preprint:1609.05473, 2016.

8 APPENDIX

Figure 4: Probability distribution over the next note generated by each model ((a) Note RNN, (b) Q, (c) \Psi, (d) G) for a sample melody. Probability is shown on the vertical axis, with red indicating higher probability. Note 0 is note off and note 1 is no event.

8.1 OFF-POLICY METHODS DERIVATIONS FOR KL-REGULARIZED REINFORCEMENT LEARNING

Given the KL-regularized RL objective defined in Eq. 9, the value function is given by:

V(s_t; \pi) = \mathbb{E}_\pi\Big[\sum_{t' \ge t} r(s_{t'}, a_{t'})/c - KL[\pi(\cdot|s_{t'}) \,\|\, p(\cdot|s_{t'})]\Big]    (15)

8.1.1 GENERALIZED \Psi-LEARNING

The following derivation is based on modifications to (Rawlik et al., 2012) and resembles the derivation in Fox et al. We define the generalized \Psi function as:

\Psi(s_t, a_t; \pi) = r(s_t, a_t)/c + \log p(a_t|s_t) + \mathbb{E}_{p(s_{t+1}|s_t, a_t)} \mathbb{E}_\pi\Big[\sum_{t' \ge t+1} r(s_{t'}, a_{t'})/c - KL[\pi(\cdot|s_{t'}) \,\|\, p(\cdot|s_{t'})]\Big]    (16, 17)
= r(s_t, a_t)/c + \log p(a_t|s_t) + \mathbb{E}_{p(s_{t+1}|s_t, a_t)}[V(s_{t+1}; \pi)]    (18)

The value function can be expressed as:

V(s_t; \pi) = \mathbb{E}_\pi[\Psi(s_t, a_t; \pi)] + H[\pi]    (19)
= \mathbb{E}_\pi[\Psi(s_t, a_t; \pi) - \log \pi(a_t|s_t)]    (20)

Fixing \Psi(s_t, a_t) = \Psi(s_t, a_t; \pi) and constraining \pi to be a probability distribution, the optimal greedy policy update \pi^* can be derived by functional calculus, along with the corresponding optimal value function:

\pi^*(a_t|s_t) \propto e^{\Psi(s_t, a_t)}    (21)
V(s_t; \pi^*) = \log \sum_{a_t} e^{\Psi(s_t, a_t)}    (22)

Given Eq. 18 and 22, the following Bellman optimality equation for the generalized \Psi function is derived, and the \Psi-learning loss in Eq. 11 directly follows:

\Psi(s_t, a_t; \pi^*) = r(s_t, a_t)/c + \log p(a_t|s_t) + \mathbb{E}_{p(s_{t+1}|s_t, a_t)}\Big[\log \sum_{a_{t+1}} e^{\Psi(s_{t+1}, a_{t+1}; \pi^*)}\Big]    (23)

8.1.2 G-LEARNING

The following derivation is based on (Fox et al.) with small modifications. We define the G function as:

G(s_t, a_t; \pi) = r(s_t, a_t)/c + \mathbb{E}_{p(s_{t+1}|s_t, a_t)} \mathbb{E}_\pi\Big[\sum_{t' \ge t+1} r(s_{t'}, a_{t'})/c - KL[\pi(\cdot|s_{t'}) \,\|\, p(\cdot|s_{t'})]\Big]    (24)
= r(s_t, a_t)/c + \mathbb{E}_{p(s_{t+1}|s_t, a_t)}[V(s_{t+1}; \pi)] = \Psi(s_t, a_t; \pi) - \log p(a_t|s_t)    (25)

A similar derivation as above can be applied:

V(s_t; \pi) = \mathbb{E}_\pi[G(s_t, a_t; \pi)] - KL[\pi(\cdot|s_t) \,\|\, p(\cdot|s_t)]    (26)
= \mathbb{E}_\pi[G(s_t, a_t; \pi) - \log \pi(a_t|s_t) + \log p(a_t|s_t)]    (27)

\pi^*(a_t|s_t) \propto p(a_t|s_t) \, e^{G(s_t, a_t)}    (28)
V(s_t; \pi^*) = \log \sum_{a_t} p(a_t|s_t) \, e^{G(s_t, a_t)}    (29)
G(s_t, a_t; \pi^*) = r(s_t, a_t)/c + \mathbb{E}_{p(s_{t+1}|s_t, a_t)}\Big[\log \sum_{a_{t+1}} p(a_{t+1}|s_{t+1}) \, e^{G(s_{t+1}, a_{t+1}; \pi^*)}\Big]    (30)

Alternatively, the above expression for G-learning can be derived from \Psi-learning by simple reparametrization with \Psi(s, a) = G(s, a) + \log p(a|s) in Eq. 23.
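As a sanity check on the reparametrization claim, substituting \Psi(s,a) = G(s,a) + \log p(a|s) into the Bellman equation (23) indeed recovers the G-learning equation (30); the \log p(a_t|s_t) terms cancel and the prior folds into the log-sum-exp:

```latex
\begin{aligned}
\Psi(s_t,a_t) &= r(s_t,a_t)/c + \log p(a_t|s_t)
  + \mathbb{E}_{p(s_{t+1}|s_t,a_t)}\Big[\log \sum_{a_{t+1}} e^{\Psi(s_{t+1},a_{t+1})}\Big] \\
G(s_t,a_t) + \log p(a_t|s_t) &= r(s_t,a_t)/c + \log p(a_t|s_t)
  + \mathbb{E}\Big[\log \sum_{a_{t+1}} e^{\log p(a_{t+1}|s_{t+1}) + G(s_{t+1},a_{t+1})}\Big] \\
G(s_t,a_t) &= r(s_t,a_t)/c
  + \mathbb{E}\Big[\log \sum_{a_{t+1}} p(a_{t+1}|s_{t+1})\, e^{G(s_{t+1},a_{t+1})}\Big]
\end{aligned}
```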
Under review as a conference paper at ICLR 2017

DEEP NEURAL NETWORKS AND THE TREE OF LIFE

Yan Wang, Kun He†
Computer Science Department
Huazhong University of Science and Technology
{yanwang, brooklet60}@hust.edu.cn

John E. Hopcroft, Yu Sun
Computer Science Department
Cornell University
{jeh, ys646}@cs.cornell.edu

* The first three authors contributed equally. † Corresponding author.

ABSTRACT

In evolutionary biology, species close in the tree of evolution are identified by similar visual features. In computer vision, deep neural networks perform image classification by learning to identify similar visual features. This leads to an interesting question: is it possible to leverage the advantage of deep networks to construct a tree of life? In this paper, we make the first attempt at building the phylogenetic tree diagram by leveraging the high-level features learned by deep neural networks. Our method is based on the intuition that if two species share similar features, then their cross activations in the softmax layer should be high. Based on the deep representation of convolutional neural networks trained for image classification, we build a tree of life for species in the image categories of ImageNet. Further, for species not in the ImageNet categories that are visually similar to some category, the cosine similarity of their activation vectors in the same layer should be high. By applying the inner product similarity of the activation vectors at the last fully connected layer for different species, we can roughly build their tree of life. Our work provides a new perspective on deep representation and sheds light on possible novel applications of deep representation in other areas, such as Bioinformatics.

1 INTRODUCTION

Deep learning transforms the data into compact intermediate representations akin to principal components, and derives layered structures by removing the redundancy in representations (Li Deng, 2014). In recent years, deep learning has demonstrated great success with significant improvement in various artificial intelligence applications, including speech recognition (Sak et al., 2015), image recognition (Ciresan et al., 2012; Krizhevsky et al., 2012), and natural language processing (Vinyals et al., 2015; Socher et al., 2013).

Convolutional Neural Networks (CNNs) are mainly designed for image and video recognition. A typical CNN architecture alternates convolutional layers and pooling layers, followed by several fully connected or sparsely connected layers, with a final softmax as the classification layer. Milestones include the 8-layer AlexNet (Krizhevsky et al., 2012), the 19-layer VGG (Simonyan & Zisserman, 2014), and the 22-layer GoogleNet (Szegedy et al., 2015). By adding the identity function as a shortcut, He et al. (2015) were able to build a substantially deeper ResNet with 152 layers, which received first place in the ILSVRC 2015 image classification task (Russakovsky et al., 2015). Other very deep networks include the highway network, with depths up to 100 layers (Srivastava et al., 2015). Eldan & Shamir (2016) provide a theoretical justification that reveals the utility of having deeper networks rather than wider networks, implying that future progress will lead to the development of even deeper networks.

Understanding the deep representations of neural networks has become increasingly difficult as state-of-the-art models have acquired more layers. This problem is important because it will help us understand the intrinsic mechanism of deep neural networks and explore possible novel applications based on this understanding.
Ballester & de Araújo (2016) show how CNNs, trained to identify objects primarily in photos, can be used for abstract sketch recognition. Gatys et al. (2015a;b) utilize the correlations between feature maps to synthesize natural textures and transfer artistic style with high perceptual quality. In Bioinformatics, deep neural networks are used for the analysis of medical images for cancer detection (Cireşan et al., 2013) as well as drug discovery and toxicology (Dahl et al., 2014; Ramsundar et al., 2015; Wallach et al., 2015). A deep-learning approach based on the autoencoder architecture has been adopted to predict Gene Ontology annotations and gene-function relationships (Chicco et al., 2014).

The Tree of Life refers to the compilation of a comprehensive phylogenetic (or evolutionary) database rooted at the last universal common ancestor of life on Earth. Over the course of hundreds of millions of years, the splitting and subsequent divergence of lineages has produced the tree of life, which has as its leaves the many species of organisms (Darwin, 1859). Here we refer to a phylogenetic tree, evolutionary tree, or tree of life as a branching diagram showing the inferred genealogical relationships (i.e., how close two species are in evolutionary history, as evaluated by observed heritable traits such as DNA sequences) among various biological species (Hug et al., 2016). This is an important problem in evolutionary biology, and many attempts have been made (Darwin, 1859; Doolittle & Bapteste, 2007; Bapteste et al., 2009; Edwards, 2009). Originally, the tree of life was built manually, based on the understanding of evolutionary history or the visual similarity of the species. Today, modern techniques are applied based on gene similarity.

Our contributions are two-fold:

1) We provide a potential solution to the important problem of constructing a biological evolutionary tree. We propose a novel approach to constructing a tree of life using the deep representation of CNNs trained for image classification. We conjecture that the hierarchical feature representation learned by deep networks can be leveraged to quantify the visual similarity of the species. In this way, we might be able to construct a tree of life using their feature similarity.

2) We give insight into the representations produced by deep neural networks. We conjecture that if images of two training categories share some similar features, then their cross activations in the softmax layer should be high. Hence we can evaluate the genetic distance of species within the training categories. Based on the deep representation of several typical CNNs, AlexNet (Krizhevsky et al., 2012), VGG (Simonyan & Zisserman, 2014) and ResNet (He et al., 2015), trained for ImageNet classification, we construct a tree of life for dozens of species among the thousands of ImageNet categories in the training dataset.

For species not in the training categories that are visually similar to some species in the training dataset, can we still utilize their deep representation in order to judge the relationships among different species? We conjecture that they show high cosine similarity of the activation vectors in high-level layers.
By applying the inner product similarity of the activation vectors at the last fully connected layer for different species, we present empirical evidence that, through transfer learning, we can roughly construct their tree of life.

Experiments show that the proposed method using deep representation is very competitive with human beings in building the tree of life based on the visual similarity of the species. We also try networks at different epochs during training, and the quality of the tree of life increases over the course of training. The performance among the three networks, AlexNet, VGG and ResNet, improves with the improvement of their classification quality.

2 THE PROPOSED METHOD

2.1 DATA COLLECTION

We have two important criteria in mind while constructing our image dataset. 1) We would like each image category, which corresponds to a node in the tree (i.e. a species), to have enough samples such that a statistic from the network activations is reasonably robust to noise. 2) There should exist a ground truth hierarchy on the image categories, so we can objectively evaluate the effectiveness of our method.

Fortunately, the ImageNet 2012 Classification dataset provides the raw material we need. This dataset contains 1000 categories of common life objects, and each category contains 1000 images as the training data. Also, these categories correspond exactly to nodes in the WordNet hierarchy. WordNet (Miller, 1995) is a large lexical database of English, where words are grouped into sets of cognitive synonyms (synsets), each expressing a distinct concept, and synsets are interlinked by means of conceptual-semantic and lexical relations.

For the reference network, we select three popular CNNs (AlexNet, VGG-16 and ResNet-152) trained on ImageNet. The top-5 classification errors of AlexNet, VGG and ResNet are 15.3%, 9.9% and 6.7%, respectively. So they all learn the features of the images very well, and we can leverage their deep representations for the ToL construction.

To find a small branch of the phylogenetic tree on which to do the reconstruction, we choose a set A of genealogically close species (species close in the evolutionary tree of life, as evaluated by the branch distance) from the 1000 ImageNet categories. And for each category A ∈ A, we use all the 1000 images from the training dataset to get a robust result.

For the ground truth, in the smallest WordNet subtree that contains A: 1) we can consider just the categories in A and their positions in this WordNet subtree and build a smallest ground truth tree T_A^1; 2) we can additionally consider some categories outside A in this WordNet subtree. Then the ground truth tree T_A^2 contains some categories outside the ImageNet training categories. Note that the nodes in T_A^1 are basically the intersection of the nodes in T_A^2 and the nodes in the 1000 ImageNet categories. For each category outside the 1000 training categories, we also use 1000 images from the ImageNet database.¹

2.2 SIMILARITY EVALUATION

We input all selected images for species in T_A^1 or T_A^2 to a reference network and execute the feed-forward pass. The feature maps (i.e. the activation vectors) of the last fully connected (FC) layer and the softmax layer are used to build the distance matrix.

1) The Probability Method. For T_A^1, each class is in the training set and their ground truth labels are among the ones represented by the softmax layer. So we utilize the probability distribution of the images at the softmax layer in order to build a distance matrix.
Specifically, for two classes of images A and B in the categories of A, we consider their cross activations in the softmax layer. For each image a ∈ A, we obtain the predicted probability P_{a→B} that this image belongs to node B, and we calculate the average of these values, named P_{A→B}:

P_{A\to B} = \frac{1}{|A|} \sum_{a \in A} P_{a\to B}    (1)

For each image b ∈ B, we obtain the predicted probability P_{b→A} that this image belongs to node A, and we calculate the average of these values, named P_{B→A}:

P_{B\to A} = \frac{1}{|B|} \sum_{b \in B} P_{b\to A}    (2)

The closer the genealogical relationship of A and B, the higher the cross predicted probability should be. As the cross confidence is close to zero, we use the log function to enlarge the value. Then we add a minus sign to assign lower values to closer species and to keep the value nonnegative:

D_{AB} = \begin{cases} 0 & \text{if } A = B \\ -\log(0.5\, P_{A\to B} + 0.5\, P_{B\to A}) & \text{if } A \neq B \end{cases}    (3)

2) The Inner Product Method. For T_A^2, as some species are not in the 1000 classification categories, we use the centroid vector of the activations at the last fully connected (FC) layer for each species, and calculate the dot product of the two unitized centroid vectors to get their cosine similarity. Then we add a minus sign to assign lower values to closer species:

D_{AB} = -\log \frac{v_A \cdot v_B}{\|v_A\| \, \|v_B\|}    (4)

¹ The only exception is Bassarisk, which contains only 694 images.

2.3 CONSTRUCTING THE TREE OF LIFE

Based on the distance matrix, we have three methods, namely "Approximation Central Point", "Minimum Spanning Tree", and "Multidimensional Scaling", to construct a tree of life.

1) The "Approximation Central Point" (ACP) based method. In the ACP based method, we build a tree bottom-up by recursively merging the two species points, say A and B, with the smallest distance, and setting the distance of the new point to any other point as the average of the distances of A and B to that point.

2) The "Minimum Spanning Tree" (MST) based method. In the MST based method, we first construct a Minimum Spanning Tree (MST) based on the distance matrix. Then we build a tree from the root to the leaves, recursively splitting the current MST subtree into two parts by removing its longest edge, until there is only one node in each subtree. In this way we build a "tree" with all the leaves corresponding to the species, where the closest species are split last.

3) The "Multidimensional Scaling" (MDS) based method. In the MDS based method, according to D, we know the distances among the points corresponding to the species. We first apply the MDS (Multi-Dimensional Scaling) (Borg & Groenen, 2005) algorithm to perform dimension reduction and project the species points into a two-dimensional subspace. Then we build a tree bottom-up by recursively merging the two points with the smallest Euclidean distance in the two-dimensional subspace, regarding the midpoint of the two merged points as the new representative point.

Our following experiments show that MST and MDS exhibit similar performance, but ACP is considerably weaker (see the code sketch after the opening of Section 3).

3 EXPERIMENTS AND ANALYSIS

We conduct an extensive set of experiments to build several branches of the phylogenetic tree at different granularities. To test whether our method can distinguish tiny visual differences, we first choose genealogically very close species, such as a set of fish species or a set of canine species, and construct their tree of life. Then, to test whether our method scales to larger clades, such as dog, cat, fish, etc., we choose 39 different species to build a more general tree of life and verify whether different breeds of one large species, like dogs, are grouped together.
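For concreteness, here is a minimal sketch of the probability-method distance of Eq. 3 and the ACP merging procedure, assuming NumPy; the helper names are ours, and the input probs matrix is assumed to hold the averaged cross-activations P_{i→j}.

```python
import numpy as np

def distance_matrix(probs):
    """Probability-method distance (Eq. 3). probs[i, j] holds the average
    softmax probability P_{i->j} that images of species i are classified
    as species j; the matrix need not be symmetric."""
    n = len(probs)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = -np.log(0.5 * probs[i, j] + 0.5 * probs[j, i])
    return D

def acp_tree(D, names):
    """ACP method: repeatedly merge the two closest points, giving the
    merged point the average distance of its two children to every other
    point. Returns a nested-tuple hierarchy."""
    D = D.astype(float).copy()
    nodes = list(names)
    while len(nodes) > 1:
        n = len(nodes)
        masked = D + np.diag([np.inf] * n)  # ignore the zero diagonal
        i, j = np.unravel_index(np.argmin(masked), masked.shape)
        i, j = min(i, j), max(i, j)
        merged_row = 0.5 * (D[i] + D[j])
        D = np.delete(np.delete(D, j, 0), j, 1)
        merged_row = np.delete(merged_row, j)
        D[i] = merged_row
        D[:, i] = merged_row
        D[i, i] = 0.0
        nodes[i] = (nodes[i], nodes[j])
        del nodes[j]
    return nodes[0]
```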
In addition, to evaluate the ability to construct hierarchical trees based on the visual similarity of images outside biology, we choose some vehicle categories from the ImageNet dataset (Russakovsky et al., 2015) and build a vehicle tree.

For the methods, we use the probability method of Section 2.2 to build the distance matrix, and apply the ACP, MST, and MDS based methods to build the tree of life. For the inner product method of Section 2.2, the results are slightly weaker, but it can deal with species or categories outside the training set. For details of the inner product method, the reader is referred to the Appendix.

3.1 CONSTRUCTING A FINE-GRAINED TREE OF LIFE

To construct a fine-grained tree of life, we select several fish species of high visual similarity and test whether we can identify the tiny differences in their features. We pick six fish species from the ImageNet training set and, for each species, input all the 1000 images in the training dataset to the ResNet network.

Figure 1 shows that the trees of life constructed by MST and MDS coincide with the hierarchical tree built on WordNet. The hierarchical tree constructed by ACP does not coincide with the ground truth at all. The reason may be that in any triangle ABC, the length of the edge from A to the midpoint D of BC is shorter than the average length of the edges AB and AC. If A is far from symmetric with respect to edge BC, the recalculated distance AD does not accurately represent the distance from A to the merged set {B, C}.

Our results demonstrate that deep CNNs capture local features as well as global features simultaneously. To rebuild the tree of life for genealogically close species, we need features of different granularity, like the animal's size, skin texture and shape. For instance, the texture of a lionfish is very similar to that of a goldfish, so we need other features, like size, to distinguish the two species.

As another example, we choose 11 very similar canine species and build a relatively larger tree, as illustrated in Figure 3. We can correctly build the canine tree, possibly according to their fur texture and shape features. The reconstruction quality is as good as what human beings could achieve based on visual similarity.

Figure 1: Trees of life for fish species (left to right: ACP method, MST method, MDS method, WordNet). The first three trees are constructed by our methods, and the fourth tree is the ground truth using WordNet. The hierarchy of MST and MDS coincides with that of WordNet.

Figure 2: Constructed tree of life for families of species by different networks (ResNet, VGG, AlexNet). Species of the five families are in different colors. ResNet and VGG correctly cluster the species but AlexNet does not. Built by the MST based method.

3.2 CONSTRUCTING A COARSE-GRAINED TREE OF LIFE

Figure 2 shows the coarse-grained tree of life for clustering species of different families by the different networks: ResNet, VGG and AlexNet. We pick 38 species from five families: bird, canine, plant, fish and feline. ResNet and VGG correctly cluster the species by families, while AlexNet makes some mistakes.
This result indicates that deep networks with higher classification quality learn better deep representations, such that a tree of life built on the deep representation also has better reconstruction quality.

To show that we not only correctly cluster the species, but also ensure the correct hierarchy within each family, we further construct a tree containing 20 species from five families, as illustrated in Figure 4.

Figure 3: A constructed tree of life for 11 canine species. Closer species show shorter distances. Built by the MDS based method.

Figure 4: A constructed small tree of life for different families of species. We not only correctly cluster each family of species, but also present the correct hierarchy of the species within each family. Built by the MDS based method.

Figure 5: A constructed vehicle tree (left: our method; right: WordNet). Our result looks more reasonable than that of WordNet. Built by the MDS method.

3.3 CONSTRUCTING A VEHICLE TREE

To show the ability to build hierarchical trees for objects other than animals, we pick eight vehicle categories from the ImageNet training set. Vehicles are very different from animals. Their shapes are largely fixed, and they can only perform certain motions, like going forward or turning around. Images of vehicles do not embed features as abundant as those of animal images.

Nevertheless, our method still outputs good results, as shown in Figure 5. We cluster the ambulance, fire truck and garbage truck together, all of which have large carriages. In WordNet, by contrast, the ambulance is close to the Model T, convertible and cab, yet these three have no carriage and are much smaller than an ambulance. Our result is more reasonable than what WordNet provides.

4 CONCLUSION

By leveraging the similarity of features extracted automatically by deep learning techniques, we build a tree of life for various biological species, either belonging to the training categories or not. The results are highly competitive with the level of human beings in building the tree of life based on the visual similarity of the images. Our work provides new understanding of the deep representation of neural networks and sheds light on possible novel applications of deep learning in the area of Bioinformatics. An intriguing direction for future work is how to utilize deep learning techniques to build a more delicate tree of life based on the gene similarity of the species.

ACKNOWLEDGMENTS

This research work was supported by the US Army Research Office (W911NF-14-1-0477) and the National Science Foundation of China (61472147).

REFERENCES

Pedro Ballester and Ricardo Matsumura de Araújo. On the performance of GoogLeNet and AlexNet applied to sketches. In AAAI, pp. 1124–1128, 2016.

Eric Bapteste, Maureen A. O'Malley, Robert G. Beiko, Marc Ereshefsky, J. Peter Gogarten, Laura Franklin-Hall, François-Joseph Lapointe, John Dupré, Tal Dagan, Yan Boucher, et al. Prokaryotic evolution and the tree of life are two different things. Biology Direct, 4(1):1, 2009.

Ingwer Borg and Patrick J. F. Groenen. Modern multidimensional scaling: Theory and applications. Springer Science & Business Media, 2005.

Davide Chicco, Peter J. Sadowski, and Pierre Baldi. Deep autoencoder neural networks for gene ontology annotation predictions. In 5th ACM Conference on Bioinformatics, Computational Biology, and Health Informatics, BCB, pp. 533–540, 2014.

Dan C. Cireşan, Alessandro Giusti, Luca M. Gambardella, and Jürgen Schmidhuber.
Mitosis detection in breast cancer histology images using deep neural networks. In MICCAI, pp. 411–418, 2013.

Dan C. Ciresan, Ueli Meier, Jonathan Masci, and Jürgen Schmidhuber. Multi-column deep neural network for traffic sign classification. Neural Networks, 32:333–338, 2012.

George E. Dahl, Navdeep Jaitly, and Ruslan Salakhutdinov. Multi-task neural networks for QSAR predictions. CoRR, abs/1406.1231, 2014.

Charles Darwin. On the origin of species by means of natural selection. Nature, pp. 502, 1859.

W. Ford Doolittle and Eric Bapteste. Pattern pluralism and the tree of life hypothesis. Proceedings of the National Academy of Sciences, 104(7):2043–2049, 2007.

Scott V. Edwards. Is a new and general theory of molecular systematics emerging? Evolution, 63(1):1–19, 2009.

Ronen Eldan and Ohad Shamir. The power of depth for feedforward neural networks. In COLT, pp. 907–940, 2016.

Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576, 2015a.

Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Texture synthesis using convolutional neural networks. In NIPS, pp. 262–270, May 2015b.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.

Laura A. Hug, Brett J. Baker, Karthik Anantharaman, Christopher T. Brown, Alexander J. Probst, et al. A new view of the tree of life. Nature Microbiology, pp. 16048, 2016.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pp. 1097–1105, 2012.

Li Deng and Dong Yu. Deep learning: Methods and applications. Technical report, May 2014.

George A. Miller. WordNet: a lexical database for English. Communications of the ACM, 38(11):39–41, 1995.

Bharath Ramsundar, Steven M. Kearnes, Patrick Riley, Dale Webster, David E. Konerding, and Vijay S. Pande. Massively multitask networks for drug discovery. CoRR, abs/1502.02072, 2015.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y.

Hasim Sak, Andrew Senior, Kanishka Rao, Francoise Beaufays, and Johan Schalkwyk. Google voice search: faster and more accurate. September 2015. URL https://research.googleblog.com/2015/09/google-voice-search-faster-and-more.html.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.

Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. Parsing with compositional vector grammars. In ACL, pp. 455–465, 2013.

Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Training very deep networks. In NIPS, pp. 2377–2385, 2015.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott E. Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR, pp. 1–9, 2015.

Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In CVPR, pp. 3156–3164, 2015.

Izhar Wallach, Michael Dzamba, and Abraham Heifets. AtomNet: A deep convolutional neural network for bioactivity prediction in structure-based drug discovery.
CoRR, abs/1510.02855, 2015.

APPENDIX

To test the inner product method of Section 2.2, which can build a tree for species not in the training set, we select 5 species not in the training set and 14 species in the training set. We use 1000 images for each species, except for Bassarisk, which contains only 694 images. We show the results on ResNet using the MDS based method. Figure 6 illustrates the result.

Figure 6: A constructed tree of life containing some species not in the training set (marked by a pink point; left: MDS method, right: WordNet). We use the inner product method to build the distance matrix. Only the coati is at a wrong leaf of the tree.
Under review as a conference paper at ICLR 2017

RULE MINING IN FEATURE SPACE

Stefano Teso & Andrea Passerini
Department of Information Engineering and Computer Science
University of Trento
Trento, Italy
{teso, passerini}@disi.unitn.it

ABSTRACT

Relational embeddings have emerged as an excellent tool for inferring novel facts from partially observed knowledge bases. Recently, it was shown that some classes of embeddings can also be exploited to perform a simplified form of rule mining. By interpreting logical conjunction as a form of composition between relation embeddings, simplified logical theories can be mined directly in the space of latent representations. In this paper, we present a method to mine full-fledged logical theories, which are significantly more expressive, by casting the semantics of the logical operators to the space of the embeddings. In order to extract relevant rules in the space of relation compositions, we borrow sparse reconstruction procedures from the field of compressed sensing. Our empirical analysis showcases the advantages of our approach.

1 INTRODUCTION

Knowledge Bases (KBs) capture relational knowledge about a domain of choice by modelling entities and facts relating them. In so doing, KBs allow for rich answers to user queries, as happens with the knowledge panels powered by the Google Knowledge Graph. Furthermore, KBs can be mined for rules, i.e. patterns of relations which are frequently found to hold in the KB. Mining theories from data is the task of Rule Mining (Dzeroski & Lavrac, 2000) and Inductive Logic Programming (Dzeroski & Lavrac, 1994; Muggleton et al., 1992).

Classical ILP methods mine theories by searching over the (exponentially large) space of logical theories, resorting to language biases and heuristics to simplify the learning problem. While powerful, pure ILP methods do not scale to large relational datasets, preventing them from mining Web-scale KBs such as YAGO (Hoffart et al., 2013) and DBpedia (Auer et al., 2007). Further, purely logical methods cannot gracefully deal with noise. Next-generation miners that specialize in large KBs, such as AMIE (Galárraga et al., 2015), work around these issues by trading off theory expressiveness for runtime efficiency.

A general strategy for processing huge datasets is dimensionality reduction: instead of working on the original KB directly, one first squeezes it into a summary of manageable size, and then performs the required operations on the summary itself. Common summarization techniques for relational data include relational factorization (Nickel et al., 2011; London et al., 2013; Riedel et al., 2013) and representation learning (Bordes et al., 2011; Socher et al., 2013). The core idea is to learn compressed latent representations, or embeddings, of entities and relations able to reconstruct the original KB by minimizing a suitable reconstruction loss. Until recently, relational embeddings have been mostly employed for link prediction and knowledge base completion (Nickel et al., 2016).

However, Yang et al. (2015) have shown that low-dimensional representations can also be exploited to perform a simplified form of theory learning. Their paper shows that, under reasonable assumptions, a simple nearest neighbor algorithm can recover logical rules directly from the fixed-size embeddings of a KB, with potential runtime benefits. Furthermore, since the embeddings generalize beyond the observed facts, the rules are implicitly mined over a completion of the KB. Despite the novelty of their insight, their proposed method has several major downsides. First, their simple approach is limited to extracting rules as conjunctions of relations, with no support for logical
Despitethe novelty of their insight, their proposed method has several major downsides. First, their simpleapproach is limited to extracting rules as conjunctions of relations, with no support for logical dis-1Under review as a conference paper at ICLR 2017junction and negation. Second, the rules are mined independently of one another, which can lead toredundant theories and compromise generalization ability and interpretability.Building on the insights of Yang et al. (2015), we propose a novel approach to theory learning fromlow-dimensional representations. We view theory learning as a special sparse recovery problem.In this setting, a logical theory is merely an algebraic combination of embedded relations that bestreconstructs the original KB, in a sense that will be made clear later. The recovery problem can besolved with specialized compressed sensing algorithms, such as Orthogonal Matching Pursuit (Patiet al., 1993) or variants thereof. Our approach offers two key advantages: it automatically modelsthe inter-dependency between different rules, discouraging redundancy in the learned theory, and itsupports all propositional logic connectives, i.e. conjunction, disjunction, and negation. Our empiri-cal analysis indicates that our method can mine satisfactory theories in realistic KBs, demonstratingits ability to discover diverse and interpretable sets of rules. Additionally, our method can in princi-ple be applied to “deeper” embeddings, that is, embeddings produced by deep models that take intoconsideration both relational and feature-level aspects of the data.The paper is structured as follows. In the next section we introduce the required background mate-rial. We proceed by detailing our approach in Section 3 and evaluating it empirically in Section 4.We discuss relevant related work in Section 5, and conclude with some final remarks in Section 6.2 B ACKGROUNDIn this section we briefly overview the required background. Let us start with the notation we willuse. We write column vectors xin bold-face, matrices Xin upper-case, and third-order tensors Xincalligraphic upper-case. Xkis thekth frontal slice of the tensor X, and vec(X)is the vectorization(flattening) of X. We denote the usual Frobenius matrix norm as kXkF:=qPijx2ij, the numberof non-zero entries as kXk0, the setf1;:::;ngas[n]and the Cartesian product of `setsf1;:::;ngas[n]`. We reserve typewriter fonts for logical entities Ann and relations motherOf .Knowledge Bases and Theories. Aknowledge base (KB) is a collection of known true factsabout a domain of interest. As an example, a KB about kinship relations may include facts such as(Ann;motherOf;Bob), which states that Ann is known to be the mother of Bob. In the followingwe will usenandmto denote the number of distinct entities and relations in the KB, respectively.With a slight abuse of notation, we will refer to logical constants and relations (e.g. Ann andmotherOf ) by their index in the KB ( e2[n]orr2[m], respectively). Triples not occurring in theKB are unobserved, i.e. neither true nor false.Given an input KB, the goal of theory learning, also known as Inductive Logic Programming (Mug-gleton et al., 1992), is to induce a compact logical theory that both explains the observed facts andgeneralizes to the unobserved ones. Most ILP methods extract theories in definite clausal form,which offers a good compromise between expressiveness and efficiency. A theory in this form is animplicitly conjoined set of Horn rules, i.e. 
Given an input KB, the goal of theory learning, also known as Inductive Logic Programming (Muggleton et al., 1992), is to induce a compact logical theory that both explains the observed facts and generalizes to the unobserved ones. Most ILP methods extract theories in definite clausal form, which offers a good compromise between expressiveness and efficiency. A theory in this form is an implicitly conjoined set of Horn rules, i.e. rules like:
$$\forall e, e' \in [n] \quad (e, \texttt{uncleOf}, e') \Leftarrow \exists e'' \in [n] \; (e, \texttt{brotherOf}, e'') \wedge (e'', \texttt{parentOf}, e')$$
Here $\Leftarrow$ represents logical entailment. The left-hand side is called the head of the rule, while the right-hand side is the body. The semantics of Horn rules are simple: whenever the body is satisfied by a given set of entities and relations, so is the head. The length of a rule is the number of relations appearing in its body; the above is a length-2 rule.

Classical ILP approaches cast theory learning as a search problem over the (exponentially large) space of candidate theories. When there are no negative facts, as in our case, the quality of a theory is given by the number of true facts it entails. In practice, learning is regularized by the size of the theory (number and length of the rules) to encourage compression, generalization and interpretability. Due to the combinatorial nature of the problem, the search task is solved heuristically, e.g. by searching individual Horn rules either independently or sequentially, or by optimizing surrogate objective functions. A language bias, provided by a domain expert, is often employed to guide the search toward more promising theories. Please see (Dzeroski & Lavrac, 1994; Muggleton et al., 1992) for more details.

Relational embeddings. Relational embedding techniques learn a low-dimensional latent representation of a KB. In order to ground the discussion, we focus on a prototypical factorization method, RESCAL (Nickel et al., 2011; 2012); many alternative formulations can be seen as variations or generalizations thereof. We stress, however, that our method can be applied to other kinds of relational embeddings, as sketched in Section 6. For a general treatment of the subject, see Nickel et al. (2016).

In RESCAL, each entity $e \in [n]$ in the KB is mapped to a vector $\mathbf{x}_e \in \mathbb{R}^d$, and each binary relation $r \in [m]$ to a matrix $W_r \in \mathbb{R}^{d \times d}$. These parameters are learned from data. Here $d \in [n]$ is a user-specified constant (the rank) controlling the amount of compression. The key idea is to model the plausibility, or score, of each fact as a function of its embedding. In particular, in RESCAL the score of a fact $(e, r, e')$ is given by the bilinear product:
$$\mathrm{score}(e, r, e') := (\mathbf{x}_e)^\top W_r \mathbf{x}_{e'} = \sum_{i=1}^d \sum_{j=1}^d x_{ei} W_{rij} x_{e'j}$$
The bilinear product measures how similar $\mathbf{x}_e$ and $W_r \mathbf{x}_{e'}$ are: the higher the dot product, the higher the score.

The embeddings can be expressed compactly in tensor form by grouping the entity vectors side-by-side into a matrix $X \in \mathbb{R}^{d \times n}$, and stacking the relation matrices into a tensor $\mathcal{W} \in \mathbb{R}^{d \times d \times m}$. The embeddings $(X, \mathcal{W})$ are learned so as to reconstruct the original KB as accurately as possible, modulo regularization. More formally, let $\mathcal{Y} \in \{0,1\}^{n \times n \times m}$ be a tensor such that $Y_{ree'}$ evaluates to 1 if the fact $(e, r, e')$ appears in the KB, and to 0 otherwise. The learned embeddings should satisfy $Y_{ree'} \approx \mathrm{score}(e, r, e')$ for all possible triples $(e, r, e')$. Learning equates to solving the optimization problem:
$$\min_{\mathcal{W}, X} \; \sum_{r=1}^m \|Y_r - X^\top W_r X\|_F^2 + \lambda \Big( \|X\|_F^2 + \sum_{r=1}^m \|W_r\|_F^2 \Big) \quad (1)$$
The second summand is a quadratic regularization term, whose impact is modulated by the $\lambda > 0$ hyperparameter. Note that the entity embeddings $X$ are shared between relations. Choosing $d \ll n$ forces RESCAL to learn more compressed latent features, that hopefully generalize better over distinct facts, at the cost of a potentially larger reconstruction error. While the optimization problem is non-convex and can not be solved exactly in general, RESCAL pairs clever initialization with an alternating least squares procedure to obtain good quality solutions (Nickel et al., 2011).
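As a small illustration of the scoring function above, here is a minimal sketch (our code; toy random embeddings stand in for learned ones) of the RESCAL bilinear score:

```python
import numpy as np

def rescal_score(X, W, e, r, e2):
    """Bilinear score x_e^T W_r x_e' of the triple (e, r, e')."""
    return X[:, e] @ W[r] @ X[:, e2]

rng = np.random.default_rng(0)
d, n, m = 4, 3, 2                       # toy rank and KB sizes
X = rng.normal(size=(d, n))             # entity embeddings, one column per entity
W = rng.normal(size=(m, d, d))          # one d x d matrix per relation
print(rescal_score(X, W, e=0, r=1, e2=2))
```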
In the next section we will see how theory learning can be generalized to work directly on the embeddings produced by RESCAL and analogous models.

3 RULE MINING IN FEATURE SPACE
In this section we detail our take on rule mining. Given a knowledge base in tensor form $\mathcal{Y}$, our goal is to learn a theory $T$ that (1) entails many of the observed facts and few of the unobserved ones, and (2) is composed of few, diverse rules, for improved generalization.

The theory $T$ includes rules for all possible relations $h \in [m]$, where the relation is the head of the rule and the body is an "explanation" of the relation as a (logical) combination of relations. Let $T_h$ be the set of rules for head $h$. In our setting, $T_h$ is a conjunction of Horn rules, where each rule is at most $\ell$ long;¹ $\ell$ is provided by the user. Following Yang et al. (2015), we require the rules to be closed paths, i.e. to be in the following form:
$$(e_1, h, e_{\ell+1}) \Leftarrow (e_1, b_1, e_2) \wedge (e_2, b_2, e_3) \wedge \dots \wedge (e_\ell, b_\ell, e_{\ell+1}) \quad (2)$$
Here $h$ is the head relation, and $b_1, \dots, b_\ell$ are the body relations; quantifiers have been left implicit. Formally, a Horn rule is a closed path if (i) consecutive relations share the middle argument, and (ii) the left argument of the head appears as the first argument of the body (and conversely for the right argument). This special form enables us to cast theory learning in terms of Boolean matrix operations, as follows.

¹ For the sake of exposition, in the following we only consider rules exactly $\ell$ long; as a matter of fact, the miners we consider can return rules of length $\ell$ or shorter.

Let $\mathcal{Y}$ be a knowledge base and $h \in [m]$ the target head relation. Note that the conjunction of Horn rules with the same head relation $h$ amounts to the disjunction of their bodies. Due to requirement (1), the set of rules targeting $h$ should approximate the truth values of $h$, i.e.
$$Y_h \approx \bigvee_{B \in T_h} \bigwedge_{b \in B} Y_b$$
Here $B$ is the body of a rule, and the logical connectives operate element-wise. In order to learn $T$ from $\mathcal{Y}$, we define a loss function that encourages the above condition. We define the loss $\mathcal{L}(Y_h, T_h)$ as the accuracy of reconstruction of $Y_h$ w.r.t. $T_h$, written as:
$$\mathcal{L}(Y_h, T_h) := \Big\| Y_h \oplus \bigvee_{B \in T_h} \bigwedge_{b \in B} Y_b \Big\|_0 \quad (3)$$
where $\oplus$ is the element-wise exclusive OR operator and $\|\cdot\|_0$ computes the misclassification error of $T_h$ over $Y_h$. Minimizing Eq. (3) unfortunately is a hard combinatorial problem. We will next show how to approximate the latter as a continuous sparse reconstruction problem.

The relaxed reconstruction problem. Our goal is to approximate Eq. (3) in terms of algebraic matrix operations over the relation embeddings $\mathcal{W}$. First, we replace conjunctions with products between the embeddings of the relations along the path in the body of the rule, i.e.
$$\bigwedge_{b \in B} Y_b \approx X^\top \Big( \prod_{b \in B} W_b \Big) X$$
The idea is that a linear operator $W_b$ maps the embedding of the left argument of relation $b$ to vectors similar to the embedding of the right one, as per Eq. (1). For instance, $W_{\texttt{motherOf}}$ will map the embedding of Ann to a vector with high dot product w.r.t. the embedding of Bob. The closed path represented by the conjunction of the relations in the body $B$ is emulated by composition of embeddings, obtained by repeated applications of this mapping (Yang et al., 2015); a small sketch of this first step follows.
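The sketch below (ours) shows the relaxation just described: a conjunctive body $B = (b_1, \dots, b_\ell)$ is emulated by the matrix product of its relation embeddings, so that $X^\top (W_{b_1} \cdots W_{b_\ell}) X$ approximates the body's truth-value matrix.

```python
import numpy as np
from functools import reduce

def path_embedding(W, body):
    """Compose relation embeddings along a path: W_b1 @ W_b2 @ ... @ W_bl."""
    return reduce(np.matmul, (W[b] for b in body))

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 4, 4))          # 5 toy relations, rank d = 4
W_B = path_embedding(W, (1, 3))         # e.g. brotherOf composed with parentOf
soft_truth = lambda X: X.T @ W_B @ X    # approximate truth values of the body
```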
Second, we replace disjunctions with sums:
$$Y_h \approx X^\top W_h X \approx X^\top \Big( \sum_{B \in T_h} \prod_{b \in B} W_b \Big) X$$
Intuitively, each path should represent an alternative explanation for the head relation, so that two entities are in relation $h$ if at least one path (approximately) maps the left entity to the right one. Diversity between these alternatives will be enforced by imposing orthogonality between the mappings of the corresponding paths during the mining procedure, as explained later on in the section.

Clearly, the set of rules $T_h$ is unknown and needs to be learned in solving the reconstruction problem. We thus let the summation run over all possible paths of length $\ell$, i.e. $[m]^\ell$, adding a coefficient $\alpha_B$ for each candidate path. The problem boils down to learning these alphas:
$$\min_\alpha \Big\| X^\top W_h X - \sum_{B \in [m]^\ell} \alpha_B \, X^\top \Big( \prod_{b \in B} W_b \Big) X \Big\|_F \quad (4)$$
In principle, the coefficients $\alpha_B$ should be zero-one; however, we relax them to be real-valued to obtain a tractable optimization problem. This choice has another beneficial side effect: the relaxed formulation gives us a straightforward way to introduce negations in formulas, thus augmenting the expressiveness of our approach beyond purely Horn clauses. The idea builds on the concept of set difference from set theory. A relation like brotherOf can be explained by the rule "a sibling who is not a sister". This could be represented in the space of the embeddings as the difference between the siblingOf mapping (accounting for both brothers and sisters) and the sisterOf one. More specifically, siblingOf ∧ ¬sisterOf would be encoded as $W_{\texttt{siblingOf}} - W_{\texttt{sisterOf}}$. We thus allow $\alpha$ to also take negative values, with the interpretation that negative bodies are negated and conjoined (rather than disjoined) with the rest of the formula.

The last step is to get rid of the instances $X$, and mine rules for head $h$ only in terms of their ability to reconstruct its embedding $W_h$. This is justified by the observation (Yang et al., 2015; Gu et al., 2015; Neelakantan et al., 2015; García-Durán et al., 2015) that the embeddings are learned so that their composition is close to that of the embedding of the head. Putting everything together, we obtain an optimization problem of the form:
$$\min_\alpha \Big\| W_h - \sum_{B \in [m]^\ell} \alpha_B \prod_{b \in B} W_b \Big\|_F \quad (5)$$
for each target head $h$. Upon finding the coefficients $\alpha$, we convert them into a logic theory based on their sign and magnitude. First, only bodies with absolute coefficients larger than a threshold $\tau > 0$ are retained. Each body is then converted to the conjunction of the relations it contains. Bodies with positive coefficients are disjunctively combined with the rest of the formula, while bodies with negative coefficients are added as conjunctions of their negations. The final theory for the mined rule can be written as:
$$Y_h \approx \Big( \bigvee_{B : \alpha_B > \tau} \bigwedge_{b \in B} Y_b \Big) \wedge \neg \Big( \bigvee_{B : \alpha_B < -\tau} \bigwedge_{b \in B} Y_b \Big) \quad (6)$$

Table 1: Number of triples, entities and relations of all datasets.

          # triples | # entities | # relations
Nations        3243 |         14 |          56
Kinship       10790 |        104 |          26
UMLS           6752 |        135 |          49
Family         5984 |        628 |          24

Solving the reconstruction problem. Equation (5) is a matrix recovery problem in Frobenius norm. Instead of solving it directly, we leverage the norm equivalence $\|A - B\|_F = \|\mathrm{vec}(A) - \mathrm{vec}(B)\|_2$ to reinterpret it as a simpler vector recovery problem. Most importantly, since most of the candidate paths $B$ can not explain the head $h$, the recovery problem is typically sparse.
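Such sparse problems can be attacked greedily; the following minimal sketch (our illustration, not the implementation used in the experiments) runs a plain Orthogonal Matching Pursuit, anticipating the algorithms discussed next, over the vectorized path embeddings:

```python
import numpy as np
from itertools import product

def omp(D, y, k):
    """Greedy OMP: pick k atoms of D (columns, assumed normalized) to fit y."""
    residual, support, coef = y.copy(), [], None
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))        # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef               # re-fit, update residual
    return support, coef

rng = np.random.default_rng(0)
m, d, l = 4, 3, 2
W = rng.normal(size=(m, d, d))
bodies = list(product(range(m), repeat=l))                # all candidate paths [m]^l
D = np.stack([(W[a] @ W[b]).ravel() for a, b in bodies], axis=1)
D /= np.linalg.norm(D, axis=0)                            # normalize the dictionary
y = (W[0] @ W[1]).ravel()                                 # synthetic head embedding
support, coef = omp(D, y, k=2)
print([bodies[j] for j in support], coef)
```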
Sparse recovery problems are a main subject of study in compressed sensing (Candès et al., 2006), and a multitude of algorithms can be employed to solve them, including Orthogonal Matching Pursuit (OMP) (Pati et al., 1993), Basis Pursuit (Chen et al., 1998), and many recent alternatives. In Appendix A we show how minimizing the sparse recovery problem in Eq. (5) equates to minimizing an upper bound of the total loss.

Two features of the above problem stand out. First, if the target theory is sparse enough, existing recovery algorithms can solve the reconstruction to global optimality with high probability (Candès et al., 2006). We do not explicitly leverage this perk; we leave finding conditions guaranteeing perfect theory recovery to future work. Second and most importantly, reconstruction algorithms choose the non-zero coefficients $\alpha_B$ so that the corresponding path embeddings $\prod_{b \in B} W_b$ are mutually orthogonal. This means that similar paths will not be mined together, thus encouraging rule diversity, as per requirement (2).

4 EMPIRICAL EVALUATION
We compare our method, dubbed Feature Rule Miner (FRM for short), against two variants of the kNN-based theory miner of Yang et al. (2015) on four publicly available knowledge bases: Nations, Kinship and UMLS from Kemp et al. (2006), and Family from Fang et al. (2013). The KB statistics can be found in Table 1. Given that FRM requires the relational embeddings $\mathcal{W}$ to be normalized (with respect to the Frobenius norm), we compare it against both the original kNN-based miner, which mines the unnormalized embeddings, and a variant that uses the normalized embeddings instead, for the sake of fairness.

The miners were tested in a 10-fold cross-validation setting. We computed the relational embeddings over the training sets using the non-negative RESCAL variant (Krompaß et al., 2013)² with the default parameters (500 maximum iterations, convergence threshold $10^{-5}$). The size of the embeddings $d$ was set to a reasonable value for each KB: 100 for Family, 25 for Kinship and UMLS, and 5 for Nations. We configured all competitors to mine at most 100 rules for each head relation. The kNN distance threshold was set to 100 (although the actual value used is chosen dynamically, as done by Yang et al. (2015)). The desired reconstruction threshold of OMP was set to $10^{-3}$. Finally, the coefficient threshold $\tau$ was set to 0.2.

² Standard RESCAL tends to penalize the kNN-based competitors.

We evaluate both the F-score and the per-rule recall of all the methods. The F-score measures how well the mined rules reconstruct the test facts in terms of both precision and recall. The per-rule recall is simply the recall over the number of rules mined for the target head; it favors methods that focus on few rules with high coverage, and penalizes those that mine many irrelevant rules. The results on the four KBs (averaged over all target relations) are reported in Figures 1 and 2, and an example of mined rules in Figure 3.

Figure 1: Results of all methods on the four datasets for max rule length 2. Average F-score is reported on the left, average recall over number of rules on the right.

Figure 2: Results of all methods on the four datasets for max rule length 3. Average F-score is reported on the left, average recall over number of rules on the right.
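The mined rules shown in Figure 3 are obtained from the recovered coefficients through the sign-and-threshold rule of Eq. (6); a toy sketch (names and values are ours, purely for illustration) of that final conversion step:

```python
def coefficients_to_theory(alpha, tau=0.2):
    """Split recovered path coefficients into disjoined and negated bodies, Eq. (6)."""
    positive = [B for B, a in alpha.items() if a > tau]     # disjunctively combined
    negative = [B for B, a in alpha.items() if a < -tau]    # conjoined as negations
    return positive, negative

alpha = {("siblingOf",): 0.9,
         ("sisterOf",): -0.6,
         ("childOf", "parentOf"): 0.05}    # below threshold: discarded
pos, neg = coefficients_to_theory(alpha)
# pos = [("siblingOf",)], neg = [("sisterOf",)]  ->  siblingOf AND NOT sisterOf
```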
More detailed per-head results can be found in Appendix B. (Unfortunately, the normalized kNN method failed to work with the UMLS dataset; we left a blank in the plots.)

The plots show a clear trend: FRM performs better than the kNN-based methods in all four knowledge bases, both in terms of F-score and in terms of per-rule recall. Further, the normalized kNN variant tends to outperform the original, unnormalized version, providing support for our use of normalized relation embeddings.

Notably, the three methods mine similar amounts of rules. While OMP stops automatically when the mined body reconstructs the target head sufficiently well, the kNN methods compensate for the lack of a proper termination criterion by employing a distance-based pruning heuristic (as discussed by Yang et al. (2015)). Rather, the poor per-rule recall performance of the kNN methods can be imputed to insufficient rule diversity. The kNN miners discover the rules independently of each other, leading to theory redundancy. This is a well known problem in rule mining. On the contrary, OMP avoids this issue by enforcing orthogonality between the mined bodies. The resulting theories perform much better, especially in terms of per-rule recall.

The phenomenon is also visible in Figure 3. The theory found by FRM contains many diverse bodies, while the one found by kNN does not. The two rules also show the power of negation: the FRM theory includes the "perfect" definition of a brother, i.e. siblingOf ∧ ¬sisterOf (as well as an obvious error, i.e. that a brother can not be a sibling of a sibling). In contrast, the theory found by kNN completely ignores the complementarity of brotherOf and sisterOf, and includes the rule brotherOf ⇐ sisterOf.

brotherOf ⇐ (siblingOf ∨ (siblingOf ∧ brotherOf) ∨ (siblingOf ∧ sisterOf)) ∧ ¬(sisterOf ∨ (siblingOf ∧ siblingOf))

brotherOf ⇐ siblingOf ∨ (siblingOf ∧ siblingOf) ∨ (siblingOf ∧ brotherOf) ∨ (childOf ∧ parentOf) ∨ (sonOf ∧ parentOf) ∨ sisterOf ∨ (siblingOf ∧ sisterOf)

Figure 3: Example rules for the brotherOf relation mined by FRM (top) and kNN (bottom).

5 RELATED WORK
There is a huge body of work on theory learning, historically studied in Inductive Logic Programming (Dzeroski & Lavrac, 1994; Muggleton et al., 1992). For the sake of brevity, we focus on techniques that are more closely related to our proposal.

The core of most ILP methods, e.g. FOIL (Quinlan, 1990), Progol (Muggleton, 1995), and Aleph,³ is a search loop over the space of candidate theories. Bottom-up methods start from an initially empty theory, and add one Horn rule at a time. Individual rules are constructed by conjoining first-order relations so as to maximize the number of covered positive facts, while trying to keep covered negative facts to a minimum. After each rule is constructed, all covered facts are removed from the KB. These methods are extremely expressive, and can handle general n-ary relations. Instead, FRM focuses on binary relations only, which are more common in today's Web-centric knowledge bases. ILP methods are designed to operate on the original KB only; this fact, paired with the sheer magnitude of the search space, makes standard ILP methods highly non-scalable. More recent extensions (e.g. kFOIL (Landwehr et al., 2006)) adopt a feature-space view of relational facts, but are still based on the classical search loop and can not be trivially adapted to working on the relational embeddings directly.

³ http://www.cs.ox.ac.uk/activities/machinelearning/Aleph/
Finally, rule elongation can be hindered by the presence of plateaus in the cost function.

Our path-based learning procedure is closely related to Relational Pathfinding (RP) (Richards & Mooney, 1992). RP is based on the observation that ground relation paths (that is, conjunctions of true relation instances) do act as support for arbitrary-length rules. It follows that mining these paths directly allows one to detect longer rules with high support, avoiding the rule elongation problem entirely. There are many commonalities between RP and FRM. Both approaches are centered around relation paths, although in different representations (original versus compressed), and focus on path-based theories. The major drawback of RP is that it requires exhaustive enumeration of relation paths (up to a maximum length), which can be impractical depending on the size of the KB. FRM sidesteps this issue by leveraging efficient online decoding techniques, namely Online Search OMP (Weinstein & Wakin, 2012).

To alleviate its computational requirements, a lifting procedure for RP was presented in Kok & Domingos (2009). Similarly to FRM, lifted RP is composed of separate compression and learning stages. In the first stage, the original KB is "lifted" by clustering functionally identical relation paths together, producing a smaller KB as output. In the second stage, standard RP is applied to the compressed KB. A major difference with FRM is that lifting is exact, while RESCAL is typically lossy. Consequently, lifted RP guarantees equivalence of the original and compressed learning problems, but it also ignores the potential generalization benefit provided by the embeddings. Additionally, the first step of lifted RP relies on a (rather complex) agglomerative clustering procedure, while FRM can make use of state-of-the-art representation learning methods. Note that, just like lifted RP, FRM can be straightforwardly employed for structure learning of statistical relational models.

The work of Malioutov & Varshney (2013) is concerned with mining one-level rules from binary data. Like in FRM, rule learning is viewed as a recovery problem, and solved using compressed sensing techniques. Two major differences with FRM exist. In Malioutov & Varshney (2013) the truth value matrix is recovered with an extension of Basis Pursuit that handles 0-1 coefficients through a mixed-integer linear programming (MILP) formulation, solved approximately using linear relaxations. BP, however, requires the dictionary to be explicitly grounded, which is not the case for FRM. Additionally, their method is limited to one-level rules, i.e. either conjunctions or disjunctions of relations, but not both. An extension to two-level rules has been presented by Su et al. (2015), where BP is combined with heuristics to aggregate individual rules into two-level theories. In contrast, FRM natively supports mining two-level rules via efficient online search.

The only other theory learning method that is explicitly designed for working on embeddings is the one of Yang et al. (2015). It is based on the observation (also made by Gu et al. (2015)) that closed path Horn rules can be converted to path queries, which can be answered approximately by searching the space of (type-compatible) compositions of relation embeddings.
They propose to perform a simple nearest neighbor search around the embedding of the head relation, $W_h$, while avoiding type-incompatible relation compositions. Unfortunately, rules are searched for independently of one another, which seriously affects both quality and interpretability of the results, as shown by our experimental evaluation.

6 CONCLUSION
We presented a novel approach for performing rule mining directly over a compressed summary of a KB. A major advantage over purely logical alternatives is that the relational embeddings automatically generalize beyond the observed facts; as a consequence, our method implicitly mines a completion of the knowledge base. The key idea is that theory learning can be approximated by a recovery problem in the space of relation embeddings, which can be solved efficiently using well-known sparse recovery algorithms. This novel formulation enables our method to deal with all propositional logic connectives (conjunction, disjunction, and negation), unlike previous techniques. We presented experimental results highlighting the ability of our miner to discover relevant and, most importantly, diverse rules.

One difficulty in applying our method is that classical sparse recovery algorithms require the complete enumeration of the candidate rule bodies, which is exponential in rule length. In order to solve this issue, we plan to apply recent online recovery algorithms, like Online Search OMP (Weinstein & Wakin, 2012), which can explore the space of alternative bodies on-the-fly.

As the quality of relational embedding techniques improves, for instance thanks to path-based (Gu et al., 2015; Neelakantan et al., 2015; García-Durán et al., 2015) and logic-based (Rocktäschel et al., 2015) training techniques, we expect the reliability and performance of theory learning in feature space to substantially improve as well.

REFERENCES
S. Auer, C. Bizer, G. Kobilarov, J. Lehmann, R. Cyganiak, and Z. Ives. DBpedia: A nucleus for a web of open data. 2007.
A. Bordes, J. Weston, R. Collobert, and Y. Bengio. Learning structured embeddings of knowledge bases. In Proceedings of AAAI, 2011.
Emmanuel J. Candès, Justin Romberg, and Terence Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509, 2006.
S. S. Chen, David L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33–61, 1998.
Sašo Dzeroski and Nada Lavrac. Inductive Logic Programming: Techniques and Applications. 1994.
Sašo Dzeroski and Nada Lavrac (eds.). Relational Data Mining. Springer-Verlag, New York, NY, USA, 2000.
R. Fang, A. Gallagher, T. Chen, and A. Loui. Kinship classification by modeling facial feature heredity. In Proceedings of ICIP, pp. 2983–2987, 2013.
L. Galárraga, C. Teflioudi, K. Hose, and F. M. Suchanek. Fast rule mining in ontological knowledge bases with AMIE+. The VLDB Journal, 24(6):707–730, 2015.
A. García-Durán, A. Bordes, and N. Usunier. Composing relationships with translations. In Proceedings of EMNLP, pp. 286–290, 2015.
K. Gu, J. Miller, and P. Liang. Traversing knowledge graphs in vector space. arXiv preprint arXiv:1506.01094, 2015.
J. Hoffart, F. M. Suchanek, K. Berberich, and G. Weikum. YAGO2: A spatially and temporally enhanced knowledge base from Wikipedia. Artificial Intelligence, 194:28–61, 2013.
Charles Kemp, Joshua B. Tenenbaum, Thomas L. Griffiths, Takeshi Yamada, and Naonori Ueda. Learning systems of concepts with an infinite relational model. In Proceedings of AAAI, volume 3, pp. 5, 2006.
S. Kok and P. Domingos. Learning Markov logic network structure via hypergraph lifting. In Proceedings of ICML, pp. 505–512, 2009.
Denis Krompaß, Maximilian Nickel, Xueyan Jiang, and Volker Tresp. Non-negative tensor factorization with RESCAL. In Tensor Methods for Machine Learning, ECML Workshop, 2013.
N. Landwehr, A. Passerini, L. De Raedt, and P. Frasconi. kFOIL: Learning simple relational kernels. In AAAI, volume 6, pp. 389–394, 2006.
B. London, T. Rekatsinas, B. Huang, and L. Getoor. Multi-relational learning using weighted tensor decomposition with modular loss. arXiv preprint arXiv:1303.1733, 2013.
D. Malioutov and K. Varshney. Exact rule learning via boolean compressed sensing. In Proceedings of ICML, pp. 765–773, 2013.
S. Muggleton. Inverse entailment and Progol. New Generation Computing, 13(3-4):245–286, 1995.
Stephen Muggleton, Ramon Otero, and Alireza Tamaddoni-Nezhad. Inductive Logic Programming, volume 168. 1992.
A. Neelakantan, B. Roth, and A. McCallum. Compositional vector space models for knowledge base completion. arXiv preprint arXiv:1504.06662, 2015.
M. Nickel, V. Tresp, and H.-P. Kriegel. A three-way model for collective learning on multi-relational data. In Proceedings of ICML, pp. 809–816, 2011.
Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. Factorizing YAGO: scalable machine learning for linked data. In Proceedings of WWW, pp. 271–280, 2012.
Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. A review of relational machine learning for knowledge graphs. Proceedings of the IEEE, 104(1):11–33, 2016.
Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad. Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. In Proceedings of the Asilomar Conference on Signals, Systems and Computers, pp. 40–44, 1993.
J. R. Quinlan. Learning logical definitions from relations. Machine Learning, 5(3):239–266, 1990.
B. L. Richards and R. J. Mooney. Learning relations by pathfinding. In Proceedings of AAAI, pp. 50–55, 1992.
S. Riedel, L. Yao, A. McCallum, and B. M. Marlin. Relation extraction with matrix factorization and universal schemas. In Proceedings of NAACL-HLT, pp. 74–84, 2013.
T. Rocktäschel, S. Singh, and S. Riedel. Injecting logical background knowledge into embeddings for relation extraction. In Proceedings of NAACL-HLT, 2015.
R. Socher, D. Chen, C. D. Manning, and A. Ng. Reasoning with neural tensor networks for knowledge base completion. In Proceedings of NIPS, pp. 926–934, 2013.
G. Su, D. Wei, K. R. Varshney, and D. M. Malioutov. Interpretable two-level boolean rule learning for classification. arXiv preprint arXiv:1511.07361, 2015.
A. J. Weinstein and M. B. Wakin. Online search orthogonal matching pursuit. In Proceedings of the IEEE SSP Workshop, pp. 584–587, 2012.
B. Yang, W. Yih, X. He, J. Gao, and L. Deng. Embedding entities and relations for learning and inference in knowledge bases. In Proceedings of ICLR, 2015.

APPENDIX A: ERROR DERIVATION
Fix a target head relation $h$. Let $E_h$ denote the RESCAL error matrix $E_h := Y_h - X^\top W_h X$, and $\tilde{E}_h$ denote the error matrix of $W_h$ due to FRM, namely $\tilde{E}_h := W_h - \sum_{B \in T_h} \alpha_B W_B$, where $W_B = \prod_{b \in B} W_b$. Putting the two definitions together, we obtain:
$$E_h = Y_h - X^\top \Big[ \sum_{B \in T_h} \alpha_B W_B + \tilde{E}_h \Big] X = Y_h - X^\top \Big( \sum_{B \in T_h} \alpha_B W_B \Big) X - X^\top \tilde{E}_h X$$
Then, the Frobenius norm of the reconstruction error of head $h$ is:
$$\Big\| Y_h - X^\top \Big( \sum_{B \in T_h} \alpha_B W_B \Big) X \Big\| = \| X^\top \tilde{E}_h X + E_h \| \le \| X^\top \tilde{E}_h X \| + \| E_h \| \le \| X \|^2 \| \tilde{E}_h \| + \| E_h \|$$
where the last step follows from the sub-multiplicativity of the Frobenius norm. Now, FRM minimizes $\|\tilde{E}_h\|$, and therefore minimizes an upper bound of the misclassification error of $T_h$ over $Y_h$.

We note in passing that the bound can be tightened by reducing the norm of the entity embeddings $X$, for instance by choosing the proper embedding method. The question of how to find an optimal choice, however, is left as future work.
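The sub-multiplicativity step can be checked numerically; a quick sanity check (ours, with arbitrary random matrices) of $\|X^\top \tilde{E} X\|_F \le \|X\|_F^2 \|\tilde{E}\|_F$:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 10))            # entity embeddings, d x n
E = rng.normal(size=(4, 4))             # an arbitrary error matrix
lhs = np.linalg.norm(X.T @ E @ X)       # np.linalg.norm is Frobenius by default
rhs = np.linalg.norm(X) ** 2 * np.linalg.norm(E)
assert lhs <= rhs                       # holds for any X and E
```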
APPENDIX B: EXTENDED RESULTS
Figure 4: Detailed results for the Nations KB with length-2 rules.
Figure 5: Detailed results for the Kinship KB with length-2 rules.
Figure 6: Detailed results for the UMLS KB with length-2 rules.
Figure 7: Detailed results for the Family KB with length-2 rules.
Figure 8: Detailed results for the Nations KB with length-3 rules.
Figure 9: Detailed results for the Kinship KB with length-3 rules.
Figure 10: Detailed results for the UMLS KB with length-3 rules.
Figure 11: Detailed results for the Family KB with length-3 rules.
Under review as a conference paper at ICLR 2017

BINARY PARAGRAPH VECTORS

Karol Grzegorczyk & Marcin Kurdziel
AGH University of Science and Technology
Department of Computer Science
Krakow, Poland
{kgr,kurdziel}@agh.edu.pl

ABSTRACT
Recently Le & Mikolov described two log-linear models, called Paragraph Vector, that can be used to learn state-of-the-art distributed representations of documents. Inspired by this work, we present Binary Paragraph Vector models: simple neural networks that learn short binary codes for fast information retrieval. We show that binary paragraph vectors outperform autoencoder-based binary codes, despite using fewer bits. We also evaluate their precision in transfer learning settings, where binary codes are inferred for documents unrelated to the training corpus. Results from these experiments indicate that binary paragraph vectors can capture semantics relevant for various domain-specific documents. Finally, we present a model that simultaneously learns short binary codes and longer, real-valued representations. This model can be used to rapidly retrieve a short list of highly relevant documents from a large document collection.

1 INTRODUCTION
One of the significant challenges in contemporary information processing is the sheer volume of available data. Gantz & Reinsel (2012), for example, claim that the amount of digital data in the world doubles every two years. This trend underpins efforts to develop algorithms that can efficiently search for relevant information in huge datasets. One class of such algorithms, represented by, e.g., Locality Sensitive Hashing (Indyk & Motwani, 1998), relies on hashing data into short, locality-preserving binary codes (Wang et al., 2014). The codes can then be used to group the data into buckets, thereby enabling sublinear search for relevant information, or for fast comparison of data items.

In this work we focus on learning binary codes for text documents. An important work in this direction has been presented by Salakhutdinov & Hinton (2009). Their semantic hashing leverages autoencoders with a sigmoid bottleneck layer to learn binary codes from a word-count bag-of-words (BOW) representation. Salakhutdinov & Hinton demonstrated that semantic hashing codes used as an initial document filter can improve precision of TF-IDF-based retrieval. Learning from BOW, however, has its disadvantages. First, the word-count representation, and in turn the learned codes, are not in themselves stronger than TF-IDF. Second, BOW is an inefficient representation: even for moderate-size vocabularies BOW vectors can have thousands of dimensions. Learning fully-connected autoencoders for such high-dimensional vectors is impractical. Salakhutdinov & Hinton restricted the BOW vocabulary in their experiments to the 2000 most frequent words.

Recently several works explored simple neural models for unsupervised learning of distributed representations of words, sentences and documents. Mikolov et al. (2013) proposed log-linear models that learn distributed representations of words by predicting a central word from its context (the CBOW model) or by predicting context words given the central word (the Skip-gram model). The CBOW model was then extended by Le & Mikolov (2014) to learn distributed representations of documents. Specifically, they proposed the Paragraph Vector Distributed Memory (PV-DM) model, in which the central word is predicted given the context words and the document vector.
During training, PV-DM learns the word embeddings and the parameters of the softmax that models the conditional probability distribution for the central words. During inference, word embeddings and softmax weights are fixed, but the gradients are backpropagated to the inferred document vector.

In addition to PV-DM, Le & Mikolov studied also a simpler model, namely Paragraph Vector Distributed Bag of Words (PV-DBOW). This model predicts words in the document given only the document vector. It therefore disregards context surrounding the predicted word and does not learn word embeddings. Le & Mikolov demonstrated that paragraph vectors outperform BOW and bag-of-bigrams in an information retrieval task, while using only a few hundred dimensions. These models are also amenable to learning and inference over large vocabularies. The original CBOW network used hierarchical softmax to model the probability distribution for the central word. One can also use noise-contrastive estimation (Gutmann & Hyvärinen, 2010) or importance sampling (Cho et al., 2015) to approximate the gradients with respect to the softmax logits.

An alternative approach to learning representations of sentences has recently been described by Kiros et al. (2015). Networks proposed therein, inspired by the Skip-gram model, learn to predict surrounding sentences given the center sentence. To this end, the center sentence is encoded by an encoder network and the surrounding sentences are predicted by a decoder network conditioned on the center sentence code. Once trained, these models can encode sentences without resorting to backpropagation inference. However, they learn representations at the sentence level but not at the document level.

In this work we present Binary Paragraph Vector models, extensions to PV-DBOW and PV-DM that learn short binary codes for text documents. One inspiration for binary paragraph vectors comes from a recent work by Lin et al. (2015) on learning binary codes for images. Specifically, we introduce a sigmoid layer to the paragraph vector models, and train it in a way that encourages binary activations. We demonstrate that the resultant binary paragraph vectors significantly outperform semantic hashing codes. We also evaluate binary paragraph vectors in transfer learning settings, where training and inference are carried out on unrelated text corpora. Finally, we study models that simultaneously learn short binary codes for document filtering and longer, real-valued representations for ranking. While Lin et al. (2015) employed a supervised criterion to learn image codes, binary paragraph vectors remain unsupervised models: they learn to predict words in documents.

2 BINARY PARAGRAPH VECTOR MODELS
The basic idea in binary paragraph vector models is to introduce a sigmoid nonlinearity before the softmax that models the conditional probability of words given the context. If we then enforce binary or near-binary activations in this nonlinearity, the probability distribution over words will be conditioned on a bit vector context, rather than on a real-valued representation. The inference in the model proceeds like in Paragraph Vector, except the document code is constructed from the sigmoid activations. After rounding, this code can be seen as a distributed binary representation of the document.

In the simplest Binary PV-DBOW model (Figure 1) the dimensionality of the real-valued document embeddings is equal to the length of the binary codes.
Despite this low dimensional representation – a useful binary hash will typically have 128 or fewer bits – this model performed surprisingly well in our experiments. Note that we cannot simply increase the embedding dimensionality in Binary PV-DBOW in order to learn better codes: binary vectors learned in this way would be too long to be useful in document hashing.

Figure 1: The Binary PV-DBOW model (document → embedding lookup → real-valued embedding → rounded sigmoid → binary embedding → sampled softmax over the document's words). Modifications to the original PV-DBOW model are highlighted.

The retrieval performance can, however, be improved by using binary codes for initial filtering of documents, and then using a representation with higher capacity to rank the remaining documents by their similarity to the query. Salakhutdinov & Hinton (2009), for example, used semantic hashing codes for initial filtering and TF-IDF for ranking. A similar document retrieval strategy can be realized with binary paragraph vectors. Furthermore, we can extend the Binary PV-DBOW model to simultaneously learn short binary codes and higher-dimensional real-valued representations. Specifically, in the Real-Binary PV-DBOW model (Figure 2) we introduce a linear projection between the document embedding matrix and the sigmoid nonlinearity. During training, we learn the softmax parameters and the projection matrix. During inference, softmax weights and the projection matrix are fixed. This way, we simultaneously obtain a high-capacity representation of a document in the embedding matrix, e.g. a 300-dimensional real-valued vector, and a short binary representation from the sigmoid activations.

Figure 2: The Real-Binary PV-DBOW model (document → embedding lookup → high-dimensional embedding → linear projection → low-dimensional embedding → rounded sigmoid → binary embedding → sampled softmax). Modifications to the original PV-DBOW model are highlighted.

One advantage of using the Real-Binary PV-DBOW model over two separate networks is that we need to store only one set of softmax parameters (and a small projection matrix) in memory, instead of two large weight matrices. Additionally, only one model needs to be trained, rather than two distinct networks.

Binary document codes can also be learned by extending distributed memory models. Le & Mikolov (2014) suggest that in PV-DM, a context of the central word can be constructed by either concatenating or averaging the document vector and the embeddings of the surrounding words. However, in Binary PV-DM (Figure 3) we always construct the context by concatenating the relevant vectors before applying the sigmoid nonlinearity. This way, the length of binary codes is not tied to the dimensionality of word embeddings.

Figure 3: The Binary PV-DM model (document and context words → embedding lookups → concatenated context → rounded sigmoid → binary concatenated context → sampled softmax over the central word). Modifications to the original PV-DM model are highlighted.
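To make the Binary PV-DBOW architecture concrete, here is a schematic forward pass (our reading of Figure 1, not the authors' code; a full softmax stands in for the sampled one, and binarization of the sigmoid activations during training is discussed right below):

```python
import torch
import torch.nn as nn

class BinaryPVDBOW(nn.Module):
    def __init__(self, n_docs, code_bits, vocab_size):
        super().__init__()
        self.doc = nn.Embedding(n_docs, code_bits)     # real-valued document embeddings
        self.out = nn.Linear(code_bits, vocab_size)    # softmax parameters

    def forward(self, doc_ids):
        h = torch.sigmoid(self.doc(doc_ids))           # near-binary activations
        return self.out(h)                             # logits for words in the document

    def codes(self, doc_ids):
        with torch.no_grad():
            return torch.sigmoid(self.doc(doc_ids)).round()  # binary document codes
```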
Softmax layers in the models described above should be trained to predict words in documents given binary context vectors. Training should therefore encourage binary activations in the preceding sigmoid layers. This can be done in several ways. In semantic hashing autoencoders Salakhutdinov & Hinton (2009) added noise to the sigmoid coding layer. Error backpropagation then countered the noise, by forcing the activations to be close to 0 or 1. Another approach was used by Krizhevsky & Hinton (2011) in autoencoders that learned binary codes for small images. During the forward pass, activations in the coding layer were rounded to 0 or 1. Original (i.e. not rounded) activations were used when backpropagating errors. Alternatively, one could model the document codes with stochastic binary neurons. Learning in this case can still proceed with error backpropagation, provided that a suitable gradient estimator is used alongside the stochastic activations. We experimented with the methods used in semantic hashing and Krizhevsky's autoencoders, as well as with the two biased gradient estimators for stochastic binary neurons discussed by Bengio et al. (2013). We also investigated the slope annealing trick (Chung et al., 2016) when training networks with stochastic binary activations. From our experience, binary paragraph vector models with rounded activations are easy to train and learn better codes than models with noise-based binarization or stochastic neurons. We therefore use Krizhevsky's binarization in our models.
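In an autodiff framework, Krizhevsky-style binarization can be written as "straight-through" rounding; the sketch below (ours) rounds in the forward pass while letting gradients flow as if rounding were the identity:

```python
import torch

def round_straight_through(h):
    # Forward: exact 0/1 values.  Backward: gradient of the identity, i.e. the
    # original (not rounded) activations are used when backpropagating errors.
    return h + (h.round() - h).detach()

x = torch.randn(2, 8, requires_grad=True)
b = round_straight_through(torch.sigmoid(x))
b.sum().backward()   # x.grad is the sigmoid's gradient, unaffected by the rounding
```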
3 EXPERIMENTS
To assess the performance of binary paragraph vectors, we carried out experiments on two datasets frequently used to evaluate document retrieval methods, namely 20 Newsgroups¹ and a cleansed version (also called v2) of Reuters Corpus Volume 1² (RCV1). As paragraph vectors can be trained with relatively large vocabularies, we did not perform any stemming of the source text. However, we removed stop words as well as words shorter than two characters or longer than 15 characters. Results reported by Li et al. (2015) indicate that the performance of PV-DBOW can be improved by including n-grams in the model. We therefore evaluated two variants of Binary PV-DBOW: one predicting words in documents and one predicting words and bigrams. Since 20 Newsgroups is a relatively small dataset, we used all words and bigrams from its documents. This amounts to a vocabulary with slightly over one million elements. For the RCV1 dataset we used words and bigrams with at least 10 occurrences in the text, which gives a vocabulary with approximately 800 thousand elements.

¹ Available at http://qwone.com/~jason/20Newsgroups
² Available at http://trec.nist.gov/data/reuters/reuters.html

The 20 Newsgroups dataset comes with reference train/test sets. In the case of RCV1 we used half of the documents for training and the other half for evaluation. We perform document retrieval by selecting queries from the test set and ordering other test documents according to the similarity of the inferred codes. We use Hamming distance for binary codes and cosine similarity for real-valued representations. Results are averaged over queries. We assess the performance of our models with precision-recall curves and two popular information retrieval metrics, namely mean average precision (MAP) and the normalized discounted cumulative gain at the 10th result (NDCG@10) (Järvelin & Kekäläinen, 2002). The results depend, of course, on the chosen document relevancy measure. The relevancy measure for the 20 Newsgroups dataset is straightforward: a retrieved document is relevant to the query if they both belong to the same newsgroup. However, in RCV1 each document belongs to a hierarchy of topics, making the definition of relevancy less obvious. In this case we adopted the relevancy measure used by Salakhutdinov & Hinton (2009). That is, the relevancy is calculated as the fraction of overlapping labels in a retrieved document and the query document. Overall, our selection of test datasets and relevancy measures follows Salakhutdinov & Hinton (2009), enabling comparison with semantic hashing codes.

We use AdaGrad (Duchi et al., 2011) for training and inference in all experiments reported in this work. During training we employ dropout (Srivastava et al., 2014) in the embedding layer. To facilitate models with large vocabularies, we approximate the gradients with respect to the softmax logits using the method described by Cho et al. (2015). Binary PV-DM networks use the same number of dimensions for document codes and word embeddings.

Performance of 128- and 32-bit binary paragraph vector codes is reported in Table 1 and in Figure 4. For comparison we also report the performance of real-valued paragraph vectors. Note that the binary codes perform very well, despite their far lower capacity: on both test sets the 128-bit Binary PV-DBOW trained with bigrams approaches the performance of the real-valued paragraph vectors. Furthermore, Binary PV-DBOW with bigrams outperforms semantic hashing codes: comparison of precision-recall curves from Figure 4 with Salakhutdinov & Hinton (2009, Figures 6 & 7) shows that on both test sets 128-bit codes learned with this model outperform 128-bit semantic hashing codes. Moreover, the 32-bit codes from this model outperform 128-bit semantic hashing codes on the RCV1 dataset, and on the 20 Newsgroups dataset give similar precision up to approximately 3% recall and better precision for higher recall levels. Note that the difference in this case lies not only in retrieval precision: the short 32-bit Binary PV-DBOW codes are also more efficient for indexing than long 128-bit semantic hashing codes.

Table 1: Information retrieval results. The best results with binary models are highlighted.

Code size | Model           | With bigrams | 20 Newsgroups MAP | 20 Newsgroups NDCG@10 | RCV1 MAP | RCV1 NDCG@10
128       | PV-DBOW         | no           | 0.40 | 0.75 | 0.25 | 0.79
128       | PV-DBOW         | yes          | 0.45 | 0.75 | 0.27 | 0.80
128       | Binary PV-DBOW  | no           | 0.34 | 0.69 | 0.22 | 0.74
128       | Binary PV-DBOW  | yes          | 0.35 | 0.69 | 0.24 | 0.77
128       | PV-DM           | N/A          | 0.41 | 0.73 | 0.23 | 0.78
128       | Binary PV-DM    | N/A          | 0.34 | 0.65 | 0.18 | 0.69
32        | PV-DBOW         | no           | 0.43 | 0.71 | 0.26 | 0.75
32        | PV-DBOW         | yes          | 0.46 | 0.72 | 0.27 | 0.77
32        | Binary PV-DBOW  | no           | 0.32 | 0.53 | 0.22 | 0.60
32        | Binary PV-DBOW  | yes          | 0.32 | 0.54 | 0.25 | 0.66
32        | PV-DM           | N/A          | 0.43 | 0.70 | 0.23 | 0.77
32        | Binary PV-DM    | N/A          | 0.29 | 0.49 | 0.17 | 0.53

We also compared binary paragraph vectors against codes constructed by first inferring short, real-valued paragraph vectors and then using another unsupervised model or hashing algorithm for binarization. When the dimensionality of the paragraph vectors is equal to the size of the binary codes, the number of network parameters in this approach is similar to that of Binary PV models. We experimented with an autoencoder with a sigmoid coding layer and Krizhevsky's binarization, with a Gaussian-Bernoulli Restricted Boltzmann Machine (Welling et al., 2004), and with two standard hashing algorithms, namely random hyperplane projection (Charikar, 2002) and iterative quantization (Gong & Lazebnik, 2011). Paragraph vectors in these experiments were inferred using PV-DBOW with bigrams. Results reported in Table 2 show no benefit from using a separate algorithm for binarization. On the 20 Newsgroups dataset an autoencoder with Krizhevsky's binarization achieved MAP equal to Binary PV-DBOW, while the other three approaches yielded lower MAP. On the larger RCV1 dataset end-to-end training of Binary PV-DBOW yielded higher MAP than the baseline approaches. Some gain in precision of top hits can be observed for iterative quantization and the autoencoder with Krizhevsky's binarization. However, it does not translate to an improved MAP, and it decreases when models are trained on a larger corpus (RCV1).
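For clarity, here is a compact sketch (ours; shapes and data are stand-ins, not experimental code) of the retrieval protocol used above: rank test documents by Hamming distance to the query code and compute average precision against newsgroup labels.

```python
import numpy as np

def average_precision(codes, labels, q):
    dist = (codes ^ codes[q]).sum(axis=1)            # Hamming distances to the query
    order = np.argsort(dist, kind="stable")
    order = order[order != q]                        # drop the query itself
    rel = (labels[order] == labels[q]).astype(float) # binary relevance
    prec_at_k = np.cumsum(rel) / np.arange(1, len(rel) + 1)
    return (prec_at_k * rel).sum() / max(rel.sum(), 1.0)

rng = np.random.default_rng(0)
codes = rng.integers(0, 2, size=(100, 32), dtype=np.uint8)   # stand-in 32-bit codes
labels = rng.integers(0, 4, size=100)                        # stand-in newsgroup ids
mean_ap = np.mean([average_precision(codes, labels, q) for q in range(len(codes))])
```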
Table 2: Information retrieval results for 32-bit binary codes constructed by first inferring 32d real-valued paragraph vectors and then employing another unsupervised model or hashing algorithm for binarization. Paragraph vectors were inferred using PV-DBOW with bigrams.

Binarization model                          | 20 Newsgroups MAP | 20 Newsgroups NDCG@10 | RCV1 MAP | RCV1 NDCG@10
Autoencoder with Krizhevsky's binarization  | 0.32 | 0.57 | 0.24 | 0.67
Gaussian-Bernoulli RBM                      | 0.26 | 0.39 | 0.23 | 0.52
Random hyperplane projection                | 0.27 | 0.53 | 0.21 | 0.66
Iterative quantization                      | 0.31 | 0.58 | 0.23 | 0.68

Li et al. (2015) argue that PV-DBOW outperforms PV-DM on a sentiment classification task, and demonstrate that the performance of PV-DBOW can be improved by including bigrams in the vocabulary. We observed similar results with Binary PV models. That is, including bigrams in the vocabulary usually improved retrieval precision. Also, codes learned with Binary PV-DBOW provided higher retrieval precision than Binary PV-DM codes. Furthermore, to choose the context size for the Binary PV-DM models, we evaluated several networks on validation sets taken out of the training data. The best results were obtained with a minimal one-word, one-sided context window. This is the distributed memory architecture most similar to the Binary PV-DBOW model.

Figure 4: Precision-recall curves for the (a) 20 Newsgroups and (b) RCV1 datasets, for 128- and 32-dimensional codes (PV-DBOW and Binary PV-DBOW with unigrams only and with uni- & bi-grams, and Binary PV-DM). Cosine similarity was used with real-valued representations and the Hamming distance with binary codes. For comparison we also included semantic hashing results reported by Salakhutdinov & Hinton (2009, Figures 6 & 7).
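One of the Table 2 baselines, random hyperplane projection, is simple enough to sketch in a few lines (our illustration of Charikar's scheme, not the exact experimental code): each bit is the sign of the paragraph vector's projection onto a random direction, which approximately preserves cosine similarity.

```python
import numpy as np

def random_hyperplane_hash(vectors, n_bits, seed=0):
    rng = np.random.default_rng(seed)
    R = rng.normal(size=(vectors.shape[1], n_bits))   # one random hyperplane per bit
    return (vectors @ R > 0).astype(np.uint8)

pv = np.random.default_rng(1).normal(size=(100, 32))  # stand-in 32d paragraph vectors
codes = random_hyperplane_hash(pv, n_bits=32)
```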
3.1 TRANSFER LEARNING
In the experiments presented thus far we had at our disposal training sets with documents similar to the documents for which we inferred binary codes. One could ask whether binary paragraph vectors could be used without collecting a domain-specific training set. For example, what if we needed to hash documents that are not associated with any available domain-specific corpus? One solution could be to train the model with a big generic text corpus that covers a wide variety of domains. It is not obvious, however, whether such a model would capture language semantics meaningful for unrelated documents. To shed light on this question we trained Binary PV-DBOW with bigrams on the English Wikipedia, and then inferred binary codes for the test parts of the 20 Newsgroups and RCV1 datasets. We used words and bigrams with at least 100 occurrences in the English Wikipedia. The results are presented in Table 3 and in Figure 5. The model trained on an unrelated text corpus gives lower retrieval precision than models with domain-specific training sets, which is not surprising. However, it still performs remarkably well, indicating that the semantics it captured can be useful for different text collections. Importantly, these results were obtained without domain-specific finetuning.

Table 3: Information retrieval results for the Binary PV-DBOW model trained on an unrelated text corpus. Results are reported for 128-bit codes.

               | MAP  | NDCG@10
20 Newsgroups  | 0.24 | 0.51
RCV1           | 0.18 | 0.66

Figure 5: Precision-recall curves for the baseline Binary PV-DBOW models and a Binary PV-DBOW model trained on an unrelated text corpus (the English Wikipedia), on (a) 20 Newsgroups and (b) RCV1. Results are reported for 128-bit codes.

3.2 RETRIEVAL WITH REAL-BINARY MODELS
As pointed out by Salakhutdinov & Hinton (2009), when working with large text collections one can use short binary codes for indexing and a representation with more capacity for ranking. Following this idea, we proposed the Real-Binary PV-DBOW model (Section 2) that can simultaneously learn short binary codes and high-dimensional real-valued representations. We begin the evaluation of this model by comparing the retrieval precision of the real-valued and binary representations it learns. To this end, we trained a Real-Binary PV-DBOW model with 28-bit binary codes and 300-dimensional real-valued representations on the 20 Newsgroups and RCV1 datasets. Results are reported in Figure 6. The real-valued representations learned with this model give lower precision than PV-DBOW vectors but, importantly, improve precision over binary codes for top ranked documents. This justifies their use alongside binary codes.

Using short binary codes for initial filtering of documents comes with a tradeoff between retrieval performance and the recall level. For example, one can select a small subset of similar documents by using 28–32 bit codes and retrieving documents within a small Hamming distance to the query. This will improve retrieval performance, and possibly also precision, at the cost of recall. Conversely, short codes provide a less fine-grained hashing and can be used to index documents within a larger Hamming distance to the query. They can therefore be used to improve recall at the cost of retrieval performance, and possibly also precision. For these reasons, we evaluated Real-Binary PV-DBOW models with different code sizes and under different limits on the Hamming distance to the query. In general, we cannot expect these models to achieve 100% recall under the test settings. Furthermore, recall will vary on a query-by-query basis. We therefore decided to focus on the NDCG@10 metric in this evaluation, as it is suited for measuring model performance when a short list of relevant documents is sought and the recall level is not known. MAP and precision-recall curves are not applicable in these settings.

Information retrieval results for Real-Binary PV-DBOW are summarized in Table 4. The model gives higher NDCG@10 than 32-bit Binary PV-DBOW codes (Table 1). The difference is large when the initial filtering is restrictive, e.g. when using 28-bit codes and a 2-bit Hamming distance limit. Real-Binary PV-DBOW can therefore be useful when one needs to quickly find a short list of relevant documents in a large text collection, and the recall level is not of primary importance.
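The filter-then-rank procedure evaluated in Table 4 (variant B) can be sketched as follows (our code; inputs are stand-ins): binary codes pre-select documents within a Hamming radius of the query, and the real-valued representations rank the survivors by cosine similarity.

```python
import numpy as np

def filter_then_rank(bin_codes, real_reprs, q, radius=2):
    dist = (bin_codes ^ bin_codes[q]).sum(axis=1)            # Hamming filtering
    mask = (dist <= radius) & (np.arange(len(dist)) != q)
    candidates = np.flatnonzero(mask)
    unit = real_reprs / np.linalg.norm(real_reprs, axis=1, keepdims=True)
    cos = unit[candidates] @ unit[q]                         # cosine ranking
    return candidates[np.argsort(-cos)]                      # best matches first
```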
If needed, precision can be further improved by using plain Binary PV-DBOW codes for filtering and the standard DBOW representation for ranking (Table 4, column C). Note, however, that the PV-DBOW model would then use approximately 10 times more parameters than Real-Binary PV-DBOW.

Figure 6: Information retrieval results for binary and real-valued codes learned by the Real-Binary PV-DBOW model with bigrams, on (a) 20 Newsgroups and (b) RCV1. Results are reported for 28-bit binary codes and 300d real-valued codes. A 300d PV-DBOW model is included for reference.

Table 4: Information retrieval results for the Real-Binary PV-DBOW model. All real-valued representations have 300 dimensions and are used for ranking documents according to the cosine similarity to the query. (A) Real-valued representations learned by Real-Binary PV-DBOW are used for ranking all test documents. (B) Binary codes are used for selecting documents within a given Hamming distance to the query and real-valued representations are used for ranking. (C) For comparison, variant B was repeated with binary codes inferred using plain Binary PV-DBOW and real-valued representations inferred using the original PV-DBOW model.

Code size | Hamming distance (bits) | 20 Newsgroups NDCG@10 (A / B / C) | RCV1 NDCG@10 (A / B / C)
28        | 2                       | 0.64 / 0.72 / 0.87                | 0.75 / 0.79 / 0.87
24        | 2                       | 0.60 / 0.65 / 0.86                | 0.74 / 0.76 / 0.83
24        | 3                       | 0.60 / 0.63 / 0.80                | 0.74 / 0.75 / 0.81
20        | 2                       | 0.58 / 0.60 / 0.73                | 0.73 / 0.73 / 0.79
16        | 2                       | 0.54 / 0.55 / 0.72                | 0.72 / 0.72 / 0.79

4 CONCLUSION
In this article we presented simple neural networks that learn short binary codes for text documents. Our networks extend Paragraph Vector by introducing a sigmoid nonlinearity before the softmax that predicts words in documents. Binary codes inferred with the proposed networks achieve higher retrieval precision than semantic hashing codes on two popular information retrieval benchmarks. They also retain a lot of their precision when trained on an unrelated text corpus. Finally, we presented a network that simultaneously learns short binary codes and longer, real-valued representations.

The best codes in our experiments were inferred with Binary PV-DBOW networks. The Binary PV-DM model did not perform as well. Li et al. (2015) made similar observations for Paragraph Vector models, and argue that in the distributed memory model the word context takes a lot of the burden of predicting the central word from the document code. An interesting line of future research could, therefore, focus on models that account for word order while learning good binary codes. It is also worth noting that Le & Mikolov (2014) constructed paragraph vectors by combining DM and DBOW representations. This strategy may prove useful also with binary codes, when employed with hashing algorithms designed for longer codes, e.g. with multi-index hashing (Norouzi et al., 2012).
ACKNOWLEDGMENTS
This research is supported by the Polish National Science Centre grant no. DEC-2013/09/B/ST6/01549 "Interactive Visual Text Analytics (IVTA): Development of novel, user-driven text mining and visualization methods for large text corpora exploration." This research was carried out with the support of the "HPC Infrastructure for Grand Challenges of Science and Engineering" project, co-financed by the European Regional Development Fund under the Innovative Economy Operational Programme. This research was supported in part by the PL-Grid Infrastructure.

REFERENCES
Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
Moses S. Charikar. Similarity estimation techniques from rounding algorithms. In Proceedings of the thirty-fourth annual ACM Symposium on Theory of Computing, pp. 380–388. ACM, 2002.
Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, volume 1, pp. 1–10. ACL, 2015.
Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704, 2016.
John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.
John Gantz and David Reinsel. The digital universe in 2020: Big data, bigger digital shadows, and biggest growth in the far east. Technical report, IDC, 2012.
Yunchao Gong and Svetlana Lazebnik. Iterative quantization: A procrustean approach to learning binary codes. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pp. 817–824. IEEE, 2011.
Michael Gutmann and Aapo Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In International Conference on Artificial Intelligence and Statistics, pp. 297–304, 2010.
Piotr Indyk and Rajeev Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. In Proceedings of the thirtieth annual ACM Symposium on Theory of Computing, pp. 604–613. ACM, 1998.
Kalervo Järvelin and Jaana Kekäläinen. Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems (TOIS), 20(4):422–446, 2002.
Ryan Kiros, Yukun Zhu, Ruslan R. Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Skip-thought vectors. In Advances in Neural Information Processing Systems, pp. 3294–3302, 2015.
Alex Krizhevsky and Geoffrey E. Hinton. Using very deep autoencoders for content-based image retrieval. In Proceedings of the 19th European Symposium on Artificial Neural Networks, pp. 489–494, 2011.
Quoc Le and Tomas Mikolov. Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on Machine Learning, pp. 1188–1196, 2014.
Bofang Li, Tao Liu, Xiaoyong Du, Deyuan Zhang, and Zhe Zhao. Learning document embeddings by predicting n-grams for sentiment classification of long movie reviews. arXiv preprint arXiv:1512.08183, 2015.
Kevin Lin, Huei Fang Yang, Jen Hao Hsiao, and Chu Song Chen. Deep learning of binary hash codes for fast image retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 27–35, 2015.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
Mohammad Norouzi, Ali Punjani, and David J Fleet. Fast search in Hamming space with multi-index hashing. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pp. 3108-3115. IEEE, 2012.
Ruslan Salakhutdinov and Geoffrey E Hinton. Semantic hashing. International Journal of Approximate Reasoning, 50(7):969-978, 2009.
Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958, 2014.
Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579-2605, 2008.
Jingdong Wang, Heng Tao Shen, Jingkuan Song, and Jianqiu Ji. Hashing for similarity search: A survey. arXiv preprint arXiv:1408.2927, 2014.
Max Welling, Michal Rosen-Zvi, and Geoffrey E Hinton. Exponential family harmoniums with an application to information retrieval. In Advances in Neural Information Processing Systems, pp. 1481-1488, 2004.

A VISUALIZATION OF BINARY PV CODES
For an additional comparison with semantic hashing, we used t-distributed Stochastic Neighbor Embedding (van der Maaten & Hinton, 2008) to construct two-dimensional visualizations of codes learned by Binary PV-DBOW with bigrams. We used the same subset of newsgroups and RCV1 topics that is pictured in Salakhutdinov & Hinton (2009, Figure 5). Codes learned by Binary PV-DBOW (Figure 7) appear slightly more clustered.

Figure 7: t-SNE visualization of 128- and 32-dimensional binary paragraph vector codes; the Hamming distance was used to calculate code similarity. (a) A subset of the 20 Newsgroups dataset: green - soc.religion.christian, red - talk.politics.guns, blue - rec.sport.hockey, brown - talk.politics.mideast, magenta - comp.graphics, black - sci.crypt. (b) A subset of the RCV1 dataset: green - disasters and accidents, red - government borrowing, blue - accounts/earnings, magenta - energy markets, black - EC monetary/economic.
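For readers who want to reproduce this kind of plot, a minimal sketch with scikit-learn, assuming the codes are available as a 0/1 matrix; the Hamming metric mirrors how code similarity is measured in the caption of Figure 7:

import numpy as np
from sklearn.manifold import TSNE

def embed_codes(codes, random_state=0):
    """Project binary paragraph vector codes to 2-d with t-SNE,
    using Hamming distance between codes. codes: (N, n_bits) 0/1 array."""
    tsne = TSNE(n_components=2, metric="hamming", random_state=random_state)
    return tsne.fit_transform(codes.astype(np.float64))

The returned (N, 2) coordinates can then be scattered and colored by topic label to obtain plots like Figure 7.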
SyxeqhP9ll
Published as a conference paper at ICLR 2017

CALIBRATING ENERGY-BASED GENERATIVE ADVERSARIAL NETWORKS

Zihang Dai1, Amjad Almahairi2, Philip Bachman3, Eduard Hovy1 & Aaron Courville2
1 Language Technologies Institute, Carnegie Mellon University.
2 MILA, Université de Montréal.
3 Maluuba Research.

ABSTRACT
In this paper we propose equipping Generative Adversarial Networks with the ability to produce direct energy estimates for samples. Specifically, we develop a flexible adversarial training framework, and prove that this framework not only ensures the generator converges to the true data distribution, but also enables the discriminator to retain the density information at the global optimum. We derive the analytic form of the induced solution, and analyze its properties. In order to make the proposed framework trainable in practice, we introduce two effective approximation techniques. Empirically, the experimental results closely match our theoretical analysis, verifying that the discriminator is able to recover the energy of the data distribution.

1 INTRODUCTION
Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) represent an important milestone on the path towards more effective generative models. GANs cast generative model training as a minimax game between a generative network (generator), which maps a random vector into the data space, and a discriminative network (discriminator), whose objective is to distinguish generated samples from real samples. Multiple researchers (Radford et al., 2015; Salimans et al., 2016; Zhao et al., 2016) have shown that the adversarial interaction with the discriminator can result in a generator that produces compelling samples. The empirical successes of the GAN framework were also supported by the theoretical analysis of Goodfellow et al., who showed that, under certain conditions, the distribution produced by the generator converges to the true data distribution, while the discriminator converges to a degenerate uniform solution.
While GANs have excelled as compelling sample generators, their use as general-purpose probabilistic generative models has been limited by the difficulty of using them to provide density estimates, or even unnormalized energy values, for sample evaluation.
It is tempting to consider the GAN discriminator as a candidate for providing this sort of scoring function. Conceptually, it is a trainable sample evaluation mechanism that, owing to the GAN training paradigm, could be closely calibrated to the distribution modeled by the generator. If the discriminator could retain fine-grained information about the relative quality of samples, measured for instance by probability density or unnormalized energy, it could be used as an evaluation metric. Such data-driven evaluators would be highly desirable for problems where it is difficult to define evaluation criteria that correlate well with human judgment. Indeed, the real-valued discriminator of the recently introduced energy-based GANs (Zhao et al., 2016) might seem like an ideal candidate energy function. Unfortunately, as we will show, the degenerate fate of the GAN discriminator at the optimum equally afflicts the energy-based GAN of Zhao et al. (2016).
In this paper we consider the questions: (i) does there exist an adversarial framework that induces a non-degenerate discriminator, and (ii) if so, what form will the resulting discriminator take?
We introduce a novel adversarial learning formulation, which leads to a non-degenerate discriminator while ensuring that the generator distribution matches the data distribution at the global optimum. We derive a general analytic form of the optimal discriminator, and discuss its properties and their relationship to the specific form of the training objective. We also discuss the connection between the proposed formulation and existing alternatives such as the approach of Kim & Bengio (2016). Finally, for a specific instantiation of the general formulation, we investigate two approximation techniques to optimize the training objective, and verify our results empirically.

(Part of this work was completed while the author was at Maluuba Research.)

2 RELATED WORK
Following a similar motivation, the field of Inverse Reinforcement Learning (IRL) (Ng & Russell, 2000) has been exploring ways to recover the "intrinsic" reward function (analogous to the discriminator) from observed expert trajectories (real samples). Taking this idea one step further, apprenticeship learning or imitation learning (Abbeel & Ng, 2004; Ziebart et al., 2008) aims at learning a policy (analogous to the generator) using the reward signals recovered by IRL. Notably, Ho & Ermon (2016) draw a connection between imitation learning and GANs by showing that the GAN formulation can be derived by imposing a specific regularization on the reward function. Also, under a special case of their formulation, Ho & Ermon provide a duality-based interpretation of the problem, which inspires our theoretical analysis. However, as the focus of Ho & Ermon (2016) is only on the policy, the authors explicitly propose to bypass the intermediate IRL step, and thus provide no analysis of the learned reward function.
The GAN models most closely related to our proposed framework are the energy-based GAN models of Zhao et al. (2016) and Kim & Bengio (2016). In the next section, we show how one can derive both of these approaches from different assumptions regarding regularization of the generative model.

3 ALTERNATIVE FORMULATION OF ADVERSARIAL TRAINING
3.1 BACKGROUND
Before presenting the proposed formulation, we first state some basic assumptions required by the analysis, and introduce the notation used throughout the paper.
Following the original work on GANs (Goodfellow et al., 2014), our analysis focuses on the non-parametric case, where all models are assumed to have infinite capacity. While many of the non-parametric intuitions can directly transfer to the parametric case, we will point out cases where this transfer fails. We assume a finite data space throughout the analysis, to avoid technical machinery out of the scope of this paper. Our results, however, can be extended to continuous data spaces, and our experiments are indeed performed on continuous data.
Let $\mathcal{X}$ be the data space under consideration, and $\mathcal{P} = \{p \mid p(x) \geq 0, \forall x \in \mathcal{X}, \sum_{x \in \mathcal{X}} p(x) = 1\}$ be the set of all proper distributions defined on $\mathcal{X}$. Then, $p_{\text{data}} \in \mathcal{P} : \mathcal{X} \mapsto \mathbb{R}$ and $p_{\text{gen}} \in \mathcal{P} : \mathcal{X} \mapsto \mathbb{R}$ will denote the true data distribution and the generator distribution. $\mathbb{E}_{x \sim p} f(x)$ denotes the expectation of the quantity $f(x)$ w.r.t. $x$ drawn from $p$.
Finally, the term "discriminator" will refer to any structure that provides training signals to the generator based on some measure of difference between the generator distribution and the real data distribution, which includes but is not limited to f-divergence.

3.2 PROPOSED FORMULATION
In order to understand the motivation of the proposed approach, it is helpful to first analyze the optimization dynamics near convergence in GANs.
When the generator distribution matches the data distribution, the training signal (gradient) w.r.t. the discriminator vanishes. At this point, assume the discriminator still retains density information, and views some samples as more real and others as less. This discriminator will produce a training signal (gradient) w.r.t. the generator, pushing the generator to generate samples that appear more real to the discriminator. Critically, this training signal is the sole driver of the generator's training. Hence, the generator distribution will diverge from the data distribution. In other words, as long as the discriminator retains relative density information, the generator distribution cannot stably match the data distribution. Thus, in order to keep the generator stationary at the data distribution, the discriminator must assign flat (exactly the same) density to all samples at the optimum.
From the analysis above, the fundamental difficulty is that the generator only receives a single training signal (gradient) from the discriminator, which it has to follow. To keep the generator stationary, this single training signal (gradient) must vanish, which requires a degenerate discriminator. In this work, we propose to tackle this single-training-signal constraint directly. Specifically, we introduce a novel adversarial learning formulation which incorporates an additional training signal to the generator, such that this additional signal can
- balance (cancel out) the discriminator signal at the optimum, so that the generator can stay stationary even if the discriminator assigns non-flat density to samples;
- cooperate with the discriminator signal to make sure the generator converges to the data distribution, and the discriminator retains the correct relative density information.
The proposed formulation can be written as the following minimax training objective:
$$\max_{c} \min_{p_{\text{gen}} \in \mathcal{P}} \; \mathbb{E}_{x \sim p_{\text{gen}}}[c(x)] - \mathbb{E}_{x \sim p_{\text{data}}}[c(x)] + K(p_{\text{gen}}), \qquad (1)$$
where $c(x): \mathcal{X} \mapsto \mathbb{R}$ is the discriminator that assigns each data point an unbounded scalar cost, and $K(p_{\text{gen}}): \mathcal{P} \mapsto \mathbb{R}$ is some (functionally) differentiable, convex function of $p_{\text{gen}}$. Compared to the original GAN, despite the similar minimax surface form, the proposed formulation has two crucial distinctions.
Firstly, while the GAN discriminator tries to distinguish "fake" samples from real ones using binary classification, the proposed discriminator achieves this by assigning lower cost to real samples and higher cost to "fake" ones. This distinction can be seen from the first two terms of Eqn. (1), where the discriminator $c(x)$ is trained to widen the expected cost gap between "fake" and real samples, while the generator is adversarially trained to minimize it. In addition to the different adversarial mechanism, a calibrating term $K(p_{\text{gen}})$ is introduced to provide a countervailing source of training signal for $p_{\text{gen}}$, as we motivated above. For now, the form of $K(p_{\text{gen}})$ has not been specified.
But as we will see later, its choice will directly decide the form of the optimal discriminator $c^*(x)$.
With the specific optimization objective, we next provide a theoretical characterization of both the generator and the discriminator at the global optimum.
Define $L(p_{\text{gen}}, c) = \mathbb{E}_{x \sim p_{\text{gen}}}[c(x)] - \mathbb{E}_{x \sim p_{\text{data}}}[c(x)] + K(p_{\text{gen}})$; then $L(p_{\text{gen}}, c)$ is the Lagrange dual function of the following optimization problem:
$$\min_{p_{\text{gen}} \in \mathcal{P}} K(p_{\text{gen}}) \quad \text{s.t.} \quad p_{\text{gen}}(x) - p_{\text{data}}(x) = 0, \; \forall x \in \mathcal{X}, \qquad (2)$$
where $c(x), \forall x$, appears in $L(p_{\text{gen}}, c)$ as the dual variables introduced for the equality constraints. This duality relationship has been observed previously in (Ho & Ermon, 2016, equation (7)) under the adversarial imitation learning setting. However, in their case, the focus was fully on the generator side (the induced policy), and no analysis was provided for the discriminator (reward function).
In order to characterize $c^*$, we first expand the set constraint on $p_{\text{gen}}$ into explicit equality and inequality constraints:
$$\min_{p_{\text{gen}}} K(p_{\text{gen}}) \quad \text{s.t.} \quad p_{\text{gen}}(x) - p_{\text{data}}(x) = 0, \; \forall x; \quad p_{\text{gen}}(x) \geq 0, \; \forall x; \quad \sum_{x \in \mathcal{X}} p_{\text{gen}}(x) - 1 = 0. \qquad (3)$$
Notice that $K(p_{\text{gen}})$ is a convex function of $p_{\text{gen}}(x)$ by definition, and both the equality and inequality constraints are affine functions of $p_{\text{gen}}(x)$. Thus, problem (2) is a convex optimization problem. What's more, since (i) $\operatorname{dom} K$ is open, and (ii) there exists a feasible solution $p_{\text{gen}} = p_{\text{data}}$ to (3), by the refined Slater's condition (Boyd & Vandenberghe, 2004, page 226), we can further verify that strong duality holds for (3). With strong duality, a typical approach to characterizing the optimal solution is to apply the Karush-Kuhn-Tucker (KKT) conditions, which gives rise to the following theorem.
Proposition 3.1. By the KKT conditions of the convex problem (3), at the global optimum, the optimal generator distribution $p^*_{\text{gen}}$ matches the true data distribution $p_{\text{data}}$, and the optimal discriminator $c^*(x)$ has the following form:
$$c^*(x) = -\frac{\partial K(p_{\text{gen}})}{\partial p_{\text{gen}}(x)}\bigg|_{p_{\text{gen}} = p_{\text{data}}} + \mu^*(x) - \lambda^*, \quad \forall x \in \mathcal{X}, \qquad (4)$$
where $\mu^*(x) = 0$ if $p_{\text{data}}(x) > 0$, and $\mu^*(x) = u_x$ if $p_{\text{data}}(x) = 0$; $\lambda^* \in \mathbb{R}$ is an under-determined real number independent of $x$; and $u_x \in \mathbb{R}^+$ is an under-determined non-negative real number.
The detailed proof of Proposition 3.1 is provided in Appendix A.1. From (4), we can see that the exact form of the optimal discriminator depends on the term $K(p_{\text{gen}})$, or more specifically its gradient. But before we instantiate $K(p_{\text{gen}})$ with specific choices and show the corresponding forms of $c^*(x)$, we first discuss some general properties of $c^*(x)$ that do not depend on the choice of $K$.
Weak Support Discriminator. As part of the optimal discriminator function, the term $\mu^*(x)$ plays the role of a support discriminator. That is, it tries to distinguish the support of the data distribution, i.e. $\mathrm{SUPP}(p_{\text{data}}) = \{x \in \mathcal{X} \mid p_{\text{data}}(x) > 0\}$, from its complement set with zero probability, i.e. $\mathrm{SUPP}(p_{\text{data}})^\complement = \{x \in \mathcal{X} \mid p_{\text{data}}(x) = 0\}$. Specifically, for any $x \in \mathrm{SUPP}(p_{\text{data}})$ and $x' \in \mathrm{SUPP}(p_{\text{data}})^\complement$, it is guaranteed that $\mu^*(x) \leq \mu^*(x')$. However, because $\mu^*(\cdot)$ is under-determined, there is nothing preventing the inequality from degenerating into an equality. Therefore, we name it the weak support discriminator. But, in all cases, $\mu^*(\cdot)$ assigns zero cost to all data points within the support. As a result, it does not possess any fine-grained density information inside the data support. It is worth pointing out that, in the parametric case, because of the smoothness and the generalization properties of the parametric model, the learned discriminator may generalize beyond the data support.
Global Bias. In (4), the term $\lambda^*$ is a scalar value shared for all $x$.
As a result, it does not affect the relative cost among data points, and only serves as a global bias for the discriminator function.
Having discussed these general properties, we now consider some specific cases of the convex function $K$, and analyze the resulting optimal discriminator $c^*(x)$ in detail.
1. First, let us consider the case where $K$ is the negative entropy of the generator distribution, i.e. $K(p_{\text{gen}}) = -H(p_{\text{gen}})$. Taking the derivative of the negative entropy w.r.t. $p_{\text{gen}}(x)$, we have
$$c^*_{\text{ent}}(x) = -\log p_{\text{data}}(x) - 1 + \mu^*(x) - \lambda^*, \quad \forall x \in \mathcal{X}, \qquad (5)$$
where $\mu^*(x)$ and $\lambda^*$ have the same definitions as in (4).
Up to a constant, this form of $c^*_{\text{ent}}(x)$ is exactly the energy function of the data distribution $p_{\text{data}}(x)$. This elegant result has deep connections to several existing formulations, including max-entropy imitation learning (Ziebart et al., 2008) and the directed-generator-trained energy-based model (Kim & Bengio, 2016). The core difference is that these previous formulations are originally derived from maximum-likelihood estimation, and thus the minimax optimization is only implicit. In contrast, with an explicit minimax formulation we can develop a better understanding of the induced solution. For example, the global bias $\lambda^*$ suggests that there exists more than one stable equilibrium the optimal discriminator can actually reach. Further, $\mu^*(x)$ can be understood as a support discriminator that poses extra cost on generator samples which fall in zero-probability regions of the data space.
2. When $K(p_{\text{gen}}) = \frac{1}{2}\sum_{x \in \mathcal{X}} p_{\text{gen}}(x)^2 = \frac{1}{2}\|p_{\text{gen}}\|_2^2$, which can be understood as posing $\ell_2$ regularization on $p_{\text{gen}}$, we have $\frac{\partial K(p_{\text{gen}})}{\partial p_{\text{gen}}(x)}\big|_{p_{\text{gen}} = p_{\text{data}}} = p_{\text{data}}(x)$, and it follows that
$$c^*_{\ell_2}(x) = -p_{\text{data}}(x) + \mu^*(x) - \lambda^*, \quad \forall x \in \mathcal{X}, \qquad (6)$$
with $\mu^*(x)$ and $\lambda^*$ defined as in (4).
Surprisingly, the result suggests that the optimal discriminator $c^*_{\ell_2}(x)$ directly recovers the negative probability $-p_{\text{data}}(x)$, shifted by a constant. Thus, similar to the entropy solution (5), it fully retains the relative density information of data points within the support.
However, because of the under-determined term $\mu^*(x)$, we cannot recover the distribution density $p_{\text{data}}$ exactly from either $c^*_{\ell_2}$ or $c^*_{\text{ent}}$ if the data support is finite. Whether this ambiguity can be resolved is beyond the scope of this paper, but it poses an interesting research problem.
3. Finally, let us consider a degenerate case, where $K(p_{\text{gen}})$ is a constant. That is, we don't provide any additional training signal for $p_{\text{gen}}$ at all. With $K(p_{\text{gen}}) = \text{const}$, we simply have
$$c^*_{\text{cst}}(x) = \mu^*(x) - \lambda^*, \quad \forall x \in \mathcal{X}, \qquad (7)$$
whose discriminative power is fully controlled by the weak support discriminator $\mu^*(x)$. It follows that $c^*_{\text{cst}}(x)$ won't be able to discriminate data points within the support of $p_{\text{data}}$, and its power to distinguish data from $\mathrm{SUPP}(p_{\text{data}})$ and $\mathrm{SUPP}(p_{\text{data}})^\complement$ is weak. This closely matches the intuitive argument at the beginning of this section.
Note that when $K(p_{\text{gen}})$ is a constant, the objective function (1) simplifies to
$$\max_{c} \min_{p_{\text{gen}} \in \mathcal{P}} \; \mathbb{E}_{x \sim p_{\text{gen}}}[c(x)] - \mathbb{E}_{x \sim p_{\text{data}}}[c(x)], \qquad (8)$$
which is very similar to the EBGAN objective (Zhao et al., 2016, equations (2) and (4)). As we show in Appendix A.2, compared to the objective in (8), the EBGAN objective puts extra constraints on the allowed discriminator function. In spite of that, the EBGAN objective suffers from the single-training-signal problem and does not guarantee that the discriminator will recover the real energy function (see Appendix A.2 for detailed analysis).
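To spell out the step from (4) to (5) and (6), the required derivatives are elementary; a short worked computation in the paper's own notation:

\[
\begin{aligned}
&K(p) = -H(p) = \sum_{x} p(x)\log p(x)
\;\Rightarrow\;
\frac{\partial K(p_{\mathrm{gen}})}{\partial p_{\mathrm{gen}}(x)} = \log p_{\mathrm{gen}}(x) + 1
\;\Rightarrow\;
c^{*}_{\mathrm{ent}}(x) = -\log p_{\mathrm{data}}(x) - 1 + \mu^{*}(x) - \lambda^{*},\\
&K(p) = \tfrac{1}{2}\lVert p\rVert_2^2 = \tfrac{1}{2}\sum_{x} p(x)^2
\;\Rightarrow\;
\frac{\partial K(p_{\mathrm{gen}})}{\partial p_{\mathrm{gen}}(x)} = p_{\mathrm{gen}}(x)
\;\Rightarrow\;
c^{*}_{\ell_2}(x) = -p_{\mathrm{data}}(x) + \mu^{*}(x) - \lambda^{*}.
\end{aligned}
\]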
As we finish the theoretical analysis of the proposed formulation, we want to point out that simply adding the same term $K(p_{\text{gen}})$ to the original GAN formulation will not lead to both a generator that matches the data distribution and a discriminator that retains the density information (see Appendix A.3 for detailed analysis).

4 PARAMETRIC INSTANTIATION WITH ENTROPY APPROXIMATION
While the discussion in the previous sections focused on the non-parametric case, in practice we are limited to a finite amount of data, and the actual problem involves high-dimensional continuous spaces. Thus, we resort to parametric representations for both the generator and the discriminator. In order to train the generator using standard back-propagation, we do not parametrize the generator distribution directly. Instead, we parametrize a directed generator network that transforms random noise $z \sim p_z(z)$ to samples from a continuous data space $\mathbb{R}^n$. Consequently, we don't have analytical access to the generator distribution, which is defined implicitly by the generator network's noise-to-data mapping. However, the regularization term $K(p_{\text{gen}})$ in the training objective (1) requires the generator distribution. Faced with this problem, we focus on the max-entropy formulation, and exploit two different approximations of the regularization term $K(p_{\text{gen}}) = -H(p_{\text{gen}})$.

4.1 NEAREST-NEIGHBOR ENTROPY GRADIENT APPROXIMATION
The first proposed solution is built upon an intuitive interpretation of the entropy gradient. Firstly, since we construct $p_{\text{gen}}$ by applying a deterministic, differentiable transform $g_\theta$ to samples $z$ from a fixed distribution $p_z$, we can write the gradient of $H(p_{\text{gen}})$ with respect to the generator parameters $\theta$ as follows:
$$\nabla_\theta H(p_{\text{gen}}) = -\mathbb{E}_{z \sim p_z}\left[\nabla_\theta \log p_{\text{gen}}(g_\theta(z))\right] = -\mathbb{E}_{z \sim p_z}\left[\frac{\partial g_\theta(z)}{\partial \theta}\,\frac{\partial \log p_{\text{gen}}(g_\theta(z))}{\partial g_\theta(z)}\right], \qquad (9)$$
where the first equality relies on the "reparametrization trick". Equation (9) implies that, if we can compute the gradient of the generator log-density $\log p_{\text{gen}}(x)$ w.r.t. any $x = g_\theta(z)$, then we can directly construct a Monte-Carlo estimate of the entropy gradient $\nabla_\theta H(p_{\text{gen}})$ using samples from the generator.
Intuitively, for any generated data point $x = g_\theta(z)$, the term $\frac{\partial \log p_{\text{gen}}(x)}{\partial x}$ essentially describes the direction of local change in the sample space that will increase the log-density. Motivated by this intuition, we propose to form a local Gaussian approximation $p^i_{\text{gen}}$ of $p_{\text{gen}}$ around each point $x_i$ in a batch of samples $\{x_1, \dots, x_n\}$ from the generator, and then compute the gradient $\frac{\partial \log p_{\text{gen}}(x_i)}{\partial x_i}$ based on the Gaussian approximation. Specifically, each local Gaussian approximation $p^i_{\text{gen}}$ is formed by finding the $k$ nearest neighbors of $x_i$ in the batch $\{x_1, \dots, x_n\}$, and then placing an isotropic Gaussian distribution at their mean (i.e. maximum likelihood). Based on the isotropic Gaussian approximation, the resulting gradient has the following form:
$$\frac{\partial \log p_{\text{gen}}(x_i)}{\partial x_i} \propto \mu_i - x_i, \quad \text{where } \mu_i = \frac{1}{k}\sum_{x' \in \mathrm{KNN}(x_i)} x' \text{ is the mean of the Gaussian.} \qquad (10)$$
Finally, note that the scale of this gradient approximation may not be reliable. To fix this problem, we normalize the approximated gradient to unit norm, and use a single hyper-parameter to model the scale for all $x$, leading to the following entropy gradient approximation:
$$\nabla_\theta H(p_{\text{gen}}) \approx -\alpha\,\frac{1}{k}\sum_{x_i = g_\theta(z_i)} \frac{\partial x_i}{\partial \theta}\,\frac{\mu_i - x_i}{\|\mu_i - x_i\|_2}, \qquad (11)$$
where $\alpha$ is the hyper-parameter and $\mu_i$ is defined as in equation (10).
An obvious weakness of this approximation is that it relies on Euclidean distance to find the $k$ nearest neighbors. However, Euclidean distance is usually not the proper metric to use when the effective dimension is very high. As the problem is highly challenging, we leave it for future work.
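A minimal numpy sketch of the batch-wise nearest-neighbor approximation in equations (10)-(11); the function name and the default k are ours, and in the full model the returned directions would enter back-propagation through the generator together with the scale hyper-parameter α:

import numpy as np

def knn_entropy_grad_directions(x, k=5):
    """Approximate d log p(x_i) / d x_i for each sample in a batch,
    as in Section 4.1: fit an isotropic Gaussian to the k nearest
    neighbors of x_i and use the normalized direction mu_i - x_i.

    x: (n, d) batch of generator samples. Returns (n, d) unit vectors.
    """
    # Pairwise squared Euclidean distances within the batch.
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude the point itself
    knn = np.argsort(d2, axis=1)[:, :k]   # indices of k nearest neighbors
    mu = x[knn].mean(axis=1)              # (n, d) neighbor means
    g = mu - x                            # direction of increasing density
    return g / (np.linalg.norm(g, axis=1, keepdims=True) + 1e-12)

Maximizing entropy then amounts to pushing each sample against its returned direction, scaled by α as in equation (11).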
4.2 VARIATIONAL LOWER BOUND ON THE ENTROPY
Another approach we consider relies on defining and maximizing a variational lower bound on the entropy $H(p_{\text{gen}}(x))$ of the generator distribution. We can define the joint distribution over observed data and the noise variables as $p_{\text{gen}}(x, z) = p_{\text{gen}}(x \mid z)\,p_{\text{gen}}(z)$, where simply $p_{\text{gen}}(z) = p_z(z)$ is a fixed prior. Using the joint, we can also define the marginal $p_{\text{gen}}(x)$ and the posterior $p_{\text{gen}}(z \mid x)$. We can also write the mutual information between the observed data and the noise variables as
$$I(p_{\text{gen}}(x); p_{\text{gen}}(z)) = H(p_{\text{gen}}(x)) - H(p_{\text{gen}}(x \mid z)) = H(p_{\text{gen}}(z)) - H(p_{\text{gen}}(z \mid x)), \qquad (12)$$
where $H(p_{\text{gen}}(\cdot \mid \cdot))$ denotes the conditional entropy. By reorganizing terms in this definition, we can write the entropy $H(p_{\text{gen}}(x))$ as
$$H(p_{\text{gen}}(x)) = H(p_{\text{gen}}(z)) - H(p_{\text{gen}}(z \mid x)) + H(p_{\text{gen}}(x \mid z)). \qquad (13)$$
We can think of $p_{\text{gen}}(x \mid z)$ as a peaked Gaussian with a fixed, diagonal covariance, and hence its conditional entropy is constant and can be dropped. Furthermore, $H(p_{\text{gen}}(z))$ is also assumed to be fixed a priori. Hence, we can maximize $H(p_{\text{gen}}(x))$ by minimizing the conditional entropy
$$H(p_{\text{gen}}(z \mid x)) = -\mathbb{E}_{x \sim p_{\text{gen}}(x)}\Big[\mathbb{E}_{z \sim p_{\text{gen}}(z \mid x)}\big[\log p_{\text{gen}}(z \mid x)\big]\Big]. \qquad (14)$$
Optimizing this term is still problematic, because (i) we do not have access to the posterior $p_{\text{gen}}(z \mid x)$, and (ii) we cannot sample from it. Therefore, we instead minimize a variational upper bound defined by an approximate posterior $q_{\text{gen}}(z \mid x)$:
$$H(p_{\text{gen}}(z \mid x)) = -\mathbb{E}_{x \sim p_{\text{gen}}(x)}\Big[\mathbb{E}_{z \sim p_{\text{gen}}(z \mid x)}\big[\log q_{\text{gen}}(z \mid x)\big]\Big] - \mathbb{E}_{x}\big[\mathrm{KL}(p_{\text{gen}}(z \mid x) \,\|\, q_{\text{gen}}(z \mid x))\big] \leq -\mathbb{E}_{x \sim p_{\text{gen}}(x)}\Big[\mathbb{E}_{z \sim p_{\text{gen}}(z \mid x)}\big[\log q_{\text{gen}}(z \mid x)\big]\Big] = U(q_{\text{gen}}). \qquad (15)$$
We can also rewrite the variational upper bound as
$$U(q_{\text{gen}}) = -\mathbb{E}_{(x,z) \sim p_{\text{gen}}(x,z)}\big[\log q_{\text{gen}}(z \mid x)\big] = -\mathbb{E}_{z \sim p_{\text{gen}}(z)}\Big[\mathbb{E}_{x \sim p_{\text{gen}}(x \mid z)}\big[\log q_{\text{gen}}(z \mid x)\big]\Big], \qquad (16)$$
which can be optimized efficiently with standard back-propagation and Monte Carlo integration of the relevant expectations based on independent samples drawn from the joint $p_{\text{gen}}(x, z)$. By minimizing this upper bound on the conditional entropy $H(p_{\text{gen}}(z \mid x))$, we are effectively maximizing a variational lower bound on the entropy $H(p_{\text{gen}}(x))$.
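A sketch of how the bound (16) becomes a Monte-Carlo training loss, assuming a diagonal-Gaussian approximate posterior as in Appendix B.1; infer_net is a placeholder for the trainable inference network, not an API from the released code:

import numpy as np

def gaussian_logpdf(z, mean, log_var):
    # log N(z; mean, diag(exp(log_var))), summed over dimensions.
    return -0.5 * np.sum(
        log_var + np.log(2 * np.pi) + (z - mean) ** 2 / np.exp(log_var),
        axis=1)

def entropy_upper_bound(z, x, infer_net):
    """Monte-Carlo estimate of U(q) = -E_{(x,z)~p_gen} log q(z|x),
    equation (16): z is the noise batch, x = generator(z) the samples,
    and infer_net(x) returns (mean, log_var) of the Gaussian q(z|x).
    Minimizing this bound maximizes a lower bound on H(p_gen(x))."""
    mean, log_var = infer_net(x)
    return -np.mean(gaussian_logpdf(z, mean, log_var))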
5 EXPERIMENTS
In this section, we verify our theoretical results empirically on several synthetic and real datasets. In particular, we evaluate whether the discriminator obtained from entropy-regularized adversarial training can capture the density information (in the form of energy), while making sure the generator distribution matches the data distribution. For convenience, we refer to the obtained models as EGAN-Ent. Our experimental setting closely follows the recommendations of Radford et al. (2015), except in Sec. 5.1, where we use fully-connected models (see Appendix B.1 for details).(1)
(1) For more details, please refer to https://github.com/zihangdai/cegan_iclr2017.

5.1 SYNTHETIC LOW-DIMENSIONAL DATA
First, we consider three synthetic datasets in 2-dimensional space, which are drawn from the following distributions: (i) a mixture of 4 Gaussians with equal mixture weights, (ii) a mixture of 200 Gaussians arranged as two spirals (100 components per spiral), and (iii) a mixture of 2 Gaussians with highly biased mixture weights, $P(c_1) = 0.9$, $P(c_2) = 0.1$. We visualize the ground-truth energy of these distributions along with 100K training samples in Figure 1.

Figure 1: True energy functions and samples from the synthetic distributions. Green dots in the sample plots indicate the mean of each Gaussian component.

Since the data lies in 2-dimensional space, we can easily visualize both the learned generator (by drawing samples) and the discriminator for direct comparison and evaluation. We evaluate our EGAN-Ent model using both approximations: the nearest-neighbor based approximation (EGAN-Ent-NN) and the variational-inference based approximation (EGAN-Ent-VI), and compare them with two baselines: the original GAN and the energy-based GAN with no regularization (EGAN-Const).
Experiment results are summarized in Figure 2 for the baseline models, and in Figure 3 for the proposed models. As we can see, all four models can generate perfect samples. However, for the discriminator, both GAN and EGAN-Const lead to degenerate solutions, assigning flat energy inside the empirical data support. In comparison, EGAN-Ent-VI and EGAN-Ent-NN clearly capture the density information, though to different degrees. Specifically, on the equally weighted Gaussian mixture and the two-spiral mixture datasets, EGAN-Ent-NN tends to give more accurate and fine-grained solutions compared to EGAN-Ent-VI. However, on the biased Gaussian mixture dataset, EGAN-Ent-VI actually fails to capture the correct mixture weights of the two modes, incorrectly assigning lower energy to the mode with lower probability (smaller weight). In contrast, EGAN-Ent-NN perfectly captures the bias in the mixture weights, and obtains a contour very close to the ground truth.
To better quantify these differences, we present a detailed comparison based on KL divergence in Appendix B.2. Moreover, the performance difference between EGAN-Ent-VI and EGAN-Ent-NN on the biased Gaussian mixture reveals a limitation of the variational-inference based approximation, namely that it provides inaccurate gradients. Due to space considerations, we refer interested readers to Appendix B.3 for a detailed discussion.

Figure 2: Learned energies and samples from baseline models whose discriminator cannot retain density information at the optimum: (a) standard GAN; (b) Energy GAN without regularization (EGAN-Const). In the sample plots, blue dots indicate generated samples, and red dots indicate real ones.
Figure 3: Learned energies and samples from proposed models whose discriminator can retain density information at the optimum: (a) entropy-regularized Energy GAN with variational inference approximation (EGAN-Ent-VI); (b) entropy-regularized Energy GAN with nearest neighbor approximation (EGAN-Ent-NN). Blue dots are generated samples, and red dots are real ones.
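For concreteness, minimal samplers for the three synthetic datasets; the paper does not list the exact means, scales, or spiral parametrization, so the constants below are illustrative assumptions only:

import numpy as np

rng = np.random.default_rng(0)

def mixture_4_gaussians(n, std=0.2):
    # (i) 4 equally weighted Gaussians placed on a square.
    centers = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], float)
    idx = rng.integers(0, 4, size=n)
    return centers[idx] + std * rng.standard_normal((n, 2))

def two_spirals(n, std=0.02, turns=2.0):
    # (ii) two spiral arms; the paper uses 100 Gaussian components per
    # arm, approximated here by sampling angles along each arm.
    half = n // 2
    t = rng.uniform(0, turns * np.pi, size=half)
    arm = np.stack([t * np.cos(t), t * np.sin(t)], axis=1) / (turns * np.pi)
    data = np.concatenate([arm, -arm])
    return data + std * rng.standard_normal(data.shape)

def biased_2_gaussians(n, std=0.2):
    # (iii) two Gaussians with mixture weights 0.9 / 0.1.
    idx = rng.random(n) < 0.9
    centers = np.where(idx[:, None], [1.0, 0.0], [-1.0, 0.0])
    return centers + std * rng.standard_normal((n, 2))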
Blue dots are generated samples, and red dots are real ones.digit from the NIST dataset.2We compare the ability of EGAN-Ent-NN with both EGAN-Constand GAN on ranking a set of 1,000 images, half of which are generated samples and the rest are realtest images. Figures 4 and 5 show the top-100 and bottom-100 ranked images respectively for eachmodel, after training them on digit 1. We also show in Figure 7 the mean of all training samples,so we can get a sense of what is the most common style (highest density) of digit 1 in NIST. Wecan notice that all of the top-ranked images by EGAN-Ent-NN look similar to the mean sample.In addition, the lowest-ranked images are clearly different from the mean image, with either high(clockwise or counter-clockwise) rotation degrees from the mean, or an extreme thickness level. Wedo not see such clear distinction in other models. We provide in the appendix B.4 the ranking of thefull set of images.5.3 S AMPLE QUALITY ON NATURAL IMAGE DATASETSIn this last set of experiments, we evaluate the visual quality of samples generated by our modelin two datasets of natural images, namely CIFAR-10 and CelebA. We employ here the variational-based approximation for entropy regularization, which can scale well to high-dimensional data.Figure 6 shows samples generated by EGAN-Ent-VI. We can see that despite the noisy gradientsprovided by the variational approximation, our model is able to generate high-quality samples.2https://www.nist.gov/srd/nist-special-database-19 , which is an extended versionof MNIST with an average of over 74K examples per digit.8Published as a conference paper at ICLR 2017(a) EGAN-Ent-NN(b) EGAN-Const(c) GANFigure 4: 100 highest-ranked images out of 1000 generated and reals (bounding box) samples.(a) EGAN-Ent-NN(b) EGAN-Const(c) GANFigure 5: 100 lowest-ranked images out of 1000 generated and reals (bounding box) samples.We futher validate the quality of our model’s samples on CIFAR-10 using the Inception score pro-posed by (Salimans et al., 2016)3. Table 1 shows the scores of our EGAN-Ent-VI, the best GANmodel from Salimans et al. (2016) which uses only unlabeled data, and an EGAN-Const modelwhich has the same architecture as our model. We notice that even without employing suggestedtechniques in Salimans et al. (2016), energy-based models perform quite similarly to the GANmodel. Furthermore, the fact that our model scores higher than EGAN-Const highlights the im-portance of entropy regularization in obtaining good quality samples.6 C ONCLUSIONIn this paper we have addressed a fundamental limitation in adversarial learning approaches, whichis their inability of providing sensible energy estimates for samples. We proposed a novel adversariallearning formulation which results in a discriminator function that recovers the true data energy. Weprovided a rigorous characterization of the learned discriminator in the non-parametric setting, andproposed two methods for instantiating it in the typical parametric setting. Our experimental resultsverify our theoretical analysis about the discriminator properties, and show that we can also obtainsamples of state-of-the-art quality.7 A CKNOWLEDGEMENTSWe would like to thank the developers of Theano (Theano Development Team, 2016) for developingsuch a powerful tool for scientific computing. 
Amjad Almahairi was supported by funding from Maluuba Research.

Figure 6: Samples generated from our model on (a) CIFAR-10 and (b) CelebA.

Table 1: Inception scores on CIFAR-10. (†) As reported in Salimans et al. (2016) without using labeled data.
Model | Our model | Improved GAN† | EGAN-Const
Score ± std. | 7.07 ± 0.10 | 6.86 ± 0.06 | 6.7447 ± 0.09

Figure 7: Mean digit.

REFERENCES
Pieter Abbeel and Andrew Y Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the twenty-first international conference on Machine learning, pp. 1. ACM, 2004.
Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge University Press, 2004.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.
Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. arXiv preprint arXiv:1606.03476, 2016.
Taesup Kim and Yoshua Bengio. Deep directed generative models with energy-based probability estimation. arXiv preprint arXiv:1606.03439, 2016.
A. Ng and S. Russell. Algorithms for inverse reinforcement learning. In ICML, pp. 663-670, 2000.
Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. arXiv preprint arXiv:1606.00709, 2016.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016.
Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016.
Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.
Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, and Anind K Dey. Maximum entropy inverse reinforcement learning. In AAAI, pp. 1433-1438, 2008.

A SUPPLEMENTARY MATERIALS FOR SECTION 3
A.1 OPTIMAL DISCRIMINATOR FORM UNDER THE PROPOSED FORMULATION
Proof of Proposition 3.1. Refining the Lagrangian $L(p_{\text{gen}}, c)$ by introducing additional dual variables for the probability constraints (the second and third), the new Lagrange function has the form
$$L(p_{\text{gen}}, c, \mu, \lambda) = K(p_{\text{gen}}) + \sum_{x \in \mathcal{X}} c(x)\big(p_{\text{gen}}(x) - p_{\text{data}}(x)\big) - \sum_{x \in \mathcal{X}} \mu(x)\,p_{\text{gen}}(x) + \lambda\Big(\sum_{x \in \mathcal{X}} p_{\text{gen}}(x) - 1\Big), \qquad (17)$$
where $c(x) \in \mathbb{R}, \forall x$; $\mu(x) \in \mathbb{R}^+, \forall x$; and $\lambda \in \mathbb{R}$ are the dual variables. The KKT conditions for the optimal primal and dual variables are as follows:
$$\frac{\partial K(p_{\text{gen}})}{\partial p_{\text{gen}}(x)}\bigg|_{p_{\text{gen}} = p_{\text{data}}} + c^*(x) - \mu^*(x) + \lambda^* = 0, \; \forall x \quad \text{(stationarity)}$$
$$\mu^*(x)\,p^*_{\text{gen}}(x) = 0, \; \forall x \quad \text{(complementary slackness)}$$
$$\mu^*(x) \geq 0, \; \forall x \quad \text{(dual feasibility)}$$
$$p^*_{\text{gen}}(x) \geq 0, \quad p^*_{\text{gen}}(x) = p_{\text{data}}(x), \; \forall x \quad \text{(primal feasibility)}$$
$$\sum_{x \in \mathcal{X}} p^*_{\text{gen}}(x) = 1 \quad \text{(primal feasibility)} \qquad (18)$$
Rearranging the conditions above, we get $p^*_{\text{gen}}(x) = p_{\text{data}}(x), \forall x \in \mathcal{X}$, as well as equation (4), which concludes the proof.

A.2 OPTIMAL CONDITIONS OF EBGAN
In Zhao et al. (2016), the training objectives of the generator and the discriminator cannot be written as a single minimax optimization problem, since the margin structure is only applied to the objective of the discriminator. In addition, the discriminator is designed to produce the mean squared reconstruction error of an auto-encoder structure.
This restricts the range of the discriminator output to be non-negative, which is equivalent to posing a set constraint on the discriminator under the non-parametric setting.
Thus, to characterize the optimal generator and discriminator, we adapt the same analysis logic used in the proof sketch of the original GAN (Goodfellow et al., 2014). Specifically, given a specific generator distribution $p_{\text{gen}}$, the optimal discriminator function given this generator distribution, $c^*(x; p_{\text{gen}})$, can be derived by examining the objective of the discriminator. Then, the conditionally optimal discriminator function is substituted into the training objective of $p_{\text{gen}}$, simplifying the "adversarial" training to a minimization problem only w.r.t. $p_{\text{gen}}$, which can be readily analyzed.
Firstly, given any generator distribution $p_{\text{gen}}$, the EBGAN training objective for the discriminator can be written in the following form:
$$c^*(x; p_{\text{gen}}) = \arg\max_{c \in \mathcal{C}} \; -\mathbb{E}_{p_{\text{gen}}}\big[\max(0, m - c(x))\big] - \mathbb{E}_{p_{\text{data}}}\big[c(x)\big] = \arg\max_{c \in \mathcal{C}} \; \mathbb{E}_{p_{\text{gen}}}\big[\min(0, c(x) - m)\big] - \mathbb{E}_{p_{\text{data}}}\big[c(x)\big], \qquad (19)$$
where $\mathcal{C} = \{c : c(x) \geq 0, \forall x \in \mathcal{X}\}$ is the set of allowed non-negative discriminator functions. Note this set constraint comes from the use of the mean squared reconstruction error, as discussed above.
Since problem (19) is independent w.r.t. each $x$, the optimal solution can be easily derived as
$$c^*(x; p_{\text{gen}}) = \begin{cases} 0, & p_{\text{gen}}(x) < p_{\text{data}}(x) \\ m, & p_{\text{gen}}(x) > p_{\text{data}}(x) \\ \alpha_x, & p_{\text{gen}}(x) = p_{\text{data}}(x) > 0 \\ \beta_x, & p_{\text{gen}}(x) = p_{\text{data}}(x) = 0 \end{cases} \qquad (20)$$
where $\alpha_x \in [0, m]$ is an under-determined number, $\beta_x \in [0, \infty)$ is another under-determined non-negative real number, and the subscripts in $\alpha_x, \beta_x$ reflect the fact that these under-determined values can be distinct for different $x$.
This way, the overall training objective can be cast into a minimization problem w.r.t. $p_{\text{gen}}$:
$$p^*_{\text{gen}} = \arg\min_{p_{\text{gen}} \in \mathcal{P}} \; \mathbb{E}_{x \sim p_{\text{gen}}}\big[c^*(x; p_{\text{gen}})\big] - \mathbb{E}_{x \sim p_{\text{data}}}\big[c^*(x; p_{\text{gen}})\big] = \arg\min_{p_{\text{gen}} \in \mathcal{P}} \sum_{x \in \mathcal{X}} \big[p_{\text{gen}}(x) - p_{\text{data}}(x)\big]\,c^*(x; p_{\text{gen}}), \qquad (21)$$
where the second term of the first line is implicitly defined, as the problem is an adversarial game between $p_{\text{gen}}$ and $c$.
Proposition A.1. The global optimum of the EBGAN training objective is achieved if and only if $p_{\text{gen}} = p_{\text{data}}$. At that point, $c^*(x)$ is fully under-determined.
Proof. The proof is established by showing contradiction.
Firstly, assume the optimal $p^*_{\text{gen}} \neq p_{\text{data}}$. Thus, there must exist a non-equal set $\mathcal{X}_{\neq} = \{x \mid p_{\text{data}}(x) \neq p^*_{\text{gen}}(x)\}$, which can be further split into two subsets: the greater-than set $\mathcal{X}_{>} = \{x \mid p^*_{\text{gen}}(x) > p_{\text{data}}(x)\}$ and the less-than set $\mathcal{X}_{<} = \{x \mid p^*_{\text{gen}}(x) < p_{\text{data}}(x)\}$. Similarly, we define the equal set $\mathcal{X}_{=} = \{x \mid p^*_{\text{gen}}(x) = p_{\text{data}}(x)\}$. Obviously, $\mathcal{X}_{>} \cup \mathcal{X}_{<} \cup \mathcal{X}_{=} = \mathcal{X}$.
Let $L(p_{\text{gen}}) = \sum_{x \in \mathcal{X}} \big[p_{\text{gen}}(x) - p_{\text{data}}(x)\big]\,c^*(x; p_{\text{gen}})$. Substituting the results from equation (20) into (21), $L(p^*_{\text{gen}})$ can be written as
$$L(p^*_{\text{gen}}) = \sum_{x \in \mathcal{X}_{<} \cup \mathcal{X}_{>} \cup \mathcal{X}_{=}} \big[p^*_{\text{gen}}(x) - p_{\text{data}}(x)\big]\,c^*(x; p^*_{\text{gen}}) = \sum_{x \in \mathcal{X}_{<}} \big[p^*_{\text{gen}}(x) - p_{\text{data}}(x)\big] \cdot 0 + \sum_{x \in \mathcal{X}_{>}} \big[p^*_{\text{gen}}(x) - p_{\text{data}}(x)\big] \cdot m = m \sum_{x \in \mathcal{X}_{>}} \big[p^*_{\text{gen}}(x) - p_{\text{data}}(x)\big] > 0. \qquad (22)$$
However, when $p'_{\text{gen}} = p_{\text{data}}$, we have
$$L(p'_{\text{gen}}) = 0 < L(p^*_{\text{gen}}), \qquad (23)$$
which contradicts the optimality (minimum) assumption of $p^*_{\text{gen}}$. Hence, the contradiction concludes that at the global optimum, $p^*_{\text{gen}} = p_{\text{data}}$.
By equation (20), it directly follows that $c^*(x; p^*_{\text{gen}}) = \alpha_x$, which completes the proof.

A.3 ANALYSIS OF ADDING AN ADDITIONAL TRAINING SIGNAL TO THE GAN FORMULATION
To show that simply adding the same training signal to the GAN will not lead to the same result, it is more convenient to work directly with the formulation of the f-GAN family (Nowozin et al., 2016, equation (6)), which includes the original GAN formulation as a special case.
Specifically, the general f-GAN formulation takes the following form:
$$\max_{c} \min_{p_{\text{gen}} \in \mathcal{P}} \; \mathbb{E}_{x \sim p_{\text{gen}}}\big[f^\star(c(x))\big] - \mathbb{E}_{x \sim p_{\text{data}}}\big[c(x)\big], \qquad (24)$$
where $f^\star(\cdot)$ denotes the convex conjugate (Boyd & Vandenberghe, 2004) of the f-divergence function. The optimal condition of the discriminator can be found by taking the variation w.r.t. $c$, which gives the optimal discriminator
$$c^*(x) = f'\!\left(\frac{p_{\text{data}}(x)}{p_{\text{gen}}(x)}\right), \qquad (25)$$
where $f'(\cdot)$ is the first-order derivative of $f(\cdot)$. Note that, even when we add an extra term $K(p_{\text{gen}})$ to equation (24), since the term $K(p_{\text{gen}})$ is a constant w.r.t. the discriminator, it does not change the result given by equation (25) about the optimal discriminator. As a consequence, for the optimal discriminator to retain the density information, it effectively means $p_{\text{gen}} \neq p_{\text{data}}$. Hence, there will be a contradiction if both $c^*(x)$ retains the density information and the generator matches the data distribution.
Intuitively, this problem is rooted in the fact that an f-divergence is quite "rigid", in the sense that given $p_{\text{gen}}(x)$ it allows only one fixed point for the discriminator. In comparison, the divergence used in our proposed formulation, which is the expected cost gap, is much more flexible. By the expected cost gap itself, i.e. without the $K(p_{\text{gen}})$ term, the optimal discriminator is actually under-determined.

B SUPPLEMENTARY MATERIALS FOR SECTION 5
B.1 EXPERIMENT SETTING
Here, we specify the neural architectures used for the experiments presented in Section 5.
Firstly, for the EGAN-Ent-VI model, we parameterize the approximate posterior distribution $q_{\text{gen}}(z \mid x)$ with a diagonal Gaussian distribution, whose mean and covariance matrix are the output of a trainable inference network, i.e.
$$q_{\text{gen}}(z \mid x) = \mathcal{N}(\mu, \mathrm{I}\sigma^2), \quad \mu, \log\sigma = f_{\text{infer}}(x), \qquad (26)$$
where $f_{\text{infer}}$ denotes the inference network, and $\mathrm{I}$ is the identity matrix. Note that the inference network only appears in the EGAN-Ent-VI model.
For experiments with the synthetic datasets, the following fully-connected feed-forward neural networks are employed:
Generator: FC(4,128)-BN-ReLU-FC(128,128)-BN-ReLU-FC(128,2)
Discriminator: FC(2,128)-ReLU-FC(128,128)-ReLU-FC(128,1)
Inference Net: FC(2,128)-ReLU-FC(128,128)-ReLU-FC(128,4*2)
where FC and BN denote a fully-connected layer and a batch normalization layer, respectively. Note that since the input noise to the generator has dimension 4, the inference net output has dimension 4*2, where the first 4 elements correspond to the inferred mean, and the last 4 elements correspond to the inferred diagonal covariance matrix in log scale.
For the handwritten digit experiment, we closely follow the DCGAN (Radford et al., 2015) architecture with the following configuration:
Generator: FC(10,512*7*7)-BN-ReLU-DC(512,256;4c2s)-BN-ReLU-DC(256,128;4c2s)-BN-ReLU-DC(128,1;3c1s)-Sigmoid
Discriminator: CV(1,64;3c1s)-BN-LRec-CV(64,128;4c2s)-BN-LRec-CV(128,256;4c2s)-BN-LRec-FC(256*7*7,1)
Inference Net: CV(1,64;3c1s)-BN-LRec-CV(64,128;4c2s)-BN-LRec-CV(128,256;4c2s)-BN-LRec-FC(256*7*7,10*2)
Here, LRec is the leaky rectified non-linearity recommended by Radford et al. (2015).
In addition, CV(128,256;4c2s) denotes a convolutional layer with 128 input channels, 256 output channels, and kernel size 4 with stride 2. Similarly, DC(256,128;4c2s) denotes the corresponding transposed convolutional operation. Compared to the original DCGAN architecture, the discriminator under our formulation does not have the final sigmoid layer that squashes a scalar value into a probability in [0, 1].
For the CelebA experiment with 64×64 color images, we use the following architecture:
Generator: FC(10,512*4*4)-BN-ReLU-DC(512,256;4c2s)-BN-ReLU-DC(256,128;4c2s)-BN-ReLU-DC(256,128;4c2s)-BN-ReLU-DC(128,3;4c2s)-Tanh
Discriminator: CV(3,64;4c2s)-BN-LRec-CV(64,128;4c2s)-BN-LRec-CV(128,256;4c2s)-BN-LRec-CV(256,256;4c2s)-BN-LRec-FC(256*4*4,1)
Inference Net: CV(3,64;4c2s)-BN-LRec-CV(64,128;4c2s)-BN-LRec-CV(128,256;4c2s)-BN-LRec-CV(256,256;4c2s)-BN-LRec-FC(256*4*4,10*2)
For the CIFAR-10 experiment, where the image size is 32×32, a similar architecture is used:
Generator: FC(10,512*4*4)-BN-ReLU-DC(512,256;4c2s)-BN-ReLU-DC(256,128;3c1s)-BN-ReLU-DC(256,128;4c2s)-BN-ReLU-DC(128,3;4c2s)-Tanh
Discriminator: CV(3,64;3c1s)-BN-LRec-CV(64,128;4c2s)-BN-LRec-CV(128,256;4c2s)-BN-LRec-CV(256,256;4c2s)-BN-LRec-FC(256*4*4,1)
Inference Net: CV(3,64;3c1s)-BN-LRec-CV(64,128;4c2s)-BN-LRec-CV(128,256;4c2s)-BN-LRec-CV(256,256;4c2s)-BN-LRec-FC(256*4*4,10*2)
Given the chosen architectures, we follow Radford et al. (2015) and use Adam as the optimization algorithm. For more detailed hyper-parameters, please refer to the code.

B.2 QUANTITATIVE COMPARISON OF DIFFERENT MODELS

Table 2: Pairwise KL divergence between distributions. Bold face indicates the lowest divergence within each group.

Gaussian Mixture: KL(pdata‖pemp) = 0.0291, KL(pemp‖pdata) = 0.0159
KL divergence | pgen‖pemp | pemp‖pgen | pgen‖pdata | pdata‖pgen | pdisc‖pemp | pemp‖pdisc | pdisc‖pdata | pdata‖pdisc | pgen‖pdisc | pdisc‖pgen
GAN | 0.3034 | 0.5024 | 0.2498 | 0.4807 | 6.7587 | 2.0648 | 6.2020 | 2.0553 | 2.4596 | 7.0895
EGAN-Const | 0.2711 | 0.4888 | 0.2239 | 0.4735 | 6.7916 | 2.1243 | 6.2159 | 2.1149 | 2.5062 | 7.0553
EGAN-Ent-VI | 0.1422 | 0.1367 | 0.0896 | 0.1214 | 0.8866 | 0.6532 | 0.7215 | 0.6442 | 0.7711 | 1.0638
EGAN-Ent-NN | 0.1131 | 0.1006 | 0.0621 | 0.0862 | 0.0993 | 0.1356 | 0.0901 | 0.1187 | 0.1905 | 0.1208

Biased Gaussian Mixture: KL(pdata‖pemp) = 0.0273, KL(pemp‖pdata) = 0.0144
(same columns as above)
GAN | 0.0788 | 0.0705 | 0.0413 | 0.0547 | 7.1539 | 2.5230 | 6.4927 | 2.5018 | 2.5205 | 7.1140
EGAN-Const | 0.1545 | 0.1649 | 0.1211 | 0.1519 | 7.1568 | 2.5269 | 6.4969 | 2.5057 | 2.5860 | 7.1995
EGAN-Ent-VI | 0.0576 | 0.0668 | 0.0303 | 0.0518 | 3.9151 | 1.3574 | 2.9894 | 1.3365 | 1.4052 | 4.0632
EGAN-Ent-NN | 0.0784 | 0.0574 | 0.0334 | 0.0422 | 0.8505 | 0.3480 | 0.5199 | 0.3299 | 0.3250 | 0.7835

Two-spiral Gaussian Mixture: KL(pdata‖pemp) = 0.3892, KL(pemp‖pdata) = 1.2349
(same columns as above)
GAN | 0.5297 | 0.2701 | 0.3758 | 0.7240 | 6.3507 | 1.7180 | 4.3818 | 1.0866 | 1.6519 | 5.7694
EGAN-Const | 0.7473 | 1.0325 | 0.7152 | 1.6703 | 5.9930 | 1.5732 | 3.9749 | 0.9703 | 1.8380 | 6.0471
EGAN-Ent-VI | 0.2014 | 0.1260 | 0.4283 | 0.8399 | 1.1099 | 0.3508 | 0.3061 | 0.4037 | 0.4324 | 0.9917
EGAN-Ent-NN | 0.1246 | 0.1147 | 0.4475 | 1.2435 | 0.1036 | 0.0857 | 0.4086 | 0.7917 | 0.1365 | 0.1686
In order to quantify the quality of the recovered distributions, we compute the pairwise KL divergence between the following four distributions:
- The real data distribution with analytic form, denoted as pdata
- The empirical data distribution approximated from the 100K training data, denoted as pemp
- The generator distribution approximated from 100K generated data, denoted as pgen
- The discriminator distribution re-normalized from the learned energy, denoted as pdisc
Since the synthetic datasets are two-dimensional, we approximate both the empirical data distribution and the generator distribution using simple histogram estimation. Specifically, we divide the canvas into a 100-by-100 grid, and assign each sample to its nearest grid cell based on Euclidean distance. Then, we normalize the number of samples in each cell into a proper distribution. When recovering the discriminator distribution from the learned energy, we assume that $\mu^*(x) = 0$ (i.e. infinite data support), and discretize the distribution into the same grid cells:
$$p_{\text{disc}}(x) = \frac{\exp(-c^*(x))}{\sum_{x' \in \text{Grid}} \exp(-c^*(x'))}, \quad \forall x \in \text{Grid}.$$
Based on these approximations, Table 2 summarizes the results. For all measures related to the discriminator distribution, EGAN-Ent-VI and EGAN-Ent-NN significantly outperform the other two baseline models, which matches our visual assessment in Figures 2 and 3. Meanwhile, the generator distributions learned with our proposed framework also achieve relatively lower divergence to both the empirical data distribution and the true data distribution.
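A short numpy sketch of this evaluation procedure; the grid bounds and the smoothing constant in the KL estimate are our own choices:

import numpy as np

def hist_distribution(samples, bounds, bins=100):
    # Histogram 2-d samples on a bins-by-bins grid and normalize
    # the counts into a proper distribution (appendix B.2).
    h, _, _ = np.histogram2d(samples[:, 0], samples[:, 1],
                             bins=bins, range=bounds)
    return h / h.sum()

def energy_distribution(energy_grid):
    # Re-normalize a grid of learned energies c(x) into p_disc,
    # assuming mu*(x) = 0 as in the text above.
    p = np.exp(-energy_grid)
    return p / p.sum()

def kl(p, q, eps=1e-12):
    # Discretized KL(p || q); eps avoids log(0) on empty cells.
    p, q = p.ravel() + eps, q.ravel() + eps
    return float(np.sum(p * np.log(p / q)))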
B.3 COMPARISON OF THE ENTROPY (GRADIENT) APPROXIMATION METHODS
In order to understand the performance difference between EGAN-Ent-VI and EGAN-Ent-NN, we analyze the quality of the entropy gradient approximation during training. To do that, we visualize detailed training information in Figure 8.

Figure 8: Training details under (a) the variational inference entropy approximation and (b) the nearest neighbor entropy approximation. For convenience, we use Fig. (i,j) to refer to the subplot in row i, column j. Fig. (1,1): current energy plot. Fig. (1,2): frequency map of generated samples in the current batch. Fig. (1,3): frequency map of real samples in the current batch. Fig. (1,4): frequency difference between real and generated samples. Fig. (2,1): comparison between samples generated from the current model and real samples. Fig. (2,2): the discriminator gradient w.r.t. each training sample. Fig. (2,3): the entropy gradient w.r.t. each training sample. Fig. (2,4): the total gradient (discriminator + entropy) w.r.t. each training sample.

As we can see in Figure 8a, the variational entropy gradient approximation w.r.t. samples is not accurate:
- It is inaccurate in terms of gradient direction. Ideally, the direction of the entropy gradient should point from the center of its closest mode towards the surroundings, orthogonal to the implicit contour in Fig. (1,2). However, the direction of the gradients in Fig. (2,3) does not match this.
- It is inaccurate in magnitude. As we can see, the entropy approximation gradient (Fig. (2,3)) has a much larger norm than the discriminator gradient (Fig. (2,2)). As a result, the total gradient (Fig. (2,4)) is fully dominated by the entropy approximation gradient. Thus, it usually takes much longer for the generator to learn to generate rare samples, and training also proceeds much more slowly compared to the nearest-neighbor based approximation.
In comparison, the nearest-neighbor based gradient approximation is much more accurate, as shown in Figure 8b. As a result, it leads to a more accurate energy contour, as well as faster training. What's more, from Fig. (2,4) of Figure 8b, we can see that the entropy gradient does have the cancel-out effect on the discriminator gradient, which again matches our theory.

B.4 RANKING NIST DIGITS
Figure 9 shows the ranking of all 1000 generated and real images (from the test set) for three models: EGAN-Ent-NN, EGAN-Const, and GAN. We can clearly notice that in EGAN-Ent-NN the top-ranked digits look very similar to the mean digit. From the upper-left corner to the lower-right corner, the transition trend is that the rotation degree increases, and the digits become increasingly thick or thin compared to the mean. In addition, samples in the last few rows do diverge away from the mean image: they are either highly diagonal to the right or left, have a different shape (very thin or thick), or are in typewriter script. The other models are not able to achieve a similarly clear distinction between high- and low-probability images. Finally, we consistently observe the same trend when modeling other digits, which are not shown in this paper due to space constraints.

B.5 CLASSIFIER PERFORMANCE AS A PROXY MEASURE
As mentioned in Section 5, evaluating the proposed formulation quantitatively on high-dimensional data is extremely challenging. Here, in order to provide more quantitative intuition about the learned discriminator at convergence, we adopt a proxy measure. Specifically, we take the last-layer activations of the converged discriminator network as fixed pretrained features, and build a linear classifier upon them. Hypothetically, if the discriminator does not degenerate, the extracted last-layer features should maintain more information about the data points, especially compared to features from degenerated discriminators. Following this idea, we first train EGAN-Ent-NN, EGAN-Const, and GAN on MNIST until convergence, and then extract the last-layer activations from their discriminator networks as fixed feature inputs. Based on these fixed features, a randomly initialized linear classifier is trained to do classification on MNIST. Based on 10 runs (with different initializations) of each of the three models, the test classification performance is summarized in Table 3. For comparison purposes, we also include a baseline where the input features are extracted from a discriminator network with random weights.

Table 3: Test performance of linear classifiers based on last-layer discriminator features.
Test error (%) | EGAN-Ent-NN | EGAN-Const | GAN | Random
Min | 1.160 | 1.280 | 1.220 | 3.260
Mean | 1.190 | 1.338 | 1.259 | 3.409
Std. | 0.024 | 0.044 | 0.032 | 0.124

Based on this proxy measure, EGAN-Ent-NN seems to maintain more information about the data, which suggests that the discriminator from our proposed formulation is more informative. Despite this positive result, it is important to point out that maintaining information about categories does not necessarily mean maintaining information about the energy (density). Thus, this proxy measure should be interpreted cautiously.

Figure 9: 1000 generated and test images (bounding box) ranked according to their assigned energies, for (a) EGAN-Ent-NN, (b) EGAN-Const, (c) GAN.
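A minimal sketch of the linear-probe protocol in Appendix B.5 using scikit-learn; extract_features stands in for the frozen discriminator's last-layer activations, for which the paper gives no API:

import numpy as np
from sklearn.linear_model import LogisticRegression

def linear_probe_error(extract_features, x_train, y_train, x_test, y_test):
    """Train a linear classifier on frozen discriminator features
    and report test error in percent."""
    f_train = extract_features(x_train)   # (N, d) last-layer activations
    f_test = extract_features(x_test)
    clf = LogisticRegression(max_iter=1000).fit(f_train, y_train)
    return 100.0 * (1.0 - clf.score(f_test, y_test))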
BJrFC6ceg
Published as a conference paper at ICLR 2017

PIXELCNN++: IMPROVING THE PIXELCNN WITH DISCRETIZED LOGISTIC MIXTURE LIKELIHOOD AND OTHER MODIFICATIONS

Tim Salimans, Andrej Karpathy, Xi Chen, Diederik P. Kingma
{tim,karpathy,peter,dpkingma}@openai.com

ABSTRACT
PixelCNNs are a recently proposed class of powerful generative models with tractable likelihood. Here we discuss our implementation of PixelCNNs, which we make available at https://github.com/openai/pixel-cnn. Our implementation contains a number of modifications to the original model that both simplify its structure and improve its performance. 1) We use a discretized logistic mixture likelihood on the pixels, rather than a 256-way softmax, which we find speeds up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels, simplifying the model structure. 3) We use downsampling to efficiently capture structure at multiple resolutions. 4) We introduce additional short-cut connections to further speed up optimization. 5) We regularize the model using dropout. Finally, we present state-of-the-art log-likelihood results on CIFAR-10 to demonstrate the usefulness of these modifications.

1 INTRODUCTION
The PixelCNN, introduced by van den Oord et al. (2016b), is a generative model of images with a tractable likelihood. The model fully factorizes the probability density function of an image x over all its sub-pixels (color channels in a pixel) as $p(\mathbf{x}) = \prod_i p(x_i \mid x_{<i})$. The conditional distributions $p(x_i \mid x_{<i})$ are parameterized by convolutional neural networks and all share parameters. The PixelCNN is a powerful model, as the functional form of these conditionals is very flexible. In addition, it is computationally efficient, as all conditionals can be evaluated in parallel on a GPU for an observed image x. Thanks to these properties, the PixelCNN represents the current state of the art in generative modeling when evaluated in terms of log-likelihood. Besides being used for modeling images, the PixelCNN model was recently extended to model audio (van den Oord et al., 2016a), video (Kalchbrenner et al., 2016b) and text (Kalchbrenner et al., 2016a).
For use in our research, we developed our own internal implementation of PixelCNN and made a number of modifications to the base model to simplify its structure and improve its performance. We now release our implementation at https://github.com/openai/pixel-cnn, hoping that it will be useful to the broader community. Our modifications are discussed in Section 2, and evaluated experimentally in Section 3. State-of-the-art log-likelihood results confirm their usefulness.

2 MODIFICATIONS TO PIXELCNN
We now describe the most important modifications we have made to the PixelCNN model architecture as described by van den Oord et al. (2016c). For complete details see our code release at https://github.com/openai/pixel-cnn.

2.1 DISCRETIZED LOGISTIC MIXTURE LIKELIHOOD
The standard PixelCNN model specifies the conditional distribution of a sub-pixel, or color channel of a pixel, as a full 256-way softmax. This gives the model a lot of flexibility, but it is also very costly in terms of memory. Moreover, it can make the gradients with respect to the network parameters very sparse, especially early in training. With the standard parameterization, the model does not know that a value of 128 is close to a value of 127 or 129, and this relationship first has to be learned before the model can move on to higher-level structures. In the extreme case where a particular sub-pixel value is never observed, the model will learn to assign it zero probability. This would be especially problematic for data with higher accuracy on the observed pixels than the usual 8 bits: in the extreme case where very high precision values are observed, the PixelCNN, in its current form, would require a prohibitive amount of memory and computation, while learning very slowly. We therefore propose a different mechanism for computing the conditional probability of the observed discretized pixel values. In our model, like in the VAE of Kingma et al. (2016), we assume there is a latent color intensity $\nu$ with a continuous distribution, which is then rounded to its nearest 8-bit representation to give the observed sub-pixel value $x$. By choosing a simple continuous distribution for modeling $\nu$ (like the logistic distribution, as done by Kingma et al. (2016)) we obtain a smooth and memory-efficient predictive distribution for $x$. Here, we take this continuous univariate distribution to be a mixture of logistic distributions, which allows us to easily calculate the probability of the observed discretized value $x$, as shown in equation (2). For all sub-pixel values $x$ except the edge cases 0 and 255 we have:
$$\nu \sim \sum_{i=1}^{K} \pi_i\,\mathrm{logistic}(\mu_i, s_i) \qquad (1)$$
$$P(x \mid \pi, \mu, s) = \sum_{i=1}^{K} \pi_i \left[\sigma\big((x + 0.5 - \mu_i)/s_i\big) - \sigma\big((x - 0.5 - \mu_i)/s_i\big)\right], \qquad (2)$$
where $\sigma(\cdot)$ is the logistic sigmoid function. For the edge case of 0, replace $x - 0.5$ by $-\infty$, and for 255 replace $x + 0.5$ by $+\infty$. Our provided code contains a numerically stable implementation for calculating the log of the probability in equation (2).
Our approach follows earlier work using continuous mixture models (Domke et al., 2008; Theis et al., 2012; Uria et al., 2013; Theis & Bethge, 2015), but avoids allocating probability mass to values outside the valid range of [0, 255] by explicitly modeling the rounding of $\nu$ to $x$. In addition, we naturally assign higher probability to the edge values 0 and 255 than to their neighboring values, which corresponds well with the observed data distribution, as shown in Figure 1. Experimentally, we find that only a relatively small number of mixture components, say 5, is needed to accurately model the conditional distributions of the pixels. The output of our network is thus of much lower dimension, yielding much denser gradients of the loss with respect to our parameters. In our experiments this greatly sped up convergence during optimization, especially early on in training. However, due to the other changes in our architecture compared to that of van den Oord et al. (2016c), we cannot say with certainty that this would also apply to the original PixelCNN model.

Figure 1: Marginal distribution of all sub-pixel values in CIFAR-10. The edge value of 255 is much more frequent than its neighbouring values: this is easy to model using our rounding-based approach, but harder using continuous or truncated distributions.
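A direct numpy transcription of equation (2), including the edge cases; the released code implements a numerically stabler log-space version, so this sketch is only illustrative:

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def discretized_logistic_mixture_prob(x, pi, mu, s):
    """P(x | pi, mu, s) from equation (2) for 8-bit values x in {0,...,255}.

    x: scalar or array of integer sub-pixel values.
    pi, mu, s: (K,) mixture weights, means and scales.
    """
    x = np.asarray(x, float)[..., None]            # broadcast against K
    upper = sigmoid((x + 0.5 - mu) / s)
    lower = sigmoid((x - 0.5 - mu) / s)
    # Edge cases: the CDF saturates to 1 as x + 0.5 -> +inf at 255,
    # and to 0 as x - 0.5 -> -inf at 0.
    upper = np.where(x == 255.0, 1.0, upper)
    lower = np.where(x == 0.0, 0.0, lower)
    return np.sum(pi * (upper - lower), axis=-1)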
2.2 CONDITIONING ON WHOLE PIXELS
The pixels in a color image consist of three real numbers, giving the intensities of the red, blue and green colors. The original PixelCNN factorizes the generative model over these 3 sub-pixels. This allows for a very general dependency structure, but it also complicates the model: besides keeping track of the spatial location of feature maps, we now have to separate all feature maps into 3 groups depending on whether or not they can see the R/G/B sub-pixel of the current location.
2.2 CONDITIONING ON WHOLE PIXELS

The pixels in a color image consist of three real numbers, giving the intensities of the red, blue and green colors. The original PixelCNN factorizes the generative model over these 3 sub-pixels. This allows for very general dependency structure, but it also complicates the model: besides keeping track of the spatial location of feature maps, we now have to separate out all feature maps in 3 groups depending on whether or not they can see the R/G/B sub-pixel of the current location. This added complexity seems to be unnecessary, as the dependencies between the color channels of a pixel are likely to be relatively simple and do not require a deep network to model. Therefore, we instead condition only on whole pixels up and to the left in an image, and output joint predictive distributions over all 3 channels of a predicted pixel. The predictive distribution on a pixel itself can be interpreted as a simple factorized model: we first predict the red channel using a discretized mixture of logistics as described in Section 2.1. Next, we predict the green channel using a predictive distribution of the same form. Here we allow the means of the mixture components to linearly depend on the value of the red sub-pixel. Finally, we model the blue channel in the same way, where we again only allow linear dependency on the red and green channels. For the pixel (r_{i,j}, g_{i,j}, b_{i,j}) at location (i, j) in our image, the distribution conditional on the context C_{i,j}, consisting of the mixture indicator and the previous pixels, is thus

p(r_{i,j}, g_{i,j}, b_{i,j} | C_{i,j}) = P(r_{i,j} | μ_r(C_{i,j}), s_r(C_{i,j})) × P(g_{i,j} | μ_g(C_{i,j}, r_{i,j}), s_g(C_{i,j})) × P(b_{i,j} | μ_b(C_{i,j}, r_{i,j}, g_{i,j}), s_b(C_{i,j}))

μ_g(C_{i,j}, r_{i,j}) = μ_g(C_{i,j}) + α(C_{i,j}) r_{i,j}

μ_b(C_{i,j}, r_{i,j}, g_{i,j}) = μ_b(C_{i,j}) + β(C_{i,j}) r_{i,j} + γ(C_{i,j}) g_{i,j},    (3)

with α, β, γ scalar coefficients depending on the mixture component and previous pixels.

The mixture indicator is shared across all 3 channels; i.e. our generative model first samples a mixture indicator for a pixel, and then samples the color channels one-by-one from the corresponding mixture component. Had we used a discretized mixture of univariate Gaussians for the sub-pixels, instead of logistics, this would have been exactly equivalent to predicting the complete pixel using a (discretized) mixture of 3-dimensional Gaussians with full covariance. The logistic and Gaussian distributions are very similar, so this is indeed very close to what we end up doing. For full implementation details we refer to our code at https://github.com/openai/pixel-cnn.
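As an illustration of equation (3), here is a minimal sketch of how a whole pixel could be sampled under this factorization, assuming the per-component parameters have already been produced by the network (the values below are made up):

```python
import numpy as np

def sample_pixel(pi, mu_r, mu_g, mu_b, s_r, s_g, s_b, alpha, beta, gamma, rng):
    """Sample one (r, g, b) pixel under equation (3): one shared mixture
    indicator, then the channels in order, with linearly shifted means."""
    def sample_logistic(mu, s):
        u = rng.uniform(1e-5, 1.0 - 1e-5)
        x = mu + s * (np.log(u) - np.log1p(-u))      # inverse logistic CDF
        return float(np.clip(np.round(x), 0, 255))   # discretize to 8 bits

    i = rng.choice(len(pi), p=pi)        # mixture indicator shared by r, g, b
    r = sample_logistic(mu_r[i], s_r[i])
    g = sample_logistic(mu_g[i] + alpha[i] * r, s_g[i])                # mean depends on r
    b = sample_logistic(mu_b[i] + beta[i] * r + gamma[i] * g, s_b[i])  # ... on r and g
    return r, g, b

rng = np.random.default_rng(1)
pi = np.array([0.6, 0.4])                # made-up network outputs for one pixel
mu_r = mu_g = mu_b = np.array([100.0, 200.0])
s_r = s_g = s_b = np.array([8.0, 8.0])
alpha = beta = gamma = np.array([0.3, -0.1])
print(sample_pixel(pi, mu_r, mu_g, mu_b, s_r, s_g, s_b, alpha, beta, gamma, rng))
```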
2.3 DOWNSAMPLING VERSUS DILATED CONVOLUTION

The original PixelCNN only uses convolutions with small receptive field. Such convolutions are good at capturing local dependencies, but not necessarily at modeling long range structure. Although we find that capturing these short range dependencies is often enough for obtaining very good log-likelihood scores (see Table 2), explicitly encouraging the model to capture long range dependencies can improve the perceptual quality of generated images (compare Figure 3 and Figure 5). One way of allowing the network to model structure at multiple resolutions is to introduce dilated convolutions into the model, as proposed by van den Oord et al. (2016a) and Kalchbrenner et al. (2016b). Here, we instead propose to use downsampling by using convolutions of stride 2. Downsampling accomplishes the same multi-resolution processing afforded by dilated convolutions, but at a reduced computational cost: where dilated convolutions operate on input of ever increasing size (due to zero padding), downsampling reduces the input size by a factor of 4 (for a stride of 2 in 2 dimensions) at every downsampling. The downside of using downsampling is that it loses information, but we can compensate for this by introducing additional short-cut connections into the network as explained in the next section. With these additional short-cut connections, we found the performance of downsampling to be the same as for dilated convolution.

2.4 ADDING SHORT-CUT CONNECTIONS

For input of size 32×32 our suggested model consists of 6 blocks of 5 ResNet layers. In between the first and second block, as well as the second and third block, we perform subsampling by strided convolution. In between the fourth and fifth block, as well as the fifth and sixth block, we perform upsampling by transposed strided convolution. This subsampling and upsampling process loses information, and we therefore introduce additional short-cut connections into the model to recover this information from lower layers in the model. The short-cut connections run from the ResNet layers in the first block to the corresponding layers in the sixth block, and similarly between blocks two and five, and blocks three and four. This structure resembles the VAE model with top down inference used by Kingma et al. (2016), as well as the U-net used by Ronneberger et al. (2015) for image segmentation. Figure 2 shows our model structure graphically.

[Figure 2 diagram: spatial resolutions 32×32 → 16×16 → 8×8 on the way down and 8×8 → 16×16 → 32×32 on the way up; legend: identity (skip) connection, convolutional connection, sequence of 6 layers, downward stream, downward and rightward stream.]

Figure 2: Like van den Oord et al. (2016c), our model follows a two-stream (downward, and downward+rightward) convolutional architecture with residual connections; however, there are two significant differences in connectivity. First, our architecture incorporates downsampling and upsampling, such that the inner parts of the network operate over larger spatial scale, increasing computational efficiency. Second, we employ long-range skip-connections, such that each k-th layer provides a direct input to the (K−k)-th layer, where K is the total number of layers in the network. The network is grouped into sequences of six layers, where most sequences are separated by downsampling or upsampling.

2.5 REGULARIZATION USING DROPOUT

The PixelCNN model is powerful enough to overfit on training data. Moreover, rather than just reproducing the training images, we find that overfitted models generate images of low perceptual quality, as shown in Figure 8. One effective way of regularizing neural networks is dropout (Srivastava et al., 2014). For our model, we apply standard binary dropout on the residual path after the first convolution. This is similar to how dropout is applied in the wide residual networks of Zagoruyko & Komodakis (2016). Using dropout allows us to successfully train high capacity models while avoiding overfitting and producing high quality generations (compare Figure 8 and Figure 3).

3 EXPERIMENTS

We apply our model to modeling natural images in the CIFAR-10 data set. We achieve state-of-the-art results in terms of log-likelihood, and generate images with coherent global structure.

3.1 UNCONDITIONAL GENERATION ON CIFAR-10

We apply our PixelCNN model, with the modifications as described above, to generative modeling of the images in the CIFAR-10 data set. For the encoding part of the PixelCNN, the model uses 3 ResNet blocks consisting of 5 residual layers, with 2×2 downsampling in between. The same architecture is used for the decoding part of the model, but with upsampling instead of downsampling in between blocks. All residual layers use 192 feature maps and a dropout rate of 0.5. Table 1 shows the state-of-the-art test log-likelihood obtained by our model.
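The residual-path dropout of Section 2.5 used in these layers amounts to very little code; a minimal sketch with stand-in convolutions (not the released implementation):

```python
import numpy as np

def residual_layer(x, conv1, conv2, drop_rate, rng, train=True):
    """One residual layer with binary dropout on the residual path after the
    first convolution; conv1/conv2 stand in for the actual convolutions."""
    h = np.maximum(conv1(x), 0.0)                # first conv + ReLU
    if train and drop_rate > 0.0:
        mask = rng.random(h.shape) >= drop_rate  # keep units w.p. 1 - drop_rate
        h = h * mask / (1.0 - drop_rate)         # inverted-dropout scaling
    return x + conv2(h)                          # residual connection

rng = np.random.default_rng(0)
W1 = rng.normal(size=(192, 192)) * 0.05
W2 = rng.normal(size=(192, 192)) * 0.05
x = rng.normal(size=(10, 192))                   # 10 positions, 192 feature maps
y = residual_layer(x, lambda z: z @ W1, lambda z: z @ W2, drop_rate=0.5, rng=rng)
print(y.shape)                                   # (10, 192)
```

Because the dropout mask only touches the residual branch, the identity path is always preserved, which is what lets high-capacity models keep training stably at a rate as aggressive as 0.5.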
Figure 3 shows some samples generated by themodel.4Published as a conference paper at ICLR 2017Figure 3: Samples from our PixelCNN model trained on CIFAR-10.Model Bits per sub-pixelDeep Diffusion (Sohl-Dickstein et al., 2015) 5.40NICE (Dinh et al., 2014) 4.48DRAW (Gregor et al., 2015) 4.13Deep GMMs (van den Oord & Dambre, 2015) 4.00Conv DRAW (Gregor et al., 2016) 3.58Real NVP (Dinh et al., 2016) 3.49PixelCNN (van den Oord et al., 2016b) 3.14V AE with IAF (Kingma et al., 2016) 3.11Gated PixelCNN (van den Oord et al., 2016c) 3.03PixelRNN (van den Oord et al., 2016b) 3.00PixelCNN++ 2.92Table 1: Negative log-likelihood for generative models on CIFAR-10 expressed as bits per sub-pixel.3.2 C LASS -CONDITIONAL GENERATIONNext, we follow van den Oord et al. (2016c) in making our generative model conditional on theclass-label of the CIFAR-10 images. This is done by linearly projecting a one-hot encoding of theclass-label into a separate class-dependent bias vector for each convolutional unit in our network. Wefind that making the model class-conditional makes it harder to avoid overfitting on the training data:our best test log-likelihood is 2.94 in this case. Figure 4 shows samples from the class-conditionalmodel, with columns 1-10 corresponding the 10 classes in CIFAR-10. The images clearly lookqualitatively different across the columns and for a number of them we can clearly identify theirclass label.5Published as a conference paper at ICLR 2017Figure 4: Class-conditional samples from our PixelCNN for CIFAR-10 (left) and real CIFAR-10images for comparison (right).3.3 E XAMINING NETWORK DEPTH AND FIELD OF VIEW SIZEIt is hypothesized that the size of the receptive field and additionally the removal of blind spots inthe receptive field are important for PixelCNN’s performance (van den Oord et al., 2016b). Indeedvan den Oord et al. (2016c) specifically introduced an improvement over the previous PixelCNNmodel to remove the blind spot in the receptive field that was present in their earlier model.Here we present the surprising finding that in fact a PixelCNN with rather small receptive field canattain competitive generative modelling performance on CIFAR-10 as long as it has enough capacity.Specifically, we experimented with our proposed PixelCNN++ model without downsampling blocksand reduce the number of layers to limit the receptive field size. We investigate two receptive fieldsizes: 11x5 and 15x8, and a receptive field size of 11x5, for example, means that the conditionaldistribution of a pixel can depends on a rectangle above the pixel of size 11x5 as well as1112= 5x1block to the left of the pixel.As we limit the size of the receptive field, the capacity of the network also drops significantly sinceit contains many fewer layers than a normal PixelCNN. We call the type of PixelCNN that’s simplylimited in depth “Plain” Small PixelCNN. Interestingly, this model already has better performancethan the original PixelCNN in van den Oord et al. (2016b) which had a blind spot. To increasecapacity, we introduced two simple variants that make Small PixelCNN more expressive withoutgrowing the receptive field:NIN (Network in Network): insert additional gated ResNet blocks with 1x1 convolution be-tween regular convolution blocks that grow receptive field. 
In this experiment, we inserted3NIN blocks between every other layer.Autoregressive Channel: skip connections between sets of channels via 1x1 convolutiongated ResNet block.Both modifications increase the capacity of the network, resulting in improved log-likelihood asshown in Table 2. Although the model with small receptive field already achieves an impressivelikelihood score, its samples do lack global structure, as seen in Figure 5.6Published as a conference paper at ICLR 2017Table 2: CIFAR-10 bits per sub-pixel for Small PixelCNNModel Bits per sub-pixelField=11x5, Plain 3.11Field=11x5, NIN 3.09Field=11x5, Autoregressive Channel 3.07Field=15x8, Plain 3.07Field=15x8, NIN 3.04Field=15x8, Autoregressive Channel 3.03Figure 5: Samples from 3:03bits/dim Small PixelCNN3.4 A BLATION EXPERIMENTSIn order to test the effect of our modifications to PixelCNN, we run a number of ablation experimentswhere for each experiment we remove a specific modification.3.4.1 S OFTMAX LIKELIHOOD INSTEAD OF DISCRETIZED LOGISTIC MIXTUREIn order to test the contribution of our logistic mixture likelihood, we re-run our CIFAR-10 experi-ment with the 256-way softmax as the output distribution instead. We allow the 256 logits for eachsub-pixel to linearly depend on the observed value of previous sub-pixels, with coefficients that aregiven as output by the model. Our model with softmax likelihood is thus strictly more flexible thanour model with logistic mixture likelihood, although the parameterization is quite different from thatused by van den Oord et al. (2016c). The model now outputs 1536 numbers per pixel, describing thelogits on the 256 potential values for each sub-pixel, as well as the coefficients for the dependenciesbetween the sub-pixels. Figure 6 shows that this model trains more slowly than our original model.In addition, the running time per epoch is significantly longer for our tensorflow implementation.For our architecture, the logistic mixture model thus clearly performs better. Since our architecturediffers from that of van den Oord et al. (2016c) in other ways as well, we cannot say whether thiswould also apply to their model.3.4.2 C ONTINUOUS MIXTURE LIKELIHOOD INSTEAD OF DISCRETIZATIONInstead of directly modeling the discrete pixel values in an image, it is also possible to de-quantizethem by adding noise from the standard uniform distribution, as used by Uria et al. (2013) and others,and modeling the data as being continuous. The resulting model can be interpreted as a variationalautoencoder (Kingma & Welling, 2013; Rezende et al., 2014), where the dequantized pixels zforma latent code whose prior distribution is captured by our model. Since the original discrete pixels xcan be perfectly reconstructed from zunder this model, the usual reconstruction term vanishes from7Published as a conference paper at ICLR 2017Figure 6: Training curves for our model with logistic mixture likelihood versus our model withsoftmax likelihood.the variational lower bound. The entropy of the standard uniform distribution is zero, so the termthat remains is the log likelihood of the dequantized pixels, which thus gives us a variational lowerbound on the log likelihood of our original data.We re-run our model for CIFAR-10 using the same model settings as those used for the 2.92 bitsper dimension result in Table 1, but now we remove the discretization in our likelihood model andinstead add standard uniform noise to the image data. 
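Concretely, the dequantization step looks as follows (a sketch; log_q stands for a hypothetical trained continuous density model):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=(32, 32, 3))       # a discrete 8-bit image
z = x + rng.uniform(0.0, 1.0, size=x.shape)      # dequantized pixels in [0, 256)

# For a continuous density q, Jensen's inequality gives
#   log P(x) = log E_u[q(x + u)] >= E_u[log q(x + u)],
# so the average log-density of z lower-bounds the discrete log-likelihood;
# in bits per dimension: -log_q(z).mean() / np.log(2).
```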
The resulting model is a continuous mixturemodel in the same class as that used by Theis et al. (2012); Uria et al. (2013); Theis & Bethge (2015)and others. After optimization, this model gives a variational lower bound on the data log likelihoodof 3.11 bits per dimension. The difference with the reported 2.92 bits per dimension shows thebenefit of using discretization in the likelihood model.3.4.3 N O SHORT -CUT CONNECTIONSNext, we test the importance of the additional parallel short-cut connections in our model, indicatedby the dotted lines in Figure 2. We re-run our unconditional CIFAR-10 experiment, but remove theshort-cut connections from the model. As seen in Figure 7, the model fails to train without theseconnections. The reason for needing these extra short-cuts is likely to be our use of sub-sampling,which discards information that otherwise cannot easily be recovered,Figure 7: Training curves for our model with and without short-cut connections.3.4.4 N O DROPOUTWe re-run our CIFAR-10 model without dropout regularization. The log-likelihood we achieve onthe training set is below 2.0 bits per sub-pixel, but the final test log-likelihood is above 6.0 bits per8Published as a conference paper at ICLR 2017sub-pixel. At no point during training does the unregularized model get a test-set log-likelihoodbelow 3.0 bits per sub-pixel. Contrary to what we might naively expect, the perceptual quality ofthe generated images by the overfitted model is not great, as shown in Figure 8.Figure 8: Samples from intentionally overfitted PixelCNN model trained on CIFAR-10, with trainlog-likelihood of 2.0 bits per dimension: Overfitting does not result in great perceptual quality.4 C ONCLUSIONWe presented PixelCNN++, a modification of PixelCNN using a discretized logistic mixture like-lihood on the pixels among other modifications. We demonstrated the usefulness of these mod-ifications with state-of-the-art results on CIFAR-10. Our code is made available at https://github.com/openai/pixel-cnn and can easily be adapted for use on other data sets.REFERENCESLaurent Dinh, David Krueger, and Yoshua Bengio. Nice: Non-linear independent components esti-mation. arXiv preprint arXiv:1410.8516 , 2014.Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. arXivpreprint arXiv:1605.08803 , 2016.Justin Domke, Alap Karapurkar, and Yiannis Aloimonos. Who killed the directed model? InComputer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on , pp. 1–8.IEEE, 2008.Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. Draw: A recurrent neural networkfor image generation. In Proceedings of the 32nd International Conference on Machine Learning ,2015.Karol Gregor, Frederic Besse, Danilo Jimenez Rezende, Ivo Danihelka, and Daan Wierstra. Towardsconceptual compression. arXiv preprint arXiv:1604.08772 , 2016.Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and KorayKavukcuoglu. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099 , 2016a.Nal Kalchbrenner, Aaron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, AlexGraves, and Koray Kavukcuoglu. Video pixel networks. arXiv preprint arXiv:1610.00527 , 2016b.Diederik P Kingma and Max Welling. Auto-Encoding Variational Bayes. Proceedings of the 2ndInternational Conference on Learning Representations , 2013.9Published as a conference paper at ICLR 2017Diederik P. 
Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling.Improving variational inference with inverse autoregressive flow. In Advances in Neural Informa-tion Processing Systems , 2016.Danilo J Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approxi-mate inference in deep generative models. In ICML , pp. 1278–1286, 2014.Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomed-ical image segmentation. In International Conference on Medical Image Computing andComputer-Assisted Intervention , pp. 234–241. Springer, 2015.Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsuper-vised learning using nonequilibrium thermodynamics. In Proceedings of the 32nd InternationalConference on Machine Learning , 2015.Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov.Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine LearningResearch , 15(1):1929–1958, 2014.Lucas Theis and Matthias Bethge. Generative image modeling using spatial lstms. In Advances inNeural Information Processing Systems , pp. 1927–1935, 2015.Lucas Theis, Reshad Hosseini, and Matthias Bethge. Mixtures of conditional gaussian scale mix-tures applied to multiscale image representations. PloS one , 7(7):e39857, 2012.Benigno Uria, Iain Murray, and Hugo Larochelle. Rnade: The real-valued neural autoregressivedensity-estimator. In Advances in Neural Information Processing Systems , pp. 2175–2183, 2013.Aaron van den Oord and Joni Dambre. Locally-connected transformations for deep gmms. InInternational Conference on Machine Learning (ICML) : Deep learning Workshop , 2015.Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves,Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model forraw audio. arXiv preprint arXiv:1609.03499 , 2016a.Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks.InInternational Conference on Machine Learning (ICML) , 2016b.Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Ko-ray Kavukcuoglu. Conditional image generation with pixelcnn decoders. arXiv preprintarXiv:1606.05328 , 2016c.Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprintarXiv:1605.07146 , 2016.10
Published as a conference paper at ICLR 2017

TIGHTER BOUNDS LEAD TO IMPROVED CLASSIFIERS

Nicolas Le Roux
Criteo Research
nicolas@le-roux.name

ABSTRACT

The standard approach to supervised classification involves the minimization of a log-loss as an upper bound to the classification error. While this is a tight bound early on in the optimization, it overemphasizes the influence of incorrectly classified examples far from the decision boundary. Updating the upper bound during the optimization leads to improved classification rates while transforming the learning into a sequence of minimization problems. In addition, in the context where the classifier is part of a larger system, this modification makes it possible to link the performance of the classifier to that of the whole system, allowing the seamless introduction of external constraints.

1 INTRODUCTION

Classification aims at mapping inputs X ∈ 𝒳 to one or several classes y ∈ 𝒴. For instance, in object categorization, 𝒳 will be the set of images depicting an object, usually represented by the RGB values of each of their pixels, and 𝒴 will be a set of object classes, such as "car" or "dog". We shall assume we are given a training set comprised of N independent and identically distributed labeled pairs (X_i, y_i). The standard approach to solve the problem is to define a parameterized class of functions p(y|X, θ) indexed by θ and to find the parameter θ* which minimizes the log-loss, i.e.

θ* = arg min_θ −(1/N) Σ_i log p(y_i | X_i, θ)    (1.1)
   = arg min_θ L_log(θ), with L_log(θ) = −(1/N) Σ_i log p(y_i | X_i, θ).    (1.2)

One justification for minimizing L_log(θ) is that θ* is the maximum likelihood estimator, i.e. the parameter which maximizes the probability of the data:

θ* = arg max_θ p(D | θ) = arg max_θ Π_i p(y_i | X_i, θ).

There is another reason to use Eq. 1.1. Indeed, the goal we are interested in is minimizing the classification error. If we assume that our classifiers are stochastic and output a class according to p(y | X_i, θ), then the expected classification error is the probability of choosing the incorrect class (a). This translates to

L(θ) = (1/N) Σ_i (1 − p(y_i | X_i, θ)) = 1 − (1/N) Σ_i p(y_i | X_i, θ).    (1.3)

(a) In practice, we choose the class deterministically and output arg max_y p(y | X_i, θ).

This is a highly nonconvex function of θ, which makes its minimization difficult. However, we have

L(θ) = 1 − (1/N) Σ_i p(y_i | X_i, θ)
     ≤ 1 − (1/N) Σ_i (1/K) (1 + log p(y_i | X_i, θ) + log K)
     = (K − 1 − log K)/K + L_log(θ)/K,

where K = |𝒴| is the number of classes (assumed finite), using the fact that, for every nonnegative t, we have t ≥ 1 + log t. Thus, minimizing L_log(θ) is equivalent to minimizing an upper bound of L(θ). Further, this bound is tight when p(y_i | X_i, θ) = 1/K for all y_i. As a model with randomly initialized parameters will assign probabilities close to 1/K to each class, it makes sense to minimize L_log(θ) rather than L(θ) early on in the optimization.

However, this bound becomes looser as θ moves away from its initial value. In particular, poorly classified examples, for which p(y_i | X_i, θ) is close to 0, have a strong influence on the gradient of L_log(θ) despite having very little influence on the gradient of L(θ). The model will thus waste capacity trying to bring these examples closer to the decision boundary rather than correctly classifying those already close to the boundary. This will be especially noticeable when the model has limited capacity, i.e. in the underfitting setting.
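A small numeric check of this bound (our own illustration) makes the looseness concrete:

```python
import numpy as np

K = 10
p = np.array([0.99, 0.5, 1.0 / K, 1e-4])          # p(y_i | X_i, theta)

true_loss = 1.0 - p                                # per-example error, Eq. 1.3
bound = (K - 1 - np.log(K)) / K - np.log(p) / K    # per-example log-loss bound
print(np.round(true_loss, 3))  # [0.01  0.5   0.9   1.   ]
print(np.round(bound, 3))      # [0.671 0.739 0.9   1.591]: tight at p = 1/K,
# very loose at p = 1e-4; and d(bound)/dp = -1/(K p), so that example
# dominates the gradient even though d(true_loss)/dp is -1 everywhere.
```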
Section 2 proposes a tighter bound of the classification error as well as an iterative scheme to easily optimize it. Section 3 experiments with this iterative scheme using generalized linear models over a variety of datasets to estimate its impact. Section 4 then proposes a link between supervised learning and reinforcement learning, revisiting common techniques in a new light. Finally, Section 5 concludes and proposes future directions.

2 TIGHTER BOUNDS ON THE CLASSIFICATION ERROR

We now present a general class of upper bounds of the classification error which will prove useful when the model is far from its initialization.

Lemma 1. Let

p̄_ν(y|X, θ) = p(y|X, ν) (1 + log [p(y|X, θ) / p(y|X, ν)])    (2.1)

with ν any value of the parameters. Then we have

p̄_ν(y|X, θ) ≤ p(y|X, θ).    (2.2)

Further, if ν = θ, we have

p̄_ν(y|X, θ) = p(y|X, θ),    (2.3)

∂p̄_ν(y|X, θ)/∂θ |_{θ=ν} = ∂p(y|X, θ)/∂θ |_{θ=ν}.    (2.4)

Proof. We have

p̄_ν(y|X, θ) = p(y|X, ν) (1 + log [p(y|X, θ)/p(y|X, ν)]) ≤ p(y|X, ν) · [p(y|X, θ)/p(y|X, ν)] = p(y|X, θ),

where the inequality stems from t ≥ 1 + log t. Eq. 2.3 is immediate when setting ν = θ in Eq. 2.1. Deriving p̄_ν(y|X, θ) with respect to θ yields

∂p̄_ν(y|X, θ)/∂θ = p(y|X, ν) ∂log p(y|X, θ)/∂θ = [p(y|X, ν)/p(y|X, θ)] ∂p(y|X, θ)/∂θ.

Taking ν = θ on both sides yields Eq. 2.4.

Lemma 1 suggests that, if the current set of parameters is θ_t, an appropriate upper bound on the probability that an example will be incorrectly classified is

L(θ) = 1 − (1/N) Σ_i p(y_i|X_i, θ)
     ≤ 1 − (1/N) Σ_i p(y_i|X_i, θ_t) (1 + log [p(y_i|X_i, θ)/p(y_i|X_i, θ_t)])
     = C − (1/N) Σ_i p(y_i|X_i, θ_t) log p(y_i|X_i, θ),

where C is a constant independent of θ. We shall denote

L_{θ_t}(θ) = −(1/N) Σ_i p(y_i|X_i, θ_t) log p(y_i|X_i, θ).    (2.5)

One possibility is to recompute the bound after every gradient step. This is exactly equivalent to directly minimizing L. Such a procedure is brittle. In particular, Eq. 2.5 indicates that, if an example is poorly classified early on, its gradient will be close to 0 and it will be difficult to recover from this situation. Thus, we propose using Algorithm 1 for supervised learning:

Algorithm 1: Iterative supervised learning
  Data: a dataset D comprising (X_i, y_i) pairs, initial parameters θ_0
  Result: final parameters θ_T
  for t = 0 to T − 1 do
      θ_{t+1} = arg min_θ L_{θ_t}(θ) = arg min_θ −Σ_i p(y_i|X_i, θ_t) log p(y_i|X_i, θ)
  end

In regularly recomputing the bound, we ensure that it remains close to the quantity we are interested in and that we do not waste time optimizing a loose bound.

The idea of computing tighter bounds during optimization is not new. In particular, several authors used a CCCP-based (Yuille & Rangarajan, 2003) procedure to achieve tighter bounds for SVMs (Xu et al., 2006; Collobert et al., 2006; Ertekin et al., 2011). Though Collobert et al. (2006) show a small improvement of the test error, the primary goal was to reduce the number of support vectors to keep the testing time manageable. Also, the algorithm proposed by Ertekin et al. (2011) required the setting of a hyperparameter, s, which has a strong influence on the final solution (see Fig. 5 in their paper). Finally, we are not aware of similar ideas in the context of the logistic loss.

Additionally, our idea extends naturally to the case where p is a complicated function of θ and not easily written as a sum of a convex and a concave function. This might lead to nonconvex inner optimizations, but we believe that this can still yield lower classification error. A longer study in the case of deep networks is planned.

REGULARIZATION

As this model further optimizes the training classification accuracy, regularization is often needed. The standard optimization procedure minimizes the following regularized objective:

θ* = arg min_θ −Σ_i log p(y_i|X_i, θ) + λΩ(θ)
   = arg min_θ −Σ_i (1/K) log p(y_i|X_i, θ) + (λ/K) Ω(θ).

Thus, we can view this as an upper bound of the following "true" objective:

θ* = arg min_θ −Σ_i p(y_i|X_i, θ) + (λ/K) Ω(θ),

which can then be optimized using Algorithm 1.

ONLINE LEARNING

Because of its iterative nature, Algorithm 1 is adapted to a batch setting.
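A minimal NumPy sketch of Algorithm 1 for binary logistic regression, assuming full-batch gradient descent is good enough for the inner minimizations:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def iterative_supervised_learning(X, y, T=10, inner_steps=500, lr=0.5):
    """Algorithm 1 for binary logistic regression: each outer iteration fixes
    importance weights p(y_i | X_i, theta_t) and re-minimizes the weighted
    log-loss L_{theta_t}."""
    theta = np.zeros(X.shape[1])
    s = 2.0 * y - 1.0                              # labels in {-1, +1}
    for _ in range(T):
        w = sigmoid(s * (X @ theta))               # w_i = p(y_i | X_i, theta_t)
        for _ in range(inner_steps):               # minimize -sum_i w_i log p_i
            p = sigmoid(s * (X @ theta))
            grad = -(X.T @ (w * (1.0 - p) * s)) / len(y)
            theta -= lr * grad
    return theta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(float)
theta = iterative_supervised_learning(X, y, T=5)
```

Note how the weights w are held fixed during each inner minimization: examples with small p(y_i | X_i, θ_t) contribute little, which is exactly what keeps the bound from being dominated by hopeless examples.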
However, in many cases,we have access to a stream of data and we cannot recompute the importance weights on all thepoints. A natural way around this problem is to select a parameter vector and to use=forthe subsequent examples. One can see this as “crystallizing” the current solution as the value of chosen will affect all subsequent gradients.3 E XPERIMENTSWe experimented the impact of using tighter bounds to the expected misclassification rate on severaldatasets, which will each be described in their own section. The experimental setup for all datasetswas as follows. We first set aside part of the dataset to compose the test set. We then performedk-fold cross-validation, using a generalized linear model, on the remaining datapoints for differentvalues ofT, the number of times the importance weights were recomputed, and the `2-regularizer. For each value of T, we then selected the set of hyperparameters ( and the number of iterations)which achieved the lowest validation classification error. We computed the test error for each of thekmodels (one per fold) with these hyperparameters. This allowed us to get a confidence intervalson the test error, where the random variable is the training set but not the test set.For a fair comparison, each internal optimization was run for Zupdates so that ZTwas constant.Each update was computed on a randomly chosen minibatch of 50 datapoints using the SAG algo-rithm (Le Roux et al., 2012). Since we used a generalized linear model, each internal optimizationwas convex and thus had no optimization hyperparameter.Fig. 1 presents the training classification errors on all the datasets.3.1 C OVERTYPE BINARY DATASETThe Covertype binary dataset (Collobert et al., 2002) has 581012 datapoints in dimension 54 and2 classes. We used the first 90% for the cross-validation and the last 10% for testing. Due to thesmall dimension of the input, linear models strongly underfit, a regime in which tighter bounds aremost beneficial. We see in Fig. 2 that using T > 1leads to much lower training and validationclassification errors. Training and validation curves are presented in Fig. 2 and the test classificationerror is listed in Table 1.3.2 A LPHA DATASETThe Alpha dataset is a binary classification dataset used in the Pascal Large-Scale challenge and con-tains 500000 samples in dimension 500. We used the first 400000 examples for the cross-validationand the last 100000 for testing. A logistic regression trained on this dataset overfits quickly and, asa result, the results for all values of Tare equivalent. Training and validation curves are presentedin Fig. 3 and the test classification error is listed in Table 2.4Published as a conference paper at ICLR 2017Figure 1: Training classification errors for covertype (top left ),alpha (top right ),MNist (bottomleft) and IJCNN (bottom right ). We can immediately see that all values of T > 1yield significantlower errors than the standard log-loss (the confidence intervals represent 3 standard deviations).Figure 2: Training ( top) and validation ( bottom ) negative log-likelihood ( left) and classificationerror ( right ) for the covertype dataset. We only display the result for the value of yielding thelowest validation error. 
As soon as the importance weights are recomputed, the NLL increases and the classification error decreases (the confidence intervals represent 3 standard deviations).

T     Z     Test error ± 3σ (%)
1000  1e5   32.88 ± 0.07
100   1e6   32.96 ± 0.06
10    1e7   32.85 ± 0.06
1     1e8   36.32 ± 0.06

Table 1: Test error for the models reaching the best validation error for various values of T on the covertype dataset. We can see that any value of T greater than 1 leads to a significant improvement over the standard log-loss (the confidence intervals represent 3 standard deviations).

Figure 3: Training (top) and validation (bottom) negative log-likelihood (left) and classification error (right) for the alpha dataset. We only display the result for the value of λ yielding the lowest validation error. As soon as the importance weights are recomputed, the NLL increases. Overfitting occurs very quickly and the best validation error is the same for all values of T (the confidence intervals represent 3 standard deviations).

T     Z     Test error ± 3σ (%)
1000  1e5   21.83 ± 0.03
100   1e6   21.83 ± 0.03
10    1e7   21.82 ± 0.03
1     1e8   21.82 ± 0.03

Table 2: Test error for the models reaching the best validation error for various values of T on the alpha dataset. We can see that overfitting occurs very quickly and, as a result, all values of T lead to the same result as the standard log-loss.

3.3 MNIST DATASET

The MNist dataset is a digit recognition dataset with 70000 samples. The first 60000 were used for the cross-validation and the last 10000 for testing. Inputs have dimension 784 but 67 of them are always equal to 0. Despite overfitting occurring quickly, values of T greater than 1 yield significant improvements over the log-loss. Training and validation curves are presented in Fig. 4 and the test classification error is listed in Table 3.

T     Z     Test error ± 3σ (%)
1000  1e5   7.00 ± 0.08
100   1e6   7.01 ± 0.05
10    1e7   6.97 ± 0.08
1     1e8   7.46 ± 0.11

Table 3: Test error for the models reaching the best validation error for various values of T on the MNist dataset. The results for all values of T strictly greater than 1 are comparable and significantly better than for T = 1.

3.4 IJCNN DATASET

The IJCNN dataset is a dataset with 191681 samples. The first 80% of the dataset were used for training and validation (70% for training, 10% for validation, using random splits), and the last 20% were used for testing. Inputs have dimension 23, which means we are likely to be in the underfitting regime. Indeed, larger values of T lead to significant improvements over the log-loss. Training and validation curves are presented in Fig. 5 and the test classification error is listed in Table 4.

Figure 4: Training (top) and validation (bottom) negative log-likelihood (left) and classification error (right) for the MNist dataset. We only display the result for the value of λ yielding the lowest validation error. As soon as the importance weights are recomputed, the NLL increases. Overfitting occurs quickly but higher values of T still lead to lower validation error. The best training error was 2.52% with T = 10.

Figure 5: Training (top) and validation (bottom) negative log-likelihood (left) and classification error (right) for the IJCNN dataset. We only display the result for the value of λ yielding the lowest validation error. As soon as the importance weights are recomputed, the NLL increases.
Since thenumber of training samples is large compared to the dimension of the input, the standard logisticregression is underfitting and higher values of Tlead to better validation errors.TZ Test error3(%)1000 1e5 4:620:12100 1e6 5:260:3310 1e7 5:870:131 1e8 6:190:12Table 4: Test error for the models reaching the best vali-dation error for various values of Ton the IJCNN dataset.Larger values of Tlead significantly lower test errors.7Published as a conference paper at ICLR 20174 S UPERVISED LEARNING AS POLICY OPTIMIZATIONWe now propose an interpretation of supervised learning which closely matches that of direct policyoptimization in reinforcement learning. This allows us to naturally address common issues in theliterature, such as optimizing ROC curves or allowing a classifier to withhold taking a decision.A machine learning algorithm is often only one component of a larger system whose role is to makedecisions, whether it is choosing which ad to display or deciding if a patient needs a specific treat-ment. Some of these systems also involve humans. Such systems are complex to optimize and it isoften appealing to split them into smaller components which are optimized independently. However,such splits might lead to poor decisions, even when each component is carefully optimized (Bottou).This issue can be alleviated by making each component optimize the full system with respect to itsown parameters. Doing so requires taking into account the reaction of the other components in thesystem to the changes made, which cannot in general be modeled. However, one may cast it as areinforcement learning problem where the environment is represented by everything outside of ourcomponent, including the other components of the system (Bottou et al., 2013).Pushing the analogy further, we see that in one-step policy learning, we try to find a policy p(yjX;)over actions ygiven the state Xbto minimize the expected loss defined asL() =XiXyR(y;Xi)p(yjXi;): (4.1)L()is equivalent to L()from Eq. 1.3 where all actions have a reward of 0 except for the actionchoosing the correct class yiyieldingR(yi;Xi) = 1 . One major difference between policy learningand supervised learning is that, in policy learning, we only observe the reward for the actions wehave taken, while in supervised learning, the reward for all the actions is known.Casting the classification problem as a specific policy learning problem yields a loss function com-mensurate with a reward. In particular, it allows us to explicit the rewards associated with eachdecision, which was difficult with Eq. 1.1. We will now review several possibilities opened by thisformulation.OPTIMIZING THE ROC CURVEIn some scenarios, we might be interested in other performance metrics than the average classifica-tion error. In search advertising, for instance, we are often interested in maximizing the precisionat a given recall. Mozer et al. (2001) address the problem by emphasizing the training points whoseoutput is within a certain interval. Gasso et al. (2011); Parambath et al. (2014), on the other hand,assign a different cost to type I and type II errors, learning which values lead to the desired falsepositive rate. Finally, Bach et al. (2006) propose a procedure to find the optimal solution for all costsefficiently in the context of SVMs and showed that the resulting models are not the optimal modelsin the class.To test the impact of optimizing the probabilities rather than a surrogate loss, we reproduced thebinary problem of Bach et al. (2006). 
We computed the average training and testing performanceover 10 splits. An example of the training set and the results are presented in Fig. 6.Even though working directly with probabilities solved the non-concavity issue, we still had toexplore all possible cost asymmetries to draw this curve. In particular, if we had been asked tomaximize the true positive rate for a given false positive rate, we would have needed to draw thewhole curve then find the appropriate point.However, expressing the loss directly as a function of the probabilities of choosing each class allowsus to cast this requirement as a constraint and solve the following constrained optimization problem:= arg min1N1Xi=yi=1p(1jxi;)such that1N0Xi=yi=0p(1jxi;)cFP;bIn standard policy learning, we actually consider full rollouts which include not only actions but also statechanges due to these actions.8Published as a conference paper at ICLR 2017Figure 6: Training data ( left) and test ROC curve ( right ) for the binary classification problemfrom Bach et al. (2006). The black dots are obtained when minimizing the log-loss for variousvalues of the cost asymmetry. The red stars correspond to the ROC curve obtained when directlyoptimizing the probabilities. While the former is not concave, a problem already mentioned by Bachet al. (2006), the latter is.withN0(resp.N1) the number of examples belonging to class 0(resp. class 1). Sincep(1jxi;) =1p(0jxi;), we can solve the following Lagrangian problemminmax0L(;) = minmax01N1Xi=yi=1p(1jxi;) +0@11N0Xi=yi=0p(0jxi;)cFP1A:This is an approach proposed by Mozer et al. (2001) who then minimize this function directly. Wecan however replace L(;)with the following upper bound:L(;)1N1Xi=yi=1p(1jxi;)1 + logp(1jxi;)p(1jxi;)+0@11N0Xi=yi=0p(0jxi;)1 + logp(0jxi;)p(0jxi;)cFP1Aand jointly optimize over and. Even though the constraint is on the upper bound and thuswill not be exactly satisfied during the optimization, the increasing tightness of the bound with theconvergence will lead to a satisfied constraint at the end of the optimization. We show in Fig. 7 theobtained false positive rate as a function of the required false positive rate and see that the constraintis close to being perfectly satisfied. One must note, however, that the ROC curve obtained usingthe constrained optimization problems matches that of T= 1, i.e. is not concave. We do not havean explanation as to why the behaviour is not the same when solving the constrained optimizationproblem and when optimizing an asymmetric cost for all values of the asymmetry.ALLOWING UNCERTAINTY IN THE DECISIONLet us consider a cancer detection algorithm which would automatically classify patients in twocategories: healthy or ill. In practice, this algorithm will not be completely accurate and, given thehigh price of a misclassification, we would like to include the possibility for the algorithm to handover the decision to the practitioner. In other words, it needs to include the possibility of being“Undecided”.The standard way of handling this situation is to manually set a threshold on the output of the clas-sifier and, should the maximum score across all classes be below that threshold, deem the exampletoo hard to classify. However, it is generally not obvious how to set the value of that threshold norhow it relates to the quantity we care about, even though some authors provided guidelines ( ?). 
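As an aside, the reward formulation of Eq. 4.1 makes such trade-offs explicit; a toy sketch with a hypothetical "undecided" action and made-up numbers (the formalization follows below):

```python
import numpy as np

r_h = 0.7                            # reward of abstaining: 1 - c_h
probs = np.array([0.6, 0.3, 0.1])    # p(class 0), p(class 1), p(undecided)
y = 0                                # correct class for this input
rewards = np.array([1.0 if k == y else 0.0 for k in range(2)] + [r_h])
print(probs @ rewards)               # 0.67: expected reward of this policy
```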
Thedifficulty is heightened when the prior probabilities of each class are very different.9Published as a conference paper at ICLR 2017Figure 7: Test false positive rate as a function ofthe desired false positive rate cFP. The dottedline representing the optimal behaviour, we cansee that the constraint is close to being satisfied.T= 10 was used.Eq. 4.1 allows us to naturally include an extra “action”, the “Undecided” action, which has its ownreward. This reward should be equal to the reward of choosing the correct class (i.e., 1) minus thecostchof resorting to external interventionc, which is less than 1 since we would otherwise ratherhave an error than be undecided. Let us denote by rh= 1chthe reward obtained when the modelchooses the “Undecided” class. Then, the reward obtained when the input is Xiis:R(yijXi) = 1R(\Undecided00jXi) =rh;and the average under the policy is p(yijXi;) +rhp(\Undecided00jXi;).Learning this model on a training set is equivalent to minimizing the following quantity:= arg min1NXi(p(yijXi;) +rhp(“Undecided”jXi;)): (4.2)For each training example, we have added another example with importance weight rhand class“Undecided”. If we were to solve this problem through a minimization of the log-loss, it is well-known that the optimal solution would be, for each example Xi, to predict yiwith probability1=(1 +rh)and “Undecided” with probability rh=(1 +rh). However, when optimizing the weightedsum of probabilities, the optimal solution is still to predict yiwith probability 1. In other words,adding the “Undecided” class does not change the model if it has enough capacity to learn thetraining set accurately.5 D ISCUSSION AND CONCLUSIONUsing a general class of upper bounds of the expected classification error, we showed how a sequenceof minimizations could lead to reduced classification error rates. However, there are still a lot ofquestions to be answered. As using T > 1increases overfitting, one might wonder whether thestandard regularizers are still adapted. Also, current state-of-the-art models, especially in imageclassification, already use strong regularizers such as dropout. The question remains whether usingT >1with these models would lead to an improvement.Additionally, it makes less and less sense to think of machine learning models in isolation. They areincreasingly often part of large systems and one must think of the proper way of optimizing them inthis setting. The modification proposed here led to an explicit formulation for the true impact of aclassifier. This facilitates the optimization of such a classifier in the context of a larger productionsystem where additional costs and constraints may be readily incorporated. We believe this is acritical venue of research to be explored further.cThis is assuming that the external intervention always leads to the correct decision. Any other setting caneasily be used.10Published as a conference paper at ICLR 2017ACKNOWLEDGMENTSWe thank Francis Bach, L ́eon Bottou, Guillaume Obozinski, and Vianney Perchet for helpful dis-cussions.REFERENCESFrancis R Bach, David Heckerman, and Eric Horvitz. Considering cost asymmetry in learningclassifiers. The Journal of Machine Learning Research , 7:1713–1741, 2006.L ́eon Bottou. Two high stakes challenges in machine learning. http://videolectures.net/icml2015_bottou_machine_learning/ .L ́eon Bottou, Jonas Peters, Joaquin Quinonero-Candela, Denis X Charles, D Max Chickering, ElonPortugaly, Dipankar Ray, Patrice Simard, and Ed Snelson. 
Counterfactual reasoning and learningsystems: The example of computational advertising. Journal of Machine Learning Research , 14(1):3207–3260, 2013.Ronan Collobert, Samy Bengio, and Yoshua Bengio. A parallel mixture of svms for very large scaleproblems. Neural computation , 14(5):1105–1114, 2002.Ronan Collobert, Fabian Sinz, Jason Weston, and L ́eon Bottou. Trading convexity for scalability.InProceedings of the 23rd international conference on Machine learning , pp. 201–208. ACM,2006.S ̧eyda Ertekin, L ́eon Bottou, and C Lee Giles. Nonconvex online support vector machines. PatternAnalysis and Machine Intelligence, IEEE Transactions on , 33(2):368–381, 2011.Gilles Gasso, Aristidis Pappaioannou, Marina Spivak, and L ́eon Bottou. Batch and online learn-ing algorithms for nonconvex neyman-pearson classification. ACM Transactions on IntelligentSystems and Technology , 2(3):28, 2011.Nicolas Le Roux, Mark Schmidt, and Francis Bach. A stochastic gradient method with an expo-nential convergence rate for finite training sets. In Advances in Neural Information ProcessingSystems , pp. 2663–2671, 2012.Michael C Mozer, Robert H Dodier, Michael D Colagrosso, C ́esar Guerra-Salcedo, and Richard HWolniewicz. Prodding the roc curve: Constrained optimization of classifier performance. InNIPS , pp. 1409–1415, 2001.Shameem Puthiya Parambath, Nicolas Usunier, and Yves Grandvalet. Optimizing f-measures bycost-sensitive classification. In Advances in Neural Information Processing Systems , pp. 2123–2131, 2014.Linli Xu, Koby Crammer, and Dale Schuurmans. Robust support vector machine training via convexoutlier ablation. In AAAI , volume 6, pp. 536–542, 2006.Alan L Yuille and Anand Rangarajan. The concave-convex procedure. Neural computation , 15(4):915–936, 2003.11
Under review as a conference paper at ICLR 2017LEARNING EFFICIENT ALGORITHMSWITH HIERARCHICAL ATTENTIVE MEMORYMarcin AndrychowiczGoogle DeepmindKarol KurachGoogle / University of WarsawABSTRACTIn this paper, we propose and investigate a novel memory architecture for neuralnetworks called Hierarchical Attentive Memory (HAM). It is based on a binarytree with leaves corresponding to memory cells. This allows HAM to performmemory access in (logn)complexity, which is a significant improvement overthe standard attention mechanism that requires (n)operations, where nis thesize of the memory. We show that an LSTM network augmented with HAM canlearn algorithms for problems like merging, sorting or binary searching from pureinput-output examples. In particular, it learns to sort nnumbers in time (nlogn)and generalizes well to input sequences much longer than the ones seen during thetraining. We also show that HAM can be trained to act like classic data structures:a stack, a FIFO queue and a priority queue.1 I NTRODeep Recurrent Neural Networks (RNNs) have recently proven to be very successful in real-wordtasks, e.g. machine translation (Sutskever et al., 2014) and computer vision (Vinyals et al., 2014).However, the success has been achieved only on tasks which do not require a large memory tosolve the problem, e.g. we can translate sentences using RNNs, but we cannot produce reasonabletranslations of really long pieces of text, like books.A high-capacity memory is a crucial component necessary to deal with large-scale problems thatcontain plenty of long-range dependencies. Currently used RNNs do not scale well to larger memories,e.g. the number of parameters in an LSTM (Hochreiter & Schmidhuber, 1997) grows quadraticallywith the size of the network’s memory. In practice, this limits the number of used memory cells tofew thousands.It would be desirable for the size of the memory to be independent of the number of model parameters.The first versatile and highly successful architecture with this property was Neural Turing Machine(NTM) proposed by Graves et al. (2014). The main idea behind the NTM is to split the network intoa trainable “controller” and an “external” variable-size memory. It caused an outbreak of other neuralnetwork architectures with external memories (see Sec. 2).However, one aspect which has been usually neglected so far is the efficiency of the memory access.Most of the proposed memory architectures have the (n)access complexity, where nis the size ofthe memory. It means that, for instance, copying a sequence of length nrequires performing (n2)operations, which is clearly unsatisfactory.1.1 O UR CONTRIBUTIONWe propose a novel memory module for neural networks, called Hierarchical Attentive Memory(HAM). The HAM module is generic and can be used as a building block of larger neural architectures.Its crucial property is that it scales well with the memory size — the memory access requires only(logn)operations, where nis the size of the memory. This complexity is achieved by using a newattention mechanism based on a binary tree with leaves corresponding to memory cells. The novelattention mechanism is not only faster than the standard one used in Deep Learning (Bahdanau et al.,2014), but it also facilities learning algorithms due to a built-in bias towards operating on intervals.Equal contribution.1Under review as a conference paper at ICLR 2017We show that an LSTM augmented with HAM is able to learn algorithms for tasks like merging,sorting or binary searching. 
In particular, it is the first neural network, which we are aware of, that isable to learn to sort from pure input-output examples and generalizes well to input sequences muchlonger than the ones seen during the training. Moreover, the learned sorting algorithm runs in time(nlogn). We also show that the HAM memory itself is capable of simulating different classicmemory structures: a stack, a FIFO queue and a priority queue.2 R ELATED WORKIn this section we mention a number of recently proposed neural architectures with an externalmemory, which size is independent of the number of the model parameters.Memory architectures based on attention Attention is a recent but already extremely successfultechnique in Deep Learning. This mechanism allows networks to attend to parts of the (potentiallypreprocessed) input sequence (Bahdanau et al., 2014) while generating the output sequence. It isimplemented by giving the network as an auxiliary input a linear combination of input symbols,where the weights of this linear combination can be controlled by the network. Attention mechanismwas used to access the memory in Neural Turing Machines (NTMs) proposed by Graves et al. (2014).It was the first paper, that explicitly attempted to train a computationally universal neural networkand achieved encouraging results.The Memory Network (Weston et al., 2014) is an early model that attempted to explicitly separatethe memory from computation in a neural network model. The followup work of (Sukhbaatar et al.,2015) combined the memory network with the soft attention mechanism, which allowed it to betrained with less supervision. In contrast to NTMs, the memory in these models is non-writeable.Another model without writeable memory is the Pointer Network (Vinyals et al., 2015), which isvery similar to the attention model of Bahdanau et al. (2014). Despite not having a memory, thismodel was able to solve a number of difficult algorithmic problems, like the Convex Hull and theapproximate 2D TSP.All of the architectures mentioned so far use standard attention mechanisms to access the memoryand therefore memory access complexity scales linearly with the memory size.Memory architectures based on data structures Stack-Augmented Recurrent Neural Network(Joulin & Mikolov, 2015) is a neural architecture combining an RNN and a differentiable stack.Grefenstette et al. (2015) consider extending an LSTM with a stack, a FIFO queue or a double-endedqueue and show some promising results. The advantage of the latter model is that the presented datastructures have a constant access time.Memory architectures based on pointers In two recent papers (Zaremba & Sutskever, 2015;Zaremba et al., 2015) authors consider extending neural networks with nondifferentiable memoriesbased on pointers and trained using Reinforcement Learning. The big advantage of these models isthat they allow a constant time memory access. They were however only successful on relativelysimple tasks.Another model, which use a pointer-based memory and learns sub-procedures is the NeuralProgrammer-Interpreter (Reed & de Freitas, 2015). Unfortunately, it requires strong supervisionin the form of execution traces. Different type of pointer-based memory was presented in NeuralRandom-Access Machine (Kurach et al., 2015), which is a neural architecture mimicking classiccomputers.Parallel memory architectures There are two recent memory architectures, which are especiallysuited for parallel computation. 
Grid-LSTM (Kalchbrenner et al., 2015) is an extension of LSTM tomultiple dimensions. Another recent model of this type is Neural GPU (Kaiser & Sutskever, 2015),which can learn to multiply long binary numbers.2Under review as a conference paper at ICLR 20173 H IERARCHICAL ATTENTIVE MEMORYIn this section we describe our novel memory module called Hierarchical Attentive Memory (HAM).The HAM module is generic and can be used as a building block of larger neural network architectures.For instance, it can be added to feedforward or LSTM networks to extend their capabilities. To makeour description more concrete we will consider a model consisting of an LSTM “controller” extendedwith a HAM module.The high-level idea behind the HAM module is as follows. The memory is structured as a full binarytree with the leaves containing the data stored in the memory. The inner nodes contain some auxiliarydata, which allows us to efficiently perform some types of “queries” on the memory. In order toaccess the memory, one starts from the root of the tree and performs a top-down descent in the tree,which is similar to the hierarchical softmax procedure (Morin & Bengio, 2005). At every node ofthe tree, one decides to go left or right based on the auxiliary data stored in this node and a “query”.Details are provided in the rest of this section.3.1 N OTATIONThe model takes as input a sequence x1;x2;:::and outputs a sequence y1;y2;:::. We assume thateach element of these sequences is a binary vector of size b2N, i.e.xi;yi2f0;1gb. Suppose fora moment that we only want to process input sequences of length n, wheren2Nis a power oftwo (we show later how to process sequences of an arbitrary length). The model is based on the fullbinary tree with nleaves. LetVdenote the set of the nodes in that tree (notice that jVj= 2n1)and letLVdenote the set of its leaves. Let l(e)fore2VnLbe the left child of the node eandletr(e)be its right child. We will now present the inference procedure for the model and then discusshow to train it.3.2 I NFERENCEy1LSTMHAMx1. . . xmy2LSTMHAMy3LSTMHAM. . .Figure 1: The LSTM+HAM model consists ofan LSTM controller and a HAM module. Theexecution of the model starts with the initializa-tion of HAM using the whole input sequencex1;x2;:::;xm. At each timestep, the HAM mod-ule produces an input for the LSTM, which thenproduces an output symbol yt. Afterwards, thehidden states of the LSTM and HAM are updated.The high-level view of the model executionis presented in Fig. 1. The hidden state ofthe model consists of two components: thehidden state of the LSTM controller (denotedhLSTM2Rlfor somel2N) and the hidden val-ues stored in the nodes of the HAM tree. Moreprecisely, for every node e2Vthere is a hiddenvaluehe2Rd. These values change during therecurrent execution of the model, but we dropall timestep indices to simplify the notation.The parameters of the model describe the input-output behaviour of the LSTM, as well as thefollowing 4transformations, which describe theHAM module: EMBED :Rb!Rd,JOIN :RdRd!Rd,SEARCH :RdRl![0;1]andWRITE :RdRl!Rd. These transfor-mations may be represented by arbitrary func-tion approximators, e.g. Multilayer Perceptrons(MLPs). Their meaning will be described soon.The details of the model are presented in 4figures. Fig. 2a describes the initialization of the model.Each recurrent timestep of the model consists of three phases: the attention phase described in Fig. 2b,theoutput phase described in Fig. 2c and the update phase described in Fig. 2d. 
The whole timestepcan be performed in time (logn).The HAM parameters describe only the 4mentioned transformations and hence the number of themodel parameters does not depend on the size of the binary tree used. Thus, we can use the model toprocess the inputs of an arbitrary length by using big enough binary trees. It is not clear that the sameset of parameters will give good results across different tree sizes, but we showed experimentally thatit is indeed the case (see Sec. 4 for more details).3Under review as a conference paper at ICLR 2017h1h2 h3h4 h5 h6 h7h8 h9 h10 h11 h12 h13 h14 h15x1 x2 x3 x4 x5 x6EMBED EMBED EMBED EMBED EMBED EMBEDJOINJOIN JOINJOIN JOIN JOIN JOIN(a) Initialization of the model. The value in the i-thleaf of HAM is initialized with EMBED (xi), whereEMBED is a trainable feed-forward network. If thereare more leaves than input symbols, we initialize thevalues in the excessive leaves with zeros. Then, weinitialize the values in the inner nodes bottom-upusing the formula he=JOIN (hl(e);hr(e)). Thehidden state of the LSTM — hLSTM is initialized withzeros.h1h2 h3h4 h5 h6 h7h8 h9 h10 h11 h12 ha h14 h15SEARCH( h1, hLSTM ) = 0.95SEARCH( h3, hLSTM ) = 0.1SEARCH( h6, hLSTM ) = 1(b)Attention phase. In this phase the model performsa top-down “search” in the tree starting from the root.Suppose that we are currently at the node c2VnL.We compute the value p=SEARCH (hc;hLSTM).Then, with probability pthe model goes right (i.e.c:=r(c)) and with probability 1pit goes left(i.e.c:=l(c)). This procedure is continued untilwe reach one of the leaves. This leaf is called theattended oraccessed leaf and denoted a.ha hLSTMyt(c)Output phase. The value hastored in the at-tended leaf is given to the LSTM as an input. Then,the LSTM produces an output symbol yt2f0;1gb.More precisely, the value u2Rbis computed bya trainable linear transformation from hLSTM andthe distribution of ytis defined by the formulap(yt;i= 1) = sigmoid (ui)for1ib. Itmay be beneficial to allow the model to access thememory a few times between producing each outputsymbols. Therefore, the model produces an outputsymbol only at timesteps with indices divisible bysome constant 2N, which is a hyperparameter.h1h2 h3h4 h5 h6 h7h8 h9 h10 h11 h12 ha h14 h15hLSTMha:= WRITE( ha, hLSTM )JOINJOINJOIN(d)Update phase. In this phase the value inthe attended leaf ais updated. More precisely,the value is modified using the formula ha:=WRITE (ha;hLSTM). Then, we update the valuesof the inner nodes encountered during the attentionphase (h6;h3andh1in the figure) bottom-up usingthe equation he=JOIN (hl(e);hr(e)).Figure 2: The model. One timestep consists of three phases presented in Figures (b)–(d).We decided to represent the transformations defining HAM with MLPs with ReLU (Nair & Hinton,2010) activation function in all neurons except the output layer of SEARCH , which uses sigmoidactivation function to ensure that the output may be interpreted as a probability. Moreover, thenetwork for WRITE is enhanced in a similar way as Highway Networks (Srivastava et al., 2015),i.e.WRITE (ha;hLSTM) =T(ha;hLSTM)H(ha;hLSTM) + (1T(ha;hLSTM))ha, whereHandTare two MLPs with sigmoid activation function in the output layer. This allows the WRITEtransformation to easily leave the value haunchanged.3.3 T RAININGIn this section we describe how to train our model from purely input-output examples using REIN-FORCE (Williams, 1992). 
3.3 TRAINING

In this section we describe how to train our model from purely input-output examples using REINFORCE (Williams, 1992). In Appendix B we also present a different variant of HAM which is fully differentiable and can be trained using end-to-end backpropagation.

Let $(x, y)$ be an input-output pair. Recall that both $x$ and $y$ are sequences. Moreover, let $\theta$ denote the parameters of the model and let $A$ denote the sequence of all decisions whether to go left or right made during the whole execution of the model. We would like to maximize the log-probability of producing the correct output, i.e.

$$\mathcal{L} = \log p(y \mid x, \theta) = \log\Big(\sum_A p(A \mid x, \theta)\, p(y \mid A, x, \theta)\Big).$$

This sum is intractable, so instead of maximizing it directly, we maximize a variational lower bound on it:

$$\mathcal{F} = \sum_A p(A \mid x, \theta)\, \log p(y \mid A, x, \theta) \;\le\; \mathcal{L}.$$

This sum is also intractable, so we approximate its gradient using REINFORCE, which we briefly explain below. Using the identity $\nabla p(A \mid x, \theta) = p(A \mid x, \theta)\, \nabla \log p(A \mid x, \theta)$, the gradient of the lower bound with respect to the model parameters can be rewritten as:

$$\nabla \mathcal{F} = \sum_A p(A \mid x, \theta)\,\Big[\nabla \log p(y \mid A, x, \theta) + \log p(y \mid A, x, \theta)\,\nabla \log p(A \mid x, \theta)\Big] \qquad (1)$$

We estimate this value using a Monte Carlo approximation. For every $x$ we sample $\widetilde{A}$ from $p(A \mid x, \theta)$ and approximate the gradient for the input $x$ as $\nabla \log p(y \mid \widetilde{A}, x, \theta) + \log p(y \mid \widetilde{A}, x, \theta)\,\nabla \log p(\widetilde{A} \mid x, \theta)$.

Notice that this gradient estimate can be computed using normal backpropagation if we substitute the gradients in the nodes¹ which sample whether we should go left or right during the attention phase by

$$\underbrace{\log p(y \mid \widetilde{A}, x, \theta)}_{\text{return}}\; \nabla \log p(\widetilde{A} \mid x, \theta).$$

This term is called the REINFORCE gradient estimate and the left factor is called a return in the Reinforcement Learning literature. This gradient estimator is unbiased, but it often has a high variance. Therefore, we employ two standard variance-reduction techniques for REINFORCE: discounted returns and baselines (Williams, 1992). Discounted returns means that our return at the $t$-th timestep has the form $\sum_{i \ge t} \gamma^{i-t} \log p(y_i \mid \widetilde{A}, x, \theta)$ for some discount constant $\gamma \in [0,1]$, which is a hyperparameter. This biases the estimator if $\gamma < 1$, but it often decreases its variance (a toy sketch of this estimator is given at the end of this section).

For lack of space we do not describe the baselines technique in detail. We only mention that our baseline is case and timestep dependent: it is computed using a learnable linear transformation from $h_{\mathrm{LSTM}}$ and trained using the MSE loss function. The whole model is trained with the Adam (Kingma & Ba, 2014) algorithm. We also employ the following three training techniques:

Different reward function. During our experiments we noticed that better results may be obtained by using a different reward function for REINFORCE. More precisely, instead of the log-probability of producing the correct output, we use the percentage of output bits which have a probability of being predicted correctly (given $\widetilde{A}$) greater than $50\%$, i.e. our discounted return is equal to $\sum_{i \ge t,\, 1 \le j \le b} \gamma^{i-t}\,\big[\,p(y_{i,j} \mid \widetilde{A}, x, \theta) > 0.5\,\big]$. Notice that it corresponds to the Hamming distance between the most probable outcome according to the model (given $\widetilde{A}$) and the correct output.

Entropy bonus term. We add a special term to the cost function which encourages exploration. More precisely, for each sampling node we add to the cost function the term $\alpha / H(p)$, where $H(p)$ is the entropy of the distribution of the decision whether to go left or right in this node, and $\alpha$ is an exponentially decaying coefficient. This term goes to infinity whenever the entropy goes to zero, which ensures some level of exploration. We noticed that this term works better in our experiments than the standard term of the form $-\alpha H(p)$ (Williams, 1992).

Curriculum schedule. We start with training on inputs with lengths sampled uniformly from $[1, n]$ for some $n = 2^k$ and the binary tree with $n$ leaves. Whenever the error drops below some threshold, we increment the value $k$ and start using the bigger tree with $2n$ leaves and inputs with lengths sampled uniformly from $[1, 2n]$.

¹For a general discussion of computing gradients in computation graphs which contain stochastic nodes, see (Schulman et al., 2015).
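To make the estimator concrete, here is a toy JavaScript sketch of the REINFORCE estimate with discounted returns and a baseline. As an illustrative simplification, each left/right decision gets its own logit (in the model the probabilities come from the shared SEARCH network), the reward function is arbitrary, and the baseline is a constant rather than the learned linear function of $h_{\mathrm{LSTM}}$ used in the paper.

```javascript
// Hedged sketch of the REINFORCE estimate for a sequence of binary decisions.
const sigmoid = (z) => 1 / (1 + Math.exp(-z));

function reinforceGrad(theta, returnFn, gamma, baseline) {
  const T = theta.length;
  const a = [], p = [];
  for (let t = 0; t < T; t++) {             // sample a trajectory A~
    p[t] = sigmoid(theta[t]);
    a[t] = Math.random() < p[t] ? 1 : 0;    // 1 = go right
  }
  const r = returnFn(a);                    // per-timestep rewards
  const grad = new Array(T).fill(0);
  for (let t = 0; t < T; t++) {
    let R = 0;                              // discounted return from timestep t
    for (let i = t; i < T; i++) R += Math.pow(gamma, i - t) * r[i];
    // d/dtheta_t log p(a_t) for a Bernoulli decision is (a_t - p_t).
    grad[t] = (R - baseline) * (a[t] - p[t]);
  }
  return grad;
}

// Toy usage: reward 1 whenever the sampled decision is "right".
console.log(reinforceGrad([0.0, 0.5, -0.5], (a) => a.slice(), 0.95, 0.5));
```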
4 EXPERIMENTS

In this section, we evaluate two variants of using the HAM module. The first one is the model described in Sec. 3, which combines an LSTM controller with a HAM module (denoted LSTM+HAM). Then, in Sec. 4.3 we investigate the "raw" HAM (without the LSTM controller) to check its capability of acting as classic data structures: a stack, a FIFO queue and a priority queue. It would also be interesting to get some insight into the algorithms learned by the model. In Appendix A we present an example execution on the Sort task.

4.1 TEST SETUP

For each test that we perform, we apply the following procedure. First, we train the model with memory of size up to $n = 32$ using the curriculum schedule described in Sec. 3.3. The model is trained using the minibatch Adam algorithm with an exponentially decaying learning rate. We use random search to determine the best hyperparameters for the model. We use gradient clipping (Pascanu et al., 2012) with constant 5. The depth of our MLPs is either 1 or 2, the LSTM controller has $l = 20$ memory cells and the hidden values in the tree have dimensionality $d = 20$. The constant determining the number of memory accesses between producing consecutive output symbols (Fig. 2c) is equal to either 1 or 2. We always train for 100 epochs, each consisting of 1000 batches of size 50. After each epoch we evaluate the model on 200 validation batches without learning. When the training is finished, we select the model parameters that gave the lowest error rate on the validation batches and report the error using these parameters on 2,500 fresh random examples.

We report two types of errors: a test error and a generalization error. The test error shows how well the model is able to fit the data distribution and generalize to unknown cases, assuming that cases of similar lengths were shown during the training. It is computed using the HAM memory with $n = 32$ leaves, as the percentage of output sequences which were predicted incorrectly. The lengths of test examples are sampled uniformly from the range $[1, n]$. Notice that we mark the whole output sequence as incorrect even if only one bit was predicted incorrectly; e.g. a hypothetical model predicting each bit incorrectly with probability $1\%$ (and independently of the errors on the other bits) has an error rate of $96\%$ on whole sequences if the outputs consist of 320 bits.

The generalization error shows how well the model performs with enlarged memory on examples with lengths exceeding $n$. We test our model with memory 4 times bigger than the training one. The lengths of input sequences are now sampled uniformly from the range $[2n + 1, 4n]$.

During testing we make our model fully deterministic by using the most probable outcomes instead of stochastic sampling. More precisely, we assume that during the attention phase the model decides to go right iff $p > 0.5$ (Fig. 2b). Moreover, the output symbols (Fig. 2c) are computed by rounding to zero or one instead of sampling.
4.2 LSTM+HAM

We evaluate the model on a number of algorithmic tasks described below:

1. Reverse: Given a sequence of 10-bit vectors, output them in reversed order, i.e. $y_i = x_{m+1-i}$ for $1 \le i \le m$, where $m$ is the length of the input sequence.

2. Search: Given a sequence of pairs $x_i = \mathrm{key}_i \,\|\, \mathrm{value}_i$ for $1 \le i \le m - 1$ sorted by keys and a query $x_m = q$, find the smallest $i$ such that $\mathrm{key}_i = q$ and output $y_1 = \mathrm{value}_i$. Keys and values are 5-bit vectors and keys are compared lexicographically. The LSTM+HAM model is given only two timesteps per output symbol to solve this problem, which forces it to use a form of binary search.

3. Merge: Given two sorted sequences of pairs — $(p_1, v_1), \ldots, (p_m, v_m)$ and $(p'_1, v'_1), \ldots, (p'_{m'}, v'_{m'})$, where $p_i, p'_i \in [0,1]$ and $v_i, v'_i \in \{0,1\}^5$ — merge them. Pairs are compared according to their priorities, i.e. the values $p_i$ and $p'_i$. Priorities are unique and sampled uniformly from the set $\{\frac{1}{300}, \ldots, \frac{300}{300}\}$, because neural networks cannot easily distinguish two real numbers which are very close to each other. The input is encoded as $x_i = p_i \,\|\, v_i$ for $1 \le i \le m$ and $x_{m+i} = p'_i \,\|\, v'_i$ for $1 \le i \le m'$. The output consists of the vectors $v_i$ and $v'_i$ sorted according to their priorities².

4. Sort: Given a sequence of pairs $x_i = \mathrm{key}_i \,\|\, \mathrm{value}_i$, sort them in a stable way³ according to the lexicographic order of the keys. Keys and values are 5-bit vectors.

5. Add: Given two numbers represented in binary, compute their sum. The input is represented as $a_1, \ldots, a_m, +, b_1, \ldots, b_m, =$ (i.e. $x_1 = a_1$, $x_2 = a_2$ and so on), where $a_1, \ldots, a_m$ and $b_1, \ldots, b_m$ are the bits of the input numbers and $+$, $=$ are special symbols. Input and output numbers are encoded starting from the least significant bits.

Every example output shown during the training is finished with a special "End Of Output" symbol, which the model learns to predict. It forces the model to learn not only the output symbols, but also the length of the correct output.

We compare our model with two strong baseline models: an encoder-decoder LSTM (Sutskever et al., 2014) and an encoder-decoder LSTM with attention (Bahdanau et al., 2014), denoted LSTM+A. The number of LSTM cells in the baselines was chosen in such a way that they have more parameters than the biggest of our models. We also use random search to select an optimal learning rate and some other parameters for the baselines, and train them using the same curriculum scheme as LSTM+HAM.

Table 1: Experimental results. The upper table presents the error rates on inputs of the same lengths as the ones used during training. The lower table shows the error rates on input sequences 2 to 4 times longer than the ones encountered during training. LSTM+A denotes an LSTM with the standard attention mechanism. Each error rate is a percentage of output sequences which contained at least one incorrectly predicted bit.

test error | LSTM | LSTM+A | LSTM+HAM
Reverse    | 73%  | 0%     | 0%
Search     | 62%  | 0.04%  | 0.12%
Merge      | 88%  | 16%    | 0%
Sort       | 99%  | 25%    | 0.04%
Add        | 39%  | 0%     | 0%

2-4x longer inputs | LSTM | LSTM+A | LSTM+HAM
Reverse            | 100% | 100%   | 0%
Search             | 89%  | 0.52%  | 1.68%
Merge              | 100% | 100%   | 2.48%
Sort               | 100% | 100%   | 0.24%
Add                | 100% | 100%   | 100%
Complexity         | Θ(1) | Θ(n)   | Θ(log n)

The results are presented in Table 1. Not only does LSTM+HAM solve all the problems almost perfectly, but it also generalizes very well to much longer inputs on all problems except Add. Recall that for the generalization tests we used a HAM memory of a different size than the ones used during the training, which shows that HAM generalizes very well to new sizes of the binary tree.
We find this fact quite interesting,because it means that parameters learned froma small neural network (i.e. HAM based on atree with 32leaves) can be successfully used ina different, bigger network (i.e. HAM with 128memory cells).In comparison, the LSTM with attention doesnot learn to merge, nor sort. It also completelyfails to generalize to longer examples, whichshows that LSTM+A learns rather some statis-tical dependencies between inputs and outputsthan the real algorithms.The LSTM+HAM model makes a few errorswhen testing on longer outputs than the onesencountered during the training. Notice how-ever, that we show in the table the percentageof output sequences, which contain at least oneincorrect bit. For instance, LSTM+HAM on theproblem Merge predicts incorrectly only 0:03% of output bits, which corresponds to 2:48% ofincorrect output sequences. We believe that these rare mistakes could be avoided if one trained themodel longer and chose carefully the learning rate schedule. One more way to boost generalizationwould be to simultaneously train the models with different memory sizes and shared parameters. Wehave not tried this as the generalization properties of the model were already very good.2Notice that we earlier assumed for the sake of simplicity that the input sequences consist of binary vectorsand in this task the priorities are realvalues. It does not however require any change of our model. We decidedto use real priorities in this task in order to diversify our set of problems.3Stability means that pairs with equal keys should be ordered accordingly to their order in the input sequence.7Under review as a conference paper at ICLR 20174.3 R AWHAMIn this section, we evaluate “raw” HAM module (without the LSTM controller) to see if it can actas a drop-in replacement for 3classic data structures: a stack, a FIFO queue and a priority queue.For each task, the network is given a sequence of PUSH and POP operations in an online manner: attimesteptthe network sees only the t-th operation to perform xt. This is a more realistic scenario fordata structures usage as it prevents the network from cheating by peeking into the future.Raw HAM module differs from the LSTM+HAM model from Sec. 3 in the following way:The HAM memory is initialized with zeros.Thet-th output symbol ytis computed using an MLP from the value in the accessed leaf ha.Notice that in the LSTM+HAM model, hLSTM acted as a kind of “query” or “command”guiding the behaviour of HAM. We will now use the values xtinstead. Therefore, atthet-th timestep we use xtinstead ofhLSTM wheneverhLSTM was used in the originalmodel, e.g. during the attention phase (Fig. 2b) we use p=SEARCH (hc;xt)instead ofp=SEARCH (hc;hLSTM).We evaluate raw HAM on the following tasks:1.Stack : The “PUSH x” operation places the element x(a5-bit vector) on top of the stack,and the “POP” returns the last added element and removes it from the stack.2.Queue : The “PUSH x” operation places the element x(a5-bit vector) at the end of thequeue and the “POP” returns the oldest element and removes it from the queue.3.PriorityQueue : The “PUSH x p” operations adds the element xwith priority ptothe queue. The “POP” operation returns the value with the highest priority and remove itfrom the queue. Both xandpare represented as 5-bit vectors and priorities are comparedlexicographically. To avoid ties we assume that all elements have different priorities.Table 2: Results of experiments with the raw ver-sion of HAM (without the LSTM controller). 
Errorrates are measured as a percentage of operation se-quences in which at least one POP query was notanswered correctly.Task Test ErrorGeneralizationErrorStack 0% 0%Queue 0% 0%PriorityQueue0:08% 0:2%Model was trained with the memory of size upton= 32 with operation sequences of lengthn. Sequences of PUSH/POP actions for train-ing were selected randomly. The t-th operationout ofnoperations in the sequence was POPwith probabilitytnand PUSH otherwise. To testgeneralization, we report the error rates with thememory of size 4non sequences of operationsof length 4n.The results presented in Table 2 show that HAMsimulates a stack and a queue perfectly with noerrors whatsoever even for memory 4times big-ger. For the PriorityQueue task, the modelgeneralizes almost perfectly to large memory, with errors only in 0:2% of output sequences.5 C OMPARISON TO OTHER MODELSAs far as we know, our model is the first one which is able to learn a sorting algorithm from pureinput-output examples. Although this problem was considered in the original NTM paper, the errorrate achieved by the NTM is in fact quite high – the log-likelihood of the correct output was equalaround 20bits on outputs consisting of 128bits. In comparison our model learns to solve almostperfectly - only 0:04% of the outputs produced by our model contain at least one incorrect bit.Reed & de Freitas (2015) shown that an LSTM is able to learn to sort short sequences, but it failsto generalize to inputs longer than the ones seen during the training. It is quite clear that an LSTMcannot learn a “real” sorting algorithm, because it uses a bounded memory independent of the lengthof the input. The Neural Programmer-Interpreter (Reed & de Freitas, 2015) is a neural networkarchitecture, which is able to learn bubble sort, but it requires strong supervision in the form of8Under review as a conference paper at ICLR 2017execution traces. In comparison, our model can be trained from pure input-output examples, which iscrucial if we want to use it to solve problems for which we do not know any algorithms.An important feature of neural memories is their efficiency. Our HAM module in comparison tomany other recently proposed solutions is effective and allows to access the memory in (log(n))complexity. In the context of learning algorithms it may sound surprising that among all thearchitectures mentioned in Sec. 2 the only ones, which can copy a sequence of length nwithout(n2)operations are: Reinforcement-Learning NTM (Zaremba & Sutskever, 2015), the model from(Zaremba et al., 2015), Neural Random-Access Machine (Kurach et al., 2015), and Queue-AugmentedLSTM (Grefenstette et al., 2015). However, the first three models have been only successful onrelatively simple tasks. The last model was successful on some synthetic tasks from the domain ofNatural Language Processing, which are very different from the tasks we tested our model on, so wecannot directly compare the two models.6 C ONCLUSIONSWe presented a new memory architecture for neural networks called Hierarchical Attentive Memory.Its crucial property is that it scales well with the memory size — the memory access requires only(logn)operations. This complexity is achieved using a new attention mechanism based on a binarytree. The model proved to be successful on a number of algorithmic problems. The future workis to apply this or similar architecture to very long real-world sequential data like books or DNAsequences.REFERENCESDzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 
Neural machine translation by jointlylearning to align and translate. arXiv preprint arXiv:1409.0473 , 2014.Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprintarXiv:1410.5401 , 2014.Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning totransduce with unbounded memory. In Advances in Neural Information Processing Systems , pp.1819–1827, 2015.Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation , 9(8):1735–1780, 1997.Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrentnets. arXiv preprint arXiv:1503.01007 , 2015.Łukasz Kaiser and Ilya Sutskever. Neural gpus learn algorithms. arXiv preprint arXiv:1511.08228 ,2015.Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. Grid long short-term memory. arXiv preprintarXiv:1507.01526 , 2015.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprintarXiv:1412.6980 , 2014.Karol Kurach, Marcin Andrychowicz, and Ilya Sutskever. Neural random-access machines. arXivpreprint arXiv:1511.06392 , 2015.Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neuralnetworks. arXiv preprint arXiv:1511.05493 , 2015.Frederic Morin and Yoshua Bengio. Hierarchical probabilistic neural network language model. InAistats , volume 5, pp. 246–252. Citeseer, 2005.Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. InProceedings of the 27th International Conference on Machine Learning (ICML-10) , pp. 807–814,2010.9Under review as a conference paper at ICLR 2017Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. Understanding the exploding gradient problem.Computing Research Repository (CoRR) abs/1211.5063 , 2012.Scott Reed and Nando de Freitas. Neural programmer-interpreters. arXiv preprint arXiv:1511.06279 ,2015.John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient estimation usingstochastic computation graphs. In Advances in Neural Information Processing Systems , pp. 3510–3522, 2015.Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv preprintarXiv:1505.00387 , 2015.Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks.arXiv preprint arXiv:1503.08895 , 2015.Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. Sequence to sequence learning with neural networks.InAdvances in neural information processing systems , pp. 3104–3112, 2014.Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural imagecaption generator. arXiv preprint arXiv:1411.4555 , 2014.Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. arXiv preprintarXiv:1506.03134 , 2015.Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprintarXiv:1410.3916 , 2014.Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcementlearning. Machine learning , 8(3-4):229–256, 1992.Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. arXiv preprintarXiv:1505.00521 , 2015.Wojciech Zaremba, Tomas Mikolov, Armand Joulin, and Rob Fergus. Learning simple algorithmsfrom examples. 
arXiv preprint arXiv:1511.07275 , 2015.10Under review as a conference paper at ICLR 2017A E XAMPLE : HAM SORTINGWe present some insights into the algorithms learned by the LSTM+HAM model, by investigating thehidden representations helearned for a variant of the problem Sort in which we sort 4-bit vectorslexicographically4. For demonstration purposes, we use a small tree with n= 8 leaves and eachnode’s hidden state has size d= 6values.The trained network performs sorting perfectly. It attends to the leaves in the order corresponding tothe order of the sorted input values, i.e. at every timestep HAM attends to the leaf corresponding tothe smallest input value among the leaves, which have not been attended so far.It would be interesting to exactly understand the algorithm used by the network to perform thisoperation. A natural solution to this problem would be to store in each hidden node ethe smallestinput value among the (unattended so far) leaves belowetogether with the information whether thesmallest value is in the right or the left subtree under e.In the Fig. 3 we present two timesteps of our model. The LSTM controller is not presented to simplifythe exposition. The input sequence is presented on the left, below the tree: x1=0000;x2=1110;x3=1101 and so on. The 2x3grids in the nodes of the tree represent the values he2R6.White cells correspond to value 0and non-white cells correspond to values >0.The lower-rightmost cells are presented in pink, because we managed to decipher the meaning of thiscoordinate for the inner nodes. This coordinate in the node edenotes whether the minimum in thesubtree (among the values unattended so far) is in the right or left subtree of e. Value greater than 0(pink in the picture) means that the minimum is in the right subtree and therefore we should go rightwhile visiting this node in the attention phase.In the first timestep the leftmost leaf (corresponding to the input 0000 ) is accessed. Notice that thelast coordinates (shown in pink) are updated appropriately, e.g. the smallest unattended value at thebeginning of the second timestep is 0101 , which corresponds to the 6-th leaf. It is in the right subtreeunder the root and accordingly the last coordinate in the hidden value stored in the root is high (i.e.pink in the figure).(a) The first timestep (b) The second timestepFigure 3: An exemplary input sequence and the state of HAM after initialization (left) and after firsttimestep (right).4In the problem Sort considered in the experimental results, there are separate keys and values, whichforces the model to learn stable sorting. Here, for the sake of simplicity, we consider the simplified version ofthe problem and do not use separate keys and values.11Under review as a conference paper at ICLR 2017B U SING SOFT ATTENTIONOne of the open questions in the area of designing neural networks with attention mechanisms iswhether to use a softorhard attention. The model described in the paper belongs to the latter class ofattention mechanisms as it makes hard, stochastic choices. The other solution would be to use a soft,differentiable mechanism, which attends to a linear combination of the potential attention targetsand do not involve any sampling. The main advantage of such models is that their gradients can becomputed exactly.We now describe how to modify the model to make it fully differentiable ("DHAM"). Recall that inthe original model the leaf which is attended at every timestep is sampled stochastically. 
Instead of that, we will now at every timestep compute for every leaf $e$ the probability $p(e)$ that this leaf would be attended if we used the stochastic procedure described in Fig. 2b. The value $p(e)$ can be computed by multiplying the probabilities of choosing the appropriate direction at all the nodes on the path from the root to $e$.

As the input for the LSTM we then use the value $\sum_{e \in L} p(e)\, h_e$. During the write phase, we update the values of all the leaves using the formula $h_e := p(e)\,\mathrm{WRITE}(h_e, h_{\mathrm{ROOT}}) + (1 - p(e))\, h_e$. Then, in the update phase we update the values of all the inner nodes so that the equation $h_e = \mathrm{JOIN}(h_{l(e)}, h_{r(e)})$ is satisfied for each inner node $e$. Notice that one timestep of the soft version of the model takes time $\Theta(n)$, as we have to update the values of all the nodes in the tree. Our model may be seen as a special case of a Gated Graph Neural Network (Li et al., 2015).

This version of the model is fully differentiable and therefore can be trained using end-to-end backpropagation on the log-probability of producing the correct output. We observed that training DHAM is slightly easier than the REINFORCE version. However, DHAM does not generalize as well as HAM to larger memory sizes. A minimal sketch of the soft read appears below.
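The following JavaScript sketch computes the leaf probabilities $p(e)$ and the soft read $\sum_{e \in L} p(e)\, h_e$ under the same toy heap layout and placeholder SEARCH as in the earlier sketch; it is an illustration of the mechanism, not the paper's implementation.

```javascript
// Hedged sketch of the soft (DHAM) read: leaf probabilities are products of
// branching probabilities along the root-to-leaf paths.
function softRead(h, n, query, SEARCH) {
  const prob = new Array(2 * n).fill(0);
  prob[1] = 1;                                   // the root is reached w.p. 1
  for (let e = 1; e < n; e++) {                  // push probability mass down
    const p = SEARCH(h[e], query);               // probability of going right
    prob[2 * e]     += prob[e] * (1 - p);
    prob[2 * e + 1] += prob[e] * p;
  }
  const read = new Array(h[n].length).fill(0);   // sum over leaves of p(e)*h_e
  for (let e = n; e < 2 * n; e++)
    for (let i = 0; i < read.length; i++) read[i] += prob[e] * h[e][i];
  return { prob: prob.slice(n), read };          // Theta(n) work per timestep
}

// Example (reusing initMemory and SEARCH from the earlier sketch):
// const { read } = softRead(initMemory([[1, 0, 1, 0]], 4), 4, [0.1, 0, 0, 0], SEARCH);
```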
r1xUYDYgg
Under review as a conference paper at ICLR 2017DEVELOPMENT OF JAVASCRIPT -BASED DEEP LEARN -ING PLATFORM AND APPLICATION TO DISTRIBUTEDTRAININGMasatoshi Hidaka, Ken Miura & Tatsuya HaradaDepartment of Information Science and TechnologyThe University of Tokyo7-3-1, Hongo, Bunkyo-ku, Tokyo, Japanfhidaka,miura,harada g@mi.t.u-tokyo.ac.jpABSTRACTDeep learning is increasingly attracting attention for processing big data. Exist-ing frameworks for deep learning must be set up to specialized computer systems.Gaining sufficient computing resources therefore entails high costs of deploymentand maintenance. In this work, we implement a matrix library and deep learningframework that uses JavaScript. It can run on web browsers operating on ordi-nary personal computers and smartphones. Using JavaScript, deep learning canbe accomplished in widely diverse environments without the necessity for soft-ware installation. Using GPGPU from WebCL framework, our framework cantrain large scale convolutional neural networks such as VGGNet and ResNet. Inthe experiments, we demonstrate their practicality by training VGGNet in a dis-tributed manner using web browsers as the client.1 I NTRODUCTIONRecently, machine learning, which uses big data derived from user activity on websites, images andvideos is increasingly getting attention. Deep learning is at the center of that attention. Conven-tional machine learning techniques have required hand-crafted features specialized to a particulardomain such as image or voice. In contrast, deep learning has a hugely important benefit that canillustrate data flow from raw data to an objective value in a single neural network and can train thor-oughly using those data. In the computer vision domain, a team of Hinton ( Krizhevsky et al. ,2012 )achieved outstanding classification accuracy using deep learning in an object classification competi-tion ILSVRC2012 ( Russakovsky et al. ,2015 ). In the subsequent years’ competitions, deep-learning-based methods evolved continually and exhibited superior performance ( Simonyan & Zisserman ,2014a ;Szegedy et al. ,2014 ;He et al. ,2016 ). Convolutional neural networks (CNNs) trained forILSVRC object classification are helpful for improving classification accuracy for scene recognitionand video recognition by functioning as a feature extractor or being fine-tuned ( Zhou et al. ,2014 ;Simonyan & Zisserman ,2014b ). Moreover, application is beginning to emerge in other areas suchas medical imaging ( Tajbakhsh et al. ,2016 ). Software platforms for deep learning are expected toplay an important role in accelerating a wide range of research efforts and applications.Although deep learning achieved significant recognition accuracy that cannot be achieved usingconventional methods, the number of parameters that can be trained is greater, resulting in requestsfor huge amounts of training data. This shortcoming not only increases data collection costs butalso increases computational costs of training larger parameters with larger data. Moreover, trial-and-error must be undertaken to ascertain a good neural network structure; thereby higher costsbecome necessary. What resolved this computational cost difficulty and enabled deep learning towork on a practical scale problem is general purpose computing on GPU (GPGPU) technology,which offers rapid matrix calculation. However, a deep learning framework must be set up ona dedicated computer. 
If a user wants to train a huge network, then a cluster computing systemthat uses MPI or Hadoop must be used for collaboration of multiple computers to obtain largerworking memory and computational speed. To set up and maintain these systems generally presents1Under review as a conference paper at ICLR 2017an expensive task. For that reason, such systems are available only to expert IT companies orlaboratories.This work specifically examines JavaScript, the programming language that runs on web browsersinstalled on ordinary personal computers and smartphones. With the recent advancement of webtechnology, JavaScript became the standard programming language to implement rich applicationson web browsers. Word processors provided by Google and Microsoft are the popular examples.Those applications are traditionally implemented as native applications. This is not only a changeof programming language; it brings an advantage of install-free convenience. Moreover, the com-munication features of web browsers are used not only during the loading of the application, butare also used by the application on demand, using so-called Ajax technology. For example, usingthis technology with a Google service spreadsheet, modifications made by one user are shown inreal time on other users’ displays. By making full use of this technology, collaboration of an ap-plication running on web browsers across the internet becomes possible. Moreover, web browserssuch as Google Chrome run not only on Windows, but also on Mac OS X, Linux, Android, and iOSsmartphones. They provide a compatible JavaScript executing environment. More recently, a smallmicrocontroller board for prototyping Internet of Things (IoT) devices runs Linux. JavaScript canrun on these devices. However, JavaScript is rarely used for scientific computation. This is mainlybecause JavaScript assumes single-threaded execution. It has no fast matrix computation library,which is crucially important for scientific computation. To resolve this difficulty, our previous workproposed the fast matrix computation library, which uses a parallel computing platform, WebCL,from JavaScript ( Miura et al. ,2015 ). In WebCL, GPGPU can be utilized from JavaScript code.Moreover, its application to deep learning is proposed ( Miura & Harada ,2015 ). However, existingimplementations cannot fully exploit the functionality of JavaScript and WebCL. For that reason,only a small six-layer CNN for classifying CIFAR-10 ( Krizhevsky ,2009 ) dataset can be trained. Inthis work, our objective is to provide a deep learning platform that can train practical large-scaleCNN as large as VGGNet. In the Experiment section, we present preliminary results on trainingVGGNet by distributed computation using web browsers as the computation client. In the followingsection, we restrict our description to CNN only, but our system is applicable to neural networks ofother kinds by implementing the layers that they need.Our contributions are the following:We implemented the fastest matrix library and deep learning library that can run on webbrowsers using GPGPU. 
The source code is provided as open-source software1.Even where GPGPU cannot be used, native JavaScript implementation is provided, whichallows high-level multi-dimensional matrix operation.We describe the possibility of training large scale CNN in a distributed manner withoutinstalling software in computation nodes, except for a generic plugin.2 R ELATED WORKIn this section, we first describe the studies related to distributed computing using generic comput-ers that are not designed for scientific computing. The SETI@home project searches for extrater-restrial life ( Anderson et al. ,2002 ). In that research effort, radio waves analyses were performeddistributedly on computers of volunteers. Although dedicated software had to be installed, morethan 3 million computers participated in the project and contributed vast amounts of computationalresources. Merelo-Guervos et al. (2008 );Klein & Spector (2007 ) distributedly computed geneticalgorithm (GA) using web browsers as computing nodes. The main component of GA was calcula-tion of the fitness of population, which could be computed completely in parallel, thereby achievingextremely effective distributed computing. In our work, the main task to be distributed is deep learn-ing, for which a large amount of weight parameters must be communicated frequently. Therefore,the communication efficiency becomes important.Secondly, we explain distributed computing of deep learning. Dean et al. (2012 ) proposed a mech-anism called DistBelief, which divides a neural network into multiple blocks of neurons and trainseach block in a different computer. Large amounts of data are transferred at the division borders.1Download code from https://github.com/mil-tokyo2Under review as a conference paper at ICLR 2017They require n-to-n communication, which is unsuitable for environment in which computing nodesare not in the same LAN. deeplearning4j2provides distributed computing of deep learning frame-work that runs on the distributed computing Hadoop. However, Hadoop must be installed in allcomputing nodes, thereby imposing high deployment and maintenance costs. Meeds et al. (2014 )developed a distributed deep learning system using web browsers. However, it is implemented innative JavaScript. For that reason, training with a large-scale dataset is nearly impossible because ofthe computational speed. In this work, we inherit the good properties of a JavaScript (web browser)based computing environment, with the aim of making training of practical CNN possible.3 M ATRIX LIBRARY IMPLEMENTATIONIn this section, we describe the fast and generic matrix library “Sushi2”, which is based on previouslibrary “Sushi.” They are using WebCL technology, which is a parallel computing platform to beused from JavaScript. WebCL is a JavaScript wrapper for parallel computing platform OpenCL,standardized by Khronos Group, which provides a unified interface to multi-core CPU and GPGPU.In contrast to NVIDIA CUDA, GPUs from AMD and Intel can also be used as accelerators. Un-fortunately, WebCL is not built-in feature of web browsers, but there is an add-on for Firefox andWebCL-integrated Chromium. Our library also works with node.js (server-side JavaScript execu-tion environment), in which node-opencl3library can be used to accelerate computation. AlthoughSushi2 performs best in a WebCL environment, most functions have equivalent native JavaScriptimplementation. 
Sushi2 currently uses WebCL for the acceleration of numerical calculation, but it is possible to use other solutions, including WebGL or asm.js, by substituting the implementation of matrix manipulation. In WebCL, a "kernel" is the function that runs on the GPGPU. A kernel, which is written in the C language, must be compiled before use. Sushi2 wraps kernels so that users can write simple code. Details of the low-level WebCL operations are available in the literature (Miura et al., 2015).

Though Sushi achieved efficient calculation on GPGPU, it currently lacks support for large-scale neural networks that require matrices of large dimensions. Sushi2 was developed to overcome the problems that Sushi has been facing and achieves the following benefits:

- Use simple and efficient data structures to achieve good performance.
- Allow users to understand how to use it easily.
- Support CPU (native JavaScript) and GPGPU matrices without burdening ordinary users with learning WebCL programming.

Most general-purpose matrix libraries for JavaScript represent a multi-dimensional matrix with a nested JavaScript array. In contrast, Sushi2 represents a matrix with a TypedArray, which is used for transferring numeric data between the CPU and GPGPU. A TypedArray is a one-dimensional numeric array with fixed size and bit width at construction, as in arrays of the C language. The array accommodates efficient storage and manipulation of large data. The TypedArray which stores 32-bit floating point numbers is named Float32Array and the one that stores 8-bit unsigned integers is named Uint8Array. The numeric type of JavaScript is a 64-bit floating point number, but some WebCL environments do not support it. Therefore, the basic numeric type of a matrix is a 32-bit floating point number. However, the precision of a 32-bit floating point number is only 23 bits, so it cannot be used as an index of a large matrix (one with more than 2^23 elements). This is a problem for functions such as argmax, so a 32-bit signed integer matrix is also implemented. Moreover, an 8-bit unsigned integer matrix for raw image data and a logical matrix for Boolean operations are implemented.

Functions for operating on matrices are designed to be similar to those of MATLAB, which allows new users to use Sushi2 quickly. Operations for matrices that have more than two dimensions are implemented. It is a simple matter to operate on color images and sets of color images (four-dimensional matrices). Almost all patterns for indexing operations in MATLAB are implemented. For import or export of a matrix, the efficient binary format of numpy⁴ is implemented as well as the native JavaScript nested Array.

²http://deeplearning4j.org
³https://github.com/mikeseven/node-opencl
⁴http://docs.scipy.org/doc/numpy/neps/npy-format.html

Table 1: Speed of matrix calculation. Time [ms] to process each task is shown.
Task1: Addition of 1000x1000 matrix and 1000x1000 matrix
Task2: Take element-wise logarithm of 1000x1000 matrix
Task3: Multiplication of 1000x100 and 100x10 matrices
Task4: Multiplication of 1000x1000 and 1000x1000 matrices

Environment | Library               | Task1 | Task2 | Task3 | Task4
Firefox     | Sushi2 + WebCL (Ours) | 15.6  | 12.8  | 33.6  | 62.4
            | Sushi2 (Ours)         | 1.8   | 39.0  | 2.4   | 1897.8
            | Sylvester             | 49.0  | 64.6  | 3.8   | 9438.6
            | Math.js               | 36.2  | 503.4 | 16.0  | 23321.0
node.js     | Sushi2 + WebCL (Ours) | 4.0   | 14.0  | 3.8   | 5.2
            | Sushi2 (Ours)         | 1.8   | 26.4  | 2.0   | 1891.0
            | Sylvester             | 38.0  | 52.4  | 3.2   | 7102.8
            | Math.js               | 53.8  | 679.2 | 19.8  | 57588.6

The function $M.gpuArray transfers a matrix to the GPGPU. In functions that support WebCL, operations on matrices in GPGPU are accelerated; a short usage sketch follows.
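The following sketch is composed only from $M calls that appear in this paper ($M.typedarray2mat, $M.gpuArray, $M.mtimes); the 'single' klass string and the assumption that $M.gpuArray returns the GPGPU-backed matrix are inferred from Figures 2 and 3 and should be checked against the library, so treat this as illustrative rather than normative.

```javascript
// Hedged sketch of moving a matrix multiplication to GPGPU with Sushi2.
var a = $M.typedarray2mat([1000, 1000], 'single', new Float32Array(1000 * 1000));
var b = $M.typedarray2mat([1000, 1000], 'single', new Float32Array(1000 * 1000));
var a_gpu = $M.gpuArray(a);           // transfer to GPGPU
var b_gpu = $M.gpuArray(b);
var c_gpu = $M.mtimes(a_gpu, b_gpu);  // runs as a WebCL kernel (Task 4 above)
```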
In JavaScript, unused memory is released by garbage collection,but this is not applied for memory allocated on the GPGPU by WebCL. It has to be released byexplicitly calling the destruct method. To make programming convenient, an “autodestruct” helperfunction is supplied. When the closure passed to autodestruct finishes, the matrices allocated in itare released automatically. Figure 1presents a sample implementation of a fully-connected layer ofCNN. Whether input matrices are on GPGPU or not, they can be processed in the same code.1vartop = $M.autodestruct( function ()f// closure function2 varproduct = $M.mtimes($M.t(weight), data); // weight’ data (No operator overloads inJavaScript)3 varbias repeated = $M.repmat(bias, 1, $M.size(data, 2)); //$M.size(data, 2) is the number ofsamples4 varproduct with bias = $M.plus(product, bias repeated); // product + bias repeated5 return product with bias;6g);// allocated matrices other than product with bias (e.g. $M.t(weight), product, bias repeated) arereleased hereFigure 1: Example of forward calculation of fully-connected layer using Sushi2Most GPGPU kernels are implemented originally for Sushi2, but matrix multiplication kernel isported from clBLAS’s5“sgemm”, because it requires advanced optimization.Table 1presents a speed comparison between our library and existing JavaScript based matrix li-braries; Sylvester6and Math.js7. The hardware environment is on Table 2(AMD). When GPGPUis used, the time includes data transfer between the CPU and GPGPU. Task 1 represents simpleelement-wise task. Task 2 represents relatively expensive element-wise task. Task 3 and 4 are ma-trix multiplication task; the complexity of operations is greater than the number of elements. Ourmatrix representation (TypedArray) seems to be better than native JavaScript Array used in other li-braries, even without WebCL. 
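The representation difference can be made concrete with a small sketch of our own: a flat TypedArray with explicit column-major (Fortran-order) index arithmetic, as hinted at in the comment in Fig. 3, versus a nested Array; the index formula here is an illustrative assumption about the layout, not Sushi2's internal code.

```javascript
// Illustration of the flat TypedArray layout versus a nested JavaScript Array.
var rows = 3, cols = 2;
var flat = new Float32Array(rows * cols);          // one contiguous buffer
function get(m, r, c) { return m[r + c * rows]; }  // column-major offset
flat[1 + 1 * rows] = 42;                           // write element (1, 1)
console.log(get(flat, 1, 1));                      // 42

var nested = [[0, 0], [0, 0], [0, 0]];             // one object per row:
nested[1][1] = 42;                                 // extra indirection, GC load
```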
We can see clear superiority of using GPGPU when the computationalcost is high.4 D EEPLEARNING LIBRARY IMPLEMENTATIONIn this section, we describe deep learning library “Sukiyaki2”, which is based on matrix librarySushi2.5https://github.com/clMathLibraries/clBLAS6http://sylvester.jcoglan.com/7http://mathjs.org/4Under review as a conference paper at ICLR 2017BlobData(train,test)Convolu4onPoolingReLULinearSo;maxAccuracylabeldataconv1pool1relu1pred1[f”type”: ”blob data”, ”name”: ”d train”, ”inputs”: [”batch”], ”outputs”: [”data”,”label”], ”params”: f”data shape”: [28, 28, 1], ”file prefix”: ”mnist train”,”data klass”: ”single” g, ”phase”: [”train”] g,2f”type”: ”blob data”, ”name”: ”d test”, ”inputs”: [”batch”], ”outputs”: [”data”,”label”], ”params”: f”data shape”: [28, 28, 1], ”file prefix”: ”mnist test”,”data klass”: ”single” g, ”phase”: [”test”] g,3f”type”: ”convolution 2d”, ”name”: ”conv1”, ”inputs”: [”data”], ”outputs”: [”conv1”], ”params”: f”out size”: 20, ”stride”: 1, ”pad”: 0, ”in size”: 1, ”ksize”: 5 gg,4f”type”: ”pooling 2d”, ”name”: ”pool1”, ”inputs”: [”conv1”], ”outputs”: [”pool1”], ”params”: f”stride”: 2, ”pad”: 0, ”type”: ”max”, ”ksize”: 2 gg,5f”type”: ”relu”, ”name”: ”relu3”, ”inputs”: [”pool1”], ”outputs”: [”relu1”], ”params”: fgg,6f”type”: ”linear”, ”name”: ”fc3”, ”inputs”: [”relu1”], ”outputs”: [”pred”], ”params”: f”out size”: 10, ”in shape”: [12, 12, 20] gg,7f”type”: ”softmax cross entropy”, ”name”: ”loss”, ”inputs”: [”pred”, ”label”],”outputs”: [”loss”], ”params”: fgg,8f”type”: ”accuracy”, ”name”: ”acc”, ”inputs”: [”pred”, ”label”], ”outputs”: [”accuracy”], ”params”: fg, ”phase”: [”test”] g]Figure 2: Sample of a neural network and corresponding definition file.1 varimagedata = canvas context.getImageData(0, 0, 28, 28); // getpixel data from canvas2 varimage = $M.typedarray2mat([4, 28, 28], ’uint8’, newUint8Array(imagedata.data)); // convert to matrix withspecifying channel, width, height (in fortran order)3 image = image.get(1, $M.colon(), $M.colon()); // extract singlecolor channel (image(1, :, :) in MATLAB)4 image = $M.permute(image, [3, 2, 1]); // transpose to height,width, channel5 net.forward( f’data’: image g,function ()f// forwardpropagation6 varpred = net.blobs forward[’pred’]; // prediction layer output7 varmax index = $M.argmax(pred).I.get(); // get matrix indexof highest score (1 origin)8 varpredicted number = max index 1;9 document.getElementById(’result’).textContent =predicted number.toString(); // display classificationresult10 net.release();Figure 3: Screenshot of digit recognition web application using trained CNN, and main code ofrecognition. Recognition is performed on Android tablet, not on server.Sukiyaki2 implements modules that are necessary for deep learning: layers, network structure man-ager, and optimizers. Users can use a single layer separately, as well as training network by supply-ing configuration file to the executable. Figure 2portrays a sample of a network definition file. Fornetwork analysis required for distributed computing in the future, we used the architecture with stat-ically defined relations of layers. Improvements from our previous work include: enabling networkgraph branch (necessary for ResNet training), addition of some layers including dropout and batchnormalization, efficient binary export of network parameters. Users can implement the original lay-ers and optimizers to train new neural networks. It works automatically with CPU and GPGPUif it can be implemented by Sushi2’s matrix operations. 
For cases in which a performance bottle-neck exists, a dedicated GPGPU kernel can also be implemented. Using GPGPU for training isrecommended, but almost all functions have native JavaScript fallback.Figure 3portrays a sample application for recognizing digits captured using a camera. The networkis trained using MNIST dataset ( LeCun et al. ,1998b ). Although image data are given as a flat bytearray, extensive functions of Sushi2 allow short implementation of image recognition only in 10lines. Recent web browsers for smartphones follow the JavaScript standard, and it is possible todevelop such applications in this sample.5Under review as a conference paper at ICLR 2017Table 2: Hardware used for the experiments. NVIDIA K80 is recognized as two independentGPGPU chips from software. Performance of the single chip is presented.GPU GPU Theoretical FLOPS CPUAMD FirePro S9170 5.24T Intel Core i7-5930KNVIDIA K80 4.37T (using 1 chip) Intel Xeon E5-2690 v3Table 3: Speed of training LeNet. Processed images per second.JavaScript environment ConvNetJS OursFirefox 64 107node.js 88 47705 E XPERIMENTS5.1 S INGLE -GPGPU T RAININGIn this section, we evaluate the CNN training performance of the proposed system. The specifica-tions of hardware used for experiments are shown in Table 2.First, we compared our library and existing deep learning library ConvNetJS by Andrej Karpa-thy8, which is written in JavaScript. We evaluated them by training LeNet with MNIST dataset(LeCun et al. ,1998b ). The network structure is based on LeCun et al. (1998a ), which contains twoconvolutional layers and two fully-connected layers. The batch size is 64. Firefox (version 32) andnode.js (version 4.3.0) are used as the JavaScript execution environment. A tiny server applicationis implemented and used for supplying the dataset and saving the trained model to and from the webbrowser.The measured calculation speed is presented in Table 3. In Firefox, the performance gain wasrelatively low because the control overhead of GPGPU is dominant in the small CNN. In node.js,this overhead is smaller, thus using GPGPU allowed faster computation by a large margin.Next, we trained VGGNet ( Simonyan & Zisserman ,2014a ) and ResNet ( He et al. ,2016 ) as practicalscale CNNs. VGGNet is proposed by Simonyan & Zisserman (2014a ) at ILSVRC2014. 16-layerversion, denoted as VGG16, includes 13 convolutional layers and 3 fully-connected layers. It isamong the largest CNNs that are commonly used. ResNet is the winner of ILSVRC2015. 152-layerversion, denoted as ResNet152, includes 151 convolutional layers and 1 fully-connected layer, butthe bottleneck structure reduces the number of parameters.We used Caffe ( Jia et al. ,2014 ), a popular deep learning library, for comparison. The mainstreamversion of Caffe employs NVIDIA CUDA as the interface to GPGPU. We designate this version asCaffe (CUDA). CUDA is not compatible with GPGPUs other than NVIDIA’s. Caffe uses cuBLASfor matrix operations such as multiplication. There are forks of Caffe which use OpenCL as ancross-platform GPGPU interface. One such fork is OpenCL-Caffe by AMD9, which uses clBLASas the matrix operation. Another one is the opencl branch of Caffe by Fabian Tschopp10. It usesViennaCL11for matrix operations. In Caffe (CUDA), the cuDNN accelerator library from NVIDIAcan also be attached. We used same batch size in the same CNN / GPU setting for fair comparison.The training speed is presented in Table 4. 
By virtue of GPGPU, VGG16 and ResNet152 can be trained, which was difficult using existing JavaScript-based libraries. In ResNet152, more than 1,000 GPGPU kernels are executed, and their execution overhead seems to be problematic in the Firefox environment. Currently, our library is not faster than Caffe, but it achieves the same order of speed. In particular, Caffe (CUDA) provides the best performance. This difference mainly comes from the speed of convolution. The implementation of convolution in Caffe is similar to ours: to perform convolution, elements of the input matrix are re-ordered (i.e. lowering), and the output is then obtained by matrix multiplication with the weight. Figure 4 presents the calculation speed of the matrix multiplications used in the computation of VGG16, performed by cuBLAS and clBLAS.

⁸http://cs.stanford.edu/people/karpathy/convnetjs/index.html
⁹https://github.com/amd/OpenCL-caffe
¹⁰https://github.com/BVLC/caffe/tree/opencl
¹¹http://viennacl.sourceforge.net/

Table 4: Training speed of VGG16 and ResNet152 [images/sec]. Batch size is shown in parentheses. AMD represents AMD FirePro S9170; NVIDIA stands for NVIDIA K80.

GPU    | Software                | VGG16     | ResNet152
AMD    | Ours (on Firefox)       | 4.0 (32)  | 1.4 (32)
       | Ours (on node.js)       | 5.7 (32)  | 6.5 (32)
       | Caffe (AMD)             | 7.7 (32)  | N/A
       | Caffe (Tschopp)         | 5.3 (32)  | 1.6 (32)
NVIDIA | Ours (on Firefox)       | 2.7 (16)  | 0.2 (8)
       | Ours (on node.js)       | 4.9 (16)  | 2.7 (8)
       | Caffe (Tschopp)         | 3.2 (16)  | 1.5 (8)
       | Caffe (CUDA) w/o cuDNN  | 11.9 (16) | 8.5 (8)
       | Caffe (CUDA) with cuDNN | 14.4 (16) | 9.4 (8)

[Figure 4: Calculation speed for each layer's computation in VGG16, measured on the NVIDIA K80 GPU. For example, the forward computation of conv1_1 is performed by matrix multiplication of (802816, 27) and (27, 64). Forward, backward and gradient computations of cuBLAS and clBLAS are shown as different bars.]

As the figure shows, clBLAS gives inferior speed, especially on the gradient computation of layers that are close to the input layer. In such layers, the matrix shape is far from square. For that reason, performance tuning for such input shapes or an implementation without matrix multiplication is needed. In the CUDA environment, Lavin (2015) showed that 96% of the theoretical GPGPU performance can be achieved in convolution by a circumspect implementation.

5.2 DISTRIBUTED TRAINING

In this subsection, we describe a preliminary evaluation of distributed training of CNNs.

The method of distributed training is simple data-parallelism. The system is depicted in Fig. 5. First, the server distributes the network weight $W_t$ and the images in a batch. The batch for iteration $t$, denoted $I_t$, is divided into $N$ splits, $I_{t,1}, I_{t,2}, \ldots, I_{t,N}$, where $N$ is the number of computing clients. After client $K$ calculates the gradient of the weight, $\Delta W_t^K$, using its assigned batch split, it sends the gradient to the server. The server takes the average of the gradients from all clients and then updates the weight using it: $W_{t+1} = W_t - \eta \cdot \frac{1}{N} \sum_K \Delta W_t^K$. The optimization method is momentum SGD. The result is equivalent regardless of the number of clients (this server-side step is sketched below).

First, we trained LeNet distributedly on Nexus 7 tablets (Android OS). The Chrome browser is used as the client. The batch size is 120 and is divided equally among the clients. Figure 6 (left) shows the speedup according to the increase in the number of clients.
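The following is a minimal JavaScript sketch of the server-side step just described: the clients' gradients are averaged and applied with momentum SGD. The flat-array weight representation, the hyperparameter values, and the name serverUpdate are our illustrative assumptions, not the paper's implementation.

```javascript
// Hedged sketch of the parameter server's update: average client gradients,
// then apply a momentum SGD step, W_{t+1} = W_t + v.
function serverUpdate(w, velocity, clientGrads, lr, momentum) {
  var n = clientGrads.length;
  for (var i = 0; i < w.length; i++) {
    var g = 0;
    for (var k = 0; k < n; k++) g += clientGrads[k][i];
    g /= n;                                   // (1/N) * sum_K dW_t^K
    velocity[i] = momentum * velocity[i] - lr * g;
    w[i] += velocity[i];
  }
}

// Toy usage with two clients and a 3-parameter "network".
var w = new Float32Array([0, 0, 0]);
var v = new Float32Array([0, 0, 0]);
serverUpdate(w, v, [[0.2, -0.4, 0.0], [0.0, -0.2, 0.6]], 0.01, 0.9);
console.log(w);
```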
Naturally, the absolute speed is slow, but we can demonstrate that the computational power of mobile devices can be accumulated and that nearly linear speedup is achieved.

[Figure 5: Data-parallelism system of distributed training, consisting of a parameter server and distributed clients.]

[Figure 6: Computation speed with respect to the number of distributed clients. Left: speed of training LeNet on Nexus 7 Android tablets (Chrome browser). Right: speed of training VGG16 on clients with NVIDIA K80 (Firefox browser / node.js, with 8-bit and 32-bit gradient representations). The measurement includes the time of communication and optimization in the server.]

Next, we train a large-scale CNN, VGG16. Its weight and gradient have 130 million elements. They therefore require 500 MB if represented as 32-bit floating point numbers, which poses a large communication bottleneck. To suppress this issue, we implemented the 8-bit representation of each element proposed by Dettmers (2016). We used a p2.xlarge instance of Amazon Web Services for the GPGPU environment; it contains an NVIDIA K80 GPU. The batch size is 256, following (Simonyan & Zisserman, 2014a). A single forward-backward procedure cannot process 256 images at the same time due to the memory limit, so we average the gradients from multiple forward-backward procedures.

We show the speed of calculation with respect to the number of computing clients in Fig. 6 (right). Although our main focus is using web browsers as clients, the result of using node.js as the client is also shown for reference. Under the current settings, the use of four clients achieved 2.8 times faster computation than the one-client setting. The speed is much faster than the existing OpenCL-based Caffe. Due to the communication overhead, the speed saturates at 8 clients even when the 8-bit representation is employed.

Although we used the K80, a high-end GPU, for this experiment, our motivation is to use ordinary personal computers for distributed computing. We can assume that the latest ordinary personal computers (not dedicated to 3D games) have 1/10 the performance of a K80. On the K80, we could train VGG16 at 29 seconds per iteration using 8 computers. With a 1/10-performance GPU, we can estimate that the maximum speed is 100 seconds per iteration using 16 computers, considering both calculation and network time. We compressed the weight to 1/4 of its size by the method of Dettmers; if we could compress it by a further factor of 10, the maximum speed would be 31 seconds per iteration using 64 computers. Thus, further improvements demand a reduction of communication and a better strategy of parallelism. We leave those improvements as a subject for future work.

6 CONCLUSION

We implemented a JavaScript-based matrix library and deep learning library to perform deep learning and to develop applications that use a trained model without a dedicated computer system. Using GPGPU via WebCL, our library provides much better performance than existing JavaScript-based libraries, and it became possible to train VGG16 and ResNet152. However, the performance does not yet reach that of Caffe running in the NVIDIA CUDA environment. A salient difficulty is that the matrix multiplication necessary for convolution is slower. Additionally, we used WebCL as the GPGPU interface, but it is currently not included in web browsers. Further improvements in web technology must be undertaken to make full computing power available to scripts in web pages.
In experiments of dis-tributed training of VGG16 using web browsers as computing client, 2.8x speed improvement wasgained from four clients. The speed is much faster than existing OpenCL-based Caffe using singlecomputer. The parallelization method used in the experiment is na ̈ıve, and further exploration of thisarea will be undertaken as a subject of future work.8Under review as a conference paper at ICLR 2017ACKNOWLEDGMENTSThis work was supported by CREST, JST.REFERENCESDavid P. Anderson, Jeff Cobb, Eric Korpela, Matt Lebofsky, and Dan Werthimer. SETI@home: anexperiment in public-resource computing. Communications of the ACM , 45:56–61, 2002.Jeffrey Dean, Greg S. Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao,MarcʟAurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, and Andrew Y. Ng. Large scaledistributed deep networks. In NIPS , 2012.Tim Dettmers. 8-Bit Approximations for Parallelism in Deep Learning. In ICLR , 2016.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for ImageRecognition. In CVPR , 2016.Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Ser-gio Guadarrama, and Trevor Darrell. Caffe: Convolutional Architecture for Fast Feature Embed-ding. arXiv:1408.5093 , 2014.Jon Klein and Lee Spector. Unwitting Distributed Genetic Programming via AsynchronousJavaScript and XML. In GECCO , 2007.Alex Krizhevsky. Learning Multiple Layers of Features from Tiny Images, 2009. Master’s Thesis,Department of Computer Science, University of Toronto.Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet Classification with Deep Con-volutional Neural Networks. In NIPS , 2012.Andrew Lavin. maxDNN: An Efficient Convolution Kernel for Deep Learning with Maxwell GPUs.arXiv:1501.06633 , 2015.Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied todocument recognition. Proceedings of the IEEE , 86, 1998a.Yann LeCun, Corinna Cortes, and Christopher J.C. Burges. The mnist database of handwrittendigits, 1998b. http://yann.lecun.com/exdb/mnist/ .Edward Meeds, Remco Hendriks, Said al Faraby, Magiel Bruntink, and Max Welling. MLitB:Machine Learning in the Browser. arxiv:1412.2432 , 2014.J.J. Merelo-Guervos, P.A. Castillo, J.L.J. Laredo, A. Mora Garcia, and A. Prieto. Asynchronousdistributed genetic algorithms with javascript and json. In CEC, 2008.Ken Miura and Tatsuya Harada. Implementation of a practical distributed calculation system withbrowsers and javascript, and application to distributed deep learning. arxiv:1503.05743 , 2015.Ken Miura, Tetsuaki Mano, Atsushi Kanehira, Yuichiro Tsuchiya, and Tatsuya Harada. MILJS :Brand new javascript libraries for matrix calculation and machine learning. arxiv:1502.6064 ,2015.Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, ZhihengHuang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei.ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision ,pp. 1–42, April 2015.Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale ImageRecognition. arxiv:1409.1556 , 2014a.Karen Simonyan and Andrew Zisserman. Two-stream convolutional networks for action recognitionin videos. In NIPS , pp. 568–576, 2014b.9Under review as a conference paper at ICLR 2017Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Du-mitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv:1409.4842, 2014.
Nima Tajbakhsh, Jae Y. Shin, Suryakanth R. Gurudu, R. Todd Hurst, Christopher B. Kendall, Michael B. Gotway, and Jianming Liang. Convolutional neural networks for medical image analysis: Full training or fine tuning? IEEE Transactions on Medical Imaging, 35:1299–1312, 2016.
Bolei Zhou, Agata Lapedriza, Jianxiong Xiao, Antonio Torralba, and Aude Oliva. Learning deep features for scene recognition using Places database. In NIPS, pp. 487–495, 2014.
BJm4T4Kgx
Published as a conference paper at ICLR 2017

ADVERSARIAL MACHINE LEARNING AT SCALE

Alexey Kurakin (Google Brain, kurakin@google.com), Ian J. Goodfellow (OpenAI, ian@openai.com), Samy Bengio (Google Brain, bengio@google.com)

ABSTRACT
Adversarial examples are malicious inputs designed to fool machine learning models. They often transfer from one model to another, allowing attackers to mount black-box attacks without knowledge of the target model's parameters. Adversarial training is the process of explicitly training a model on adversarial examples, in order to make it more robust to attack or to reduce its test error on clean inputs. So far, adversarial training has primarily been applied to small problems. In this research, we apply adversarial training to ImageNet (Russakovsky et al., 2014). Our contributions include: (1) recommendations for how to successfully scale adversarial training to large models and datasets, (2) the observation that adversarial training confers robustness to single-step attack methods, (3) the finding that multi-step attack methods are somewhat less transferable than single-step attack methods, so single-step attacks are the best for mounting black-box attacks, and (4) resolution of a "label leaking" effect that causes adversarially trained models to perform better on adversarial examples than on clean examples, because the adversarial example construction process uses the true label and the model can learn to exploit regularities in the construction process.

1 INTRODUCTION
It has been shown that machine learning models are often vulnerable to adversarial manipulation of their input intended to cause incorrect classification (Dalvi et al., 2004). In particular, neural networks and many other categories of machine learning models are highly vulnerable to attacks based on small modifications of the input to the model at test time (Biggio et al., 2013; Szegedy et al., 2014; Goodfellow et al., 2014; Papernot et al., 2016b).

The problem can be summarized as follows. Suppose there is a machine learning system M and an input sample C, which we call a clean example. Assume that sample C is correctly classified by the machine learning system, i.e. M(C) = y_true. It is possible to construct an adversarial example A which is perceptually indistinguishable from C but is classified incorrectly, i.e. M(A) ≠ y_true. These adversarial examples are misclassified far more often than examples that have been perturbed by noise, even if the magnitude of the noise is much larger than the magnitude of the adversarial perturbation (Szegedy et al., 2014).

Adversarial examples pose potential security threats for practical machine learning applications. In particular, Szegedy et al. (2014) showed that an adversarial example designed to be misclassified by a model M1 is often also misclassified by a model M2. This adversarial example transferability property means that it is possible to generate adversarial examples and perform a misclassification attack on a machine learning system without access to the underlying model. Papernot et al. (2016a) and Papernot et al. (2016b) demonstrated such attacks in realistic scenarios.

It has been shown (Goodfellow et al., 2014; Huang et al., 2015) that injecting adversarial examples into the training set (also called adversarial training) can increase the robustness of neural networks to adversarial examples. Another existing approach is to use defensive distillation to train the network (Papernot et al., 2015).
However, all prior work studies defense measures only on relatively small datasets such as MNIST and CIFAR-10. Some concurrent work studies attack mechanisms on ImageNet (Rozsa et al., 2016), focusing on how well adversarial examples transfer between different types of models, while we focus on defenses and on how well different types of adversarial example generation procedures transfer between relatively similar models.

In this paper we study adversarial training of Inception models trained on ImageNet. The contributions of this paper are the following:
- We successfully used adversarial training to train an Inception v3 model (Szegedy et al., 2015) on the ImageNet dataset (Russakovsky et al., 2014) and to significantly increase robustness against adversarial examples generated by the fast gradient sign method (Goodfellow et al., 2014) as well as other one-step methods.
- We demonstrated that different types of adversarial examples tend to have different transferability properties between models. In particular, we observed that those adversarial examples which are harder to resist using adversarial training are less likely to be transferable between models.
- We showed that models with higher capacity (i.e., more parameters) tend to be more robust to adversarial examples than lower-capacity models of the same architecture. This provides an additional cue that could help in building more robust models.
- We also observed an interesting property we call "label leaking": adversarial examples constructed with a single-step method making use of the true labels may be easier to classify than clean examples, because an adversarially trained model can learn to exploit regularities in the adversarial example construction process. This suggests using adversarial example construction processes that do not make use of the true label.

The rest of the paper is structured as follows: in Section 2 we review different methods for generating adversarial examples; Section 3 describes the details of our adversarial training algorithm; finally, Section 4 describes our experiments and the results of adversarial training.

2 METHODS GENERATING ADVERSARIAL EXAMPLES

2.1 TERMINOLOGY AND NOTATION
In this paper we use the following notation and terminology regarding adversarial examples:
1. X, the clean image: an unmodified image from the dataset (either the train or test set).
2. X^adv, the adversarial image: the output of any procedure intended to produce an approximate worst-case modification of the clean image. We sometimes call this a candidate adversarial image to emphasize that an adversarial image is not necessarily misclassified by the neural network.
3. Misclassified adversarial image: a candidate adversarial image which is misclassified by the neural network. In addition, we are typically interested only in those misclassified adversarial images whose corresponding clean image is correctly classified.
4. ε: the size of the adversarial perturbation. In most cases, we require the $L_\infty$ norm of the perturbation to be less than ε, as done by Goodfellow et al. (2014). We always specify ε in terms of pixel values in the range [0, 255]. Note that some other work on adversarial examples minimizes the size of the perturbation rather than imposing a constraint on it (Szegedy et al., 2014).
5. The cost function used to train the model is denoted $J(X, y_{true})$.
6. $\mathrm{Clip}_{X,\epsilon}(A)$ denotes element-wise clipping of $A$, with $A_{i,j}$ clipped to the range $[X_{i,j} - \epsilon,\ X_{i,j} + \epsilon]$.
7. One-step methods of adversarial example generation generate a candidate adversarial image after computing only one gradient. They are often based on finding the optimal perturbation of a linear approximation of the cost or model. Iterative methods apply many gradient updates; they typically do not rely on any approximation of the model and typically produce more harmful adversarial examples when run for more iterations.

2.2 ATTACK METHODS
We study a variety of attack methods.

Fast gradient sign method. Goodfellow et al. (2014) proposed the fast gradient sign method (FGSM) as a simple way to generate adversarial examples:
$$X^{adv} = X + \epsilon\,\mathrm{sign}\big(\nabla_X J(X, y_{true})\big) \qquad (1)$$
This method is simple and computationally efficient compared to more complex methods like L-BFGS (Szegedy et al., 2014), but it usually has a lower success rate. On ImageNet, the top-1 error rate on candidate adversarial images for FGSM is about 63%–69% for $\epsilon \in [2, 32]$.

One-step target class methods. FGSM finds adversarial perturbations which increase the value of the loss function. An alternative approach is to maximize the probability $p(y_{target} \mid X)$ of some specific target class $y_{target}$ which is unlikely to be the true class for a given image. For a neural network with cross-entropy loss, this leads to the following formula for the one-step target class method:
$$X^{adv} = X - \epsilon\,\mathrm{sign}\big(\nabla_X J(X, y_{target})\big) \qquad (2)$$
As the target class we can use the least likely class predicted by the network, $y_{LL} = \arg\min_y p(y \mid X)$, as suggested by Kurakin et al. (2016); in that case we refer to this method as the one-step least likely class method, or just "step l.l.". Alternatively, we can use a random class as the target class, in which case we refer to the method as "step rnd.".

Basic iterative method. A straightforward extension of FGSM is to apply it multiple times with a small step size:
$$X^{adv}_0 = X, \qquad X^{adv}_{N+1} = \mathrm{Clip}_{X,\epsilon}\Big\{ X^{adv}_N + \alpha\,\mathrm{sign}\big(\nabla_X J(X^{adv}_N, y_{true})\big)\Big\}$$
In our experiments we used $\alpha = 1$, i.e. we changed the value of each pixel by only 1 on each step. We selected the number of iterations to be $\min(\epsilon + 4,\ 1.25\epsilon)$. See Kurakin et al. (2016) for more information on this method. Below we refer to it as the "iter. basic" method.

Iterative least-likely class method. By running multiple iterations of the "step l.l." method, we can obtain adversarial examples which are misclassified in more than 99% of cases:
$$X^{adv}_0 = X, \qquad X^{adv}_{N+1} = \mathrm{Clip}_{X,\epsilon}\Big\{ X^{adv}_N - \alpha\,\mathrm{sign}\big(\nabla_X J(X^{adv}_N, y_{LL})\big)\Big\}$$
$\alpha$ and the number of iterations were selected in the same way as for the basic iterative method. Below we refer to this method as "iter. l.l.".
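For concreteness, the following is a minimal, framework-agnostic sketch of the methods above. This is a sketch under stated assumptions, not the authors' implementation: `loss_grad(x, y)` is an assumed callback that returns $\nabla_X J(X, y)$ for the model under attack, and pixel values are assumed to lie in [0, 255] as in the paper's convention.

```python
# Sketch of the one-step and iterative attacks above. `loss_grad(x, y)` is
# an assumed helper returning dJ/dx for the model under attack; pixel values
# live in [0, 255].
import numpy as np

def fgsm(x, y_true, loss_grad, eps):
    """Fast gradient sign method, Eq. (1): one step that increases the loss."""
    return np.clip(x + eps * np.sign(loss_grad(x, y_true)), 0, 255)

def step_target_class(x, y_target, loss_grad, eps):
    """One-step target class method, Eq. (2): step toward a chosen class
    (the least likely class for the 'step l.l.' variant)."""
    return np.clip(x - eps * np.sign(loss_grad(x, y_target)), 0, 255)

def basic_iterative(x, y_true, loss_grad, eps, alpha=1.0):
    """Basic iterative method: repeated FGSM steps, kept inside an eps-ball
    around x (the Clip_{X,eps} operator) and within the valid pixel range."""
    n_iter = int(min(eps + 4, 1.25 * eps))     # iteration count from the paper
    x_adv = x.copy()
    for _ in range(n_iter):
        x_adv = x_adv + alpha * np.sign(loss_grad(x_adv, y_true))
        x_adv = np.clip(x_adv, x - eps, x + eps)   # element-wise Clip_{X,eps}
        x_adv = np.clip(x_adv, 0, 255)             # keep valid image range
    return x_adv
```

The "iter. l.l." method follows by flipping the sign of the update in `basic_iterative` and passing the least likely class in place of `y_true`.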
3 ADVERSARIAL TRAINING
The basic idea of adversarial training is to inject adversarial examples into the training set, continually generating new adversarial examples at every step of training (Goodfellow et al., 2014). Adversarial training was originally developed for small models that did not use batch normalization. To scale adversarial training to ImageNet, we recommend using batch normalization (Ioffe & Szegedy, 2015). To do so successfully, we found that it was important for examples to be grouped into batches containing both normal and adversarial examples before taking each training step, as described in Algorithm 1.

Algorithm 1: Adversarial training of network N. The size of the training minibatch is m; the number of adversarial images in the minibatch is k.
1: Randomly initialize network N
2: repeat
3:   Read minibatch B = {X^1, ..., X^m} from the training set
4:   Generate k adversarial examples {X^1_adv, ..., X^k_adv} from the corresponding clean examples {X^1, ..., X^k} using the current state of the network N
5:   Make a new minibatch B' = {X^1_adv, ..., X^k_adv, X^{k+1}, ..., X^m}
6:   Do one training step of network N using minibatch B'
7: until training converged

We use a loss function that allows independent control of the number and relative weight of adversarial examples in each batch:
$$\mathrm{Loss} = \frac{1}{(m-k) + \lambda k}\left(\sum_{i \in \mathrm{CLEAN}} L(X_i \mid y_i) + \lambda \sum_{i \in \mathrm{ADV}} L(X^{adv}_i \mid y_i)\right)$$
where $L(X \mid y)$ is the loss on a single example $X$ with true class $y$, $m$ is the total number of training examples in the minibatch, $k$ is the number of adversarial examples in the minibatch, and $\lambda$ is a parameter which controls the relative weight of adversarial examples in the loss. We used $\lambda = 0.3$, $m = 32$, and $k = 16$. Note that we replace each clean example with its adversarial counterpart, for a total minibatch size of 32, which is a departure from previous approaches to adversarial training.

The fraction and weight of adversarial examples we used in each minibatch differ from Huang et al. (2015), where the authors replaced the entire minibatch with adversarial examples. However, their experiments were done on smaller datasets (MNIST and CIFAR-10), in which case adversarial training does not lead to a decrease of accuracy on clean images. We found that our approach works better for ImageNet models (corresponding comparative experiments can be found in Appendix E).

We observed that if we fix ε during training, then the network becomes robust only to that specific value of ε. We therefore recommend choosing ε randomly and independently for each training example. In our experiments we achieved the best results when magnitudes were drawn from a truncated normal distribution defined on the interval [0, 16] with underlying normal distribution N(μ = 0, σ = 8).¹

¹In TensorFlow this can be achieved by tf.abs(tf.truncated_normal(shape, mean=0, stddev=8)).
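The minibatch construction of Algorithm 1 combined with the weighted loss can be summarized in the following sketch. This is an assumed outline, not the authors' code: `attack`, `loss_fn`, and `optimizer_step` are hypothetical helpers standing in for the "step l.l." method, the per-example loss, and the framework's gradient step, and clipping is used as a simple stand-in for true truncation of the normal distribution.

```python
# Sketch of one adversarial-training step (Algorithm 1 plus the weighted
# loss). `attack(x, y, eps)` stands in for the "step l.l." method,
# `loss_fn(x, y)` returns per-example losses, and `optimizer_step(loss)`
# applies gradients; framework specifics are abstracted away.
import numpy as np

def adversarial_training_step(batch_x, batch_y, attack, loss_fn,
                              optimizer_step, k=16, lam=0.3):
    m = len(batch_x)                      # total minibatch size (32 here)
    # Replace the first k clean examples with adversarial counterparts,
    # drawing eps per example from |N(0, 8)| clipped to [0, 16]
    # (clipping used here as a simple stand-in for truncation).
    eps = np.minimum(np.abs(np.random.randn(k) * 8), 16)
    x_adv = np.stack([attack(batch_x[i], batch_y[i], eps[i]) for i in range(k)])
    clean_loss = loss_fn(batch_x[k:], batch_y[k:]).sum()
    adv_loss = loss_fn(x_adv, batch_y[:k]).sum()
    # Weighted loss: clean terms get weight 1, adversarial terms weight lam.
    loss = (clean_loss + lam * adv_loss) / ((m - k) + lam * k)
    optimizer_step(loss)
    return loss
```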
4 EXPERIMENTS
We adversarially trained an Inception v3 model (Szegedy et al., 2015) on ImageNet. All experiments were done using synchronous distributed training on 50 machines, with a minibatch of 32 examples on each machine. We observed that the network tends to reach maximum accuracy at around 130k–150k iterations; if we continue training beyond 150k iterations, accuracy might eventually decrease by a fraction of a percent. Thus we ran experiments for around 150k iterations and used the obtained accuracy as the final result of the experiment.

Similar to Szegedy et al. (2015), we used the RMSProp optimizer for training, with a learning rate of 0.045 except where otherwise indicated.

We looked at the interaction of adversarial training and other forms of regularization (dropout, label smoothing, and weight decay). By default, training of the Inception v3 model uses all three. We noticed that disabling label smoothing and/or dropout leads to a small decrease of accuracy on clean examples (by 0.1%–0.5% for top-1 accuracy) and a small increase of accuracy on adversarial examples (by 1%–1.5% for top-1 accuracy). On the other hand, reducing weight decay leads to a decrease of accuracy on both clean and adversarial examples.

We experimented with delaying adversarial training by 0, 10k, 20k, and 40k iterations. In such cases we used only clean examples during the first N training iterations, and after N iterations included both clean and adversarial examples in the minibatch. We noticed that delaying adversarial training has almost no effect on accuracy on clean examples (difference in accuracy within 0.2%) after a sufficient number of training iterations (more than 70k in our case). At the same time, we noticed that larger delays of adversarial training might cause up to a 4% decline of accuracy on adversarial examples with high magnitudes of adversarial perturbation. For the small 10k delay, the change of accuracy was not statistically significant enough to recommend against it. We used a delay of 10k because this allowed us to reuse the same partially trained model as a starting point for many different experiments.

For evaluation we used the ImageNet validation set, which contains 50,000 images and does not intersect with the training set.

4.1 RESULTS OF ADVERSARIAL TRAINING
We experimented with adversarial training using several types of one-step methods. We found that adversarial training using any type of one-step method increases robustness to all types of one-step adversarial examples that we tested. However, there remains a gap between accuracy on clean and adversarial examples, which can vary depending on the combination of methods used for training and evaluation.

Adversarial training caused a slight (less than 1%) decrease of accuracy on clean examples in our ImageNet experiments. This differs from previously reported results of adversarial training, where adversarial training increased accuracy on the test set (Goodfellow et al., 2014; Miyato et al., 2016b;a). One possible explanation is that adversarial training acts as a regularizer. For datasets with few labeled examples where overfitting is the primary concern, adversarial training reduces test error. For datasets like ImageNet, where state-of-the-art models typically have high training set error, adding a regularizer like adversarial training can increase training set error more than it decreases the gap between training and test set error. Our results suggest that adversarial training should be employed in two scenarios:
1. When a model is overfitting, and a regularizer is required.
2. When security against adversarial examples is a concern. In this case, adversarial training is the method that provides the most security of any known defense, while losing only a small amount of accuracy.

By comparing different one-step methods for adversarial training, we observed that the best results in terms of accuracy on the test set are achieved using the "step l.l." or "step rnd." method. Moreover, using these two methods helped the model become robust to adversarial examples generated by other one-step methods. Thus for the final experiments we used the "step l.l." adversarial method. For brevity we omit a detailed comparison of different one-step methods here; the reader can find it in Appendix A.

Table 1: Top-1 and top-5 accuracies of an adversarially trained network on clean images and on adversarial images with various test-time ε. Both training and evaluation were done using the "step l.l." method. Adversarial training caused the baseline model to become robust to adversarial examples but lose some accuracy on clean examples. We therefore also trained a deeper model with two additional Inception blocks. The deeper model benefits more from adversarial training in terms of robustness to adversarial perturbation, and loses less accuracy on clean examples than the smaller model does.
                                        Clean    ε=2      ε=4      ε=8      ε=16
Baseline (standard training)   top 1    78.4%    30.8%    27.2%    27.2%    29.5%
                               top 5    94.0%    60.0%    55.6%    55.1%    57.2%
Adv. training                  top 1    77.6%    73.5%    74.0%    74.5%    73.9%
                               top 5    93.8%    91.7%    91.9%    92.0%    91.4%
Deeper model (standard)        top 1    78.7%    33.5%    30.0%    30.0%    31.6%
                               top 5    94.4%    63.3%    58.9%    58.1%    59.5%
Deeper model (adv. training)   top 1    78.1%    75.4%    75.7%    75.6%    74.4%
                               top 5    94.1%    92.6%    92.7%    92.5%    91.6%

Results of adversarial training using the "step l.l." method are provided in Table 1. As can be seen from the table, we were able to significantly increase top-1 and top-5 accuracy on adversarial examples (up to 74% and 92%, respectively), making it on par with accuracy on clean images. However, we lost about 0.8% accuracy on clean examples.

We were able to slightly reduce the gap in accuracy on clean images by slightly increasing the size of the model; this was done by adding two additional Inception blocks to the model. For specific details about Inception blocks, refer to Szegedy et al. (2015). Unfortunately, training on one-step adversarial examples does not confer robustness to iterative adversarial examples, as shown in Table 2.

Table 2: Accuracy of the adversarially trained network on iterative adversarial examples. Adversarial training was done using the "step l.l." method. Results were computed after 140k iterations of training. Overall, we see that training on one-step adversarial examples does not confer resistance to iterative adversarial examples.

Adv. method   Training                Clean    ε=2      ε=4      ε=8      ε=16
Iter. l.l.    Adv. training   top 1   77.4%    29.1%    7.5%     3.0%     1.5%
                              top 5   93.9%    56.9%    21.3%    9.4%     5.5%
              Baseline        top 1   78.3%    23.3%    5.5%     1.8%     0.7%
                              top 5   94.1%    49.3%    18.8%    7.8%     4.4%
Iter. basic   Adv. training   top 1   77.4%    30.0%    25.2%    23.5%    23.2%
                              top 5   93.9%    44.3%    33.6%    28.4%    26.8%
              Baseline        top 1   78.3%    31.4%    28.1%    26.4%    25.9%
                              top 5   94.1%    43.1%    34.8%    30.2%    28.8%

We also tried to use iterative adversarial examples during training, but we were unable to gain any benefit from it. It is computationally costly, and we were not able to obtain robustness to adversarial examples or to prevent the procedure from significantly reducing accuracy on clean examples. It is possible that much larger models are necessary to achieve robustness to such a large class of inputs.

4.2 LABEL LEAKING
We discovered a label leaking effect: when a model is trained on FGSM adversarial examples and then evaluated using FGSM adversarial examples, the accuracy on adversarial images becomes much higher than the accuracy on clean images (see Table 3). This effect also occurs (but to a lesser degree) when using other one-step methods that require the true label as input.

We say that the label for a specific example has been leaked if and only if the model classifies an adversarial example correctly when that adversarial example is generated using the true label, but misclassifies a corresponding adversarial example that was created without using the true label. If too many labels have been leaked, then accuracy on adversarial examples might become greater than accuracy on clean examples, which we observed on the ImageNet dataset.

We believe the effect occurs because one-step methods that use the true label perform a very simple and predictable transformation that the model can learn to recognize.
The adversarial example construction process thus inadvertently leaks information about the true label into the input. We found that the effect vanishes if we use adversarial example construction processes that do not use the true label. The effect also vanishes if an iterative method is used, presumably because the output of an iterative process is more diverse and less predictable than the output of a one-step process.

Overall, due to the label leaking effect, we do not recommend using FGSM or other methods defined with respect to the true class label to evaluate robustness to adversarial examples; we recommend using other one-step methods that do not directly access the label instead. We recommend replacing the true label with the most likely label predicted by the model. Alternatively, one can maximize the cross-entropy between the full distribution over all predicted labels given the clean input and the distribution over all predicted labels given the perturbed input (Miyato et al., 2016b).

We revisited the adversarially trained MNIST classifier from Goodfellow et al. (2014) and found that it too leaks labels. The most labels are leaked with ε = 0.3 on MNIST data in [0, 1]; with that ε, the model leaks 79 labels on the test set of 10,000 examples. However, the amount of label leaking is small compared to the amount of error caused by adversarial examples: the error rate on adversarial examples exceeds the error rate on clean examples for ε ∈ {0.05, 0.1, 0.25, 0.3, 0.4, 0.45, 0.5}. This explains why the label leaking effect was not noticed earlier.

Figure 1: Influence of the size of the model on top-1 classification accuracy for various adversarial examples. Each panel plots the ratio of adversarial to clean top-1 accuracy against the scale factor ρ for the number of filters, for ε ∈ {2, 4, 8, 16}. Left column: base model without adversarial training; right column: model with adversarial training using the "step l.l." method. Top row: results on "step l.l." adversarial images; middle row: "iter. l.l." adversarial images; bottom row: "basic iter." adversarial images.
See Section 4.3 for an explanation of the horizontal and vertical axes.

Table 3: Effect of label leaking on adversarial examples. When training and evaluation were done using FGSM, accuracy on adversarial examples was higher than on clean examples. This effect did not occur when training and evaluation were done using the "step l.l." method. In both experiments training was done for 150k iterations with an initial learning rate of 0.0225.

                                                  Clean    ε=2      ε=4      ε=8      ε=16
No label leaking (train/eval "step l.l.")  top 1  77.3%    72.8%    73.1%    73.4%    72.0%
                                           top 5  93.7%    91.1%    91.1%    91.0%    90.3%
With label leaking (train/eval FGSM)       top 1  76.6%    86.2%    87.6%    88.7%    87.0%
                                           top 5  93.2%    95.9%    96.4%    96.9%    96.4%

4.3 INFLUENCE OF MODEL CAPACITY ON ADVERSARIAL ROBUSTNESS
We studied how the size of the model (in terms of number of parameters) affects robustness to adversarial examples. We picked Inception v3 as a base model and varied its size by changing the number of filters in each convolution. For each experiment we picked a scale factor ρ and multiplied the number of filters in each convolution by ρ. In other words, ρ = 1 means an unchanged Inception v3, ρ = 0.5 means Inception with half the usual number of filters in convolutions, and so on. For each chosen ρ we trained two independent models, one with adversarial training and one without, and then evaluated accuracy on clean and adversarial examples for both. We ran these experiments for ρ ∈ [0.5, 2.0].

In earlier experiments (Table 1) we found that deeper models benefit more from adversarial training. The increased depth changed many aspects of the model architecture; these experiments varying ρ examine the effect in a more controlled setting, where the architecture remains constant except for the number of feature maps in each layer.

In all experiments we observed that accuracy on clean images kept increasing with ρ, though its increase slowed down as ρ became bigger. Thus, as a measure of robustness we used the ratio of accuracy on adversarial images to accuracy on clean images, because an increase of this ratio means that the gap between accuracy on adversarial and clean images becomes smaller. If this ratio reaches 1, then accuracy on adversarial images is the same as on clean ones. For a successful adversarial example construction technique, we would never expect this ratio to exceed 1, since this would imply that the adversary is actually helpful. Some defective adversarial example construction techniques, such as those suffering from label leaking, can inadvertently produce a ratio greater than 1.

Ratios of accuracy for the various adversarial methods and ε are provided in Fig. 1. For models without adversarial training, we observed that there is an optimal value of ρ yielding the best robustness; models that are too large or too small perform worse. This may indicate that models become more robust to adversarial examples until they become large enough to overfit in some respect. For adversarially trained models, we found that robustness consistently increases with model size. We were not able to train large enough models to find where this process ends, but we did find that models with twice the normal size have an accuracy ratio approaching 1 for one-step adversarial examples. When evaluated on iterative adversarial examples, the trend toward increasing robustness with increasing size remains, but has some exceptions.
Also, none of our models was large enough to approach an accuracy ratio of 1 in this regime. Overall, we recommend exploring increases of model capacity (along with adversarial training) as a measure to improve robustness to adversarial examples.

4.4 TRANSFERABILITY OF ADVERSARIAL EXAMPLES
From a security perspective, an important property of adversarial examples is that they tend to transfer from one model to another, enabling an attacker in the black-box scenario to create adversarial examples for their own substitute model, then deploy those adversarial examples to fool a target model (Szegedy et al., 2014; Goodfellow et al., 2014; Papernot et al., 2016b).

Table 4: Transfer rate of adversarial examples generated using different adversarial methods and perturbation size ε = 16. This is equivalent to the error rate in an attack scenario where the attacker prefilters their adversarial examples by ensuring that they are misclassified by the source model before deploying them against the target. Transfer rates are rounded to the nearest percent. The following models were used for comparison: A and B are Inception v3 models with different random initializations, C is an Inception v3 model with ELU activations instead of ReLU, and D is an Inception v4 model. See also Table 6 for the absolute error rate when the attack is not prefiltered, rather than the transfer rate.

                          FGSM                  basic iter.           iter. l.l.
source model (target:)    A    B    C    D      A    B    C    D      A    B    C    D
top 1  A (v3)           100   56   58   47    100   46   45   33    100   13   13    9
       B (v3)            58  100   59   51     41  100   40   30     15  100   13   10
       C (v3 ELU)        56   58  100   52     44   44  100   32     12   11  100    9
       D (v4)            50   54   52  100     35   39   37  100     12   13   13  100
top 5  A (v3)           100   50   50   36    100   15   17   11    100    8    7    5
       B (v3)            51  100   50   37     16  100   14   10      7  100    5    4
       C (v3 ELU)        44   45  100   37     16   18  100   13      6    6  100    4
       D (v4)            42   38   46  100     11   15   15  100      6    6    6  100

Figure 2: Influence of the size of the adversarial perturbation on the transfer rate of adversarial examples (top-1 and top-5 transfer rate versus ε for the fast, basic iterative, and iterative l.l. methods). The transfer rate was computed using two Inception v3 models with different random initializations. As can be seen from these plots, an increase of ε leads to an increase of the transfer rate. Note that the transfer rate is the ratio of the number of transferred adversarial examples to the number of successful adversarial examples for the source network. Both the numerator and denominator of this ratio increase with ε; however, we observed that the numerator (the number of transferred examples) increases much faster. For example, when ε increases from 8 to 16, the relative increase of the denominator is less than 1% for each of the considered methods, while the relative increase of the numerator is more than 20%.

We studied transferability of adversarial examples between the following models: two copies of a normal Inception v3 (with different random initializations and order of training examples), Inception v4 (Szegedy et al., 2016), and an Inception v3 which uses ELU activations (Clevert et al., 2015) instead of ReLU.² All of these models were independently trained from scratch until they achieved maximum accuracy.

²We achieved 78.0% top-1 and 94.1% top-5 accuracy on Inception v3 with ELU activations, which is comparable with the accuracy of the Inception v3 model with ReLU activations.

In each experiment we fixed the source and target networks, constructed adversarial examples from 1000 randomly sampled clean images from the test set using the source network, and performed classification of all of them using both the source and target networks. These experiments were done independently for different adversarial methods.

We measured transferability using the following criterion: among the 1000 images, we picked only the misclassified adversarial examples for the source model (i.e., clean image classified correctly, adversarial image misclassified) and measured what fraction of them were misclassified by the target model.
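As a hedged illustration of this criterion, the fraction can be computed from per-image correctness indicators as follows; the array names are hypothetical inputs, one boolean entry per image.

```python
# Sketch of the transfer-rate measurement described above.
# clean_ok_src[i] - source model classifies clean image i correctly
# adv_ok_src[i]   - source model classifies adversarial image i correctly
# adv_ok_tgt[i]   - target model classifies adversarial image i correctly
import numpy as np

def transfer_rate(clean_ok_src, adv_ok_src, adv_ok_tgt):
    # Keep only examples that are adversarial for the source model:
    # clean version correct, adversarial version misclassified.
    successful = clean_ok_src & ~adv_ok_src
    # Fraction of those that also fool the target model.
    return np.mean(~adv_ok_tgt[successful])
```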
Transferability results for all combinations of models with ε = 16 are provided in Table 4; results for various ε but fixed source and target models are provided in Fig. 2.

As can be seen from the results, FGSM adversarial examples are the most transferable, while "iter. l.l." examples are the least. On the other hand, the "iter. l.l." method is able to fool the network in more than 99% of cases (top-1 accuracy), while FGSM is the least likely to fool the network. This suggests that there might be an inverse relationship between the transferability of a specific method and the ability of the method to fool the network. We have not studied this phenomenon further, but one possible explanation could be that iterative methods tend to overfit to specific network parameters.

In addition, we observed that for each of the considered methods, the transfer rate increases with ε (see Fig. 2). Thus a potential adversary performing a black-box attack has an incentive to use a higher ε to increase the chance of a successful attack.

5 CONCLUSION
In this paper we studied how to increase the robustness to adversarial examples of large models (Inception v3) trained on a large dataset (ImageNet). We showed that adversarial training provides robustness to adversarial examples generated using one-step methods. While adversarial training did not help much against iterative methods, we observed that adversarial examples generated by iterative methods are less likely to transfer between networks, which provides indirect robustness against black-box adversarial attacks. In addition, we observed that increasing model capacity can also help to increase robustness to adversarial examples, especially when used in conjunction with adversarial training. Finally, we discovered the label leaking effect, which results in higher accuracy on FGSM adversarial examples compared to clean examples when the network is adversarially trained.

REFERENCES
Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 387–402. Springer, 2013.
Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). CoRR, abs/1511.07289, 2015. URL http://arxiv.org/abs/1511.07289.
Nilesh Dalvi, Pedro Domingos, Sumit Sanghai, Deepak Verma, et al. Adversarial classification. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 99–108. ACM, 2004.
Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. CoRR, abs/1412.6572, 2014. URL http://arxiv.org/abs/1412.6572.
Ruitong Huang, Bing Xu, Dale Schuurmans, and Csaba Szepesvári. Learning with a strong adversary. CoRR, abs/1511.03034, 2015. URL http://arxiv.org/abs/1511.03034.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. 2015.
Alex Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. Technical report, arXiv, 2016. URL https://arxiv.org/abs/1607.02533.
Takeru Miyato, Andrew M. Dai, and Ian Goodfellow. Virtual adversarial training for semi-supervised text classification. arXiv preprint arXiv:1605.07725, 2016a.
Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. Distributional smoothing with virtual adversarial training. In International Conference on Learning Representations (ICLR 2016), April 2016b.
N. Papernot, P. McDaniel, and I. Goodfellow. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. ArXiv e-prints, May 2016b. URL http://arxiv.org/abs/1605.07277.
Nicolas Papernot, Patrick Drew McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Distillation as a defense to adversarial perturbations against deep neural networks. CoRR, abs/1511.04508, 2015. URL http://arxiv.org/abs/1511.04508.
Nicolas Papernot, Patrick Drew McDaniel, Ian J. Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. Practical black-box attacks against deep learning systems using adversarial examples. CoRR, abs/1602.02697, 2016a. URL http://arxiv.org/abs/1602.02697.
Andras Rozsa, Manuel Günther, and Terrance E. Boult. Are accuracy and robustness correlated? arXiv preprint arXiv:1610.04563, 2016.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. arXiv preprint arXiv:1409.0575, 2014.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. ICLR, abs/1312.6199, 2014. URL http://arxiv.org/abs/1312.6199.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the Inception architecture for computer vision. CoRR, abs/1512.00567, 2015. URL http://arxiv.org/abs/1512.00567.
Christian Szegedy, Sergey Ioffe, and Vincent Vanhoucke. Inception-v4, Inception-ResNet and the impact of residual connections on learning. CoRR, abs/1602.07261, 2016. URL http://arxiv.org/abs/1602.07261.

APPENDICES

A COMPARISON OF ONE-STEP ADVERSARIAL METHODS
In addition to the FGSM and "step l.l." methods, we explored several other one-step adversarial methods, both for training and evaluation. Generally, all of these methods can be separated into two large categories: methods which try to maximize the loss (similar to FGSM), and methods which try to maximize the probability of a specific target class (similar to "step l.l.").
We also tried to use different types of random noise instead of adversarial images, but random noise did not help with robustness against adversarial examples.

The full list of one-step methods we tried is as follows (the loss-increasing variants are also sketched in code after this list):

Methods increasing the loss function J:
- FGSM (described in detail in Section 2.2): $X^{adv} = X + \epsilon\,\mathrm{sign}(\nabla_X J(X, y_{true}))$.
- FGSM-pred, the fast method with the predicted class. Similar to FGSM, but uses the label of the class predicted by the network instead of the true class $y_{true}$.
- "Fast entropy", a fast method designed to maximize the entropy of the predicted distribution, thereby causing the model to become less certain of the predicted class.
- "Fast grad. $L_2$", similar to FGSM but using the value of the gradient instead of its sign, normalized to have unit $L_2$ norm: $X^{adv} = X + \epsilon\, \nabla_X J(X, y_{true}) / \|\nabla_X J(X, y_{true})\|_2$. Miyato et al. (2016b) advocate this method.
- "Fast grad. $L_\infty$", similar to "fast grad. $L_2$" but using the $L_\infty$ norm for normalization.

Methods increasing the probability of a selected target class:
- "Step l.l.", one step towards the least likely class (also described in Section 2.2): $X^{adv} = X - \epsilon\,\mathrm{sign}(\nabla_X J(X, y_{target}))$, where $y_{target} = \arg\min_y p(y \mid X)$ is the least likely class predicted by the network.
- "Step rnd.", similar to "step l.l." but using a random class instead of the least likely class.

Random perturbations:
- Sign of random perturbation, an attempt to construct a random perturbation with a structure similar to perturbations generated by FGSM: $X^{adv} = X + \epsilon\,\mathrm{sign}(N)$, where $N$ is a random normal variable with zero mean and identity covariance matrix.
- Random truncated normal perturbation with zero mean and $0.5\epsilon$ standard deviation, defined on $[-\epsilon, \epsilon]$ with uncorrelated pixels, leading to perturbed images $X^{adv} = X + T$, where $T$ is a random variable with the truncated normal distribution.

Overall, we observed that using only one of these single-step methods during adversarial training is sufficient to gain robustness to all of them. Fig. 3 shows accuracy on various one-step adversarial examples when the network was trained using only the "step l.l." method. At the same time, we observed that not all one-step methods are equally good for adversarial training, as shown in Table 5. The best results (achieving both good accuracy on clean data and good accuracy on adversarial inputs) were obtained when adversarial training was done using the "step l.l." or "step rnd." methods.
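The following sketch contrasts the perturbation rules of the loss-increasing variants listed above. This is an illustration under assumptions, not the authors' code: `g` is an assumed precomputed gradient $\nabla_X J$, and normalizing by the $L_\infty$ norm is one plausible reading of the "fast grad. $L_\infty$" variant.

```python
# Sketch of the loss-increasing one-step perturbation rules; g is an
# assumed precomputed gradient dJ/dX and eps the perturbation size.
import numpy as np

def perturb_sign(x, g, eps):           # FGSM-style: sign of the gradient
    return x + eps * np.sign(g)

def perturb_l2(x, g, eps):             # "fast grad. L2": unit-L2 gradient step
    return x + eps * g / (np.linalg.norm(g) + 1e-12)

def perturb_linf(x, g, eps):           # "fast grad. L-inf" (assumed reading):
    return x + eps * g / (np.max(np.abs(g)) + 1e-12)

def perturb_rand_sign(x, eps):         # sign-of-random-noise baseline
    return x + eps * np.sign(np.random.randn(*x.shape))
```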
Figure 3: Comparison of different one-step adversarial methods during evaluation: top-1 and top-5 accuracy versus ε, with and without adversarial training, for clean images and the FGSM, FGSM-pred, fast entropy, fast grad. L2, fast grad. L∞, "step l.l.", and "step rnd." methods. Adversarial training was done using the "step l.l." method. Some evaluation methods show increasing accuracy with increasing ε over part of the curve, due to the label leaking effect.

B ADDITIONAL RESULTS WITH SIZE OF THE MODEL
Section 4.3 contains details regarding the influence of the size of the model on robustness to adversarial examples. Here we provide the additional Figure 4, which shows robustness calculated using top-5 accuracy. Generally it exhibits the same properties as the corresponding plots for top-1 accuracy.

C ADDITIONAL RESULTS ON TRANSFERABILITY
Section 4.4 contains results on the transfer rate of various adversarial examples between models. In addition to the transfer rate computed only on misclassified adversarial examples, it is also interesting to observe the error rate over all candidate adversarial examples generated for one model and classified by another model. This result models the following attack: instead of trying to pick "good" adversarial images, an adversary modifies all available images in order to get as many misclassified images as possible.

To compute the error rate, we randomly generated 1000 adversarial images using the source model and then classified them using the target model. Results for various models and adversarial methods with fixed ε = 16 are provided in Table 6; results for fixed source and target models and various ε are provided in Fig. 5. Overall, the error rate of transferred adversarial examples exhibits the same behavior as the transfer rate described in Section 4.4.

Table 5: Comparison of different one-step adversarial methods for adversarial training. The evaluation was run after 90k training steps. *) In all cases except "fast grad. L2" and "fast grad. L∞" the evaluation was done using FGSM; for "fast grad. L2" and "fast grad. L∞" the evaluation was done using the "step l.l." method. In the case where both training and testing were done with FGSM, the performance on adversarial examples is artificially high due to the label leaking effect. Based on this table, we recommend using "step rnd." or "step l.l." as the method for generating adversarial examples at training time, in order to obtain good accuracy on both clean and adversarial examples. We computed 95% confidence intervals based on the standard error of the mean around the test error, using the fact that the test error was evaluated with 50,000 samples. Within each column, we indicate which methods are statistically tied for the best using bold face.

                              Clean    ε=2      ε=4      ε=8      ε=16
No adversarial training       76.8%    40.7%    39.0%    37.9%    36.7%
FGSM                          74.9%    79.3%    82.8%    85.3%    83.2%
Fast with predicted class     76.4%    43.2%    42.0%    40.9%    40.0%
Fast entropy                  76.4%    62.8%    61.7%    59.5%    54.8%
Step rnd.                     76.4%    73.0%    75.4%    76.5%    72.5%
Step l.l.                     76.3%    72.9%    75.1%    76.2%    72.2%
Fast grad. L2 *               76.8%    44.0%    33.2%    26.4%    22.5%
Fast grad. L∞ *               75.6%    52.2%    39.7%    30.9%    25.0%
Sign of random perturbation   76.5%    38.8%    36.6%    35.0%    32.7%
Random normal perturbation    76.6%    38.3%    36.0%    34.4%    31.8%

Table 6: Error rates on adversarial examples transferred between models, rounded to the nearest percent. Results are provided for adversarial images generated using different adversarial methods and fixed perturbation size ε = 16. The following models were used for comparison: A and B are Inception v3 models with different random initializations, C is an Inception v3 model with ELU activations instead of ReLU, and D is an Inception v4 model. See also Table 4 for the transfer rate of adversarial examples, rather than the absolute error rate.
                          FGSM                  basic iter.           iter. l.l.
source model (target:)    A    B    C    D      A    B    C    D      A    B    C    D
top 1  A (v3)            65   52   53   45     78   51   50   42    100   32   31   27
       B (v3)            52   66   54   48     50   79   51   43     35   99   34   29
       C (v3 ELU)        53   55   70   50     47   46   74   40     31   30  100   28
       D (v4)            47   51   49   62     43   46   45   73     30   31   31   99
top 5  A (v3)            46   28   28   22     76   17   18   13     94   12   12    9
       B (v3)            29   46   30   22     19   76   18   16     13   96   12   11
       C (v3 ELU)        28   29   55   25     18   19   74   15     12   12   96    9
       D (v4)            23   22   25   40     14   16   16   70     11   11   11   97

D RESULTS WITH DIFFERENT ACTIVATION FUNCTIONS
We evaluated robustness to adversarial examples when the network was trained using various non-linear activation functions instead of the standard ReLU activation, combined with adversarial training on "step l.l." adversarial images. We tried the following activation functions in place of ReLU:
- $\tanh(x)$
- $\mathrm{relu6}(x) = \min(\mathrm{relu}(x), 6)$
- $\mathrm{ReluDecay}_\beta(x) = \dfrac{\mathrm{relu}(x)}{1 + \beta\,\mathrm{relu}(x)^2}$ for $\beta \in \{0.1, 0.01, 0.001\}$

Figure 4: Influence of the size of the model on top-5 classification accuracy for various adversarial examples (ratio of adversarial to clean top-5 accuracy versus scale factor ρ for the number of filters, for ε ∈ {2, 4, 8, 16}, with and without adversarial training, for "step l.l.", "iter. l.l.", and "basic iter." adversarial examples). For a detailed explanation see Section 4.3 and Figure 1.

Training converged using all of these activations; however, test performance was not necessarily the same as with ReLU. tanh and ReluDecay with β = 0.1 lose about 2%–3% of accuracy on clean examples and about 10%–20% on "step l.l." adversarial examples. relu6, ReluDecay with β = 0.01, and ReluDecay with β = 0.001 demonstrated accuracy similar to ReLU (within 1%) on clean images and a few percent loss of accuracy on "step l.l." images. At the same time, all non-linear activation functions increased classification accuracy on some of the iterative adversarial images. Detailed results are provided in Table 7.

Figure 5: Influence of the size of the adversarial perturbation on the error rate for adversarial examples generated for one model and classified using another model (top-1 and top-5 error rate versus ε for the fast, basic iterative, and iterative l.l. methods).
Both source and target models were Inception v3 networks with different random initializations.

Overall, non-linear activation functions could be used as an additional measure of defense against iterative adversarial images.

Table 7: Activation functions and robustness to adversarial examples. For each activation function we adversarially trained the network on "step l.l." adversarial images and then ran classification of clean images and of adversarial images generated using various adversarial methods and ε.

Adv. method   Activation         Clean    ε=2      ε=4      ε=8      ε=16
Step l.l.     relu               77.5%    74.6%    75.1%    75.5%    74.5%
              relu6              77.7%    71.8%    73.5%    74.5%    74.0%
              ReluDecay 0.001    78.0%    74.0%    74.9%    75.2%    73.9%
              ReluDecay 0.01     77.4%    73.6%    74.6%    75.0%    73.6%
              ReluDecay 0.1      75.3%    67.5%    67.5%    67.0%    64.8%
              tanh               74.5%    63.7%    65.1%    65.8%    61.9%
Iter. l.l.    relu               77.5%    30.2%    8.0%     3.1%     1.6%
              relu6              77.7%    39.8%    13.7%    4.1%     1.9%
              ReluDecay 0.001    78.0%    39.9%    12.6%    3.8%     1.8%
              ReluDecay 0.01     77.4%    36.2%    11.2%    3.2%     1.6%
              ReluDecay 0.1      75.3%    47.0%    25.8%    6.5%     2.4%
              tanh               74.5%    35.8%    6.6%     2.7%     0.9%
Basic iter.   relu               77.5%    28.4%    23.2%    21.5%    21.0%
              relu6              77.7%    31.2%    26.1%    23.8%    23.2%
              ReluDecay 0.001    78.0%    32.9%    27.2%    24.7%    24.1%
              ReluDecay 0.01     77.4%    30.0%    24.2%    21.4%    20.5%
              ReluDecay 0.1      75.3%    26.7%    20.6%    16.5%    15.2%
              tanh               74.5%    24.5%    22.0%    20.9%    20.7%

E RESULTS WITH DIFFERENT NUMBER OF ADVERSARIAL EXAMPLES IN THE MINIBATCH
We studied how the number of adversarial examples k in the minibatch affects accuracy on clean and adversarial examples. Results are summarized in Table 8.

Overall, we noticed that increasing k leads to an increase of accuracy on adversarial examples and a decrease of accuracy on clean examples. At the same time, having more than half the minibatch consist of adversarial examples (corresponding to k > 16 in our case) does not provide a significant improvement of accuracy on adversarial images, but leads to up to 1% of additional decrease of accuracy on clean images. Thus for most experiments in the paper we chose k = 16 as a reasonable trade-off between accuracy on clean and adversarial images.

Table 8: Results of adversarial training depending on k, the number of adversarial examples in the minibatch. Adversarial examples for training and evaluation were generated using the "step l.l." method. Row "No adv" is a baseline result without adversarial training (equivalent to k = 0). Rows "Adv, k = X" are results of adversarial training with X adversarial examples in the minibatch. The total minibatch size is 32, so k = 32 corresponds to a minibatch without clean examples.

              Clean    ε=2      ε=4      ε=8      ε=16
No adv        78.2%    31.5%    27.7%    27.8%    29.7%
Adv, k=4      78.3%    71.7%    71.3%    69.4%    65.8%
Adv, k=8      78.1%    73.2%    73.2%    72.6%    70.5%
Adv, k=16     77.6%    73.8%    75.3%    76.1%    75.4%
Adv, k=24     77.1%    73.0%    75.3%    76.2%    76.0%
Adv, k=32     76.3%    73.4%    75.1%    75.9%    75.8%
rJeKjwvclx
Published as a conference paper at ICLR 2017

DYNAMIC COATTENTION NETWORKS FOR QUESTION ANSWERING

Caiming Xiong, Victor Zhong, Richard Socher
Salesforce Research, Palo Alto, CA 94301, USA
{cxiong, vzhong, rsocher}@salesforce.com

ABSTRACT
Several deep learning models have been proposed for question answering. However, due to their single-pass nature, they have no way to recover from local maxima corresponding to incorrect answers. To address this problem, we introduce the Dynamic Coattention Network (DCN) for question answering. The DCN first fuses co-dependent representations of the question and the document in order to focus on relevant parts of both. Then a dynamic pointing decoder iterates over potential answer spans. This iterative procedure enables the model to recover from initial local maxima corresponding to incorrect answers. On the Stanford question answering dataset, a single DCN model improves the previous state of the art from 71.0% F1 to 75.9%, while a DCN ensemble obtains 80.4% F1.

1 INTRODUCTION
Question answering (QA) is a crucial task in natural language processing that requires both natural language understanding and world knowledge. Previous QA datasets tend to be high in quality due to human annotation, but small in size (Berant et al., 2014; Richardson et al., 2013). Hence, they did not allow for training data-intensive, expressive models such as deep neural networks.

To address this problem, researchers have developed large-scale datasets through semi-automated techniques (Hermann et al., 2015; Hill et al., 2016). Compared to their smaller, hand-annotated counterparts, these QA datasets allow the training of more expressive models. However, it has been shown that they differ from more natural, human-annotated datasets in the types of reasoning required to answer the questions (Chen et al., 2016).

Recently, Rajpurkar et al. (2016) released the Stanford Question Answering Dataset (SQuAD), which is orders of magnitude larger than all previous hand-annotated datasets and has a variety of qualities that culminate in a natural QA task. SQuAD has the desirable quality that answers are spans in a reference document, which constrains answers to the space of all possible spans. However, Rajpurkar et al. (2016) show that the dataset retains a diverse set of answers and requires different forms of logical reasoning, including multi-sentence reasoning.

We introduce the Dynamic Coattention Network (DCN), illustrated in Fig. 1, an end-to-end neural network for question answering. The model consists of a coattentive encoder that captures the interactions between the question and the document, as well as a dynamic pointing decoder that alternates between estimating the start and end of the answer span. Our single model obtains an F1 of 75.9% compared to the best published result of 71.0% (Yu et al., 2016). In addition, our ensemble model obtains an F1 of 80.4% compared to the second best result of 78.1% on the official SQuAD leaderboard.¹

(Equal contribution among the authors as noted in the original.)
¹As of Nov. 3, 2016. See https://rajpurkar.github.io/SQuAD-explorer/ for the latest results.

2 DYNAMIC COATTENTION NETWORKS
Figure 1 illustrates an overview of the DCN. We first describe the encoders for the document and the question, followed by the coattention mechanism and the dynamic decoder which produces the answer span.
Figure 1: Overview of the Dynamic Coattention Network. The illustrated example uses the question "What plants create most electric power?" and a document containing "The weight of boilers and condensers generally makes the power-to-weight ... However, most electric power is generated using steam turbine plants, so that indirectly the world's industry is ..."; the document and question encoders feed the coattention encoder, and the dynamic pointer decoder outputs start index 49 and end index 51, i.e. the answer span "steam turbine plants".

2.1 DOCUMENT AND QUESTION ENCODER
Let $(x_1^Q, x_2^Q, \ldots, x_n^Q)$ denote the sequence of word vectors corresponding to words in the question and $(x_1^D, x_2^D, \ldots, x_m^D)$ denote the same for words in the document. Using an LSTM (Hochreiter & Schmidhuber, 1997), we encode the document as $d_t = \mathrm{LSTM}_{enc}(d_{t-1}, x_t^D)$. We define the document encoding matrix as $D = [d_1 \ldots d_m\ d_\varnothing] \in \mathbb{R}^{\ell \times (m+1)}$. We also add a sentinel vector $d_\varnothing$ (Merity et al., 2016), which we later show allows the model to not attend to any particular word in the input.

The question embeddings are computed with the same LSTM to share representation power: $q_t = \mathrm{LSTM}_{enc}(q_{t-1}, x_t^Q)$. We define an intermediate question representation $Q' = [q_1 \ldots q_n\ q_\varnothing] \in \mathbb{R}^{\ell \times (n+1)}$. To allow for variation between the question encoding space and the document encoding space, we introduce a non-linear projection layer on top of the question encoding. The final representation for the question becomes $Q = \tanh\big(W^{(Q)} Q' + b^{(Q)}\big) \in \mathbb{R}^{\ell \times (n+1)}$.

2.2 COATTENTION ENCODER
We propose a coattention mechanism that attends to the question and document simultaneously, similar to Lu et al. (2016), and finally fuses both attention contexts. Figure 2 provides an illustration of the coattention encoder.

Figure 2: Coattention encoder. The affinity matrix $L$ is not shown here; we instead directly show the normalized attention weights $A^D$ and $A^Q$.

We first compute the affinity matrix, which contains affinity scores corresponding to all pairs of document words and question words: $L = D^\top Q \in \mathbb{R}^{(m+1) \times (n+1)}$. The affinity matrix is normalized row-wise to produce the attention weights $A^Q$ across the document for each word in the question, and column-wise to produce the attention weights $A^D$ across the question for each word in the document:
$$A^Q = \mathrm{softmax}(L) \in \mathbb{R}^{(m+1) \times (n+1)} \quad \text{and} \quad A^D = \mathrm{softmax}(L^\top) \in \mathbb{R}^{(n+1) \times (m+1)} \qquad (1)$$

Next, we compute the summaries, or attention contexts, of the document in light of each word of the question:
$$C^Q = D A^Q \in \mathbb{R}^{\ell \times (n+1)}. \qquad (2)$$

We similarly compute the summaries $Q A^D$ of the question in light of each word of the document. Similar to Cui et al. (2016), we also compute the summaries $C^Q A^D$ of the previous attention contexts in light of each word of the document. These two operations can be done in parallel, as shown in Eq. 3. One possible interpretation of the operation $C^Q A^D$ is the mapping of the question encoding into the space of the document encodings.
$$C^D = \big[Q;\ C^Q\big]\, A^D \in \mathbb{R}^{2\ell \times (m+1)}. \qquad (3)$$

We define $C^D$, a co-dependent representation of the question and document, as the coattention context. We use the notation $[a; b]$ for concatenating the vectors $a$ and $b$ horizontally.

The last step is the fusion of temporal information into the coattention context via a bidirectional LSTM:
$$u_t = \text{Bi-LSTM}\big(u_{t-1}, u_{t+1}, [d_t;\ c_t^D]\big) \in \mathbb{R}^{2\ell}. \qquad (4)$$

We define $U = [u_1, \ldots, u_m] \in \mathbb{R}^{2\ell \times m}$, which provides a foundation for selecting which span may be the best possible answer, as the coattention encoding.
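The coattention computations of Eqs. 1–3 reduce to a few matrix products and softmaxes. The following is a minimal NumPy sketch under stated assumptions: D and Q are assumed to already include the sentinel columns, the dimensions are hypothetical, and the Bi-LSTM fusion of Eq. 4 is left abstract.

```python
# Minimal NumPy sketch of the coattention computations in Eqs. (1)-(3).
# D and Q are assumed to already include the sentinel columns; the Bi-LSTM
# fusion of Eq. (4) is not shown. Dimensions are hypothetical placeholders.
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

l, m, n = 4, 7, 5                   # hidden size, doc words, question words
D = np.random.randn(l, m + 1)       # document encoding, with sentinel
Q = np.random.randn(l, n + 1)       # question encoding, with sentinel

L_aff = D.T @ Q                     # affinity matrix, (m+1) x (n+1)
# Each column of A_Q is a distribution over document positions for one
# question word; each column of A_D is a distribution over question
# positions for one document word.
A_Q = softmax(L_aff, axis=0)
A_D = softmax(L_aff.T, axis=0)
C_Q = D @ A_Q                               # Eq. (2): l x (n+1)
C_D = np.concatenate([Q, C_Q], axis=0) @ A_D  # Eq. (3): 2l x (m+1)
# Columns of [D; C_D] (dropping the sentinel) would feed the Bi-LSTM of Eq. (4).
```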
2.3 DYNAMIC POINTING DECODER

Due to the nature of SQuAD, an intuitive method for producing the answer span is by predicting the start and end points of the span (Wang & Jiang, 2016b). However, given a question-document pair, there may exist several intuitive answer spans within the document, each corresponding to a local maximum. We propose an iterative technique to select an answer span by alternating between predicting the start point and predicting the end point. This iterative procedure allows the model to recover from initial local maxima corresponding to incorrect answer spans.

Figure 3 provides an illustration of the dynamic decoder, which is similar to a state machine whose state is maintained by an LSTM-based sequential model. During each iteration, the decoder updates its state taking into account the coattention encoding corresponding to the current estimates of the start and end positions, and produces, via a multilayer neural network, new estimates of the start and end positions.

Let $h_i$, $s_i$, and $e_i$ denote the hidden state of the LSTM, the estimate of the start position, and the estimate of the end position during iteration $i$. The LSTM state update is then described by Eq. 5:

$$h_i = \mathrm{LSTM}_{dec}\left(h_{i-1}, [u_{s_{i-1}}; u_{e_{i-1}}]\right) \quad (5)$$

where $u_{s_{i-1}}$ and $u_{e_{i-1}}$ are the representations corresponding to the previous estimates of the start and end positions in the coattention encoding $U$.

[Figure 3: Dynamic Decoder. Blue denotes the variables and functions related to estimating the start position whereas red denotes the variables and functions related to estimating the end position. The figure steps through the example span "... using steam turbine plants ..." (words 48–52), where two HMNs and the LSTM produce the estimates $s_i$: 49 and $e_i$: 51.]

Given the current hidden state $h_i$, previous start position $u_{s_{i-1}}$, and previous end position $u_{e_{i-1}}$, we estimate the current start position and end position via Eq. 6 and Eq. 7:

$$s_i = \arg\max_t\,(\alpha_1, \ldots, \alpha_m) \quad (6)$$
$$e_i = \arg\max_t\,(\beta_1, \ldots, \beta_m) \quad (7)$$

where $\alpha_t$ and $\beta_t$ represent the start score and end score corresponding to the $t$th word in the document. We compute $\alpha_t$ and $\beta_t$ with separate neural networks. These networks have the same architecture but do not share parameters.

Based on the strong empirical performance of Maxout Networks (Goodfellow et al., 2013) and Highway Networks (Srivastava et al., 2015), especially with regard to deep architectures, we propose a Highway Maxout Network (HMN) to compute $\alpha_t$, as described by Eq. 8. The intuition behind using such a model is that the QA task consists of multiple question types and document topics. These variations may require different models to estimate the answer span. Maxout provides a simple and effective way to pool across multiple model variations.

$$\alpha_t = \mathrm{HMN}_{start}\left(u_t, h_i, u_{s_{i-1}}, u_{e_{i-1}}\right) \quad (8)$$

Here, $u_t$ is the coattention encoding corresponding to the $t$th word in the document. $\mathrm{HMN}_{start}$ is illustrated in Figure 4. The end score, $\beta_t$, is computed similarly to the start score $\alpha_t$, but using a separate $\mathrm{HMN}_{end}$.

We now describe the HMN model:

$$\mathrm{HMN}\left(u_t, h_i, u_{s_{i-1}}, u_{e_{i-1}}\right) = \max\left(W^{(3)}\left[m_t^{(1)}; m_t^{(2)}\right] + b^{(3)}\right) \quad (9)$$
$$r = \tanh\left(W^{(D)}\left[h_i; u_{s_{i-1}}; u_{e_{i-1}}\right]\right) \quad (10)$$
$$m_t^{(1)} = \max\left(W^{(1)}\left[u_t; r\right] + b^{(1)}\right) \quad (11)$$
$$m_t^{(2)} = \max\left(W^{(2)} m_t^{(1)} + b^{(2)}\right) \quad (12)$$

[Figure 4: Highway Maxout Network. Dotted lines denote highway connections. The figure shows $u_{s_{i-1}}$, $u_{e_{i-1}}$ and $h_i$ feeding an MLP that produces $r$, followed by two maxout layers producing $m^{(1)}$ and $m^{(2)}$, and a final maxout layer producing the scores $\alpha_{48}, \ldots, \alpha_{52}$ for the example words "... using steam turbine plants ...".]

where $r \in \mathbb{R}^{\ell}$ is a non-linear projection of the current state with parameters $W^{(D)} \in \mathbb{R}^{\ell \times 5\ell}$, $m_t^{(1)}$ is the output of the first maxout layer with parameters $W^{(1)} \in \mathbb{R}^{p \times \ell \times 3\ell}$ and $b^{(1)} \in \mathbb{R}^{p \times \ell}$, and $m_t^{(2)}$ is the output of the second maxout layer with parameters $W^{(2)} \in \mathbb{R}^{p \times \ell \times \ell}$ and $b^{(2)} \in \mathbb{R}^{p \times \ell}$. $m_t^{(1)}$ and $m_t^{(2)}$ are fed into the final maxout layer, which has parameters $W^{(3)} \in \mathbb{R}^{p \times 1 \times 2\ell}$ and $b^{(3)} \in \mathbb{R}^{p}$. $p$ is the pooling size of each maxout layer. The max operation computes the maximum value over the first dimension of a tensor. We note that there is a highway connection between the output of the first maxout layer and the last maxout layer.
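To make the HMN concrete, the following NumPy sketch evaluates Eqs. 9–12 for each document position and takes the argmax of Eq. 6. It is a toy illustration with random parameters, not the paper's implementation; the parameter shapes follow the dimensions stated above.

```python
import numpy as np

def maxout(W, x, b):
    """Maxout over the pool dimension. W: (p, out, in), b: (p, out), x: (in,)."""
    return (W @ x + b).max(axis=0)  # -> (out,)

def hmn(u_t, h, u_s, u_e, params):
    """Highway Maxout Network score for one document position (Eqs. 9-12)."""
    W_D, W1, b1, W2, b2, W3, b3 = params
    r = np.tanh(W_D @ np.concatenate([h, u_s, u_e]))   # Eq. 10, shape (l,)
    m1 = maxout(W1, np.concatenate([u_t, r]), b1)      # Eq. 11, shape (l,)
    m2 = maxout(W2, m1, b2)                            # Eq. 12, shape (l,)
    # Eq. 9: the highway connection feeds m1 alongside m2 into the final layer
    return maxout(W3, np.concatenate([m1, m2]), b3)[0]  # scalar score

l, p = 4, 3  # toy dimensions
rng = np.random.default_rng(0)
params = (rng.normal(size=(l, 5 * l)),
          rng.normal(size=(p, l, 3 * l)), rng.normal(size=(p, l)),
          rng.normal(size=(p, l, l)), rng.normal(size=(p, l)),
          rng.normal(size=(p, 1, 2 * l)), rng.normal(size=(p, 1)))
U = rng.normal(size=(2 * l, 10))  # coattention encoding of a 10-word document
h, s_prev, e_prev = rng.normal(size=l), 2, 5
scores = [hmn(U[:, t], h, U[:, s_prev], U[:, e_prev], params) for t in range(10)]
s_i = int(np.argmax(scores))  # Eq. 6: new start estimate
```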
To train the network, we minimize the cumulative softmax cross entropy of the start and end points across all iterations. The iterative procedure halts when both the estimate of the start position and the estimate of the end position no longer change, or when a maximum number of iterations is reached. Details can be found in Section 4.1.

3 RELATED WORK

Statistical QA. Traditional approaches to question answering typically involve rule-based algorithms or linear classifiers over hand-engineered feature sets. Richardson et al. (2013) proposed two baselines, one that uses simple lexical features such as a sliding window to match bags of words, and another that uses word-distances between words in the question and in the document. Berant et al. (2014) proposed an alternative approach in which one first learns a structured representation of the entities and relations in the document in the form of a knowledge base, then converts the question to a structured query with which to match the content of the knowledge base. Wang et al. (2015) described a statistical model using frame semantic features as well as syntactic features such as part-of-speech tags and dependency parses. Chen et al. (2016) proposed a competitive statistical baseline using a variety of carefully crafted lexical, syntactic, and word order features.

Neural QA. Neural attention models have been widely applied for machine comprehension or question answering in NLP. Hermann et al. (2015) proposed an AttentiveReader model with the release of the CNN/Daily Mail cloze-style question answering dataset. Hill et al. (2016) released another dataset stemming from children's books and proposed a window-based memory network. Kadlec et al. (2016) presented a pointer-style attention mechanism that performs only one attention step. Sordoni et al. (2016) introduced an iterative neural attention model and applied it to cloze-style machine comprehension tasks.

Recently, Rajpurkar et al. (2016) released the SQuAD dataset. Different from cloze-style queries, answers include non-entities and longer phrases, and questions are more realistic. For SQuAD, Wang & Jiang (2016b) proposed an end-to-end neural network model that consists of a Match-LSTM encoder, originally introduced in Wang & Jiang (2016a), and a pointer network decoder (Vinyals et al., 2015); Yu et al. (2016) introduced a dynamic chunk reader, a neural reading comprehension model that extracts a set of answer candidates of variable lengths from the document and ranks them to answer the question.

Lu et al. (2016) proposed a hierarchical co-attention model for visual question answering, which achieved a state-of-the-art result on the COCO-VQA dataset (Antol et al., 2015). In (Lu et al., 2016), the co-attention mechanism computes a conditional representation of the image given the question, as well as a conditional representation of the question given the image.

Inspired by the above works, we propose a dynamic coattention model (DCN) that consists of a novel coattentive encoder and dynamic decoder.
In our model, instead of estimating the start and end positions of the answer span in a single pass (Wang & Jiang, 2016b), we iteratively update the start and end positions in a similar fashion to the Iterative Conditional Modes algorithm (Besag, 1986).

Model                                      Dev EM  Dev F1  Test EM  Test F1
Ensemble
DCN (Ours)                                  70.3    79.4    71.2     80.4
Microsoft Research Asia∗†                    —       —      69.4     78.3
Allen Institute∗                            69.2    77.8    69.9     78.1
Singapore Management University∗            67.6    76.8    67.9     77.0
Google NYC∗                                 68.2    76.7     —        —
Single model
DCN (Ours)                                  65.4    75.6    66.2     75.9
Microsoft Research Asia∗                    65.9    75.2    65.5     75.0
Google NYC∗                                 66.4    74.9     —        —
Singapore Management University∗†            —       —      64.7     73.7
Carnegie Mellon University∗†                 —       —      62.5     73.3
Dynamic Chunk Reader (Yu et al., 2016)      62.5    71.2    62.5     71.0
Match-LSTM (Wang & Jiang, 2016b)            59.1    70.0    59.5     70.3
Baseline (Rajpurkar et al., 2016)           40.0    51.0    40.4     51.0
Human (Rajpurkar et al., 2016)              81.4    91.0    82.3     91.2

Table 1: Leaderboard performance at the time of writing (Nov 4 2016). ∗ indicates that the model used for submission is unpublished. † indicates that the development scores were not publicly available at the time of writing.

4 EXPERIMENTS

4.1 IMPLEMENTATION DETAILS

We train and evaluate our model on the SQuAD dataset. To preprocess the corpus, we use the tokenizer from Stanford CoreNLP (Manning et al., 2014). We use GloVe word vectors pretrained on the 840B Common Crawl corpus (Pennington et al., 2014). We limit the vocabulary to words that are present in the Common Crawl corpus and set embeddings for out-of-vocabulary words to zero. Empirically, we found that training the embeddings consistently led to overfitting and subpar performance, and hence only report results with fixed word embeddings.

We use a max sequence length of 600 during training and a hidden state size of 200 for all recurrent units, maxout layers, and linear layers. All LSTMs have randomly initialized parameters and an initial state of zero. Sentinel vectors are randomly initialized and optimized during training. For the dynamic decoder, we set the maximum number of iterations to 4 and use a maxout pool size of 16. We use dropout to regularize our network during training (Srivastava et al., 2014), and optimize the model using ADAM (Kingma & Ba, 2014). All models are implemented and trained with Chainer (Tokui et al., 2015).

4.2 RESULTS

Evaluation on the SQuAD dataset consists of two metrics. The exact match score (EM) calculates the exact string match between the predicted answer and a ground truth answer. The F1 score calculates the overlap between words in the predicted answer and a ground truth answer. Because a document-question pair may have several ground truth answers, the EM and F1 for a document-question pair are taken to be the maximum value across all ground truth answers. The overall metric is then computed by averaging over all document-question pairs. The official SQuAD evaluation is hosted on CodaLab². The training and development sets are publicly available while the test set is withheld.

² https://worksheets.codalab.org
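As an illustration of the metrics just described, the following sketch computes EM and F1 with the max taken over ground truth answers and the average over examples. It is a simplification assuming whitespace tokenization; the official evaluation script additionally lowercases and strips punctuation and articles before comparison.

```python
from collections import Counter

def f1(pred, truth):
    """Token-overlap F1 between a predicted and a ground truth answer string."""
    p, t = pred.split(), truth.split()
    overlap = sum((Counter(p) & Counter(t)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(t)
    return 2 * precision * recall / (precision + recall)

def evaluate(predictions, ground_truths):
    """predictions: list of strings; ground_truths: list of lists of strings."""
    em_total = f1_total = 0.0
    for pred, truths in zip(predictions, ground_truths):
        em_total += max(float(pred == t) for t in truths)  # max over ground truths
        f1_total += max(f1(pred, t) for t in truths)
    n = len(predictions)
    return 100 * em_total / n, 100 * f1_total / n

print(evaluate(["steam turbine plants"],
               [["steam turbine plants", "steam turbines"]]))  # (100.0, 100.0)
```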
The performance of the Dynamic Coattention Network on the SQuAD dataset, compared to other submitted models on the leaderboard³, is shown in Table 1. At the time of writing, our single-model DCN ranks first at 66.2% exact match and 75.9% F1 on the test data among single-model submissions. Our ensemble DCN ranks first overall at 71.6% exact match and 80.4% F1 on the test data.

³ https://rajpurkar.github.io/SQuAD-explorer

The DCN has the capability to estimate the start and end points of the answer span multiple times, each time conditioned on its previous estimates. By doing so, the model is able to explore local maxima corresponding to multiple plausible answers, as is shown in Figure 5.

[Figure 5: Examples of the start and end conditional distributions produced by the dynamic decoder. Odd (blue) rows denote the start distributions and even (red) rows denote the end distributions. i indicates the iteration number of the dynamic decoder. Higher probability mass is indicated by darker regions. The offset corresponding to the word with the highest probability mass is shown on the right hand side. The predicted span is underlined in red, and a ground truth answer span is underlined in green. The examples are: Question 1: "Who recovered Tolbert's fumble?" (answer and ground truth: "Danny Trevathan"; estimates move from s: 5, e: 22 to s: 6, e: 22 to s: 21, e: 22); Question 2: "What did the Kenyan business people hope for when meeting with the Chinese?" (answer: "gain support from China for a planned $2.5 billion railway..."; ground truth: "support from China for a planned $2.5 billion railway"; estimates move from s: 66, e: 66 to s: 84, e: 94); Question 3: "What kind of weapons did Tesla's treatise concern?" (answer: "particle beam weapons"; ground truth: "charged particle beam"; estimates alternate between s: 23, e: 25 and s: 24, e: 26).]

For example, Question 1 in Figure 5 demonstrates an instance where the model initially guesses an incorrect start point and a correct end point. In subsequent iterations, the model adjusts the start point, ultimately arriving at the correct start point in iteration 3. Similarly, the model gradually shifts probability mass for the end point to the correct word.

Question 2 shows an example in which both the start and end estimates are initially incorrect. The model then settles on the correct answer in the next iteration.

[Figure 6: Performance of the DCN for various lengths of documents, questions, and answers. The three panels plot F1 against the number of tokens in the document (0–700), the number of tokens in the question (0–35), and the average number of tokens in the answer (0–25). The blue dot indicates the mean F1 at a given length. The vertical bar represents the standard deviation of F1s at a given length.]

While the dynamic nature of the decoder allows the model to escape initial local maxima corresponding to incorrect answers, Question 3 demonstrates a case where the model is unable to decide between multiple local maxima despite several iterations. Namely, the model alternates between the answers "charged particle beam" and "particle beam weapons" indefinitely. Empirically, we observe that the model, trained with a maximum of 4 iterations, takes 2.7 iterations to converge to an answer on average.

Model                                      Dev EM  Dev F1
Dynamic Coattention Network (DCN)
  pool size 16 HMN                          65.4    75.6
  pool size 8 HMN                           64.4    74.9
  pool size 4 HMN                           65.2    75.2
DCN with 2-layer MLP instead of HMN         63.8    74.4
DCN with single-iteration decoder           63.7    74.0
DCN with Wang & Jiang (2016b) attention     63.7    73.7

Table 2: Single model ablations on the development set.

Model Ablation. The performance of our model and its ablations on the SQuAD development set is shown in Table 2. On the decoder side, we experiment with various pool sizes for the HMN maxout layers, with using a 2-layer MLP instead of an HMN, and with forcing the HMN decoder to a single iteration.
Empirically, we achieve the best performance on the development set with an iterative HMN with pool size 16, and find that the model consistently benefits from a deeper, iterative decoder network. The performance improves as the number of maximum allowed iterations increases, with little improvement after 4 iterations. On the encoder side, replacing the coattention mechanism with an attention mechanism similar to Wang & Jiang (2016b) by setting $C^D$ to $Q A^D$ in Eq. 3 results in a 1.9 point F1 drop. This suggests that, at the additional cost of a softmax computation and a dot product, the coattention mechanism provides a simple and effective means to better encode the document and question sequences. Further studies, such as performance without attention and performance on questions requiring different types of reasoning, can be found in the appendix.

[Figure 7: Performance of the DCN across question types (What, Who, How, When, Which, Where, Why, Other). The height of each bar represents the mean F1 for the given question type. The lower number denotes how many instances in the dev set are of the corresponding question type (6073, 1242, 1187, 712, 642, 474, 150, and 90, respectively).]

Performance across length. One point of interest is how the performance of the DCN varies with respect to the length of the document. Intuitively, we expect the model performance to deteriorate with longer examples, as is the case with neural machine translation (Luong et al., 2015). However, as shown in Figure 6, there is no notable performance degradation for longer documents and questions, contrary to our expectations. This suggests that the coattentive encoder is largely agnostic to long documents, and is able to focus on small sections of relevant text while ignoring the rest of the (potentially very long) document. We do note a performance degradation with longer answers. However, this is intuitive given the nature of the evaluation metric. Namely, it becomes increasingly challenging to compute the correct word span as the number of words increases.

Performance across question type. Another natural way to analyze the performance of the model is to examine its performance across question types. In Figure 7, we note that the mean F1 of the DCN exceeds those of previous systems (Wang & Jiang, 2016b; Yu et al., 2016) across all question types. The DCN, like other models, is adept at "when" questions and struggles with the more complex "why" questions.

Breakdown of F1 distribution. Finally, we note that the DCN performance is highly bimodal. On the development set, the model perfectly predicts (100% F1) an answer for 62.2% of examples and predicts a completely wrong answer (0% F1) for 16.3% of examples. That is, the model picks out partial answers only 21.5% of the time. Upon qualitative inspection of the 0% F1 answers, some of which are shown in Appendix A.4, we observe that when the model is wrong, its mistakes tend to have the correct "answer type" (e.g. a person for a "who" question, a method for a "how" question) and the answer boundaries encapsulate a well-defined phrase.

5 CONCLUSION

We proposed the Dynamic Coattention Network, an end-to-end neural network architecture for question answering. The DCN consists of a coattention encoder which learns co-dependent representations of the question and of the document, and a dynamic decoder which iteratively estimates the answer span.
We showed that the iterative nature of the model allows it to recover from initial local maxima corresponding to incorrect predictions. On the SQuAD dataset, the DCN achieves state-of-the-art results at 75.9% F1 with a single model and 80.4% F1 with an ensemble. The DCN significantly outperforms all other models.

ACKNOWLEDGMENTS

We thank Kazuma Hashimoto and Bryan McCann for their help and insights.

REFERENCES

Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. VQA: Visual question answering. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2425–2433, 2015.

Jonathan Berant, Vivek Srikumar, Pei-Chun Chen, Abby Vander Linden, Brittany Harding, Brad Huang, Peter Clark, and Christopher D. Manning. Modeling biological processes for reading comprehension. In EMNLP, 2014.

Julian Besag. On the statistical analysis of dirty pictures. Journal of the Royal Statistical Society, Series B (Methodological), pp. 259–302, 1986.

Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the CNN/Daily Mail reading comprehension task. In Association for Computational Linguistics (ACL), 2016.

Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. Attention-over-attention neural networks for reading comprehension. arXiv preprint arXiv:1607.04423, 2016.

Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron C. Courville, and Yoshua Bengio. Maxout networks. ICML (3), 28:1319–1327, 2013.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1693–1701, 2015.

Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Reading children's books with explicit memory representations. In ICLR, 2016.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.

Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text understanding with the attention sum reader network. arXiv preprint arXiv:1603.01547, 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. Hierarchical question-image co-attention for visual question answering. arXiv preprint arXiv:1606.00061, 2016.

Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 1412–1421. Association for Computational Linguistics, September 2015.

Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. The Stanford CoreNLP natural language processing toolkit. In ACL (System Demonstrations), pp. 55–60, 2014.

Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global vectors for word representation. In EMNLP, volume 14, pp. 1532–1543, 2014.

P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Empirical Methods in Natural Language Processing (EMNLP), 2016.

Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw.
Mctest: A challenge dataset forthe open-domain machine comprehension of text. In EMNLP , volume 3, pp. 4, 2013.Alessandro Sordoni, Phillip Bachman, and Yoshua Bengio. Iterative alternating neural attention formachine reading. arXiv preprint arXiv:1606.02245 , 2016.Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov.Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine LearningResearch , 15(1):1929–1958, 2014.Rupesh K Srivastava, Klaus Greff, and Juergen Schmidhuber. Training very deep networks. InAdvances in Neural Information Processing Systems 28 , pp. 2377–2385, 2015.Seiya Tokui, Kenta Oono, Shohei Hido, and Justin Clayton. Chainer: a next-generation opensource framework for deep learning. In Proceedings of Workshop on Machine Learning Sys-tems (LearningSys) in The Twenty-ninth Annual Conference on Neural Information ProcessingSystems (NIPS) , 2015.Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Advances in NeuralInformation Processing Systems , pp. 2692–2700, 2015.Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. Machine comprehension withsyntax, frames, and semantics. In Proceedings of the 53rd Annual Meeting of the Associationfor Computational Linguistics and the 7th International Joint Conference on Natural LanguageProcessing (Volume 2: Short Papers) , pp. 700–706. Association for Computational Linguistics,2015.Shuohang Wang and Jing Jiang. Learning natural language inference with LSTM. In Proceedingsof the 2016 Conference of the North American Chapter of the Association for ComputationalLinguistics: Human Language Technologies , pp. 1442–1451. Association for Computational Lin-guistics, 2016a.Shuohang Wang and Jing Jiang. Machine comprehension using match-LSTM and answer pointer.arXiv preprint arXiv:1608.07905 , 2016b.Y . Yu, W. Zhang, K. Hasan, M. Yu, B. Xiang, and B. Zhou. End-to-End Reading Comprehensionwith Dynamic Answer Chunk Ranking. ArXiv e-prints , October 2016.Yang Yu, Wei Zhang, Kazi Hasan, Mo Yu, Bing Xiang, and Bowen Zhou. End-to-end answer chunkextraction and ranking for reading comprehension. arXiv preprint arXiv:1610.09996v2 , 2016.10Published as a conference paper at ICLR 2017A A PPENDIXA.1 P ERFORMANCE WITHOUT ATTENTIONIn our experiments, we also investigate a model without any attention mechanism. In this model,the encoder is a simple LSTM network that first ingests the question and then ingests the document.The hidden states corresponding to words in the document is then passed to the decoder. This modelachieves 33.3% exact match and 41.9% F1, significantly worse than models with attention.A.2 S AMPLES REQUIRING DIFFERENT TYPES OF REASONINGWe generate predictions for examples requiring different types of reasoning, given by Rajpurkaret al. (2016). Because this set of examples is very limited, they do not conclusively demonstrate theeffectiveness of the model on different types of reasoning tasks. 
Nevertheless, these examples showthat the DCN is a promising architecture for challenging question answering tasks including thosethat involve reasoning over multiple sentences.WHAT IS THE RANKINE CYCLE SOMETIMES CALLED ?The Rankine cycle is sometimes referred to as a practical Carnot cycle because, when an efficientturbine is used, the TS diagram begins to resemble the Carnot cycle.Type of reasoning Lexical variation (synonymy)Ground truth practical Carnot cyclePrediction practical Carnot cycleWHICH TWO GOVERNING BODIES HAVE LEGISLATIVE VETO POWER ?While the Commision has a monopoly on initiating legislation, the European Parliament and theCouncil of the European Union have powers of amendment and veto during the legislative progress.Type of reasoning Lexical variation (world knowledge)Ground truth the European Parliament and the Council of the European UnionPrediction European Parliament and the Council of the European UnionWHAT SHAKESPEARE SCHOLAR IS CURRENTLY ON THE UNIVERSITYS FACULTY ?Current faculty include the anthropologist Marshall Sahlins, historian Dipesh Chakrabarty, ... Shake-speare scholar David Bevington, and renowned political scientists John Mearsheimer and RobertPape.Type of reasoning Syntactic variationGround truth David BevingtonPrediction David BevingtonWHAT COLLECTION DOES THE V&A T HEATRE & P ERFORMANCE GALLERIES HOLD ?The V&A Theatre & Performance galleries, formerly the Theatre Museum, opened in March 2009.The collections are stored by the V&A, and are available for research, exhibitions and other shows.They hold the UK’s biggest national collection of material about live performance in the UK sinceShakespeare’s day, covering drama, dance, musical theatre, circus, music hall, rock and pop, andmost other forms of live entertainment.Type of reasoning Multiple sentence reasoningGround truth Material about live performance11Published as a conference paper at ICLR 2017Prediction UK’s biggest national collection of material about live performance in the UK sinceShakespeare’s dayWHAT IS THE MAIN GOAL OF CRIMINAL PUNISHMENT OF CIVIL DISOBEDIENTS ?Type of reasoning AmbiguousAlong with giving the offender his ”just deserts”, achieving crime control via incapacitation anddeterrence is a major goal of crime punishment.Ground truth achieving crime control via incapacitation and deterrencePrediction achieving crime control via incapacitation and deterrenceA.3 S AMPLES OF CORRECT SQUAD PREDICTIONS BY THE DYNAMIC COATTENTIONNETWORKHOW DID THE MONGOLS ACQUIRE CHINESE PRINTING TECHNOLOGY ?ID: 572882242ca10214002da420The Mongol rulers patronized the Yuan printing industry. Chinese printing technology was trans-ferred to the Mongols through Kingdom of Qocho and Tibetan intermediaries. Some Yuan docu-ments such as Wang Zhen’s Nong Shu were printed with earthenware movable type, a technologyinvented in the 12th century. However, most published works were still produced through tradi-tional block printing techniques. The publication of a Taoist text inscribed with the name of TregeneKhatun, gedei’s wife, is one of the first printed works sponsored by the Mongols. In 1273, theMongols created the Imperial Library Directorate, a government-sponsored printing office. TheYuan government established centers for printing throughout China. 
Local schools and governmentagencies were funded to support the publishing of books.Ground truth through Kingdom of Qocho and Tibetan intermediariesPrediction: through Kingdom of Qocho and Tibetan intermediariesWHO APPOINTS ELDERS ?ID5730d473b7151e1900c0155bElders are called by God, affirmed by the church, and ordained by a bishop to a ministry of Word,Sacrament, Order and Service within the church. They may be appointed to the local church, or toother valid extension ministries of the church. Elders are given the authority to preach the Word ofGod, administer the sacraments of the church, to provide care and counseling, and to order the lifeof the church for ministry and mission. Elders may also be assigned as District Superintendents, andthey are eligible for election to the episcopacy. Elders serve a term of 23 years as provisional Eldersprior to their ordination.Ground truth bishop, the local churchPrediction a bishopAN ALGORITHM FOR XWHICH REDUCES TO CWOULD ALLOW US TO DO WHAT ?ID56e1ce08e3433e14004231a6This motivates the concept of a problem being hard for a complexity class. A problem X is hard fora class of problems C if every problem in C can be reduced to X. Thus no problem in C is harderthan X, since an algorithm for X allows us to solve any problem in C. Of course, the notion ofhard problems depends on the type of reduction being used. For complexity classes larger than P,polynomial-time reductions are commonly used. In particular, the set of problems that are hard forNP is the set of NP-hard problems.Ground truth solve any problem in C12Published as a conference paper at ICLR 2017Prediction solve any problem in CHOW MANY GENERAL QUESTIONS ARE AVAILABLE TO OPPOSITION LEADERS ?ID572fd7b8947a6a140053cd3eParliamentary time is also set aside for question periods in the debating chamber. A ”General Ques-tion Time” takes place on a Thursday between 11:40 a.m. and 12 p.m. where members can directquestions to any member of the Scottish Government. At 2.30pm, a 40-minute long themed ”Ques-tion Time” takes place, where members can ask questions of ministers in departments that are se-lected for questioning that sitting day, such as health and justice or education and transport. Between12 p.m. and 12:30 p.m. on Thursdays, when Parliament is sitting, First Minister’s Question Timetakes place. This gives members an opportunity to question the First Minister directly on issuesunder their jurisdiction. Opposition leaders ask a general question of the First Minister and thensupplementary questions. Such a practice enables a ”lead-in” to the questioner, who then uses theirsupplementary question to ask the First Minister any issue. The four general questions available toopposition leaders are:Ground truth fourPrediction fourWHAT ARE SOME OF THE ACCEPTED GENERAL PRINCIPLES OF EUROPEAN UNION LAW ?ID5726a00cf1498d1400e8e551The principles of European Union law are rules of law which have been developed by the EuropeanCourt of Justice that constitute unwritten rules which are not expressly provided for in the treaties butwhich affect how European Union law is interpreted and applies. In formulating these principles, thecourts have drawn on a variety of sources, including: public international law and legal doctrines andprinciples present in the legal systems of European Union member states and in the jurisprudence ofthe European Court of Human Rights. 
Accepted general principles of European Union Law includefundamental rights (see human rights), proportionality, legal certainty, equality before the law andsubsidiarity.Ground truth fundamental rights (see human rights), proportionality, legal certainty, equality be-fore the law and subsidiarityPrediction fundamental rights (see human rights), proportionality, legal certainty, equality beforethe law and subsidiarityWHY WAS TESLA RETURNED TO GOSPIC ?ID56dfaa047aa994140058dfbdOn 24 March 1879, Tesla was returned to Gospi under police guard for not having a residencepermit. On 17 April 1879, Milutin Tesla died at the age of 60 after contracting an unspecified illness(although some sources say that he died of a stroke). During that year, Tesla taught a large class ofstudents in his old school, Higher Real Gymnasium, in Gospi.Ground truth not having a residence permitPrediction not having a residence permitA.4 S AMPLES OF INCORRECT SQUAD PREDICTIONS BY THE DYNAMIC COATTENTIONNETWORKWHAT IS ONE SUPPLEMENTARY SOURCE OF EUROPEAN UNION LAW ?ID5725c3a9ec44d21400f3d506European Union law is applied by the courts of member states and the Court of Justice of the Euro-pean Union. Where the laws of member states provide for lesser rights European Union law can be13Published as a conference paper at ICLR 2017enforced by the courts of member states. In case of European Union law which should have beentransposed into the laws of member states, such as Directives, the European Commission can takeproceedings against the member state under the Treaty on the Functioning of the European Union.The European Court of Justice is the highest court able to interpret European Union law. Supple-mentary sources of European Union law include case law by the Court of Justice, international lawand general principles of European Union law.Ground truth international lawPrediction case law by the Court of JusticeComment The prediction produced by the model is correct, however it was not selected by Mechan-ical Turk annotators.WHO DESIGNED THE ILLUMINATION SYSTEMS THAT TESLA ELECTRIC LIGHT &MANUFACTURING INSTALLED ?ID56e0d6cf231d4119001ac424After leaving Edison’s company Tesla partnered with two businessmen in 1886, Robert Lane andBenjamin Vail, who agreed to finance an electric lighting company in Tesla’s name, Tesla ElectricLight & Manufacturing. The company installed electrical arc light based illumination systems de-signed by Tesla and also had designs for dynamo electric machine commutators, the first patentsissued to Tesla in the US.Ground truth TeslaPrediction Robert Lane and Benjamin VailComment The model produces an incorrect prediction that corresponds to people that funded Tesla,instead of Tesla who actually designed the illumination system. Empirically, we find that mostmistakes made by the model have the correct type (eg. named entity type) despite not includingtypes as prior knowledge to the model. In this case, the incorrect response has the correct type ofperson.CYDIPPID ARE TYPICALLY WHAT SHAPE ?ID57265746dd62a815002e821aCydippid ctenophores have bodies that are more or less rounded, sometimes nearly spherical andother times more cylindrical or egg-shaped; the common coastal ”sea gooseberry,” Pleurobrachia,sometimes has an egg-shaped body with the mouth at the narrow end, although some individuals aremore uniformly round. From opposite sides of the body extends a pair of long, slender tentacles,each housed in a sheath into which it can be withdrawn. 
Some species of cydippids have bodies thatare flattened to various extents, so that they are wider in the plane of the tentacles.Ground truth more or less rounded, egg-shapedPrediction sphericalComment Although the mistake is subtle, the prediction is incorrect. The statement “are more orless rounded, sometimes nearly spherical” suggests that the entity is more often “rounded” than“spherical” or “cylindrical” or “egg-shaped” (an answer given by an annotator). This suggests thatthe model has trouble discerning among multiple intuitive answers due to a lack of understanding ofthe relative severity of “more or less” versus “sometimes” and “other times”.14
HJgXCV9xx
Published as a conference paper at ICLR 2017

DIALOGUE LEARNING WITH HUMAN-IN-THE-LOOP

Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
Facebook AI Research, New York, USA
{jiwel, ahm, spchopra, ranzato, jase}@fb.com

ABSTRACT

An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.

1 INTRODUCTION

A good conversational agent (which we sometimes refer to as a learner or bot¹) should have the ability to learn from the online feedback from a teacher: adapting its model when making mistakes and reinforcing the model when the teacher's feedback is positive. This is particularly important in the situation where the bot is initially trained in a supervised way on a fixed synthetic, domain-specific or pre-built dataset before release, but will be exposed to a different environment after release (e.g., more diverse natural language utterance usage when talking with real humans, different distributions, special cases, etc.). Most recent research has focused on training a bot from fixed training sets of labeled data but seldom on how the bot can improve through online interaction with humans. Human (rather than machine) language learning happens during communication (Bassiri, 2011; Werts et al., 1995), and not from labeled datasets, hence making this an important subject to study.

In this work, we explore this direction by training a bot through interaction with teachers in an online fashion. The task is formalized under the general framework of reinforcement learning via the teacher's (dialogue partner's) feedback to the dialogue actions from the bot. The dialogue takes place in the context of question-answering tasks and the bot has to, given either a short story or a set of facts, answer a set of questions from the teacher. We consider two types of feedback: explicit numerical rewards as in conventional reinforcement learning, and textual feedback which is more natural in human dialogue, following (Weston, 2016). We consider two online training scenarios: (i) where the task is built with a dialogue simulator allowing for easy analysis and repeatability of experiments; and (ii) where the teachers are real humans using Amazon Mechanical Turk.

We explore important issues involved in online learning such as how a bot can be most efficiently trained using a minimal amount of teacher feedback, how a bot can harness different types of feedback signal, how to avoid pitfalls such as instability during online learning with different types of feedback via data balancing and exploration, and how to make learning with real humans feasible via data batching.
Our findings indicate that it is feasible to build a pipeline that starts from a model trained with fixed data and then learns from interactions with humans to improve itself.

¹ In this paper, we refer to a learner (either a human or a bot/dialogue agent which is a machine learning algorithm) as the student, and their more knowledgeable dialogue partner as the teacher.

2 RELATED WORK

Reinforcement learning has been widely applied to dialogue, especially in slot filling to solve domain-specific tasks (Walker, 2000; Schatzmann et al., 2006; Singh et al., 2000; 2002). Efforts include Markov Decision Processes (MDPs) (Levin et al., 1997; 2000; Walker et al., 2003; Pieraccini et al., 2009), POMDP models (Young et al., 2010; 2013; Gašić et al., 2013; 2014) and policy learning (Su et al., 2016). Such a line of research focuses mainly on frames with slots to fill, where the bot will use reinforcement learning to model a state transition pattern, generating dialogue utterances to prompt the appropriate user responses to put in the desired slots. This goal is different from ours, where we study end-to-end learning systems and also consider non-reward-based setups via textual feedback.

Our work is related to the line of research that focuses on supervised learning for question answering (QA) from dialogues (Dodge et al., 2015; Weston, 2016), either given a database of knowledge (Bordes et al., 2015; Miller et al., 2016) or short texts (Weston et al., 2015; Hermann et al., 2015; Rajpurkar et al., 2016). In our work, the discourse includes the statements made in the past, the question and answer, and crucially the response from the teacher. The latter is what makes the setting different from the standard QA setting, i.e. we use methods that leverage this response also, not just answering questions. Further, QA works only consider fixed datasets with gold annotations, i.e. they do not consider a reinforcement learning setting.

Our work is closely related to a recent work from Weston (2016) that learns through conducting conversations where supervision is given naturally in the response during the conversation. That work introduced the use of forward prediction that learns by predicting the teacher's feedback, in addition to using reward-based learning of correct answers. However, two important issues were not addressed: (i) it did not use a reinforcement learning setting, but instead used pre-built datasets with fixed policies given in advance; and (ii) experiments used only simulated and no real language data. Hence, models that can learn policies from real online communication were not investigated. To make the differences with our work clear, we will now detail these points further.

The experiments in (Weston, 2016) involve constructing pre-built fixed datasets, rather than training the learner within a simulator, as in our work. Pre-built datasets can only be made by fixing a prior in advance. They achieve this by choosing an omniscient (but deliberately imperfect) labeler that gets a fraction π_acc of examples always correct (the paper looked at values 50%, 10% and 1%). Again, this was not learned, and was fixed to generate the datasets. Note that the paper refers to these answers as coming from "the learner" (which should be the model), but since the policy is fixed it actually does not depend on the model.
In a realistic setting one does not have access to an omniscient labeler,one has to learn a policy completely from scratch, online, starting with a random policy, so theirsetting was not practically viable. In our work, when policy training is viewed as batch learningover iterations of the dataset, updating the policy on each iteration, (Weston, 2016) can be viewedas training only one iteration, whereas we perform multiple iterations. This is explained further inSections 4.2 and 5.1. We show in our experiments that performance improves over the iterations,i.e. it is better than the first iteration. We show that such online learning works for both reward-based numerical feedback and for forward prediction methods using textual feedback (under certainconditions which are detailed). This is a key contribution of our work.Finally, (Weston, 2016) only conducted experiments on synthetic or templated language, and notreal language, especially the feedback from the teacher was scripted. While we believe that syntheticdatasets are very important for developing understanding (hence we develop a simulator and conductexperiments also with synthetic data), for a new method to gain traction it must be shown to workon real data. We hence employ Mechanical Turk to collect real language data for the questions andimportantly for the teacher feedback and construct experiments in this real setting.3 D ATASET AND TASKSWe begin by describing the data setup we use. In our first set of experiments we build a simulatoras a testbed for learning algorithms. In our second set of experiments we use Mechanical Turk toprovide real human teachers giving feedback.2Published as a conference paper at ICLR 20173.1 S IMULATORThe simulator adapts two existing fixed datasets to our online setting. Following Weston (2016), weuse (i) the single supporting fact problem from the bAbI datasets (Weston et al., 2015) which consistsof 1000 short stories from a simulated world interspersed with questions; and (ii) the WikiMoviesdataset (Weston et al., 2015) which consists of roughly 100k (templated) questions over 75k entitiesbased on questions with answers in the open movie database (OMDb). Each dialogue takes placebetween a teacher, scripted by the simulation, and a bot. The communication protocol is as follows:(1) the teacher first asks a question from the fixed set of questions existing in the dataset, (2) the botanswers the question, and finally (3) the teacher gives feedback on the bot’s answer.We follow the paradigm defined in (Weston, 2016) where the teacher’s feedback takes the form ofeither textual feedback, a numerical reward, or both, depending on the task. For each dataset, thereare ten tasks, which are further described in Sec. A and illustrated in Figure 5 of the appendix. Wealso refer the readers to (Weston, 2016) for more detailed descriptions and the motivation behindthese tasks. In the main text of this paper we only consider Task 6 (“partial feedback”): the teacherreplies with positive textual feedback (6 possible templates) when the bot answers correctly, andpositive reward is given only 50% of the time. When the bot is wrong, the teacher gives textualfeedback containing the answer. Descriptions and experiments on the other tasks are detailed in theappendix. Example dialogues are given in Figure 1.The difference between our simulation and the original fixed tasks of Weston (2016) is that modelsare trained on-the-fly. 
After receiving feedback and/or rewards, we update the model (policy) and then deploy it to collect the teacher's feedback in the next episode or batch. This means the model's policy affects the data which is used to train it, which was not the case in the previous work.

Figure 1: Simulator sample dialogues for the bAbI task (left) and WikiMovies (right). We consider 10 different tasks following Weston (2016) but here describe only Task 6; other tasks are detailed in the appendix. The teacher's dialogue is in black and the bot is in red. (+) indicates receiving positive reward, given only 50% of the time even when correct.

bAbI Task 6: Partial Rewards
Mary went to the hallway.
John moved to the bathroom.
Mary travelled to the kitchen.
Where is Mary? kitchen
Yes, that's right!
Where is John? bathroom
Yes, that's correct! (+)

WikiMovies Task 6: Partial Rewards
What films are about Hawaii? 50 First Dates
Correct!
Who acted in Licence to Kill? Billy Madison
No, the answer is Timothy Dalton.
What genre is Saratoga Trunk in? Drama
Yes! (+)
. . .

Figure 2: Human Dialogue from Mechanical Turk (based on WikiMovies). The human teacher's dialogue is in black and the bot is in red. We show examples where the bot answers correctly (left) and incorrectly (right). Real humans provide more variability of language in both questions and textual feedback than in the simulator setup (cf. Figure 1).

Sample dialogues with correct answers from the bot:
Who wrote the Linguini Incident? richard shepard
Richard Shepard is one of the right answers here.
What year did The World Before Her premiere? 2012
Yep! That's when it came out.
Which are the movie genres of Mystery of the 13th Guest? crime
Right, it can also be categorized as a mystery.

Sample dialogues with incorrect answers from the bot:
What are some movies about a supermarket? supermarket
There were many options and this one was not among them.
Which are the genres of the film Juwanna Mann? kevin pollak
That is incorrect. Remember the question asked for a genre not name.
Who wrote the story of movie Coraline? fantasy
That's a movie genre and not the name of the writer. A better answer would of been Henry Selick or Neil Gaiman.

3.2 MECHANICAL TURK EXPERIMENTS

Finally, we extended WikiMovies using Mechanical Turk so that real human teachers are giving feedback rather than using a simulation. As both the questions and feedback are templated in the simulation, they are now both replaced with natural human utterances. Rather than having a set of simulated tasks, we have only one task, and we gave instructions to the teachers that they could give feedback as they see fit. The exact instructions given to the Turkers are given in Appendix B. In general, each independent response contains feedback like (i) positive or negative sentences; or (ii) a phrase containing the answer; or (iii) a hint, which are similar to the setups defined in the simulator. However, some human responses cannot be so easily categorized, and the lexical variability is much larger in human responses. Some examples of the collected data are given in Figure 2.

4 METHODS

4.1 MODEL ARCHITECTURE

In our experiments, we used variants of the End-to-End Memory Network (MemN2N) model (Sukhbaatar et al., 2015) as our underlying architecture for learning from dialogue.

The input to MemN2N is the last utterance of the dialogue history $x$ as well as a set of memories (context) $C = c_1, c_2, \ldots, c_N$.
The memory $C$ encodes both short-term memory, e.g., dialogue histories between the bot and the teacher, and long-term memories, e.g., the knowledge base facts that the bot has access to. Given the input $x$ and $C$, the goal is to produce an output/label $a$.

In the first step, the query $x$ is transformed to a vector representation $u_0$ by summing up its constituent word embeddings: $u_0 = Ax$. The input $x$ is a bag-of-words vector and $A$ is the $d \times V$ word embedding matrix, where $d$ denotes the embedding dimension and $V$ denotes the vocabulary size. Each memory $c_i$ is similarly transformed to a vector $m_i$. The model will read information from the memory by comparing the input representation $u_0$ with the memory vectors $m_i$ using softmax weights:

$$o_1 = \sum_i p_i^1 m_i, \qquad p_i^1 = \mathrm{softmax}(u_0^\top m_i) \quad (1)$$

This process selects memories relevant to the last utterance $x$, i.e., the memories with large values of $p_i^1$. The returned memory vector $o_1$ is the weighted sum of memory vectors. This process can be repeated to query the memory $N$ times (so-called "hops") by adding onto the original input, $u_1 = o_1 + u_0$, or to the previous state, $u_n = o_n + u_{n-1}$, and then using $u_n$ to query the memories again.

In the end, $u_N$ is input to a softmax function for the final prediction:

$$a = \mathrm{softmax}\left(u_N^\top y_1, u_N^\top y_2, \ldots, u_N^\top y_L\right) \quad (2)$$

where $y_1, \ldots, y_L$ denote the set of candidate answers. If the answer is a word, $y_i$ is the corresponding word embedding. If the answer is a sentence, $y_i$ is the embedding for the sentence, obtained in the same way that we obtain embeddings for the query $x$ and memory $C$.

The standard way MemN2N is trained is via a cross-entropy criterion on known input-output pairs, which we refer to as supervised or imitation learning. As our work is in a reinforcement learning setup where our model must make predictions to learn, this procedure will not work, so we instead consider reinforcement learning algorithms, which we describe next.
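Before turning to those algorithms, the following NumPy sketch illustrates the MemN2N forward pass of Eqs. 1–2 on bag-of-words inputs. It is a simplified illustration, not the released implementation: a single embedding matrix A is shared across queries, memories, and answers, whereas the full model can use separate (per-hop) embedding matrices.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def memn2n_forward(x_bow, memories_bow, answers_bow, A, n_hops=3):
    """MemN2N forward pass (Eqs. 1-2), with one shared embedding matrix A.

    x_bow: (V,) bag-of-words query; memories_bow: (N_mem, V);
    answers_bow: (L, V) candidate answers; A: (d, V) embedding matrix.
    """
    u = A @ x_bow                  # query embedding u_0
    M = memories_bow @ A.T         # memory vectors m_i, shape (N_mem, d)
    for _ in range(n_hops):
        p = softmax(M @ u)         # attention over memories (Eq. 1)
        o = p @ M                  # weighted sum of memory vectors
        u = o + u                  # u_n = o_n + u_{n-1}
    Y = answers_bow @ A.T          # candidate answer embeddings y_1..y_L
    return softmax(Y @ u)          # distribution over answers (Eq. 2)

rng = np.random.default_rng(0)
V, d = 50, 8
probs = memn2n_forward(rng.integers(0, 2, V).astype(float),
                       rng.integers(0, 2, (6, V)).astype(float),
                       np.eye(V)[:10], rng.normal(size=(d, V)))
print(probs.shape, probs.sum())  # (10,) ~1.0
```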
In the Reinforcement Learning literature, batch size isrelated to off-policy learning since the MemN2N policy is trained using episodes collected with astale version of the model. Our experiments show that our model and base algorithms are very robustto the choice of batch size, alleviating the need for correction terms in the learning algorithm (Bottouet al., 2013).We consider two strategies: (i) online batch size, whereby the target policy is updated after doing asingle pass over each batch (a batch size of 1 reverts to the usual on-policy online learning); and (ii)dataset-sized batch, whereby training is continued to convergence on the batch which is the size ofthe dataset, and then the target policy is updated with the new model, and a new batch is drawn andthe procedure iterates. These strategies can be applied to all the methods we use, described below.Next, we discuss the learning algorithms we considered in this work.4.2.1 R EWARD -BASED IMITATION (RBI)The simplest algorithm we first consider is the one employed in Weston (2016). RBI relies onpositive rewards provided by the teacher. It is trained to imitate the correct behavior of the learner,i.e., learning to predict the correct answers (with reward 1) at training time and disregarding theother ones. This is implemented by using a MemN2N that maps a dialogue input to a prediction, i.e.using the cross entropy criterion on the positively rewarded subset of the data.In order to make this work in the online setting which requires exploration to find the correct answer,we employ an -greedy strategy: the learner makes a prediction using its own model (the answerassigned the highest probability) with probability 1, otherwise it picks a random answer withprobability. The teacher will then give a reward of +1if the answer is correct, otherwise 0. Thebot will then learn to imitate the correct answers: predicting the correct answers while ignoring theincorrect ones.4.2.2 REINFORCEThe second algorithm we use is the REINFORCE algorithm (Williams, 1992), which maximizesthe expected cumulative reward of the episode, in our case the expected reward provided by theteacher. The expectation is approximated by sampling an answer from the model distribution. Let adenote the answer that the learner gives, p(a)denote the probability that current model assigns to a,rdenote the teacher’s reward, and J()denote the expectation of the reward. We have:rJ()rlogp(a)[rb] (3)wherebis the baseline value, which is estimated using a linear regression model that takes as inputthe output of the memory network after the last hop, and outputs a scalar bdenoting the estimationof the future reward. The baseline model is trained by minimizing the mean squared loss betweenthe estimated reward band actual reward r,jjrbjj2. We refer the readers to (Ranzato et al., 2015;Zaremba & Sutskever, 2015) for more details. 
The major difference between RBI and REINFORCE is that (i) the learner only tries to imitate correct behavior in RBI, while in REINFORCE it also leverages the incorrect behavior, and (ii) the learner explores using an $\epsilon$-greedy strategy in RBI, while in REINFORCE it uses the distribution over actions produced by the model itself.

4.2.3 FORWARD PREDICTION (FP)

FP (Weston, 2016) handles the situation where a numerical reward for a bot's answer is not available, meaning that there are no +1 or 0 labels available after a student's utterance. Instead, the model assumes the teacher gives textual feedback $t$ to the bot's answer, taking the form of a dialogue utterance, and the model tries to predict this instead. Suppose that $x$ denotes the teacher's question and $C = c_1, c_2, \ldots, c_N$ denotes the dialogue history as before. In FP, the model first maps the teacher's initial question $x$ and dialogue history $C$ to a vector representation $u$ using a memory network with multiple hops. Then the model performs another hop of attention over all possible student answers in $A$, with an additional part that incorporates the information of which candidate (i.e., $a$) was actually selected in the dialogue:

$$p_{\hat a} = \mathrm{softmax}(u^\top y_{\hat a}), \qquad o = \sum_{\hat a \in A} p_{\hat a}\,\left(y_{\hat a} + \beta \cdot \mathbb{1}[\hat a = a]\right) \quad (4)$$

where $y_{\hat a}$ denotes the vector representation for the student's answer candidate $\hat a$, and $\beta$ is a (learned) $d$-dimensional vector to signify the actual action $a$ that the student chooses. $o$ is then combined with $u$ to predict the teacher's feedback $t$ using a softmax:

$$u_1 = o + u, \qquad t = \mathrm{softmax}\left(u_1^\top x_{r_1}, u_1^\top x_{r_2}, \ldots, u_1^\top x_{r_N}\right) \quad (5)$$

where $x_{r_i}$ denotes the embedding for the $i$th response. In the online setting, the teacher will give textual feedback, and the learner needs to update its model using the feedback. It was shown in Weston (2016) that in an offline setting this procedure can work either on its own, or in conjunction with a method that uses numerical rewards as well, for improved performance. In the online setting, we consider two simple extensions:

• $\epsilon$-greedy exploration: with probability $\epsilon$ the student will give a random answer, and with probability $1 - \epsilon$ it will give the answer that its model assigns the largest probability. This method enables the model to explore the space of actions and to potentially discover correct answers.

• data balancing: cluster the set of teacher responses $t$ and then balance training across the clusters equally.² This is a type of experience replay (Mnih et al., 2013), but sampling with an evened distribution. Balancing stops part of the distribution dominating the learning. For example, if the model is not exposed to sufficient positive and negative feedback, and one class overly dominates, the learning process degenerates to a model that always predicts the same output regardless of its input.

² In the simulated data, because the responses are templates, this can be implemented by first randomly sampling the response, and then randomly sampling a story with that response; we keep the history of all stories seen, from which we sample. For real data, slightly more sophisticated clustering should be used.
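The following NumPy sketch illustrates the FP forward computation of Eqs. 4–5 for a single example. It is a toy illustration with random embeddings; β names the learned d-dimensional marker vector from Eq. 4.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward_prediction(u, Y, chosen, R, beta):
    """Predict the teacher's textual feedback (Eqs. 4-5).

    u: (d,) representation of the question and dialogue history from the
    memory network; Y: (L, d) candidate answer embeddings; chosen: index of
    the answer the student actually gave; R: (N, d) embeddings of the
    candidate teacher responses; beta: (d,) learned marker vector.
    """
    p = softmax(Y @ u)                   # attention over answer candidates
    Y_marked = Y.copy()
    Y_marked[chosen] = Y[chosen] + beta  # mark the action actually taken
    o = p @ Y_marked                     # Eq. 4
    u1 = o + u                           # first step of Eq. 5
    return softmax(R @ u1)               # distribution over teacher responses

rng = np.random.default_rng(0)
d = 8
t_probs = forward_prediction(rng.normal(size=d), rng.normal(size=(5, d)),
                             chosen=2, R=rng.normal(size=(6, d)),
                             beta=rng.normal(size=d))
print(t_probs.argmax())  # index of the most likely feedback response
```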
RBI needs random noise for exploring labels; otherwise it can get stuck predicting a subset of labels and fail.

²In the simulated data, because the responses are templates, this can be implemented by first randomly sampling the response, and then randomly sampling a story with that response; we keep the history of all stories seen, from which we sample. For real data, slightly more sophisticated clustering should be used.
³Code and data are available at https://github.com/facebook/MemNN/tree/master/HITL.

[Figure 3: six panels plotting test accuracy against training epoch on bAbI (Task 6): "Random Exploration for RBI", "Random Exploration for FP", and "Random Exploration for FP with Balancing" (each for ε ∈ {0, 0.2, 0.4, 0.6, 0.8, 1}); "Comparing RBI, FP and REINFORCE"; "RBI (ε=0.6) Varying Batch Size" and "FP (ε=0.6) Varying Batch Size" (batch sizes 20, 80, 320, 1000).]

Figure 3: Training epoch vs. test accuracy for bAbI (Task 6), varying exploration and batch size. Random exploration is important for both reward-based imitation (RBI) and forward prediction (FP). Performance is largely independent of batch size, and RBI performs similarly to REINFORCE. Note that supervised learning with gold standard labels, rather than reinforcement learning, achieves 100% accuracy on this task.

- REINFORCE obtains similar performance to RBI with optimal ε.
- FP with balancing, or with exploration via ε, both outperform FP alone.
- For both RBI and FP, performance is largely independent of online batch size.

Dataset Batch Size Experiments Given that larger online batch sizes appear to work well, and that this could be important in a real-world data collection setup where the same model is deployed to gather a large amount of feedback from humans, we conducted further experiments where the batch size is exactly equal to the dataset size, and for each batch training is completed to convergence.

[Figure 4: four panels plotting test accuracy against training epoch on WikiMovies (Task 6): "Random Exploration for RBI" and "Random Exploration for FP" (ε ∈ {0, 0.1, 0.2, 0.3, 0.4, 0.5, 1}); "RBI (ε=0.5) Varying Batch Size" (batch sizes 32, 320, 3200, 32000, full dataset); "Comparing RBI, FP and REINFORCE".]

Figure 4: WikiMovies: Training epoch vs. test accuracy on Task 6, varying (top left panel) exploration rate ε while setting batch size to 32 for RBI, (top right panel) the same for FP, (bottom left) batch size for RBI, and (bottom right) comparing RBI, REINFORCE and FP with ε = 0.5. The model is robust to the choice of batch size. RBI and REINFORCE perform comparably. Note that supervised learning with gold standard labels, rather than reinforcement learning, achieves 80% accuracy on this task (Weston, 2016).

After the model has been trained on the dataset, it is deployed to collect a new dataset of questions and answers, and the process is repeated.
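To make the FP scoring step of Eqs. (4)-(5) concrete, the following is a minimal NumPy sketch of a single forward pass. The function and variable names are ours rather than code from the paper, and the memory-network machinery that produces the representation u is assumed to exist elsewhere.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def fp_feedback_distribution(u, Y, selected, beta, X_r):
    """Sketch of Eqs. (4)-(5): score candidate teacher responses.

    u        : (d,)   representation of question x and history C (memory net)
    Y        : (A, d) embeddings y of the A candidate answers
    selected : int    index of the answer a the student actually gave
    beta     : (d,)   learned vector marking the selected candidate
    X_r      : (R, d) embeddings x_r of the R candidate feedback responses
    """
    p = softmax(Y @ u)                     # Eq. (4): attention over answers
    marked = Y.copy()
    marked[selected] += beta               # add beta to the chosen answer only
    o = (p[:, None] * marked).sum(axis=0)  # attention-weighted combination
    u1 = o + u                             # Eq. (5): combine with u
    return softmax(X_r @ u1)               # distribution over teacher feedback
```

At training time, the negative log-likelihood of the teacher's actual feedback t under this distribution would serve as the loss.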
Table 1 reports test accuracy at each iteration of training, using bAbI Task 6 as the case study (see the appendix for results on other tasks). The following conclusions can be made for this setting:

- RBI improves in performance as we iterate. Unlike in the online case, RBI does not need random exploration. We believe this is because the first batch, which is collected with a randomly initialized model, contains enough variety of examples with positive rewards that the model does not get stuck predicting a subset of labels.
- FP is not stable in this setting. This is because once the model gets very good at making predictions (at the third iteration), it is no longer exposed to a sufficient number of negative responses. From that point on, learning degenerates and performance drops, as the model always predicts the same responses. At the next iteration it will recover, since it has a more balanced training set, but then it will collapse again, in an oscillating behavior.
- FP does work if extended with balancing or with random exploration with sufficiently large ε.
- RBI+FP also works well and helps with the instability of FP, alleviating the need for random exploration and data balancing.

Overall, our simulation results indicate that while a bot can be effectively trained fully online from bot-teacher interactions, collecting real dialogue data in batches (which is easier to collect and iterate experiments over) is also a viable approach. We hence pursue the latter approach in our next set of experiments.

Iteration                          1     2     3     4     5     6
Imitation Learning                 0.24  0.23  0.23  0.22  0.23  0.23
Reward Based Imitation (RBI)       0.74  0.87  0.90  0.96  0.96  0.98
Forward Pred. (FP)                 0.99  0.96  1.00  0.30  1.00  0.29
RBI+FP                             0.99  0.96  0.97  0.95  0.94  0.97
FP (balanced)                      0.99  0.97  0.97  0.97  0.97  0.97
FP (rand. exploration ε = 0.25)    0.96  0.88  0.94  0.26  0.64  0.99
FP (rand. exploration ε = 0.5)     0.98  0.98  0.99  0.98  0.95  0.99

Table 1: Test accuracy of various models per iteration in the dataset batch size case (using batch size equal to the size of the full training set) for bAbI, Task 6. Results > 0.95 are in bold.
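As a companion to the RBI rows in Table 1, here is a minimal sketch of how an RBI training set would be assembled from logged interactions: RBI simply imitates the answers that received positive reward. The names below are illustrative, not the paper's code.

```python
def rbi_training_pairs(episodes):
    """Reward-Based Imitation: keep only (context, answer) pairs whose answer
    received a positive reward, and train the policy to imitate those.

    episodes: iterable of (context, answer, reward) triples, reward in {0, 1}.
    """
    return [(context, answer)
            for context, answer, reward in episodes
            if reward > 0]

# Example: only the positively rewarded pair survives.
pairs = rbi_training_pairs([("q1", "a1", 1), ("q2", "a2", 0)])
```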
Relation to experiments in Weston (2016) As described in detail in Section 2, the datasets we use in our experiments were introduced in Weston et al. (2015). However, that work involved constructing pre-built fixed policies (and hence, datasets), rather than training the learner in a reinforcement/interactive learning setting using a simulator, as in our work. They achieved this by choosing an omniscient (but deliberately imperfect) labeler that gets π_acc examples always correct (the paper looked at values 1%, 10% and 50%). In a realistic setting one does not have access to an omniscient labeler; one has to learn a policy completely from scratch, online, starting with a random policy, as we do here. Nevertheless, it is possible to compare our learnt policies to those results, because we use the same train/valid/test splits.

The clearest comparison is via Table 1, where the policy is learnt using batch iterations of the dataset, updating the policy on each iteration. Weston et al. (2015) can be viewed as training only one iteration, with a pre-built policy, as explained above, where 59%, 81% and 99% accuracy was obtained for RBI for π_acc of 1%, 10% and 50% respectively.⁴ While a π_acc of 50% is good enough to solve the task, lower values are not. In this work, a random policy begins with 74% accuracy on the first iteration, but importantly, on each iteration the policy is updated and improves, with values of 87% and 90% on iterations 2 and 3 respectively, and 98% on iteration 6. This is a key differentiator from the work of Weston et al. (2015), where such improvement was not shown. We show that such online learning works both for reward-based numerical feedback and for forward prediction methods using textual feedback (as long as balancing or random exploration is performed sufficiently). The final performance outperforms most values of π_acc from Weston et al. (2015), unless π_acc is so large that the task is already solved. This is a key contribution of our work.

Similar conclusions can be made for Figures 3 and 4. Despite our initial random policy starting at close to 0% accuracy, if random exploration ε ≥ 0.2 is employed, then after a number of epochs the performance is better than most values of π_acc from Weston et al. (2015); e.g., compare the accuracies given in the previous paragraph (59%, 81% and 99%) to Figure 3, top left.

5.2 HUMAN FEEDBACK

We employed Turkers to both ask questions and then give textual feedback on the bot's answers, as described in Section 3.2. Our experimental protocol was as follows. We first trained a MemN2N using supervised (i.e., imitation) learning on a training set of 1000 questions produced by Turkers, using the known correct answers provided by the original dataset (and no textual feedback). Next, using the trained policy, we collected textual feedback for the responses of the bot on an additional 10,000 questions. Examples from the collected dataset are given in Figure 2. Given this dataset, we compare various models: RBI, FP and FP+RBI. As we know the correct answers to the additional questions, we can assign a positive reward to questions the bot got correct. We hence measure the impact of the sparseness of this reward signal, where a fraction r of the additional examples have rewards. The models are tested on a test set of 8,000 questions (produced by Turkers), and hyperparameters are tuned on a similarly sized validation set. Note this is a harder task than the WikiMovies task in the simulator, due to the use of natural language from Turkers, hence lower test performance is expected.

⁴Note, this is not the same as a randomly initialized neural network policy, because due to the synthetic construction with an omniscient labeler the labels will be balanced. In our work, we learn the policy from randomly initialized weights, which are updated as we learn the policy.

Results are given in Table 2. They indicate that both RBI and FP are useful. When rewards are sparse, FP still works via the textual feedback, while RBI can only use the initial 1000 examples when r = 0. As FP does not use numerical rewards at all, it is invariant to the parameter r. The combination of FP and RBI outperforms either alone.

Model                          r = 0   r = 0.1  r = 0.5  r = 1
Reward Based Imitation (RBI)   0.333   0.340    0.365    0.375
Forward Prediction (FP)        0.358   0.358    0.358    0.358
RBI+FP                         0.431   0.438    0.443    0.441

Table 2: Incorporating Feedback From Humans via Mechanical Turk. Textual feedback is provided for 10,000 model predictions (from a model trained with 1k labeled training examples), and additional sparse binary rewards (a fraction r of examples have rewards).
Forward Prediction and Reward-based Imitation are both useful, with their combination performing best.

We also conducted additional experiments comparing with (i) synthetic feedback and (ii) the fully supervised case, which are given in Appendix C.1. They show that the results with human feedback are competitive with these approaches.

6 CONCLUSION

We studied dialogue learning of end-to-end models using textual feedback and numerical rewards. Both fully online and iterative batch settings are viable approaches to policy learning, as long as possible instabilities in the learning algorithms are taken into account. Secondly, we showed for the first time that the recently introduced FP method can work both in an online setting and on real human feedback. Overall, our results indicate that it is feasible to build a practical pipeline that starts with a model trained on an initial fixed dataset, which then learns from interactions with humans in a (semi-)online fashion to improve itself. Future research should work towards doing this in a never-ending learning setup.

REFERENCES

Mohammad Amin Bassiri. Interactional feedback and the impact of attitude and motivation on noticing L2 form. English Language and Literature Studies, 1(2):61, 2011.

Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015.

Léon Bottou, Jonas Peters, Joaquin Quiñonero-Candela, Denis X. Charles, D. Max Chickering, Elon Portugaly, Dipankar Ray, Patrice Simard, and Ed Snelson. Counterfactual reasoning and learning systems: The example of computational advertising. Journal of Machine Learning Research, 14:3207–3260, 2013.

Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, and Jason Weston. Evaluating prerequisite qualities for learning end-to-end dialog systems. arXiv preprint arXiv:1511.06931, 2015.

Milica Gašić, Catherine Breslin, Matthew Henderson, Dongho Kim, Martin Szummer, Blaise Thomson, Pirros Tsiakoulis, and Steve Young. POMDP-based dialogue manager adaptation to extended domains. In Proceedings of SIGDIAL, 2013.

Milica Gašić, Dongho Kim, Pirros Tsiakoulis, Catherine Breslin, Matthew Henderson, Martin Szummer, Blaise Thomson, and Steve Young. Incremental on-line adaptation of POMDP-based dialogue managers to extended domains. In Proceedings of InterSpeech, 2014.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1693–1701, 2015.

Esther Levin, Roberto Pieraccini, and Wieland Eckert. Learning dialogue strategies within the Markov decision process framework. In Automatic Speech Recognition and Understanding, 1997 IEEE Workshop on, pp. 72–79. IEEE, 1997.

Esther Levin, Roberto Pieraccini, and Wieland Eckert. A stochastic model of human-machine interaction for learning dialog strategies. IEEE Transactions on Speech and Audio Processing, 8(1):11–23, 2000.

Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. arXiv preprint arXiv:1606.03126, 2016.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning.
arXiv preprint arXiv:1312.5602, 2013.

Roberto Pieraccini, David Suendermann, Krishna Dayanidhi, and Jackson Liscombe. Are we there yet? Research in commercial spoken dialog systems. In International Conference on Text, Speech and Dialogue, pp. 3–13. Springer, 2009.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.

Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015.

Jost Schatzmann, Karl Weilhammer, Matt Stuttle, and Steve Young. A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies. The Knowledge Engineering Review, 21(02):97–126, 2006.

Satinder Singh, Michael Kearns, Diane J Litman, Marilyn A Walker, et al. Empirical evaluation of a reinforcement learning spoken dialogue system. In AAAI/IAAI, pp. 645–651, 2000.

Satinder Singh, Diane Litman, Michael Kearns, and Marilyn Walker. Optimizing dialogue management with reinforcement learning: Experiments with the NJFun system. Journal of Artificial Intelligence Research, 16:105–133, 2002.

Pei-Hao Su, Milica Gasic, Nikola Mrksic, Lina Rojas-Barahona, Stefan Ultes, David Vandyke, Tsung-Hsien Wen, and Steve Young. Continuously learning neural dialogue management. arXiv preprint arXiv:1606.02689, 2016.

Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pp. 2440–2448, 2015.

Marilyn A. Walker. An application of reinforcement learning to dialogue strategy selection in a spoken dialogue system for email. Journal of Artificial Intelligence Research, 12:387–416, 2000.

Marilyn A Walker, Rashmi Prasad, and Amanda Stent. A trainable generator for recommendations in multimodal dialog. In INTERSPEECH, 2003.

Margaret G Werts, Mark Wolery, Ariane Holcombe, and David L Gast. Instructive feedback: Review of parameters and effects. Journal of Behavioral Education, 5(1):55–75, 1995.

Jason Weston. Dialog-based language learning. arXiv preprint arXiv:1604.06045, 2016.

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015.

Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.

Steve Young, Milica Gašić, Simon Keizer, François Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu. The hidden information state model: A practical framework for POMDP-based spoken dialogue management. Computer Speech & Language, 24(2):150–174, 2010.

Steve Young, Milica Gašić, Blaise Thomson, and Jason D Williams. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160–1179, 2013.

Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural Turing machines.
arXiv preprint arXiv:1505.00521, 2015.

A FURTHER SIMULATOR TASK DETAILS

The tasks in Weston (2016) were specifically:

- Task 1: The teacher tells the student exactly what they should have said (supervised baseline).
- Task 2: The teacher replies with positive textual feedback and reward, or negative textual feedback.
- Task 3: The teacher gives textual feedback containing the answer when the bot is wrong.
- Task 4: The teacher provides a hint by providing the class of the correct answer, e.g., "No it's a movie" for the question "which movie did Forest Gump star in?".
- Task 5: The teacher provides a reason why the student's answer is wrong by pointing out the relevant supporting fact from the knowledge base.
- Task 6: The teacher gives positive reward only 50% of the time.
- Task 7: Rewards are missing and the teacher only gives natural language feedback.
- Task 8: Combines Tasks 1 and 2 to see whether a learner can learn successfully from both forms of supervision at once.
- Task 9: The bot asks questions of the teacher about what it has done wrong.
- Task 10: The bot will receive a hint rather than the correct answer after asking for help.

We refer the reader to Weston (2016) for more detailed descriptions and the motivation behind these tasks. The difference in our system is that the model can be trained on-the-fly via the simulator: after receiving feedback and/or rewards, the model can update itself and apply its learning to the next episode. We present results on Tasks 2, 3 and 4 in this appendix.

B INSTRUCTIONS GIVEN TO TURKERS

These are the instructions given for the textual feedback Mechanical Turk task (we also constructed a separate task to collect the initial questions, not described here):

Title: Write brief responses to given dialogue exchanges (about 15 min)

Description: Write a brief response to a student's answer to a teacher's question, providing feedback to the student on their answer.

Instructions:

Each task consists of the following triplets:
1. a question by the teacher
2. the correct answer(s) to the question (separated by "OR")
3. a proposed answer in reply to the question from the student

Consider the scenario where you are the teacher and have already asked the question, and received the reply from the student. Please compose a brief response giving feedback to the student about their answer. The correct answers are provided so that you know whether the student was correct or not.

For example, given 1) question: "what is a color in the united states flag?"; 2) correct answer: "white, blue, red"; 3) student reply: "red", your response could be something like "that's right!"; for 3) reply: "green", you might say "no that's not right" or "nope, a correct answer is actually white".

Please vary responses and try to minimize spelling mistakes. If the same responses are copied/pasted or overused, we'll reject the HIT.

Avoid naming the student or addressing "the class" directly.

We will consider bonuses for higher quality responses during review.
[Figure 5 reconstructed below as two columns: the teacher's response when the student answered correctly (left) and incorrectly (right). Opening exchange, left: T: Which movie did Tom Hanks star in? S: Forrest Gump; right: T: Which movie did Tom Hanks star in? S: Brad Pitt.]

Task 1: Imitating an Expert Student. Left: S: Forrest Gump / T: (no response). Right: S: Forrest Gump / T: (no response).
Task 2: Positive and Negative Feedback. Left: T: Yes, that's right! (+). Right: T: No, that's incorrect!
Task 3: Answers Supplied by Teacher. Left: T: Yes, that is correct. (+). Right: T: No, the answer is Forrest Gump!
Task 4: Hints Supplied by Teacher. Left: T: Correct! (+). Right: T: No, it's a movie!
Task 5: Supporting Facts Supplied by Teacher. Left: T: That's right. (+). Right: T: No, because Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise!
Task 6: Partial Feedback. Left: if random(0,1) < 0.5 then T: That's correct. (+) else T: That's correct. Right: T: Sorry, wrong.
Task 7: No Feedback. Left: T: Yes. Right: T: No.
Task 8: Imitation and Feedback Mixture. Left: if random(0,1) < 0.5 then T: Yes, that's right! (+) else T: (no response). Right: if random(0,1) < 0.5 then T: Wrong. else S: Forrest Gump.
Task 9: Asking For Corrections. Left: T: Correct! (+). Right: T: No, that's wrong. S: Can you help me? T: Forrest Gump!
Task 10: Asking For Supporting Facts. Left: T: Yes, that's right! (+). Right: T: Sorry, that's not it. S: Can you help me? T: A relevant fact is that Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise!

Figure 5: The ten tasks our simulator implements, which evaluate different forms of teacher response and binary feedback. In each case the same example from WikiMovies is given for simplicity, where the student answered correctly for all tasks (left) or incorrectly (right). Red text denotes responses by the bot, with S denoting the bot. Blue text is spoken by the teacher, with T denoting the teacher's response. For imitation learning the teacher provides the response the student should say, denoted with S in Tasks 1 and 8. A (+) denotes a positive reward.

C ADDITIONAL EXPERIMENTS

Iteration                          1     2     3     4     5     6
Imitation Learning                 0.24  0.23  0.23  0.23  0.25  0.25
Reward Based Imitation (RBI)       0.95  0.99  0.99  0.99  1.00  1.00
Forward Pred. (FP)                 1.00  0.19  0.86  0.30  0.99  0.22
RBI+FP                             0.99  0.99  0.99  0.99  0.99  0.99
FP (balanced)                      0.99  0.97  0.98  0.98  0.96  0.97
FP (rand. exploration ε = 0.25)    0.99  0.91  0.93  0.88  0.94  0.94
FP (rand. exploration ε = 0.5)     0.98  0.93  0.97  0.96  0.95  0.97

Table 3: Test accuracy of various models in the dataset batch size case (using batch size equal to the size of the full training set) for bAbI, Task 3. Results > 0.95 are in bold.

[Figure 6: six panels plotting test accuracy against training epoch on bAbI (Task 2): Random Exploration for RBI, Random Exploration for FP, Random Exploration for FP with Balancing, Comparing RBI, FP and REINFORCE, RBI (ε=0.6) Varying Batch Size, FP (ε=0.6) Varying Batch Size.]

Figure 6: Training epoch vs.
test accuracy for bAbI (Task 2), varying exploration and batch size.

[Figure 7: the same six panels as Figure 6, for bAbI (Task 3).]

Figure 7: Training epoch vs. test accuracy for bAbI (Task 3), varying exploration and batch size. Random exploration is important for both reward-based imitation (RBI) and forward prediction (FP).

[Figure 8: the same six panels as Figure 6, for bAbI (Task 4).]

Figure 8: Training epoch vs. test accuracy for bAbI (Task 4), varying exploration and batch size. Random exploration is important for both reward-based imitation (RBI) and forward prediction (FP).

[Figure 9: four panels plotting test accuracy against training epoch on WikiMovies (Task 2): Random Exploration for RBI, Random Exploration for FP, RBI (ε=0.5) Varying Batch Size, Comparing RBI, FP and REINFORCE.]

Figure 9: WikiMovies: Training epoch vs. test accuracy on Task 2, varying (top left panel) exploration rate ε while setting batch size to 32 for RBI, (top right panel) the same for FP, (bottom left) batch size for RBI, and (bottom right) comparing RBI, REINFORCE and FP setting ε = 0.5. The model is robust to the choice of batch size.
RBI and REINFORCE perform comparably.

[Figure 10: the same four panels as Figure 9, for WikiMovies (Task 3).]

Figure 10: WikiMovies: Training epoch vs. test accuracy on Task 3, varying (top left panel) exploration rate ε while setting batch size to 32 for RBI, (top right panel) the same for FP, (bottom left) batch size for RBI, and (bottom right) comparing RBI, REINFORCE and FP setting ε = 0.5. The model is robust to the choice of batch size. RBI and REINFORCE perform comparably.

[Figure 11: the same four panels as Figure 9, for WikiMovies (Task 4).]

Figure 11: WikiMovies: Training epoch vs. test accuracy on Task 4, varying (top left panel) exploration rate ε while setting batch size to 32 for RBI, (top right panel) the same for FP, (bottom left) batch size for RBI, and (bottom right) comparing RBI, REINFORCE and FP setting ε = 0.5. The model is robust to the choice of batch size. RBI and REINFORCE perform comparably.

[Figure 12: four panels, each "FP (ε=0.5) Varying Batch Size" (batch sizes 32, 320, 3200, 32000, full dataset), one per task.]

Figure 12: WikiMovies: Training epoch vs. test accuracy with varying batch size for FP on Task 2 (top left panel), Task 3 (top right panel), Task 4 (bottom left panel) and Task 6 (bottom right panel), setting ε = 0.5. The model is robust to the choice of batch size.

C.1 ADDITIONAL EXPERIMENTS FOR MECHANICAL TURK SETUP

In Section 5.2 we conducted experiments with real human feedback. Here, we compare this to a form of synthetic feedback, mostly as a sanity check, but also to see how much improvement we can get if the signal is simpler and cleaner (as it is synthetic). We hence constructed synthetic feedback for the 10,000 responses, using either Task 2 (positive or negative feedback), Task 3 (answers provided by teacher) or a mix (Task 2+3) where we use one or the other for each example (50% chance of each). The latter gives the synthetic data a mixed set of responses, which more closely mimics the real data case. The results are given in Table 4.
The RBI+FP combination is better using the synthetic data than the real data with Task 2+3 or Task 3, which is to be expected, but the real data is competitive, despite the difficulty of dealing with its lexical and semantic variability. The real data is better than using Task 2 synthetic data.

For comparison purposes, we also ran a supervised (imitation learning) MemN2N on different sized training sets of Turker-authored questions with gold annotated labels (so there are no numerical rewards or textual feedback; this is a pure supervised setting). The results are given in Table 5. They indicate that RBI+FP, and even FP alone, get close to the performance of fully supervised learning.

Model                                         r = 0   r = 0.1  r = 0.5  r = 1
Reward Based Imitation (RBI)                  0.333   0.340    0.365    0.375
Forward Prediction (FP) [real]                0.358   0.358    0.358    0.358
RBI+FP [real]                                 0.431   0.438    0.443    0.441
Forward Prediction (FP) [synthetic Task 2]    0.188   0.188    0.188    0.188
Forward Prediction (FP) [synthetic Task 2+3]  0.328   0.328    0.328    0.328
Forward Prediction (FP) [synthetic Task 3]    0.361   0.361    0.361    0.361
RBI+FP [synthetic Task 2]                     0.382   0.383    0.407    0.408
RBI+FP [synthetic Task 2+3]                   0.459   0.465    0.464    0.478
RBI+FP [synthetic Task 3]                     0.473   0.486    0.490    0.494

Table 4: Incorporating Feedback From Humans via Mechanical Turk: comparing real human feedback to synthetic feedback. Textual feedback is provided for 10,000 model predictions (from a model trained with 1k labeled training examples), plus additional sparse binary rewards (a fraction r of examples have rewards). We compare real feedback to synthetic feedback when using FP or RBI+FP.

Train data size     1k     5k     10k    20k    60k
Supervised MemN2N   0.333  0.429  0.476  0.526  0.599

Table 5: Fully supervised (imitation learning) results on human questions.

           r = 0   r = 0.1  r = 0.5  r = 1
ε = 0      0.499   0.502    0.501    0.502
ε = 0.1    0.494   0.496    0.501    0.502
ε = 0.25   0.493   0.495    0.496    0.499
ε = 0.5    0.501   0.499    0.501    0.504
ε = 1      0.497   0.497    0.498    0.497

Table 6: Second iteration of feedback. Using synthetic textual feedback of synthetic Task 2+3 with the RBI+FP method, an additional iteration of data collection of 10k examples, varying the sparse binary reward fraction r and exploration ε. The performance of the first-iteration model was 0.478.

C.2 SECOND ITERATION OF FEEDBACK

We conducted experiments with an additional iteration of data collection for the case of binary rewards and textual feedback, using the synthetic Task 2+3 mix. We selected the best model from the previous training, using RBI+FP with r = 1, which previously gave a test accuracy of 0.478 (see Table 4). Using that model as a predictor, we collected an additional 10,000 training examples. We then continue to train our model using the original 1k+10k training set, plus the additional 10k. As before, we report the test accuracy varying r on the additional collected set. We also report the performance from varying ε, the proportion of random exploration of predictions on the new set. The results are reported in Table 6. Overall, performance is improved in the second iteration, with slightly better performance for large r and ε = 0.5. However, the improvement is mostly invariant to those parameters, likely because FP takes advantage of feedback from incorrect predictions in any case.
S1c2cvqee
Published as a conference paper at ICLR 2017

DESIGNING NEURAL NETWORK ARCHITECTURES USING REINFORCEMENT LEARNING

Bowen Baker, Otkrist Gupta, Nikhil Naik & Ramesh Raskar
Media Laboratory, Massachusetts Institute of Technology, Cambridge MA 02139, USA
{bowen, otkrist, naik, raskar}@mit.edu

ABSTRACT

At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using Q-learning with an ε-greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks.

1 INTRODUCTION

Deep convolutional neural networks (CNNs) have seen great success in the past few years on a variety of machine learning problems (LeCun et al., 2015). A typical CNN architecture consists of several convolution, pooling, and fully connected layers. While constructing a CNN, a network designer has to make numerous design choices: the number of layers of each type, the ordering of layers, and the hyperparameters for each type of layer, e.g., the receptive field size, stride, and number of receptive fields for a convolution layer. The number of possible choices makes the design space of CNN architectures extremely large and hence infeasible for an exhaustive manual search. While there has been some work (Pinto et al., 2009; Bergstra et al., 2013; Domhan et al., 2015) on automated or computer-aided neural network design, new CNN architectures or network design elements are still primarily developed by researchers using new theoretical insights or intuition gained from experimentation.

In this paper, we seek to automate the process of CNN architecture selection through a meta-modeling procedure based on reinforcement learning. We construct a novel Q-learning agent whose goal is to discover CNN architectures that perform well on a given machine learning task with no human intervention. The learning agent is given the task of sequentially picking layers of a CNN model. By discretizing and limiting the layer parameters to choose from, the agent is left with a finite but large space of model architectures to search from. The agent learns through random exploration and slowly begins to exploit its findings to select higher performing models using the ε-greedy strategy (Mnih et al., 2015). The agent receives the validation accuracy on the given machine learning task as the reward for selecting an architecture. We expedite the learning process through repeated memory sampling using experience replay (Lin, 1993).
We refer to this Q-learning based meta-modeling method as MetaQNN, which is summarized in Figure 1.¹ We conduct experiments with a space of model architectures consisting of only standard convolution, pooling, and fully connected layers, using three standard image classification datasets: CIFAR-10, SVHN, and MNIST.

¹For more information, model files, and code, please visit https://bowenbaker.github.io/metaqnn/

[Figure 1: three blocks. Left: "Agent Samples Network Topology". Middle: "Train Network" and "Store in Replay Memory", with an example topology C(64,5,1) C(128,3,1) P(2,2) SM(10) and performance 93.3%. Right: "Agent Learns From Memory" (sample memory, update Q-values).]

Figure 1: Designing CNN Architectures with Q-learning: The agent begins by sampling a Convolutional Neural Network (CNN) topology conditioned on a predefined behavior distribution and the agent's prior experience (left block). That CNN topology is then trained on a specific task; the topology description and performance, e.g. validation accuracy, are then stored in the agent's memory (middle block). Finally, the agent uses its memories to learn about the space of CNN topologies through Q-learning (right block).

The learning agent discovers CNN architectures that beat all existing networks designed only with the same layer types (e.g., Springenberg et al. (2014); Srivastava et al. (2015)). In addition, their performance is competitive against network designs that include complex layer types and training procedures (e.g., Clevert et al. (2015); Lee et al. (2016)). Finally, the MetaQNN selected models comfortably outperform previous automated network design methods (Stanley & Miikkulainen, 2002; Bergstra et al., 2013). The top network designs discovered by the agent on one dataset are also competitive when trained on other datasets, indicating that they are suited for transfer learning tasks. Moreover, we can generate not just one, but several varied, well-performing network designs, which can be ensembled to further boost prediction performance.

2 RELATED WORK

Designing neural network architectures: Research on automating neural network design goes back to the 1980s, when genetic algorithm-based approaches were proposed to find both architectures and weights (Schaffer et al., 1992). However, to the best of our knowledge, networks designed with genetic algorithms, such as those generated with the NEAT algorithm (Stanley & Miikkulainen, 2002), have been unable to match the performance of hand-crafted networks on standard benchmarks (Verbancsics & Harguess, 2013). Other biologically inspired ideas have also been explored; motivated by screening methods in genetics, Pinto et al. (2009) proposed a high-throughput network selection approach where they randomly sample thousands of architectures and choose promising ones for further training. In recent work, Saxena & Verbeek (2016) propose to sidestep the architecture selection process through densely connected networks of layers, which come closer to the performance of hand-crafted networks.

Bayesian optimization has also been used (Shahriari et al., 2016) for automatic selection of network architectures (Bergstra et al., 2013; Domhan et al., 2015) and hyperparameters (Snoek et al., 2012; Swersky et al., 2013). Notably, Bergstra et al.
(2013) proposed a meta-modeling approach based on Tree of Parzen Estimators (TPE) (Bergstra et al., 2011) to choose both the type of layers and the hyperparameters of feed-forward networks; however, they fail to match the performance of hand-crafted networks.

Reinforcement Learning: Recently there has been much work at the intersection of reinforcement learning and deep learning. For instance, methods using CNNs to approximate the Q-learning utility function (Watkins, 1989) have been successful in game-playing agents (Mnih et al., 2015; Silver et al., 2016) and robotic control (Lillicrap et al., 2015; Levine et al., 2016). These methods rely on phases of exploration, where the agent tries to learn about its environment through sampling, and exploitation, where the agent uses what it learned about the environment to find better paths. In traditional reinforcement learning settings, over-exploration can lead to slow convergence times, yet over-exploitation can lead to convergence to local minima (Kaelbling et al., 1996). However, in the case of large or continuous state spaces, the ε-greedy strategy of learning has been empirically shown to converge (Vermorel & Mohri, 2005). Finally, when the state space is large or exploration is costly, the experience replay technique (Lin, 1993) has proved useful in experimental settings (Adam et al., 2012; Mnih et al., 2015). We incorporate these techniques (Q-learning, the ε-greedy strategy, and experience replay) in our algorithm design.

3 BACKGROUND

Our method relies on Q-learning, a type of reinforcement learning. We now summarize the theoretical formulation of Q-learning, as adapted to our problem. Consider the task of teaching an agent to find optimal paths as a Markov Decision Process (MDP) in a finite-horizon environment. Constraining the environment to be finite-horizon ensures that the agent will deterministically terminate in a finite number of time steps. In addition, we restrict the environment to have a discrete and finite state space S as well as action space U. For any state s_i ∈ S, there is a finite set of actions, U(s_i) ⊆ U, that the agent can choose from. In an environment with stochastic transitions, an agent in state s_i taking some action u ∈ U(s_i) will transition to state s_j with probability p_{s'|s,u}(s_j | s_i, u), which may be unknown to the agent. At each time step t, the agent is given a reward r_t, dependent on the transition from state s to s' and action u. r_t may also be stochastic, according to a distribution p_{r|s',s,u}. The agent's goal is to maximize the total expected reward over all possible trajectories, i.e., \max_{T_i \in \mathcal{T}} R_{T_i}, where the total expected reward for a trajectory T_i is

R_{T_i} = \sum_{(s,u,s') \in T_i} \mathbb{E}_{r|s,u,s'}[r \mid s, u, s'].    (1)

Though we limit the agent to a finite state and action space, there is still a combinatorially large number of trajectories, which motivates the use of reinforcement learning. We define the maximization problem recursively in terms of subproblems as follows. For any state s_i ∈ S and subsequent action u ∈ U(s_i), we define the maximum total expected reward to be Q(s_i, u). Q(·) is known as the action-value function, and individual Q(s_i, u) are known as Q-values.
The recursive maximization equation, which is known as Bellman's Equation, can be written as

Q(s_i, u) = \mathbb{E}_{s_j|s_i,u}\!\left[ \mathbb{E}_{r|s_i,u,s_j}[r \mid s_i, u, s_j] + \gamma \max_{u' \in U(s_j)} Q(s_j, u') \right].    (2)

In many cases, it is impossible to analytically solve Bellman's Equation (Bertsekas, 2015), but it can be formulated as an iterative update

Q_{t+1}(s_i, u) = (1-\alpha)\, Q_t(s_i, u) + \alpha \left[ r_t + \gamma \max_{u' \in U(s_j)} Q_t(s_j, u') \right].    (3)

Equation 3 is the simplest form of Q-learning, proposed by Watkins (1989). For well formulated problems, \lim_{t \to \infty} Q_t(s, u) = Q(s, u), as long as each transition is sampled infinitely many times (Bertsekas, 2015). The update equation has two parameters: (i) α is a Q-learning rate which determines the weight given to new information over old information, and (ii) γ is the discount factor which determines the weight given to short-term rewards over future rewards. The Q-learning algorithm is model-free, in that the learning agent can solve the task without ever explicitly constructing an estimate of environmental dynamics. In addition, Q-learning is off-policy, meaning it can learn about optimal policies while exploring via a non-optimal behavioral distribution, i.e. the distribution by which the agent explores its environment.

We choose the behavior distribution using an ε-greedy strategy (Mnih et al., 2015). With this strategy, a random action is taken with probability ε and the greedy action, \max_{u \in U(s_i)} Q_t(s_i, u), is chosen with probability 1−ε. We anneal ε from 1 → 0 such that the agent begins in an exploration phase and slowly starts moving towards the exploitation phase. In addition, when the exploration cost is large (which is true for our problem setting), it is beneficial to use the experience replay technique for faster convergence (Lin, 1992). In experience replay, the learning agent is provided with a memory of its past explored paths and rewards. At a given interval, the agent samples from the memory and updates its Q-values via Equation 3.

4 DESIGNING NEURAL NETWORK ARCHITECTURES WITH Q-LEARNING

We consider the task of training a learning agent to sequentially choose neural network layers. Figure 2 shows feasible state and action spaces (a) and a potential trajectory the agent may take, along with the CNN architecture defined by this trajectory (b). We model the layer selection process as a Markov Decision Process, with the assumption that a well-performing layer in one network should also perform well in another network.

[Figure 2: panel (a) shows the full layered state-action graph from Input through layer states such as C(64,3,1) and P(2,2) to termination states G; panel (b) highlights one path through the graph and the corresponding CNN topology.]

Figure 2: Markov Decision Process for CNN Architecture Generation: Figure 2(a) shows the full state and action space. In this illustration, actions are shown to be deterministic for clarity, but they are stochastic in experiments. C(n, f, l) denotes a convolutional layer with n filters, receptive field size f, and stride l. P(f, l) denotes a pooling layer with receptive field size f and stride l. G denotes a termination state (Softmax/Global Average Pooling). Figure 2(b) shows a path the agent may choose, highlighted in green, and the corresponding CNN topology.
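As an illustration of the update in Equation 3, here is a small tabular sketch in Python, assuming hashable states and actions; the constants mirror the values reported in Section 4.3 below, and none of this is the authors' released code.

```python
from collections import defaultdict

ALPHA = 0.01            # Q-learning rate alpha (Section 4.3)
GAMMA = 1.0             # discount factor gamma (Section 4.3)
Q = defaultdict(float)  # Q[(state, action)], initialised to zero

def q_update(s_i, u, r_t, s_j, next_actions):
    """One application of Eq. (3) to the transition (s_i, u) -> s_j.
    next_actions is the feasible action set U(s_j); empty at termination."""
    best_next = max((Q[(s_j, u2)] for u2 in next_actions), default=0.0)
    Q[(s_i, u)] = (1 - ALPHA) * Q[(s_i, u)] + ALPHA * (r_t + GAMMA * best_next)
```

In MetaQNN, the only nonzero reward is the validation accuracy received when a sampled architecture terminates; intermediate transitions carry zero reward.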
We make this assumption based on the hierarchical nature of the feature representations learned by neural networks with many hidden layers (LeCun et al., 2015). The agent sequentially selects layers via the ε-greedy strategy until it reaches a termination state. The CNN architecture defined by the agent's path is trained on the chosen learning problem, and the agent is given a reward equal to the validation accuracy. The validation accuracy and architecture description are stored in a replay memory, and experiences are sampled periodically from the replay memory to update Q-values via Equation 3. The agent follows an ε schedule which determines its shift from exploration to exploitation.

Our method requires three main design choices: (i) reducing CNN layer definitions to simple state tuples, (ii) defining a set of actions the agent may take, i.e., the set of layers the agent may pick next given its current state, and (iii) balancing the size of the state-action space (and, correspondingly, the model capacity) with the amount of exploration needed by the agent to converge. We now describe the design choices and the learning process in detail.

4.1 THE STATE SPACE

Each state is defined as a tuple of all relevant layer parameters. We allow five different types of layers: convolution (C), pooling (P), fully connected (FC), global average pooling (GAP), and softmax (SM), though the general method is not limited to this set. Table 1 shows the relevant parameters for each layer type and also the discretization we chose for each parameter. Each layer has a parameter layer depth (shown as Layer 1, 2, ... in Figure 2). Adding layer depth to the state space allows us to constrict the action space such that the state-action graph is directed and acyclic (a DAG), and also allows us to specify a maximum number of layers the agent may select before terminating.

Each layer type also has a parameter called representation size (R-size). Convolutional nets progressively compress the representation of the original signal through pooling and convolution. The presence of these layers in our state space may lead the agent on a trajectory where the intermediate signal representation gets reduced to a size that is too small for further processing. For example, five 2×2 pooling layers, each with stride 2, will reduce an image of initial size 32×32 to size 1×1. At this stage, further pooling, or convolution with receptive field size greater than 1, would be meaningless and degenerate. To avoid such scenarios, we add the R-size parameter to the state tuple s, which allows us to restrict actions from states with R-size n to those that have a receptive field size less than or equal to n. To further constrict the state space, we chose to bin the representation sizes into three discrete buckets. However, binning adds uncertainty to the state transitions: depending on the true underlying representation size, a pooling layer may or may not change the R-size bin. As a result, the action of pooling can lead to two different states, which we model as stochasticity in state transitions. Please see Figure A1 in the appendix for an illustrated example.

Layer Type            Layer Parameters                          Parameter Values
Convolution (C)       i ~ Layer depth                           < 12
                      f ~ Receptive field size                  Square. ∈ {1, 3, 5}
                      l ~ Stride                                Square. Always equal to 1
                      d ~ # receptive fields                    ∈ {64, 128, 256, 512}
                      n ~ Representation size                   ∈ {(∞, 8], (8, 4], (4, 1]}
Pooling (P)           i ~ Layer depth                           < 12
                      (f, l) ~ (Receptive field size, Strides)  Square. ∈ {(5, 3), (3, 2), (2, 2)}
                      n ~ Representation size                   ∈ {(∞, 8], (8, 4], (4, 1]}
Fully Connected (FC)  i ~ Layer depth                           < 12
                      n ~ # consecutive FC layers               < 3
                      d ~ # neurons                             ∈ {512, 256, 128}
Termination States    s ~ Previous state
                      t ~ Type                                  Global Avg. Pooling / Softmax

Table 1: Experimental state space. For each layer type, we list the relevant parameters and the values each parameter is allowed to take.
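A compact way to picture the state tuples of Table 1 is as immutable records. The sketch below uses our own field names (not the paper's code) and encodes the binned representation size as an index; freezing the dataclass makes states hashable, so they can serve as keys in a Q-table.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LayerState:
    """One state tuple in the spirit of Table 1 (illustrative encoding)."""
    layer_type: str  # 'C', 'P', 'FC', 'GAP' or 'SM'
    depth: int       # i: layer depth, < 12
    field_size: int  # f: receptive field size (0 when unused)
    stride: int      # l: stride (always 1 for convolutions)
    num_units: int   # d: # receptive fields for C, # neurons for FC
    rsize_bin: int   # 0 -> (inf, 8], 1 -> (8, 4], 2 -> (4, 1]

# Example: the first layer of the topology shown in Figure 1, C(64,5,1).
first = LayerState('C', 1, 5, 1, 64, 0)
```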
4.2 THE ACTION SPACE

We restrict the agent from taking certain actions, to both limit the state-action space and make learning tractable. First, we allow the agent to terminate a path at any point, i.e. it may choose a termination state from any non-termination state. In addition, we only allow transitions from a state with layer depth i to a state with layer depth i + 1, which ensures that there are no loops in the graph. This constraint ensures that the state-action graph is always a DAG. Any state at the maximum layer depth, as prescribed in Table 1, may only transition to a termination layer.

Next, we limit the number of fully connected (FC) layers to be at maximum two, because a large number of FC layers can lead to too many learnable parameters. The agent at a state with type FC may transition to another state with type FC if and only if the number of consecutive FC states is less than the maximum allowed. Furthermore, a state s of type FC with number of neurons d may only transition to either a termination state or a state s' of type FC with number of neurons d' ≤ d.

An agent at a state of type convolution (C) may transition to a state with any other layer type. An agent at a state with layer type pooling (P) may transition to a state with any layer type other than another P state, because consecutive pooling layers are equivalent to a single, larger pooling layer which could lie outside of our chosen state space. Furthermore, only states with representation size in bins (8, 4] and (4, 1] may transition to an FC layer, which ensures that the number of weights does not become unreasonably large. Note that a majority of these constraints are in place to enable faster convergence on our limited hardware (see Section 5) and are not a limitation of the method in itself.

4.3 Q-LEARNING TRAINING PROCEDURE

For the iterative Q-learning updates (Equation 3), we set the Q-learning rate (α) to 0.01. In addition, we set the discount factor (γ) to 1, so as not to over-prioritize short-term rewards. We decrease ε from 1.0 to 0.1 in steps, where the step size is defined by the number of unique models trained (Table 2). At ε = 1.0, the agent samples CNN architectures with a random walk along a uniformly weighted Markov chain. Every topology sampled by the agent is trained using the procedure described in Section 5, and the prediction performance of this network topology on the validation set is recorded. We train a larger number of models at ε = 1.0, as compared to other values of ε, to ensure that the agent has adequate time to explore before it begins to exploit. We stop the agent at ε = 0.1 (and not at ε = 0) to obtain a stochastic final policy, which generates perturbations of the global minimum.² Ideally, we want to identify several well-performing model topologies, which can then be ensembled to improve prediction performance.

²ε = 0 indicates a completely deterministic policy. Because we would like to generate several good models for ensembling and analysis, we stop at ε = 0.1, which represents a stochastic final policy.

ε                 1.0   0.9  0.8  0.7  0.6  0.5  0.4  0.3  0.2  0.1
# Models Trained  1500  100  100  100  150  150  150  150  150  150

Table 2: ε schedule. The learning agent trains the specified number of unique models at each ε.

During the entire training process (starting at ε = 1.0), we maintain a replay dictionary which stores (i) the network topology and (ii) prediction performance on a validation set, for all of the sampled models. If a model that has already been trained is re-sampled, it is not re-trained; instead, the previously found validation accuracy is presented to the agent. After each model is sampled and trained, the agent randomly samples 100 models from the replay dictionary and applies the Q-value update defined in Equation 3 for all transitions in each sampled sequence. The Q-value update is applied to the transitions in temporally reversed order, which has been shown to speed up Q-value convergence (Lin, 1993).
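The sampling and replay steps just described can be pictured with the following sketch. It reuses the Q table and q_update from the sketch after Equation 3 and the LayerState record from the previous sketch; next_layers(state) stands in for the action space of Section 4.2, an action is identified with the layer state it leads to, and the whole thing is a simplification under those stated assumptions rather than the authors' implementation.

```python
import random

def sample_architecture(start, epsilon, next_layers):
    """Walk the state graph with the epsilon-greedy rule until termination."""
    path, state = [start], start
    while state.layer_type not in ('GAP', 'SM'):  # termination states
        actions = next_layers(state)              # feasible next layers (Sec. 4.2)
        if random.random() < epsilon:
            state = random.choice(actions)        # explore
        else:
            state = max(actions, key=lambda a: Q[(state, a)])  # exploit
        path.append(state)
    return path

def replay_update(replay_memory, next_layers, n_samples=100):
    """Sample stored (path, accuracy) pairs and replay Eq. (3) updates
    over each path's transitions in temporally reversed order."""
    chosen = random.sample(replay_memory, min(n_samples, len(replay_memory)))
    for path, accuracy in chosen:
        for t in reversed(range(len(path) - 1)):
            terminal = (t == len(path) - 2)
            reward = accuracy if terminal else 0.0   # reward only at termination
            future = [] if terminal else next_layers(path[t + 1])
            q_update(path[t], path[t + 1], reward, path[t + 1], future)
```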
5 EXPERIMENT DETAILS

During the model exploration phase, we trained each network topology with a quick and aggressive training scheme. For each experiment, we created a validation set by randomly taking 5,000 samples from the training set such that the resulting class distributions were unchanged. For every network, a dropout layer was added after every two layers. The i-th dropout layer, out of a total of n dropout layers, had a dropout probability of i/(2n). Each model was trained for a total of 20 epochs with the Adam optimizer (Kingma & Ba, 2014), with β₁ = 0.9, β₂ = 0.999, ε = 10⁻⁸. The batch size was set to 128, and the initial learning rate was set to 0.001. If the model failed to perform better than a random predictor after the first epoch, we reduced the learning rate by a factor of 0.4 and restarted training, for a maximum of 5 restarts. For models that started learning (i.e., performed better than a random predictor), we reduced the learning rate by a factor of 0.2 every 5 epochs. All weights were initialized with Xavier initialization (Glorot & Bengio, 2010). Our experiments using Caffe (Jia et al., 2014) took 8–10 days to complete for each dataset, with a hardware setup consisting of 10 NVIDIA GPUs.

After the agent completed the ε schedule (Table 2), we selected the top ten models found over the course of exploration. These models were then finetuned using a much longer training schedule, and only the top five were used for ensembling. We now provide details of the datasets and the finetuning process.

The Street View House Numbers (SVHN) dataset has 10 classes, with a total of 73,257 samples in the original training set, 26,032 samples in the test set, and 531,131 additional samples in the extended training set. During the exploration phase, we only trained with the original training set, using 5,000 random samples as validation. We finetuned the top ten models with the original plus extended training set, by creating preprocessed training and validation sets as described by Lee et al. (2016). Our final learning rate schedule, after tuning on the validation set, was 0.025 for 5 epochs, 0.0125 for 5 epochs, 0.0001 for 20 epochs, and 0.00001 for 10 epochs.

CIFAR-10, the 10 class tiny image dataset, has 50,000 training samples and 10,000 testing samples. During the exploration phase, we took 5,000 random samples from the training set for validation. The maximum layer depth was increased to 18. After the experiment completed, we used the same validation set to tune hyperparameters, resulting in a final training scheme which we ran on the entire training set. In the final training scheme, we set a learning rate of 0.025 for 40 epochs, 0.0125 for 40 epochs, 0.0001 for 160 epochs, and 0.00001 for 60 epochs, with all other parameters unchanged. During this phase, we preprocess using global contrast normalization and use moderate data augmentation, which consists of random mirroring and random translation by up to 5 pixels.
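Returning to the exploration-phase scheme described at the start of this section, its learning-rate and dropout logic reduce to a few lines of arithmetic. The following self-contained sketch, with names of our choosing, computes the quantities involved.

```python
def dropout_probability(i, n):
    """Dropout probability of the i-th of n dropout layers: i / (2n)."""
    return i / (2 * n)

def exploration_lr_schedule(base_lr=0.001, epochs=20):
    """Per-epoch learning rate during exploration: 0.2x decay every 5 epochs."""
    lrs, lr = [], base_lr
    for epoch in range(epochs):
        if epoch > 0 and epoch % 5 == 0:
            lr *= 0.2
        lrs.append(lr)
    return lrs

def restart_lrs(base_lr=0.001, max_restarts=5):
    """Initial learning rates tried when a model fails to beat a random
    predictor after its first epoch: a 0.4x reduction per restart."""
    return [base_lr * 0.4 ** k for k in range(max_restarts)]
```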
MNIST, the 10 class handwritten digits dataset, has 60,000 training samples and 10,000 testing samples. We preprocessed each image with global mean subtraction. In the final training scheme, we trained each model for 40 epochs and decreased the learning rate every 5 epochs by a factor of 0.2. For further tuning details, please see Appendix C.

6 RESULTS

Model Selection Analysis: From Q-learning principles, we expect the learning agent to improve in its ability to pick network topologies as ε reduces and the agent enters the exploitation phase. In Figure 3, we plot the rolling mean of prediction accuracy over 100 models and the mean accuracy of models sampled at different ε values, for the CIFAR-10 and SVHN experiments. The plots show that, while the prediction accuracy remains flat during the exploration phase (ε = 1) as expected, the agent consistently improves in its ability to pick better-performing models as ε reduces from 1 to 0.1. For example, the mean accuracy of models in the SVHN experiment increases from 52.25% at ε = 1 to 88.02% at ε = 0.1. Furthermore, we demonstrate the stability of the Q-learning procedure with 10 independent runs on a subset of the SVHN dataset in Section D.1 of the Appendix. Additional analysis of Q-learning results can be found in Section D.2.

[Figure 3: two panels, "SVHN Q-Learning Performance" and "CIFAR10 Q-Learning Performance", each plotting accuracy against iteration, with a rolling mean of model accuracy and the average accuracy per ε as ε decreases from 1.0 to 0.1.]

Figure 3: Q-Learning Performance. In the plots, the blue line shows a rolling mean of model accuracy versus iteration, where in each iteration of the algorithm the agent samples a model. Each bar (in light blue) marks the average accuracy over all models that were sampled during the exploration phase with the labeled ε. As ε decreases, the average accuracy goes up, demonstrating that the agent learns to select better-performing CNN architectures.

Method                                CIFAR-10  SVHN  MNIST  CIFAR-100
Maxout (Goodfellow et al., 2013)      9.38      2.47  0.45   38.57
NIN (Lin et al., 2013)                8.81      2.35  0.47   35.68
FitNet (Romero et al., 2014)          8.39      2.42  0.51   35.04
HighWay (Srivastava et al., 2015)     7.72      -     -      -
VGGnet (Simonyan & Zisserman, 2014)   7.25      -     -      -
All-CNN (Springenberg et al., 2014)   7.25      -     -      33.71
MetaQNN (ensemble)                    7.32      2.06  0.32   -
MetaQNN (top model)                   6.92      2.28  0.44   27.14

Table 3: Error rate comparison with CNNs that only use convolution, pooling, and fully connected layers. We report results for CIFAR-10 and CIFAR-100 with moderate data augmentation, and results for MNIST and SVHN without any data augmentation.

The top models selected by the Q-learning agent vary in the number of parameters, but all demonstrate high performance (see Appendix Tables 1-3). For example, the number of parameters for the top five CIFAR-10 models ranges from 11.26 million to 1.10 million, with only a 2.32% decrease in test error.
We find design motifs common to the top hand-crafted network architectures as well. For example, the agent often chooses a layer of type C(N,1,1) as the first layer in the network. These layers generate N learnable linear transformations of the input data, which is similar in spirit to preprocessing the input from RGB to a different color space such as YUV, as found in prior work (Sermanet et al., 2012; 2013). (A code sketch of this motif appears at the end of this section.)

Prediction Performance: We compare the prediction performance of the MetaQNN networks discovered by the Q-learning agent with state-of-the-art methods on three datasets. We report the accuracy of our best model, along with an ensemble of the top five models. First, we compare MetaQNN with six existing architectures that are designed with standard convolution, pooling, and fully-connected layers alone, similar to our designs. As seen in Table 3, our top model alone, as well as the committee ensemble of five models, outperforms all similar models. Next, we compare our results with six top networks overall, which contain complex layer types and design ideas, including generalized pooling functions, residual connections, and recurrent modules. Our results are competitive with these methods as well (Table 4). Finally, our method outperforms existing automated network design methods. MetaQNN obtains an error of 6.92% as compared to 21.2% reported by Bergstra et al. (2011) on CIFAR-10; and it obtains an error of 0.32% as compared to 7.9% reported by Verbancsics & Harguess (2013) on MNIST.

Method                             CIFAR-10  SVHN  MNIST  CIFAR-100
DropConnect (Wan et al., 2013)     9.32      1.94  0.57   -
DSN (Lee et al., 2015)             8.22      1.92  0.39   34.57
R-CNN (Liang & Hu, 2015)           7.72      1.77  0.31   31.75
MetaQNN (ensemble)                 7.32      2.06  0.32   -
MetaQNN (top model)                6.92      2.28  0.44   27.14
Resnet(110) (He et al., 2015)      6.61      -     -      -
Resnet(1001) (He et al., 2016)     4.62      -     -      22.71
ELU (Clevert et al., 2015)         6.55      -     -      24.28
Tree+Max-Avg (Lee et al., 2016)    6.05      1.69  0.31   32.37

Table 4: Error Rate Comparison with state-of-the-art methods with complex layer types. We report results for CIFAR-10 and CIFAR-100 with moderate data augmentation and results for MNIST and SVHN without any data augmentation.

Dataset                 CIFAR-100                     SVHN                     MNIST
Training from scratch   27.14                         2.48                     0.80
Finetuning              34.93                         4.00                     0.81
State-of-the-art        24.28 (Clevert et al., 2015)  1.69 (Lee et al., 2016)  0.31 (Lee et al., 2016)

Table 5: Prediction Error for the top MetaQNN (CIFAR-10) model trained for other tasks. Finetuning refers to initializing training with the weights found for the optimal CIFAR-10 model. (The MetaQNN CIFAR-100 results are obtained with the top MetaQNN architecture for CIFAR-10, trained from random initialization with CIFAR-100 data.)

The difference in validation error between the top 10 models for MNIST was very small, so we also created an ensemble with all 10 models. This ensemble achieved a test error of 0.28%, which beats the current state-of-the-art on MNIST without data augmentation.

The best CIFAR-10 model performs 1-2% better than the four next best models, which is why the ensemble accuracy is lower than the best model's accuracy. We posit that the CIFAR-10 MetaQNN agent did not have adequate exploration time given the larger state space compared to that of the SVHN experiment, causing it not to find more models with performance similar to the best model. Furthermore, the coarse training scheme may not have been as well suited to CIFAR-10 as it was to SVHN, causing some models to underperform.

Transfer Learning Ability: Network designs such as VGGnet (Simonyan & Zisserman, 2014) can be adopted to solve a variety of computer vision problems. To check whether the MetaQNN networks provide similar transfer learning ability, we use the best MetaQNN model on the CIFAR-10 dataset for training other computer vision tasks. The model performs well (Table 5) both when training from random initializations and when finetuning from existing weights.
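As a concrete illustration of the C(N,1,1) motif discussed at the start of this section: a 1x1 convolution over an RGB input is exactly N learnable per-pixel linear transformations of the input channels. A minimal PyTorch sketch (our own, for illustration only):

import torch
import torch.nn as nn

# A C(64,1,1) first layer: 64 learnable 1x1 convolutions, i.e. 64 per-pixel
# linear combinations of the RGB channels -- a learned color-space transform.
first_layer = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=1, stride=1)

x = torch.randn(8, 3, 32, 32)   # a batch of RGB images
y = first_layer(x)              # each pixel mapped through 64 linear functions
assert y.shape == (8, 64, 32, 32)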
7 CONCLUDING REMARKS

Neural networks are being used in an increasingly wide variety of domains, which calls for scalable solutions that produce problem-specific model architectures. We take a step towards this goal and show that a meta-modeling approach using reinforcement learning is able to generate tailored CNN designs for different image classification tasks. Our MetaQNN networks outperform previous meta-modeling methods as well as hand-crafted networks which use the same types of layers.

While we report results for image classification problems, our method could be applied to different problem settings, including supervised (e.g., classification, regression) and unsupervised (e.g., autoencoders) ones. The MetaQNN method could also aid constraint-based network design, by optimizing parameters such as size, speed, and accuracy. For instance, one could add a threshold in the state-action space barring the agent from creating models larger than the desired limit. In addition, one could modify the reward function to penalize large models (constraining memory) or penalize slow forward passes (incentivizing quick inference).

There are several future avenues for research in reinforcement-learning-driven network design as well. In our current implementation, we use the same set of hyperparameters to train all network topologies during the Q-learning phase and further finetune the hyperparameters for top models selected by the MetaQNN agent. However, our approach could be combined with hyperparameter optimization methods to further automate the network design process. Moreover, we constrict the state-action space using coarse, discrete bins to accelerate convergence. It would be possible to move to larger state-action spaces using methods for Q-function approximation (Bertsekas, 2015; Mnih et al., 2015).

ACKNOWLEDGMENTS

We thank Peter Downs for creating the project website and contributing to illustrations. We acknowledge the Center for Bits and Atoms at MIT for their help with computing resources. Finally, we thank members of the Camera Culture group at MIT Media Lab for their help and support.

REFERENCES

Sander Adam, Lucian Busoniu, and Robert Babuska. Experience replay for real-time reinforcement learning control. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 42(2):201-212, 2012.

James Bergstra, Daniel Yamins, and David D Cox. Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures. ICML (1), 28:115-123, 2013.

James S Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. NIPS, pp. 2546-2554, 2011.

Dimitri P Bertsekas. Convex optimization algorithms. Athena Scientific, Belmont, 2015.

Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289, 2015.

Tobias Domhan, Jost Tobias Springenberg, and Frank Hutter. Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves. IJCAI, 2015.
Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. AISTATS, 9:249-256, 2010.

Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron C Courville, and Yoshua Bengio. Maxout networks. ICML (3), 28:1319-1327, 2013.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pp. 630-645. Springer, 2016.

Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.

Leslie Pack Kaelbling, Michael L Littman, and Andrew W Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237-285, 1996.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444, 2015.

Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets. AISTATS, 2(3):6, 2015.

Chen-Yu Lee, Patrick W Gallagher, and Zhuowen Tu. Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. International Conference on Artificial Intelligence and Statistics, 2016.

Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. JMLR, 17(39):1-40, 2016.

Ming Liang and Xiaolin Hu. Recurrent convolutional neural network for object recognition. CVPR, pp. 3367-3375, 2015.

Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.

Long-Ji Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine Learning, 8(3-4):293-321, 1992.

Long-Ji Lin. Reinforcement learning for robots using neural networks. Technical report, DTIC Document, 1993.

Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

Nicolas Pinto, David Doukhan, James J DiCarlo, and David D Cox. A high-throughput screening approach to discovering good forms of biologically inspired visual representation. PLoS Computational Biology, 5(11):e1000579, 2009.

Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. FitNets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.

Shreyas Saxena and Jakob Verbeek. Convolutional neural fabrics. In Advances in Neural Information Processing Systems 29, pp. 4053-4061, 2016.

J David Schaffer, Darrell Whitley, and Larry J Eshelman. Combinations of genetic algorithms and neural networks: A survey of the state of the art. International Workshop on Combinations of Genetic Algorithms and Neural Networks, pp. 1-37, 1992.
Pierre Sermanet, Soumith Chintala, and Yann LeCun. Convolutional neural networks applied to house numbers digit classification. ICPR, pp. 3288-3291, 2012.

Pierre Sermanet, Koray Kavukcuoglu, Soumith Chintala, and Yann LeCun. Pedestrian detection with unsupervised multi-stage feature learning. CVPR, pp. 3626-3633, 2013.

Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P Adams, and Nando de Freitas. Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE, 104(1):148-175, 2016.

David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical Bayesian optimization of machine learning algorithms. NIPS, pp. 2951-2959, 2012.

Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.

Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015.

Kenneth O Stanley and Risto Miikkulainen. Evolving neural networks through augmenting topologies. Evolutionary Computation, 10(2):99-127, 2002.

Kevin Swersky, Jasper Snoek, and Ryan P Adams. Multi-task Bayesian optimization. NIPS, pp. 2004-2012, 2013.

Phillip Verbancsics and Josh Harguess. Generative neuroevolution for deep learning. arXiv preprint arXiv:1312.5355, 2013.

Joannes Vermorel and Mehryar Mohri. Multi-armed bandit algorithms and empirical evaluation. European Conference on Machine Learning, pp. 437-448, 2005.

Li Wan, Matthew Zeiler, Sixin Zhang, Yann L Cun, and Rob Fergus. Regularization of neural networks using dropconnect. ICML, pp. 1058-1066, 2013.

Christopher John Cornish Hellaby Watkins. Learning from delayed rewards. PhD thesis, University of Cambridge, England, 1989.

APPENDIX

A ALGORITHM

We first describe the main components of the MetaQNN algorithm. Algorithm 1 shows the main loop, where the parameter M determines how many models to train at a given ε and the parameter K determines how many times to sample the replay database to update the Q-values on each iteration. The function TRAIN refers to training the specified network and returns a validation accuracy. Algorithm 2 details the method for sampling a new network using the ε-greedy strategy, where we assume we have a function TRANSITION that returns the next state given a state and action. Finally, Algorithm 3 implements the Q-value update detailed in Equation 3, with discounting factor γ set to 1, for an entire state sequence in temporally reversed order. (A runnable Python sketch of all three algorithms follows below.)

Algorithm 1 Q-learning For CNN Topologies
  Initialize: replay memory <- [ ];  Q <- {(s, u) for all s in S, u in U(s) : 0.5}
  for episode = 1 to M do
    S, U <- SAMPLE_NEW_NETWORK(ε, Q)
    accuracy <- TRAIN(S)
    replay memory.append((S, U, accuracy))
    for memory = 1 to K do
      S_sample, U_sample, accuracy_sample <- Uniform{replay memory}
      Q <- UPDATE_Q_VALUES(Q, S_sample, U_sample, accuracy_sample)
    end for
  end for

Algorithm 2 SAMPLE_NEW_NETWORK(ε, Q)
  Initialize: state sequence S = [s_START];  action sequence U = [ ]
  while U[-1] != terminate do
    ρ ~ Uniform[0, 1)
    if ρ > ε then
      u = argmax over u in U(S[-1]) of Q[(S[-1], u)]
    else
      u ~ Uniform{U(S[-1])}
    end if
    s' = TRANSITION(S[-1], u)
    U.append(u)
    if u != terminate then
      S.append(s')
    end if
  end while
  return S, U

Algorithm 3 UPDATE_Q_VALUES(Q, S, U, accuracy)
  Q[S[-1], U[-1]] <- (1 - α) Q[S[-1], U[-1]] + α * accuracy
  for i = length(S) - 2 down to 0 do
    Q[S[i], U[i]] <- (1 - α) Q[S[i], U[i]] + α * γ * max over u in U(S[i+1]) of Q[S[i+1], u]
  end for
  return Q
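The pseudocode above translates almost line-for-line into Python. The sketch below is ours, not the released implementation: `train`, `transition`, and `actions` stand in for the environment-specific pieces described in the main text, and the Q-learning rate appears explicitly as `alpha`:

import random
from collections import defaultdict

def sample_new_network(eps, Q, s_start, actions, transition):
    """Algorithm 2: epsilon-greedy rollout of a layer sequence."""
    S, U = [s_start], []
    while not U or U[-1] != "terminate":
        s = S[-1]
        if random.random() > eps:                       # exploit: best known action
            u = max(actions(s), key=lambda a: Q[(s, a)])
        else:                                           # explore: uniform random action
            u = random.choice(actions(s))
        U.append(u)
        if u != "terminate":
            S.append(transition(s, u))
    return S, U

def update_q_values(Q, S, U, accuracy, actions, alpha=0.1, gamma=1.0):
    """Algorithm 3: Q-updates applied in temporally reversed order (gamma = 1)."""
    Q[(S[-1], U[-1])] = (1 - alpha) * Q[(S[-1], U[-1])] + alpha * accuracy
    for i in reversed(range(len(S) - 1)):
        best_next = max(Q[(S[i + 1], a)] for a in actions(S[i + 1]))
        Q[(S[i], U[i])] = (1 - alpha) * Q[(S[i], U[i])] + alpha * gamma * best_next
    return Q

def metaqnn(M, K, eps, s_start, actions, transition, train):
    """Algorithm 1: main Q-learning loop with experience replay."""
    replay, Q = [], defaultdict(lambda: 0.5)            # all Q-values start at 0.5
    for _ in range(M):
        S, U = sample_new_network(eps, Q, s_start, actions, transition)
        replay.append((S, U, train(S)))                 # train returns validation accuracy
        for _ in range(K):                              # replay K sampled sequences
            S_r, U_r, acc_r = random.choice(replay)
            Q = update_q_values(Q, S_r, U_r, acc_r, actions)
    return Q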
B REPRESENTATION SIZE BINNING

As mentioned in Section 4.1 of the main text, we introduce a parameter called representation size to prohibit the agent from taking actions that would reduce the intermediate signal representation to a size too small for further processing. However, this process leads to uncertainties in state transitions, as illustrated in Figure A1; this is handled by the standard Q-learning formulation.

[Figure A1: three example state-transition diagrams for representation size binning (images omitted).]

Figure A1: Representation size binning: In this figure, we show three example state transitions. The true representation size (R-size) parameter is included in the figure to show the true underlying state. Assume there are two R-size bins, R-size Bin 1: [8, infinity) and R-size Bin 2: (0, 7]. Figure A1a shows the case where the initial state is in R-size Bin 1 and the true representation size is 18. After the agent chooses to pool with a 2x2 filter with stride 2, the true representation size reduces to 9, but the R-size bin does not change. In Figure A1b, the same 2x2 pooling layer with stride 2 reduces the actual representation size of 14 to 7, but the bin changes to R-size Bin 2. Therefore, in Figures A1a and A1b, the agent ends up in different final states, despite originating in the same initial state and choosing the same action. Figure A1c shows that in our state-action space, when the agent takes an action that reduces the representation size, it has uncertainty about which state it will transition to.

C MNIST EXPERIMENT

We noticed that the final MNIST models were prone to overfitting, so we increased dropout and did a small grid search for the weight regularization parameter. For both tuning and final training, we warmed the model with the learned weights from after the first epoch of initial training. The final models and solvers can be found on our project website https://bowenbaker.github.io/metaqnn/. Figure A2 shows the Q-learning performance for the MNIST experiment.

D FURTHER ANALYSIS OF Q-LEARNING

Figure 3 of the main text and Figure A2 show that as the agent begins to exploit, it improves in architecture selection.
It is also informative to look at the distribution of models chosen at each ε. Figure A4 gives further insight into the performance achieved at each ε for both experiments.

D.1 Q-LEARNING STABILITY

Because the Q-learning agent explores via a random or semi-random distribution, it is natural to ask whether the agent can consistently improve architecture performance. While the success of the three independent experiments described in the main text alludes to stability, here we present further evidence. We conduct 10 independent runs of the Q-learning procedure on 10% of the SVHN dataset (which corresponds to 7,000 training examples). We use a smaller dataset to reduce the computation time of each independent run to 10 GPU-days, as opposed to the 100 GPU-days it would take on the full dataset. As can be seen in Figure A3, the Q-learning procedure with the exploration schedule detailed in Table 2 is fairly stable. The standard deviation at ε = 1 is notably smaller than at other stages, which we attribute to the large difference in the number of samples at each stage.

[Figure A2: MNIST Q-learning performance plot; accuracy versus iteration (image omitted).]

Figure A2: MNIST Q-Learning Performance. The blue line shows a rolling mean of model accuracy versus iteration, where in each iteration of the algorithm the agent samples a model. Each bar (in light blue) marks the average accuracy over all models that were sampled during the exploration phase with the labeled ε. As ε decreases, the average accuracy goes up, demonstrating that the agent learns to select better-performing CNN architectures.

[Figure A3: Q-learning stability across 10 runs; (a) mean and standard deviation per ε, (b) individual runs (images omitted).]

Figure A3: Figure A3a shows the mean model accuracy and standard deviation at each ε over 10 independent runs of the Q-learning procedure on 10% of the SVHN dataset. Figure A3b shows the mean model accuracy at each ε for each independent experiment. Despite some variance due to a randomized exploration strategy, each independent run successfully improves architecture performance.

Furthermore, the best model found during each run had remarkably similar performance, with a mean accuracy of 88.25% and standard deviation of 0.58%, which shows that each run successfully found at least one very high-performing model. Note that we did not use an extended training schedule to improve performance in this experiment.

D.2 Q-VALUE ANALYSIS

We now analyze the actual Q-values generated by the agent during the training process. The learning agent iteratively updates the Q-values of each path during the ε-greedy exploration. Each Q-value is initialized at 0.5. After the ε-schedule is complete, we can analyze the final Q-value associated with each path to gain insights into the layer selection process. In the left column of Figure A5, we plot the average Q-value for each layer type at different layer depths (for both the SVHN and CIFAR-10 datasets). Roughly speaking, a higher Q-value associated with a layer type indicates a higher probability that the agent will pick that layer type. In Figure A5, we observe that, while the average Q-value is higher for convolution and pooling layers at lower layer depths, the Q-values for fully-connected and termination layers (softmax and global average pooling) increase as we go deeper into the network. This observation matches traditional network designs.
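The per-depth, per-type averages plotted in Figure A5 are a simple aggregation over the learned Q-table. A sketch of how they can be computed (`depth_of` and `type_of` are hypothetical accessors standing in for the paper's state-action encoding):

from collections import defaultdict

def average_q_by_layer_type(Q, depth_of, type_of):
    """Average learned Q-values by (layer depth, layer type), as in Figure A5."""
    sums, counts = defaultdict(float), defaultdict(int)
    for (s, u), q in Q.items():
        key = (depth_of(s), type_of(u))   # e.g. (3, "convolution")
        sums[key] += q
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}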
We can also plot the average Q-values associated with different layer parameters for further analysis. In the right column of Figure A5, we plot the average Q-values for convolution layers with receptive field sizes 1, 3, and 5 at different layer depths. The plots show that layers with a receptive field size of 5 have a higher Q-value than sizes 1 and 3 as we go deeper into the networks. This indicates that it might be beneficial to use larger receptive field sizes in deeper networks.

In summary, the Q-learning method enables us to analyze the relative benefits of different design parameters of our state space, and possibly gain insights for new CNN designs.

E TOP TOPOLOGIES SELECTED BY ALGORITHM

In Tables A1 through A3, we present the top five model architectures selected with Q-learning for each dataset, along with their prediction error reported on the test set and their total number of parameters. To download the Caffe solver and prototxt files, please visit https://bowenbaker.github.io/metaqnn/. (A sketch for parsing the layer notation used in these tables follows Table A2.)

Model Architecture                                                          Test Error (%)  # Params (10^6)
[C(512,5,1), C(256,3,1), C(256,5,1), C(256,3,1), P(5,3), C(512,3,1),
 C(512,5,1), P(2,2), SM(10)]                                                6.92            11.18
[C(128,1,1), C(512,3,1), C(64,1,1), C(128,3,1), P(2,2), C(256,3,1),
 P(2,2), C(512,3,1), P(3,2), SM(10)]                                        8.78            2.17
[C(128,3,1), C(128,1,1), C(512,5,1), P(2,2), C(128,3,1), P(2,2),
 C(64,3,1), C(64,5,1), SM(10)]                                              8.88            2.42
[C(256,3,1), C(256,3,1), P(5,3), C(256,1,1), C(128,3,1), P(2,2),
 C(128,3,1), SM(10)]                                                        9.24            1.10
[C(128,5,1), C(512,3,1), P(2,2), C(128,1,1), C(128,5,1), P(3,2),
 C(512,3,1), SM(10)]                                                        11.63           1.66

Table A1: Top 5 model architectures: CIFAR-10.

Model Architecture                                                          Test Error (%)  # Params (10^6)
[C(128,3,1), P(2,2), C(64,5,1), C(512,5,1), C(256,3,1), C(512,3,1),
 P(2,2), C(512,3,1), C(256,5,1), C(256,3,1), C(128,5,1), C(64,3,1),
 SM(10)]                                                                    2.24            9.81
[C(128,1,1), C(256,5,1), C(128,5,1), P(2,2), C(256,5,1), C(256,1,1),
 C(256,3,1), C(256,3,1), C(256,5,1), C(512,5,1), C(256,3,1),
 C(128,3,1), SM(10)]                                                        2.28            10.38
[C(128,5,1), C(128,3,1), C(64,5,1), P(5,3), C(128,3,1), C(512,5,1),
 C(256,5,1), C(128,5,1), C(128,5,1), C(128,3,1), SM(10)]                    2.32            6.83
[C(128,1,1), C(256,5,1), C(128,5,1), C(256,3,1), C(256,5,1), P(2,2),
 C(128,1,1), C(512,3,1), C(256,5,1), P(2,2), C(64,5,1), C(64,1,1),
 SM(10)]                                                                    2.35            6.99
[C(128,1,1), C(256,5,1), C(128,5,1), C(256,5,1), C(256,5,1),
 C(256,1,1), P(3,2), C(128,1,1), C(256,5,1), C(512,5,1), C(256,3,1),
 C(128,3,1), SM(10)]                                                        2.36            10.05

Table A2: Top 5 model architectures: SVHN. Note that we do not report the best test-set accuracy from the above models in Tables 3 and 4 of the main text. This is because the model that achieved 2.28% on the test set performed the best on the validation set.
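The layer notation in these tables is mechanical to parse. As a rough illustration, the helper below (our own, not part of the released code) extracts the C(filters, receptive field, stride) entries and counts only the convolution-layer weights and biases, ignoring pooling, fully-connected, and softmax layers:

import re

def count_conv_params(arch, in_channels=3):
    """Approximate parameter count for the C(n,k,s) layers of an architecture
    string such as those in Tables A1-A3."""
    total, channels = 0, in_channels
    for n, k, _s in re.findall(r"C\((\d+),(\d+),(\d+)\)", arch):
        n, k = int(n), int(k)
        total += channels * n * k * k + n   # k x k kernels plus biases
        channels = n
    return total

arch = "[C(512,5,1), C(256,3,1), C(256,5,1), C(256,3,1), P(5,3), SM(10)]"
print(count_conv_params(arch))  # convolution-layer parameters for this prefix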
Model Architecture                                                          Test Error (%)  # Params (10^6)
[C(64,1,1), C(256,3,1), P(2,2), C(512,3,1), C(256,1,1), P(5,3),
 C(256,3,1), C(512,3,1), FC(512), SM(10)]                                   0.35            5.59
[C(128,3,1), C(64,1,1), C(64,3,1), C(64,5,1), P(2,2), C(128,3,1),
 P(3,2), C(512,3,1), FC(512), FC(128), SM(10)]                              0.38            7.43
[C(512,1,1), C(128,3,1), C(128,5,1), C(64,1,1), C(256,5,1), C(64,1,1),
 P(5,3), C(512,1,1), C(512,3,1), C(256,3,1), C(256,5,1), C(256,5,1),
 SM(10)]                                                                    0.40            8.28
[C(64,3,1), C(128,3,1), C(512,1,1), C(256,1,1), C(256,5,1), C(128,3,1),
 P(5,3), C(512,1,1), C(512,3,1), C(128,5,1), SM(10)]                        0.41            6.27
[C(64,3,1), C(128,1,1), P(2,2), C(256,3,1), C(128,5,1), C(64,1,1),
 C(512,5,1), C(128,5,1), C(64,1,1), C(512,5,1), C(256,5,1), C(64,5,1),
 SM(10)]                                                                    0.43            8.10
[C(64,1,1), C(256,5,1), C(256,5,1), C(512,1,1), C(64,3,1), P(5,3),
 C(256,5,1), C(256,5,1), C(512,5,1), C(64,1,1), C(128,5,1), C(512,5,1),
 SM(10)]                                                                    0.44            9.67
[C(128,3,1), C(512,3,1), P(2,2), C(256,3,1), C(128,5,1), C(64,1,1),
 C(64,5,1), C(512,5,1), GAP(10), SM(10)]                                    0.44            3.52
[C(256,3,1), C(256,5,1), C(512,3,1), C(256,5,1), C(512,1,1), P(5,3),
 C(256,3,1), C(64,3,1), C(256,5,1), C(512,3,1), C(128,5,1), C(512,5,1),
 SM(10)]                                                                    0.46            12.42
[C(512,5,1), C(128,5,1), C(128,5,1), C(128,3,1), C(256,3,1),
 C(512,5,1), C(256,3,1), C(128,3,1), SM(10)]                                0.55            7.25
[C(64,5,1), C(512,5,1), P(3,2), C(256,5,1), C(256,3,1), C(256,3,1),
 C(128,1,1), C(256,3,1), C(256,5,1), C(64,1,1), C(256,3,1), C(64,3,1),
 SM(10)]                                                                    0.56            7.55

Table A3: Top 10 model architectures: MNIST. We report the top 10 models for MNIST because we included all 10 in our final ensemble. Note that we do not report the best test-set accuracy from the above models in Tables 3 and 4 of the main text. This is because the model that achieved 0.44% on the test set performed the best on the validation set.

[Figure A4: model accuracy distributions per ε for the SVHN (a, b), CIFAR-10 (c, d), and MNIST (e, f) experiments (images omitted).]

Figure A4: Accuracy Distribution versus ε: Figures A4a, A4c, and A4e show the accuracy distribution for each ε for the SVHN, CIFAR-10, and MNIST experiments, respectively. Figures A4b, A4d, and A4f show the accuracy distributions for the initial ε = 1 and the final ε = 0.1. One can see that the accuracy distribution becomes much more peaked in the high-accuracy ranges at small ε for each experiment.

[Figure A5: average Q-value versus layer depth; left column by layer type (Convolution, Fully Connected, Pooling, Global Average Pooling, Softmax), right column by convolution receptive field size (1, 3, 5), for SVHN, CIFAR-10, and MNIST (images omitted).]

Figure A5: Average Q-Value versus Layer Depth for different layer types is shown in the left column. Average Q-Value versus Layer Depth for different receptive field sizes of the convolution layer is shown in the right column.
S1vyujVye
Under review as a conference paper at ICLR 2017

DEEP UNSUPERVISED LEARNING THROUGH SPATIAL CONTRASTING

Elad Hoffer
Technion - Israel Institute of Technology
Haifa, Israel
ehoffer@tx.technion.ac.il

Itay Hubara
Technion - Israel Institute of Technology
Haifa, Israel
itayh@tx.technion.ac.il

Nir Ailon
Technion - Israel Institute of Technology
Haifa, Israel
nailon@cs.technion.ac.il

ABSTRACT

Convolutional networks have marked their place over the last few years as the best performing model for various visual tasks. They are, however, most suited for supervised learning from large amounts of labeled data. Previous attempts have been made to use unlabeled data to improve model performance by applying unsupervised techniques. These attempts require different architectures and training methods. In this work we present a novel approach for unsupervised training of convolutional networks that is based on contrasting between spatial regions within images. This criterion can be employed within conventional neural networks and optimized using standard techniques such as SGD and backpropagation, thus complementing supervised methods.

1 INTRODUCTION

For the past few years convolutional networks (ConvNets, CNNs) LeCun et al. (1998) have proven themselves a successful model for vision-related tasks Krizhevsky et al. (2012); Mnih et al. (2015); Pinheiro et al. (2015); Razavian et al. (2014). A convolutional network is composed of multiple convolutional and pooling layers, followed by fully-connected affine transformations. As with other neural network models, each layer is typically followed by a non-linearity, such as a rectified linear unit (ReLU).

A convolutional layer is applied by cross-correlating an image with a trainable weight filter. This stems from the assumption of stationarity in natural images, which means that parameters learned for one local region in an image can be shared for other regions and images.

Deep learning models, including convolutional networks, are usually trained in a supervised manner, requiring large amounts of labeled data (ranging between thousands and millions of examples per class for classification tasks) in almost all modern applications. These models are optimized using a variant of stochastic gradient descent (SGD) over batches of images sampled from the whole training dataset together with their ground-truth labels. Gradient estimation for each one of the optimized parameters is done by backpropagating the objective error from the final layer towards the input. This is commonly known as "backpropagation" (Rumelhart et al.).

In early works, unsupervised training was used as part of a pre-training procedure to obtain an effective initial state of the model. The network was later fine-tuned in a supervised manner, as displayed by Hinton (2007). Such unsupervised pre-training procedures were later abandoned, since they provided no apparent benefit over other initialization heuristics in more careful, fully supervised training regimes. This led to the de facto almost exclusive usage of neural networks in supervised environments.

In this work we present a novel unsupervised learning criterion for convolutional networks based on comparison of features extracted from regions within images. Our experiments indicate that by
Our experiments indicate that by1Under review as a conference paper at ICLR 2017using this criterion to pre-train networks we can improve their performance and achieve state-of-the-art results.2 P REVIOUS WORKSUsing unsupervised methods to improve performance have been the holy grail of deep learning forthe last couple of years and vast research efforts have been focused on that. We hereby give a shortoverview of the most popular and recent methods that tried to tackle this problem.AutoEncoders and reconstruction loss These are probably the most popular models for unsu-pervised learning using neural networks, and ConvNets in particular. Autoencoders are NNs whichaim to transform inputs into outputs with the least possible amount of distortion. An Autoencoderis constructed using an encoder G(x;w1)that maps an input to a hidden compressed representation,followed by a decoder F(y;w2), that maps the representation back into the input space. Mathemat-ically, this can be written in the following general form:^x=F(G(x;w1);w2)The underlying encoder and decoder contain a set of trainable parameters that can be tied togetherand optimized for a predefined criterion. The encoder and decoder can have different architectures,including fully-connected neural networks, ConvNets and others. The criterion used for training isthe reconstruction loss, usually the mean squared error (MSE) between the original input and itsreconstruction Zeiler et al. (2010)minkx^xk2This allows an efficient training procedure using the aforementioned backpropagation and SGD tech-niques. Over the years autoencoders gained fundamental role in unsupervised learning and manymodification to the classic architecture were made. Ng (2011) regularized the latent representationto be sparse, Vincent et al. (2008) substituted the input with a noisy version thereof, requiring themodel to denoise while reconstructing. Kingma et al. (2014) obtained very promising results withvariational autoencoders (V AE). A variational autoencoder model inherits typical autoencoder ar-chitecture, but makes strong assumptions concerning the distribution of latent variables. They usevariational approach for latent representation learning, which results in an additional loss componentwhich required a new training algorithm called Stochastic Gradient Variational Bayes (SGVB). V AEassumes that the data is generated by a directed graphical model p(xjz)and require the encoder tolearn an approximation qw1(zjx)to the posterior distribution pw2(zjx)wherew1andw2denote theparameters of the encoder and decoder. The objective of the variational autoencoder in that case hasthe following form:L(w1;w2;x) =DKL(qw1(zjx)jjpw2(z)) +Eqw1(zjx)logpw2(xjz)Recently, a stacked set of denoising autoencoders architectures showed promising results in bothsemi-supervised and unsupervised tasks. A stacked what-where autoencoder by Zhao et al. (2015)computes a set of complementary variables that enable reconstruction whenever a layer implementsa many-to-one mapping. Ladder networks by Rasmus et al. (2015) - use lateral connections andlayer-wise cost functions to allow the higher levels of an autoencoder to focus on invariant abstractfeatures.Exemplar Networks: The unsupervised method introduced byDosovitskiy et al. (2014) takes adifferent approach to this task and trains the network to discriminate between a set of pseudo-classes.Each pseudo-class is formed by applying multiple transformations to a randomly sampled imagepatch. 
Context prediction: Another method for unsupervised learning by context was introduced by Doersch et al. (2015). This method uses an auxiliary criterion of predicting the location of an image patch given another from the same image. This is done by classification into 1 of 9 possible locations. Although the work of Doersch et al. (2015) and ours both use patches from an image to perform unsupervised learning, the methods are quite different. Whereas the former used a classification criterion over the spatial location of each patch within a single image, our work is concerned with comparing patches from several images to each other. We claim that this encourages discriminability between images (which we feel is an important aspect of feature learning), which was not an explicit goal in previous work.

Adversarial Generative Models: This is a recently introduced model that can be used in an unsupervised fashion (Goodfellow et al., 2014). Adversarial generative models use a pair of networks: one trained to discriminate between data sampled from the true underlying distribution (e.g., a set of images) and data produced by a separate generative network, which is trained to be an adversary that tries to confuse the first network. By propagating the gradient through the paired networks, the model learns to generate samples that are distributed similarly to the source data. As shown by Radford et al. (2015), this model can create useful latent representations for subsequent classification tasks.

Sampling Methods: Methods for training models to discriminate between a very large number of classes often use a noise contrasting criterion. In these methods, roughly speaking, the posterior probability P(t|y_t) of the ground-truth target t given the model output on an input sampled from the true distribution, y_t = F(x), is maximized, while the probability P(t|y_n) given a noise measurement y_n = F(n) is minimized. This was successfully used in the language domain to learn unsupervised representations of words. The most noteworthy case is the word2vec model introduced by Mikolov et al. (2013). When using this setting in language applications, a natural contrasting noise is a smooth approximation of the unigram distribution. A suitable contrasting distribution is less obvious when data points are sampled from a high-dimensional continuous space, as is the case with image patches.

2.1 PROBLEMS WITH CURRENT APPROACHES

Only recently has the potential of ConvNets in an unsupervised environment begun to bear fruit; still, we believe it is not fully uncovered. The majority of unsupervised optimization criteria currently used are based on variations of reconstruction losses. One limitation of this fact is that a pixel-level reconstruction is non-compliant with the idea of a discriminative objective, which is expected to be agnostic to low-level information in the input. In addition, it is evident that MSE is not best suited as a measurement to compare images; consider, for example, the possibly large square error between an image and a single-pixel-shifted copy of it. Another problem with recent approaches such as Rasmus et al. (2015); Zeiler et al. (2010) is their need to extensively modify the original convolutional network model. This leads to a gap
This leads to a gapbetween unsupervised method and the state-of-the-art, supervised, models for classification - whichcan hurt future attempt to reconcile them in a unified framework, as well as efficiently leverageunlabeled data with otherwise supervised regimes.3 L EARNING BY COMPARISONSThe most common way to train NN is by defining a loss function between the target values andthe network output. Learning by comparison approaches the supervised task from a different angle.The main idea is to use distance comparisons between samples to learn useful representations. Forexample, we consider relative and qualitative examples of the form X1is closer toX2thanX1is toX3. Using a comparative measure with neural network to learn embedding space was introduced inthe “Siamese network” framework by Bromley et al. (1993) and later used in the works of Chopraet al. (2005). One use for this methods is when the number of classes is too large or expected to varyover time, as in the case of face verification, where a face contained in an image has to comparedagainst another image of a face. This problem was recently tackled by Schroff et al. (2015) fortraining a convolutional network model on triplets of examples. There, one image served as ananchorx, and an additional pair of images served as a positive example x+(containing an instance3Under review as a conference paper at ICLR 2017of the face of the same person) together with a negative example x, containing a face of a differentperson. The training objective was on the embedded distance of the input faces, where the distancebetween the anchor and positive example is adjusted to be smaller by at least some constant fromthe negative distance. More precisely, the loss function used in this case was defined asL(x;x +;x) = maxfkF(x)F(x+)k2kF(x)F(x)k2+;0g (1)whereF(x)is the embedding (the output of a convolutional neural network), and is a predefinedmargin constant. Another similar model used by Hoffer & Ailon (2015) with triplets comparisons forclassification, where examples from the same class were trained to have a lower embedded distancethan that of two images from distinct classes. This work introduced a concept of a distance ratioloss, where the defined measure amounted to:L(x;x +;x) =ekF(x)F(x+)k2ekF(x)F(x+)k2+ekF(x)F(x)k2(2)This loss has a flavor of a probability of a biased coin flip. By ‘pushing’ this probability to zero,we express the objective that pairs of samples coming from distinct classes should be less similar toeach other, compared to pairs of samples coming from the same class. It was shown empirical byBalntas et al. (2016) to provide better feature embeddings than the margin based distance loss 14 O URCONTRIBUTION : SPATIAL CONTRASTINGOne implicit assumption in convolutional networks, is that features are gradually learned hierar-chically, each level in the hierarchy corresponding to a layer in the network. Each spatial locationwithin a layer corresponds to a region in the original image. It is empirically observed that deeperlayers tend to contain more ‘abstract’ information from the image. Intuitively, features describingdifferent regions within the same image are likely to be semantically similar (e.g. different partsof an animal), and indeed the corresponding deep representations tend to be similar. Conversely,regions from two probably unrelated images (say, two images chosen at random) tend to be far fromeach other in the deep representation. This logic is commonly used in modern deep networks suchas Szegedy et al. (2015) Lin et al. 
(2013); He et al. (2015), where global average pooling is used to aggregate spatial features in the final layer used for classification.

Our suggestion is that this property, often observed as a side effect of supervised applications, can be used as a desired objective when learning deep representations in an unsupervised task. Later, the resulting representation can be used, as is typically done, as a starting point for a supervised learning task. We call this idea, which we formalize below, Spatial contrasting. The spatial contrasting criterion is similar to noise contrastive estimation (Gutmann & Hyvärinen, 2010; Mnih & Kavukcuoglu, 2013), in trying to train a model by maximizing the expected probability on desired inputs, while minimizing it on contrasting sampled measurements.

4.1 FORMULATION

We concern ourselves with samples of image patches x~(m) taken from an image x. Our convolutional network model, denoted by F(x), extracts spatial features f so that f(m) = F(x~(m)) for an image patch x~(m). We also define P(f1|f2) as the probability for two features f1, f2 to occur together in the same image.

We wish to optimize our model such that for two features representing patches taken from the same image, x~(1)_i, x~(2)_i in x_i, for which f(1)_i = F(x~(1)_i) and f(2)_i = F(x~(2)_i), the probability P(f(1)_i | f(2)_i) will be maximized. This means that features from a patch taken from a specific image can effectively predict, under our model, features extracted from other patches in the same image. Conversely, we want our model to minimize P(f_i | f_j) for i, j being two patches taken from distinct images. Following the logic presented before, we need to sample a contrasting patch x~(1)_j from a different image x_j such that P(f(1)_i | f(2)_i) > P(f(1)_j | f(2)_i), where f(1)_j = F(x~(1)_j). In order to obtain contrasting samples, we use regions from two random images in the training set. We use a distance ratio, described earlier in Eq. (2) for the supervised case, to represent the probability that two feature vectors were taken from the same image. The resulting training loss for a pair of images is defined as

    L_SC(x1, x2) = -log [ e^{-||f(1)_1 - f(2)_1||_2} / ( e^{-||f(1)_1 - f(2)_1||_2} + e^{-||f(1)_1 - f(1)_2||_2} ) ]    (3)

effectively minimizing the negative log-probability under the softmax measure. This formulation is portrayed in Figure 1. Since we sample our contrasting sample from the same underlying distribution, we can evaluate this loss treating each image patch symmetrically as both anchor and contrast. The final loss is the average between these estimations:

    L_SC_hat(x1, x2) = (1/2) [ L_SC(x1, x2) + L_SC(x2, x1) ]

[Figure 1: Spatial contrasting depiction (image omitted).]

4.2 METHOD

Convolutional networks are usually trained using SGD over mini-batches of samples, therefore we can extract patches and contrasting patches without changing the network architecture. Each image serves as a source of both anchor and positive patches, for which the corresponding features should be closer, as well as of contrasting samples for other images in the batch. For a batch of N images, two samples are taken from each image, and N^2 different distance comparisons are made. The final loss is defined as the average distance ratio for all images in the batch:

    L_SC({x}_{i=1..N}) = (1/N) sum_{i=1..N} L_SC(x_i, {x}_{j != i})
                       = (1/N) sum_{i=1..N} -log [ e^{-||f(1)_i - f(2)_i||_2} / sum_{j=1..N} e^{-||f(1)_i - f(2)_j||_2} ]    (4)

Since the criterion is differentiable with respect to its inputs, it is fully compliant with standard methods for training convolutional networks, specifically backpropagation and gradient descent. Furthermore, SC can be applied to any layer in the network hierarchy. In fact, SC can be used at multiple layers within the same convolutional network. The spatial nature of the features means that we can sample directly from the feature space f~(m) in f instead of from the original image. Therefore SC has a simple implementation which does not require a substantial amount of computation. The complete algorithm for batch training is described in Algorithm 1. Similar to the batch normalization (BN) layer (Ioffe & Szegedy, 2015), a recent usage of batch statistics in neural networks, SC also uses batch statistics. While BN normalizes the input based on the batch statistics, SC samples from them. This can be viewed as a simple sampling from the space of possible features describing a patch of an image.

Algorithm 1 Calculation of the spatial contrasting loss
  Require: X = {x}_{i=1..N}  # a training batch of images
  # Get the spatial features for the whole batch of images; size: N x W_f x H_f x C_f
  {f}_{i=1..N} <- ConvNet(X)
  # Sample spatial features and compute embedded distances between all pairs of images
  for i = 1 to N do
    f~(1)_i <- sample(f_i)
    for j = 1 to N do
      f~(2)_j <- sample(f_j)
      Dist(i, j) <- ||f~(1)_i - f~(2)_j||_2
    end for
  end for
  # Compute the negative log of softmax-normalized distances
  d_i <- -log [ e^{-Dist(i,i)} / sum_{k=1..N} e^{-Dist(i,k)} ]
  # The spatial contrasting loss is the mean of the distance ratios
  return (1/N) sum_{i=1..N} d_i
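Equation 4 and Algorithm 1 translate directly into a batched implementation. The following PyTorch sketch is ours (the authors' released code is in Torch7); it assumes the network's spatial feature maps have shape (N, C, H, W) and samples one spatial location per image for each of the two roles:

import torch
import torch.nn.functional as F

def spatial_contrasting_loss(features):
    """Eq. (4): mean distance-ratio loss over a batch of spatial feature maps."""
    n, c, h, w = features.shape
    flat = features.reshape(n, c, h * w)
    idx1 = torch.randint(h * w, (n,))        # sampled patch features f~(1)
    idx2 = torch.randint(h * w, (n,))        # sampled patch features f~(2)
    f1 = flat[torch.arange(n), :, idx1]      # (N, C) anchors
    f2 = flat[torch.arange(n), :, idx2]      # (N, C) positives/contrasts
    dist = torch.cdist(f1, f2)               # Dist(i, j) = ||f~(1)_i - f~(2)_j||_2
    # Negative log-softmax over negated distances: the diagonal (same image)
    # entries should dominate, pulling same-image features together.
    log_probs = F.log_softmax(-dist, dim=1)
    return -log_probs.diag().mean()

Averaging this loss with its transpose (swapping the roles of f1 and f2) yields the symmetrized version L_SC_hat of Section 4.1.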
5 EXPERIMENTS

In this section we report empirical results showing that using the SC loss as an unsupervised pretraining procedure can improve state-of-the-art performance on subsequent classification. We experimented with the MNIST, CIFAR-10 and STL10 datasets. We used modified versions of well-studied networks such as those of Lin et al. (2013) and Rasmus et al. (2015). A detailed description of our architectures can be found in Table 4 in the Appendix.

In each one of the experiments, we used the spatial contrasting criterion to train the network on the unlabeled images. In each usage of the SC criterion, patch features were sampled uniformly from the preceding layer. We note that the spatial size of sampled patches varied between datasets: on STL10 and CIFAR-10 a patch covered about 30% of the image, while MNIST required the use of larger patches covering almost the entire image. Training was done using SGD with an initial learning rate of 0.1 that was decreased by a factor of 10 whenever the measured loss stopped decreasing. After convergence, we used the trained model as an initialization for supervised training on the complete labeled dataset. The supervised training followed the same regime, only starting with a lower initial learning rate of 0.01. We used mild data augmentations, such as small translations and horizontal mirroring.

The datasets we used are:

STL10 (Coates et al. (2011)). This dataset consists of 100,000 96x96 colored, unlabeled images, together with another set of 5,000 labeled training images and 8,000 test images. The label space consists of 10 object classes.

Cifar10 (Krizhevsky & Hinton (2009)). The well-known CIFAR-10 is an image classification benchmark dataset containing 50,000 training images and 10,000 test images. The image size is 32x32 pixels, with color.
The classes are airplanes, automobiles, birds, cats, deer, dogs, frogs, horses, ships and trucks.

MNIST (LeCun et al. (1998)). The MNIST database of handwritten digits is one of the most studied benchmark datasets for image classification. The dataset contains 60,000 examples of handwritten digits from 0 to 9 for training and 10,000 additional examples for testing. Each sample is a 28x28 pixel gray-level image.

All experiments were conducted using the Torch7 framework by Collobert et al. (2011). Code reproducing these results will be available at https://github.com/eladhoffer/SpatialContrasting.

5.1 RESULTS ON STL10

Since the STL10 dataset is comprised mostly of unlabeled data, it is the most suitable to highlight the benefits of the spatial contrasting criterion. The initial training was unsupervised, as described earlier, using the entire set of 105,000 samples (the union of the original unlabeled set and the labeled training set). The representation produced by this training was used to initialize supervised training on the 5,000 labeled images. Evaluation was done on a separate test set of 8,000 samples. Comparing with state-of-the-art results, we see an improvement of 7% in test accuracy over the best model by Zhao et al. (2015), setting SC as the best model at 81.3% test classification accuracy (see Table 1). We note that the results of Dosovitskiy et al. (2014) are achieved with no fine-tuning over labeled examples, which may be unfair to that work. We also compare with the same model but without SC initialization, which achieves a lower classification accuracy of 72.6%. This is an indication that SC indeed managed to leverage unlabeled examples to provide a better initialization point for the supervised model.

Model                                                STL-10 test accuracy
Zero-bias Convnets - Paine et al. (2014)             70.2%
Triplet network - Hoffer & Ailon (2015)              70.7%
Exemplar Convnets - Dosovitskiy et al. (2014)        72.8%
Target Coding - Yang et al. (2015)                   73.15%
Stacked what-where AE - Zhao et al. (2015)           74.33%
Spatial contrasting initialization (this work)       81.34% +/- 0.1
The same model without initialization                72.6% +/- 0.1

Table 1: State-of-the-art results on the STL-10 dataset.

5.2 RESULTS ON CIFAR10

For the Cifar10 dataset, we use the same setting as Coates & Ng (2012) and Hui (2013) to test a model's ability to learn from unlabeled images. Here, only 4,000 samples out of 50,000 are used with their label annotation, and the rest of the samples can be used only in an unsupervised manner. The final test accuracy is measured on the entire 10,000-sample test set.

In our experiments, we trained our model using the SC criterion on the entire dataset, and then used only 400 labeled samples per class (for a total of 4,000) in a supervised regime over the initialized network. The results are compared with previous efforts in Table 2. Using the SC criterion allowed an improvement of 6.8% over a non-initialized model, and achieved a final test accuracy of 79.2%. This is a competitive result with current state-of-the-art models.

5.3 RESULTS ON MNIST

The MNIST dataset is very different in nature from the Cifar10 and STL10 datasets we experimented with earlier. The biggest difference, relevant to this work, is that spatial regions sampled from MNIST images usually provide very little, or no, information. Thus, SC is much less suited to the MNIST dataset, and was conjectured to have little benefit. We still, however, experimented with initializing a model with the SC criterion and continuing with a fully-supervised regime over all labeled
examples. We found again that this provided a benefit over training the same network without pre-initialization, improving results from 0.63% to 0.34% error on the test set. As mentioned previously, the effectively compared patches for MNIST covered almost the entire image area. This can be attributed to the fact that MNIST requires global features to differentiate between digits. The results, compared with previous attempts, are included in Table 3.

Model                                                  Cifar10 (400 per class) test accuracy
Convolutional K-means Network - Coates & Ng (2012)     70.7%
View-Invariant K-means - Hui (2013)                    72.6%
DCGAN - Radford et al. (2015)                          73.8%
Exemplar Convnets - Dosovitskiy et al. (2014)          76.6%
Ladder networks - Rasmus et al. (2015)                 79.6%
Conv-CatGan - Springenberg (2016)                      80.42% (+/- 0.58)
ImprovedGan - Salimans et al. (2016)                   81.37% (+/- 2.32)
Spatial contrasting initialization (this work)         79.2% (+/- 0.3)
The same model without initialization                  72.4% (+/- 0.1)

Table 2: State-of-the-art results on the Cifar10 dataset with only 4,000 labeled samples.

Model                                                  MNIST test error
Stacked what-where AE - Zhao et al. (2015)             0.71%
Triplet network - Hoffer & Ailon (2015)                0.56%
Jarrett et al. (2009)                                  0.53%
Ladder networks - Rasmus et al. (2015)                 0.36%
DropConnect - Wan et al. (2013)                        0.21%
Spatial contrasting initialization (this work)         0.34% +/- 0.02
The same model without initialization                  0.63% +/- 0.02

Table 3: Results on the MNIST dataset.

6 CONCLUSIONS AND FUTURE WORK

In this work we presented spatial contrasting - a novel unsupervised criterion for training convolutional networks on unlabeled data. It is based on comparison between spatial features sampled from a number of images. We have shown empirically that using spatial contrasting as a pretraining technique to initialize a ConvNet can improve its performance on subsequent supervised training. In cases where a lot of unlabeled data is available, such as the STL10 dataset, this translates to state-of-the-art classification accuracy in the final model.

Since the spatial contrasting loss is a differentiable estimation that can be computed within a network in parallel to supervised losses, in future work we plan to embed it in a semi-supervised model. This usage will allow the creation of models that can leverage both labeled and unlabeled data, and can be compared to similar semi-supervised models such as the ladder network (Rasmus et al., 2015). It is also apparent that contrasting can occur in dimensions other than the spatial one, the most straightforward being the temporal dimension. This suggests that a similar training procedure could be applied to segments of sequences to learn useful representations without explicit supervision.

REFERENCES

Vassileios Balntas, Edward Johns, Lilian Tang, and Krystian Mikolajczyk. PN-Net: Conjoined triple deep network for learning local image descriptors. arXiv preprint arXiv:1601.05030, 2016.

Jane Bromley, James W Bentz, Léon Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard Säckinger, and Roopak Shah. Signature verification using a siamese time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence, 7(04):669-688, 1993.

Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 1, pp. 539-546. IEEE, 2005.

Adam Coates and Andrew Y Ng. Learning feature representations with k-means. In Neural Networks: Tricks of the Trade, pp. 561-580. Springer, 2012.

Adam Coates, Andrew Y Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In International Conference on Artificial Intelligence and Statistics, pp. 215-223, 2011.

Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, number EPFL-CONF-192376, 2011.

Carl Doersch, Abhinav Gupta, and Alexei A Efros. Unsupervised visual representation learning by context prediction. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1422-1430, 2015.
Alexey Dosovitskiy, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 766-774, 2014.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Michael Gutmann and Aapo Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In International Conference on Artificial Intelligence and Statistics, pp. 297-304, 2010.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Geoffrey E Hinton. To recognize shapes, first learn to generate images. Progress in Brain Research, 165:535-547, 2007.

Elad Hoffer and Nir Ailon. Deep metric learning using triplet network. In Similarity-Based Pattern Recognition, pp. 84-92. Springer, 2015.

Ka Y Hui. Direct modeling of complex invariances for visual object features. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pp. 352-360, 2013.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of The 32nd International Conference on Machine Learning, pp. 448-456, 2015.

Kevin Jarrett, Koray Kavukcuoglu, Marc'Aurelio Ranzato, and Yann LeCun. What is the best multi-stage architecture for object recognition? In Computer Vision, 2009 IEEE 12th International Conference on, pp. 2146-2153. IEEE, 2009.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Computer Science Department, University of Toronto, Tech. Rep, 2009.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, pp. 1-9, 2012.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pp. 3111-3119, 2013.

Andriy Mnih and Koray Kavukcuoglu. Learning word embeddings efficiently with noise-contrastive estimation. In Advances in Neural Information Processing Systems, pp. 2265-2273, 2013.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

Andrew Ng. Sparse autoencoder. 2011.

Tom Le Paine, Pooya Khorrami, Wei Han, and Thomas S Huang. An analysis of unsupervised pre-training in light of recent advances. arXiv preprint arXiv:1412.6597, 2014.

Pedro O Pinheiro, Ronan Collobert, and Piotr Dollar. Learning to segment object candidates. In Advances in Neural Information Processing Systems, pp. 1981-1989, 2015.
In Advances in Neural Information Processing Systems, pp. 1981–1989, 2015.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, pp. 3532–3540, 2015.

Ali Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. CNN features off-the-shelf: an astounding baseline for recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813, 2014.

David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back-propagating errors. Cognitive Modeling, 5(3):1.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016.

Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 815–823, 2015.

Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. In International Conference on Learning Representations (ICLR), 2016. URL https://arxiv.org/abs/1511.06390.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015.

Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pp. 1096–1103. ACM, 2008.

Li Wan, Matthew Zeiler, Sixin Zhang, Yann L Cun, and Rob Fergus. Regularization of neural networks using dropconnect. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pp. 1058–1066, 2013.

Shuo Yang, Ping Luo, Chen Change Loy, Kenneth W Shum, and Xiaoou Tang. Deep representation learning with target coding. 2015.

Matthew D Zeiler, Dilip Krishnan, Graham W Taylor, and Rob Fergus. Deconvolutional networks. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pp. 2528–2535. IEEE, 2010.

Junbo Zhao, Michael Mathieu, Ross Goroshin, and Yann Lecun. Stacked what-where auto-encoders. arXiv preprint arXiv:1506.02351, 2015.

7 APPENDIX

Table 4: Convolutional models used, based on Lin et al. (2013), Rasmus et al. (2015)

STL10 (Input: 96×96 RGB) | CIFAR-10 (Input: 32×32 RGB) | MNIST (Input: 28×28 monochrome)
5×5 conv. 64 BN ReLU | 3×3 conv. 96 BN LeakyReLU | 5×5 conv. 32 ReLU
1×1 conv. 160 BN ReLU | 3×3 conv. 96 BN LeakyReLU | -
1×1 conv. 96 BN ReLU | 3×3 conv. 96 BN LeakyReLU | -
3×3 max-pooling, stride 2 | 2×2 max-pooling, stride 2, BN | 2×2 max-pooling, stride 2, BN
5×5 conv. 192 BN ReLU | 3×3 conv. 192 BN LeakyReLU | 3×3 conv. 64 BN ReLU
1×1 conv. 192 BN ReLU | 3×3 conv. 192 BN LeakyReLU | 3×3 conv. 64 BN ReLU
1×1 conv. 192 BN ReLU | 3×3 conv. 192 BN LeakyReLU | -
3×3 max-pooling, stride 2 | 2×2 max-pooling, stride 2, BN | 2×2 max-pooling, stride 2, BN
3×3 conv. 192 BN ReLU | - | -
1×1 conv. 192 BN ReLU | - | -
1×1 conv. 192 BN ReLU | - | -
Spatial contrasting criterion (all models)
3×3 conv. 256 ReLU | 3×3 conv. 192 BN LeakyReLU | 3×3 conv. 128 BN ReLU
3×3 max-pooling, stride 2 | 1×1 conv. 192 BN LeakyReLU | 1×1 conv. 10 BN ReLU
dropout, p = 0.5 | 1×1 conv. 10 BN LeakyReLU | global average pooling
3×3 conv. 128 ReLU | global average pooling | -
dropout, p = 0.5 | - | -
fully-connected 10 | - | -
10-way softmax (all models)

Figure 2: First layer convolutional filters after spatial-contrasting training
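To make the pretraining criterion above concrete, here is a minimal Python/NumPy sketch of one spatial contrasting step. The precise loss is defined earlier in the paper; the sketch below assumes a simple distance-based contrastive form (two patches sampled from the same image as the positive pair, a patch from another image as the negative), so the helper names, the two-way softmax probability, and the feature-map shapes are illustrative assumptions rather than the paper's verbatim formulation.

```python
import numpy as np

def sample_patch_feature(feature_map):
    """Sample a spatial feature vector at a random location of a conv
    feature map of shape (H, W, C); spatial contrasting compares such
    sampled features across images."""
    h, w, _ = feature_map.shape
    return feature_map[np.random.randint(h), np.random.randint(w)]

def spatial_contrasting_loss(feats_img1, feats_img2):
    """Assumed contrastive form: probability that two patches of image 1
    match each other rather than a patch of image 2."""
    anchor = sample_patch_feature(feats_img1)
    positive = sample_patch_feature(feats_img1)   # second patch, same image
    negative = sample_patch_feature(feats_img2)   # patch from another image
    pos_score = np.exp(-np.linalg.norm(anchor - positive))
    neg_score = np.exp(-np.linalg.norm(anchor - negative))
    p_match = pos_score / (pos_score + neg_score)
    return -np.log(p_match + 1e-12)   # small when same-image patches agree

# Toy usage: random arrays standing in for mid-level conv feature maps.
f1 = np.random.randn(12, 12, 64)
f2 = np.random.randn(12, 12, 64)
print(spatial_contrasting_loss(f1, f2))
```

Because a loss of this form depends only on feature maps the network already computes, it can be evaluated in parallel with a supervised loss, which is what makes the semi-supervised extension discussed in the conclusions straightforward.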
Syfkm6cgx
Under review as a conference paper at ICLR 2017IMPROVING INVARIANCE AND EQUIVARIANCE PROP-ERTIES OF CONVOLUTIONAL NEURAL NETWORKSChristopher Tensmeyer & Tony MartinezDepartment of Computer ScienceBrigham Young UniversityProvo, UT 84602, USAtensmeyer@byu.edumartinez@cs.byu.eduABSTRACTConvolutional Neural Networks (CNNs) learn highly discriminative representa-tions from data, but how robust and structured are these representations? Howdoes the data shape the internal network representation? We shed light on thesequestions by empirically measuring the invariance and equivariance properties ofa large number of CNNs trained with various types of input transformations. Wefind that CNNs learn invariance wrt all 9 tested transformation types and that in-variance extends to transformations outside the training range. We also measurethe distance between CNN representations and show that similar input transfor-mations lead to more similar internal representations. Transforms can be groupedby the way they affect the learned representation. Additionally, we also propose aloss function that aims to improve CNN equivariance.1 I NTRODUCTIONThe overwhelming success of Convolutional Neural Networks (CNNs) is generally attributed to theirability to learn task-specific representations from large quantities of data. This has led to many stateof the art results in areas such as image classification (He et al., 2015), semantic segmentation (Liuet al., 2015), and game-playing Go agents (Silver et al., 2016). However, our current understandingof CNN representations is limited, though there is a growing body of literature on visualizationtechniques for interpreting internal representations (Mahendran & Vedaldi, 2016; Dosovitskiy &Brox, 2015; Nguyen et al., 2016).In this work, we aim to understand the effect that the training data has on three key properties ofrepresentations: invariance, equivariance, and equivalence (Lenc & Vedaldi, 2015). Invariance, withrespect to a particular transformation (e.g. rotation by = 5), is achieved when the feature vectorsfor input images at a particular layer do not change when the inputs are transformed. This can be seenas a measure of the robustness of the representation. Equivariance is a generalization of invarianceand allows the representation to change in predictable ways in response to input transformations.Equivariance is a rough measure of the structure of the representation space because we can reasonabout input space transformations in the more abstract representation space. Likewise, measuringequivalence among network representations trained on either the same or different data distributionsgives insight into how the training data shapes the representation.Data augmentation has been considered essential for top CNN performance since the seminal workof Krizhevsky et al. (2012). When input images are stochastically perturbed, less overfitting isobserved because all inputs are unique. Since then, others have experimented with various dataaugmentation strategies with great success (Howard, 2013; Wu et al., 2015; He et al., 2014). Whileothers have noted that data augmentation leads to greater representation invariance (Lenc & Vedaldi,2015; Wu et al., 2015; Peng et al., 2014), we measure the amount of invariance achieved in responseto 9 types of transforms at various magnitudes. 
We additionally measure the equivariance and pair-wise representation distance of the resulting CNNs.In this work we quantify the invariance and equivariance properties of 70 CNNs on two datasetstrained using different input transformations. We find the representation at the penultimate layer1Under review as a conference paper at ICLR 2017(fc7) is structured wrt almost all input transformations when no augmentation is used. Applyingdata augmentation effectively collapses the structure of the representation space, leading to inputtransformation invariance for all 9 transforms, including magnitudes of transforms that were notobserved during training.We measure the pairwise distances of the CNN representations by measuring the error of learnedmappings between CNN representations. There is a strong bias towards similarity among CNNstrained with the same type of transformation, and we can group transformations based on theirmutual average distances. Similar types of transformations (e.g. Color Jitter, Gaussian Noise) yieldmore similar representations compared to other dissimilar transform pairs.We also propose a way to increase the equivariance of a CNN by finetuning on a novel loss functionthat simultaneously minimizes classification error for both transformed and untransformed inputs.This leads to an increase in equivariance while improving or slightly decreasing performance on theuntransformed images (dataset dependent), though this trade-off can be controlled by weighting theterms of the loss function.2 R ELATED WORKSLenc & Vedaldi (2015) studied image representations and gave definitions for representation in-variance, equivariance, and equivalence. They measured these properties wrt a limited number oftransforms for the convolutional layers of the popular AlexNet architecture. We provide extensionsto the definitions of these properties to measure the relative degree to which representations pos-sess these properties. We also consider a wider variety of transformations and use the fc7layer ofAlexNet because it is the most invariant layer.Convergent learning is the idea that identical networks (differing in random initialization) convergeto the same representation (i.e.. representation equivalence). In Li et al. (2015), the authors showthat there are many corresponding convolutional filters among CNNs for one-to-one, one-to-many,and many-to-many relationships. In complementary fashion, we measure representation distancesbetween last fully connected layers in CNNs trained over different transformations of their inputs.In Krizhevsky et al. (2012), applying random crops, horizontal flips and color jittering during train-ing reduced the top-1 error by over 1%. Howard (2013) applied additional brightness, contrast, crop,and scale transformations at both training and test time to reduce validation top-1 error from 40.7%to 37.0%. Wu et al. (2015) adds rotation and photo filter effects such as lens distortion, vignetting,and color casting for further improvement. Similarly, we explore a series of 10 types of transfor-mations applied at training time; however, we focus on how they affect the learned representationinstead of optimizing classifier performance.In Peng et al. (2014), synthetic images rendered from 3D CAD models are used to explore invariancein object detection networks for factors such as pose, object texture, and foreground/backgroundcolor. Training a CNN on images that vary in these aspect results in greater invariance than trainingwith less data variety. 
3D CAD models are also used in Aubry & Russell (2015) to show that upper layers of CNNs disentangle and (locally) linearize independent object factors such as object viewpoint, style, color, and scale.

ManiTest (Fawzi & Frossard, 2015) defines classifier invariance as the average magnitude of the minimal transform that causes predictions to change. In contrast, we examine internal CNN representations and measure both representation distance and classifier performance.

3 INVARIANCE, EQUIVARIANCE, AND EQUIVALENCE

In this section, we provide definitions and intuition for the three properties of interest, borrowing basic definitions and notation from Lenc & Vedaldi (2015). Though we focus on CNN image representations, the following applies to any kind of data representation.

3.1 EQUIVARIANCE AND INVARIANCE

If the domain of interest (e.g. natural images) is $X \subseteq \mathbb{R}^{n \times n}$, a function $\phi : \mathbb{R}^{n \times n} \to \mathbb{R}^m$ assigns a feature vector to every input image and thus defines an image representation. We will use $X$ to denote the input space and $Z$ the output space (i.e. $\phi : X \to Z$). $\phi$ may be equivariant to some transformations, but not others, so we say that $\phi$ is equivariant wrt $g : X \to X$ iff

$$\exists M_g : Z \to Z,\ \forall x \in X : \phi(gx) \approx M_g \phi(x) \quad (1)$$

This means that transformation by $g$ in $X$ corresponds to a transformation by $M_g$ (if it exists) in $Z$. Thus if $M_g$ exists, it informs us of the structure of the abstract space $Z$ in terms of the concrete space $X$. If $M_g$ is the identity transform, then $\phi$ maps $x$ and $gx$ to the same point in $Z$, which leads to invariance wrt $g$ (i.e. $\phi(gx) \approx \phi(x)$). This is related to the idea of whether CNNs collapse (invariance) or linearize (equivariance) view manifolds of 3D objects (Bakry et al., 2015; Aubry & Russell, 2015).

While the previous definitions suggest equivariance and invariance are all-or-nothing properties, we use the following definition to precisely define the $\approx$ used in Eq. 1. We measure the equivariance of $\phi$ wrt a transform $g$ as

$$\mathrm{Equivariance}(\phi, g; L) = \min_{M_g} \frac{1}{N} \sum_{i=1}^{N} L\big(\phi(x_i), M_g \phi(g x_i)\big) \quad (2)$$

where $L$ is a suitable loss function such as classification error or L2 distance (Section 3.3). Invariance is similarly measured by fixing $M_g$ as the identity function.

In practice, it is difficult to optimize $M_g$ over the space of all functions, so we restrict $M_g$ to be a simple parametric function (e.g. linear) and learn the parameters from data. Thus our results can be considered a lower bound on the actual equivariance of the representation. Eq. 2 also uses the paradigm that $M_g$ attempts to undo in $Z$ the transformation $g$ performed in $X$. This is also done in Lenc & Vedaldi (2015) and allows us to compute classification-based metrics by applying the same classifier to both $\phi(x)$ and $M_g \phi(gx)$.

3.2 EQUIVALENCE AND REPRESENTATION DISTANCE

In Lenc & Vedaldi (2015), $\phi_1$ is equivalent to $\phi_2$ if $\phi_1$ has the same information, in the sense that

$$\exists E_1 : Z_1 \to Z_2,\ \forall x \in X : \phi_2(x) \approx E_1 \phi_1(x) \quad (3)$$

However, this relation is only symmetric (as the name equivalence implies) iff $E_1$ is invertible. If $E_1$ is invertible, then $E_1^{-1}$ satisfies $\phi_1(x) \approx E_1^{-1} \phi_2(x)$. In the other direction, if the relation is symmetric, then $\exists E_2$ such that $\phi_1(x) \approx E_2 \phi_2(x)$, and such an $E_2$ is an inverse of $E_1$.

In light of this, we propose renaming Eq. 3 the sub-representation property, with the terminology that $\phi_2$ is a sub-representation of $\phi_1$. We then define $\phi_1$ and $\phi_2$ to be equivalent iff an invertible $E_1$ exists. One example of a sub-representation is given by two representations characterized by (1) the RGB pixels of an image and (2) the corresponding grayscale intensities. The grayscale is a sub-representation of the RGB because we can recover the grayscale from the RGB, but not vice versa.

As with equivariance, we are interested in measuring the extent to which two representations are distinct. Thus we define the distance between two representations as

$$D(\phi_1, \phi_2) = \min_{E_1} \mathbb{E}\big[L(E_1 \phi_1(x), \phi_2(x))\big] + \min_{E_2} \mathbb{E}\big[L(\phi_1(x), E_2 \phi_2(x))\big] \quad (4)$$

As with $M_g$, finding the optimal $E_1, E_2$ is difficult, so we learn linear models from data. Choices for the loss function, $L$, are discussed in Section 3.3.

3.3 DISTANCE METRICS

The two distance metrics we employ are normalized euclidean distance in the representation space and classification error:

$$L_{L2}(r_i, r'_i) = \frac{\|r_i - r'_i\|_2}{\|r_i\|_2}, \qquad L_{err}(r_i, r'_i) = 1 - \delta(y_i, \hat{y}_i), \quad \hat{y}_i := \operatorname*{argmax}_j C(r'_i)_j$$

where $r_i = \phi(x_i)$, $r'_i = M_g \phi(g x_i)$, $C(r_i)$ is the predicted distribution over class labels, $y_i$ is the ground truth label of $r_i$, and $\delta$ is the Kronecker delta function. For $L_{L2}$, we normalize each dimension of $r$ to have similar magnitude, similar to the normalization of Li et al. (2015). The classifier, $C$, is composed of the remaining layers of the CNN after $\phi$.

Name | Description
Baseline | No transform
Color Casting | Add a random integer to each color channel
Crop | Crop a 227x227 sub-window from a larger image
Elastic Deformation | Apply a smoothed random displacement field (Simard et al., 2003)
Gaussian Blur | Blur the image with a Gaussian kernel
Gaussian Noise | Add iid Gaussian noise to each pixel
Mirror | Flip image horizontally, vertically, or both
Perspective | Warp square image to an arbitrary quadrilateral
Rotation | Rotate the image about the center
Shear | Warp square image into a trapezoid (horizontal and vertical)

Table 1: Each CNN is trained with one of these types of transformations over parameter ranges that differ in magnitude. Each sampled input to the CNN is stochastically perturbed by sampling transform parameters from the range and applying the transformation to the image.

4 IMPROVING EQUIVARIANCE

In this section, we propose a loss function for training a network to increase its invariance and equivariance. CNNs for classification tasks are typically trained with a cross entropy loss, i.e.

$$L_d(X, Y) = \frac{1}{N} \sum_{i=1}^{N} H\big(y_i, C(\phi(x_i))\big) \quad (5)$$

where $X$ is the training images, $Y$ is the training labels, and $H$ computes cross entropy.

$L_d$ can be augmented by a term that biases the representation to be equivariant to some set of transforms $G = \{g_1, g_2, \ldots, g_J\}$. We propose the loss function $L_e$, which is minimized when the transformed images are both correctly classified and close to the representation of the untransformed image in $Z$:

$$L_e(X, Y) = \frac{1}{NJ} \sum_{i=1}^{N} \sum_{j=1}^{J} \Big[ H\big(y_i, C(x'_{ij})\big) + \lambda_1 \|x'_{ij} - \phi(x_i)\|_2^2 + \lambda_2 \|\phi(g_j x_i) - \phi(x_i)\|_2^2 \Big] \quad (6)$$

where $x'_{ij} = M_{g_j}(\phi(g_j x_i))$. The first term of the sum is the classification loss of the transformed images, while the second and third terms respectively enforce equivariance and invariance. In our experiments, we set $\lambda_1 = \frac{50}{4096}$, $\lambda_2 = \frac{25}{4096}$, though the results do not appear highly sensitive to this particular setting. Each equivariance mapping $M_{g_j}$ constitutes a new portion of the network that attempts to undo the transformation $g_j$ in $Z$. As such, the parameters of $M_{g_j}$ must be learned along with the parameters of the network.

Combining Eqs.
5 and 6, we arrive at our combined loss:

$$L(X, Y) = L_d(X, Y) + \lambda_3 L_e(X, Y) \quad (7)$$

We found that $\lambda_3 = 0.3$ provides a good compromise between performance on the transformed and untransformed data, though $\lambda_3$ can be tuned for individual performance requirements.

5 EXPERIMENTAL SETUP

Here we detail the experiments we conduct, including the transformations, networks, and datasets. For the first set of experiments, we simply measure the invariance and equivariance of the AlexNet¹ architecture trained under various data augmentation schemes. We measure the representation of layer fc7 (denoted $\phi_{fc7}$) after applying the ReLU non-linearity and dropout².

¹We used the reference CaffeNet architecture, which differs slightly from Krizhevsky et al. (2012).
²We follow the test time convention of halving neuron activations instead of zeroing out half the activations.

5.1 TRANSFORMATIONS

We examine a set of 10 transformation types. See Table 1 for brief descriptions or Appendix A for full details. We train 4 CNNs for each type of transform (only 2 for blurring, 1 for baseline) for a total of 35 networks per dataset. Each of the 4 CNNs within a transform type is trained with a different magnitude of transformations. Training with data augmentation is performed by stochastically transforming each network input with a transform taken from a specified range.

After training, we measure invariance and equivariance wrt fixed transforms for each type. For example, the 4 CNNs trained on different ranges of rotations ($\theta \in [-5, 5]$, $\theta \in [-10, 10]$, $\theta \in [-15, 15]$, $\theta \in [-20, 20]$) all have their equivariance measured wrt rotations of $\theta = \{2.5, 5, 10, 15, 20, 25, 30, 40\}$. As a baseline, we trained a model with no data augmentation and measured the invariance and equivariance of that network wrt all transforms.

5.2 EQUIVARIANCE MAPPING

To measure equivariance wrt $g_j$, we learn $M_{g_j}$ (Eq. 2) and measure the classification accuracy over the transformed images. Due to data augmentation during training, the network may already be invariant wrt $g_j$, so we model $M_{g_j}$ as a linear residual mapping to bias it towards an identity mapping (He et al., 2015). That is, $M_{g_j}(r) = \rho(r + R(r))$, where $R$ is to be estimated from data and $\rho(x) = \max(x, 0)$ is the ReLU non-linearity. We experimented with $R$ as a linear mapping and as a neural network with a single hidden layer. In nearly all cases, the linear mapping outperformed the neural network, so we present results only for the linear mappings. Therefore, we have

$$M_{g_j}(r) = \rho(r + W_j r + b_j) = \rho\big((W_j + I)r + b_j\big) \quad (8)$$

where $(W_j, b_j)$ are parameters to be estimated from data.

Each $M_{g_j}$ is trained using fully online SGD for 10 epochs with a static learning rate of 0.001, using an L2 weight decay penalty of $10^{-5}$. Momentum with $\mu = 0.9$ is used. For learning $M_{g_j}$, only $(W_j, b_j)$ are updated; the rest of the network parameters are treated as fixed. The loss to be minimized is $\|M_{g_j}(\phi_{fc7}(g_j x)) - \phi_{fc7}(x)\|_2^2$.

5.3 REPRESENTATION DISTANCE MAPPING

Here we are interested in learning $E_1$ (or equivalently $E_2$) from Eq. 4 for each ordered pair $(i, j)$ in a set of network representations $\{\phi_i\}$. We consider linear mappings with $E_1(r) = \rho(Wr + b)$ and estimate $(W, b)$ from data. Hyper-parameters for learning are the same as in Section 5.2. The loss to be minimized for each ordered pair is $\|\phi_i(x) - E_1 \phi_j(x)\|_2^2$.

5.4 FINETUNING EQUIVARIANCE

For this experiment, we take the CNNs trained with the most extreme parameter ranges for each transform type and finetune the representation using Eq. 7 as the loss function. For each CNN, we selected 4-6 transformations within the training range to be the $G$ in Eq. 6.

We finetune for 150,000 weight updates with a mini-batch size of 10 for RVL-CDIP and 100 for ILSVRC. The learning rate is 0.0005 and decays by a factor of 10 every 60,000 updates. Momentum of $\mu = 0.9$ was used with no weight-decay regularization.

5.5 DATASETS

We use two large datasets with distinct properties. The first is the popular ILSVRC 2012 dataset (1.2M train / 50K validation), composed of natural images. The second is the RVL Complex Document Information Processing (RVL-CDIP) dataset (Harley et al., 2015) (320K train / 40K val / 40K test). It is composed of scanned tobacco litigation documents in grayscale format, and each document image is labeled with one of 16 categories (e.g. letter, memo, email, form, news article, scientific publication). Example images can be found in Figure 4 in Appendix A. In contrast to ILSVRC, document images in RVL-CDIP are intrinsically 2D objects, have fixed zoom and location, and have little background area. We aim to compare and contrast the invariance and equivariance of the various transforms for these two datasets.

Figure 1: Invariance and equivariance accuracy measurements for ILSVRC. Each plot compares a CNN trained with a particular transformation (blue line for equivariance, green for invariance) to the baseline model (red for equivariance, black for invariance) trained with no transformations. The x-axis ranges over different parameters for each type of transformation. Each point represents the error of the $M_g$ learned on that particular parameterization of the transformation.

Figure 2: (a) ILSVRC Elastic, (b) ILSVRC Elastic, (c) RVL-CDIP Rotation, (d) RVL-CDIP Rotation. Equivariance and invariance measurements of CNNs trained with various parameter ranges. Larger training ranges yield greater robustness for a wider range of transforms.

In our experiments, CNNs are trained using the training splits, using the validation split for model selection. For learning equivariance mappings and representation distance mappings, we use a static, randomly chosen 50K training images for ILSVRC and the validation split for RVL-CDIP. Thus the reported metrics are over the validation split for ILSVRC and the test split for RVL-CDIP. We subsample for ILSVRC for computational reasons and use the validation split for RVL-CDIP because the CNN representation may have overfit the training data.

6 RESULTS

6.1 MEASURING INVARIANCE AND EQUIVARIANCE

Figure 1 shows the invariance and equivariance measurements for each type of transform on ILSVRC. Additional results for both ILSVRC and RVL-CDIP can be found in Appendix B.1. For each transform type, we compare the CNN trained with the most extreme transforms to the baseline model trained with no data augmentation. The transformations we measured range from no transform to approximately double the transformation magnitude seen during training.

In many instances (e.g. Blur, Noise), the baseline model's performance deteriorates rapidly even for mild transformations. The large difference between the baseline invariance (black line) and equivariance (red line) indicates that $Z$ is structured wrt most transforms (Color is one exception). For CNNs trained with data augmentation, the equivariance mapping does not improve performance for transforms that are observed during training. This suggests that $Z$ may not be (linearly) structured wrt those particular transforms, possibly because the structure has been collapsed due to the training objective. Some transforms (e.g. Noise, Blur, Shear), at large magnitudes, do show a gap between the equivariance and invariance lines, suggesting that $Z$ is structured wrt these transforms. Equivariance performance does drop off outside the training range, but not nearly as sharply as for the baseline model (e.g. Figs. 7a, 7c, 7f), so CNN representations generalize somewhat to unseen transforms.

Figure 2 shows how various magnitudes of training augmentation affect the invariance/equivariance properties of the network for select transforms. Other transforms behaved similarly. Larger training ranges yield greater robustness for a wider range of transforms.

Figure 3: (a) KNN Plot, (b) ILSVRC, (c) RVL-CDIP. (a) shows how often transforms of the same type are K-nearest neighbors. It reveals a strong bias towards networks trained with the same transformation having more similar representations. Heatmaps (b) and (c) show average distances between representations induced by the transforms. Lighter squares indicate greater distance, while darker squares indicate the transforms are closer together. Diagonal entries are distance 0, but were set to white for visualization purposes.

6.2 REPRESENTATION DISTANCE

In this second experiment, we measured the pairwise representation distances of 37 CNNs (34 with transforms and 3 baseline networks). Using Equation 4 ($L = L_{L2}$), we computed a pairwise distance matrix for the 37 network representations. We attempted a T-SNE embedding (Maaten & Hinton, 2008) visualization, but the distances seem intrinsically difficult to embed in 2D (see Appendix B.3). Comments on the asymmetry of the measured distances can also be found in Appendix B.3.

There is, however, a strong bias for networks with the same training transformation to be nearer to each other than random chance would predict. We performed a K-NN analysis by counting the percentage of same-transform pairs that are K nearest neighbors (Figure 3a). Approximately 80% of same-transform pairs are within 5 neighbors of each other for RVL-CDIP (10 neighbors for ILSVRC). This indicates that networks trained with the same transform end up with more similar representations, though the strength of the transformation and network initialization also play a role.

We also visualize the average distance between CNNs grouped by transformation type (Figs. 3c and 3b). Patterns emerge in both datasets. In RVL-CDIP, Crop is the most unique transform because the CNNs learn features at a different static image scale. On the other hand, Crop is not as unique for ILSVRC because objects appear at multiple sizes, so the features learned are not targeted at a single size. Blur, especially for ILSVRC, seems to give unique representations because it changes the local textures from which the representation is derived in a bottom-up fashion. Refining the representation in a top-down manner may be a further avenue for improving CNN representation robustness.

Rotation, Shear, and Perspective transforms are mutually similar for both datasets. This is likely because all three move local pixels in a rigid fashion. Another mutually similar group is Baseline, Color Jitter, and Gaussian Noise, which all operate on pixels independently.
Elastic Deformationsare somewhat similar to Shear, but not to other transforms.7Under review as a conference paper at ICLR 2017Transform CDIP CDIP Transform CDIP CDIPOriginal Finetune Original FinetuneColor Jitter 88.01 88.58 Crop center from 256x256 88.83 89.07brightness +10 88.01 88.63 upper left corner 86.46 87.07brightness -15 88.04 88.60 bottom right corner 86.94 87.57Elastic Deformations 87.89 88.52 Gaussian Noise 88.15 88.67= 2:5;= 5 88.17 88.48= 8 88.16 88.61= 3;= 10 88.05 88.21= 16 88.09 88.53Gaussian Blur 86.82 87.18 Mirror 88.42 89.44= 1 86.79 87.13 Horz. 88.51 88.11= 2 86.47 86.74 Horz. + Vert. 88.43 88.17Perspective 88.55 89.13 Rotation 88.49 89.0188.69 88.96=10 87.60 87.4188.65 89.07= 15 88.59 88.70Shear 88.96 89.71Horz,=10 88.37 88.55Vert,=15 87.40 87.51Table 2: Equivariance measurements for several CNNs on RVL-CDIP before and after finetuningusing Eq. 7. The first row of each transform type shows performance over untransformed images. Inmost cases finetuning improves equivariance on the finetuning transforms and on the untransformedimages.6.3 I MPROVED EQUIVARIANCEFor RVL-CDIP, finetuning using Eq. 7 improves both equivariance and performance on untrans-formed images by 0.5% (Table 2). This may be because the original network was trained with awide range of transform parameters, so the representation caters to no particular set of transformparameters. By introducing Mgjduring finetuning, the representation can individualize to the un-transformed image, while still avoiding the overfit that occurs without data augmentation. Resultsfor ILSVRC were mixed, with finetuning improving equivariance for 3 of 9 transformations (seeTable 4 in Appendix B).7 C ONCLUSIONThis work gives the results from a large scale empirical study into the invariance and equivarianceproperties of CNNs trained under different input transformations. We quantify these properties for70 CNNs across 10 types of transforms for 2 large datasets. CNNs are able to learn invariance to alltransforms tried, and this invariance/equivariance extends somewhat to transforms outside the train-ing range. We show evidence that the CNN linearizes its representation wrt Rotation, Perspective,and Shear transforms. In general, the baseline CNN showed very little invariance to even moderatetransformations.We also measured CNN representation distance between all pairs of 37 networks. There is a biastowards CNNs trained with the same type of transformation to have more similar representations.The analysis also revealed that similar types of transforms (e.g. Rotation, Shear) lead to more similarrepresentations. We also proposed a joint loss function that moderately increases accuracy on bothuntransformed and transformed images, leading to an increase in equivariance for most transforms.REFERENCESPulkit Agrawal, Ross Girshick, and Jitendra Malik. Analyzing the performance of multilayer neuralnetworks for object recognition. In European Conference on Computer Vision , pp. 329–344.Springer, 2014.Mathieu Aubry and Bryan C Russell. Understanding deep features with computer-generated im-agery. In Proceedings of the IEEE International Conference on Computer Vision , pp. 2875–2883,2015.8Under review as a conference paper at ICLR 2017Amr Bakry, Mohamed Elhoseiny, Tarek El-Gaaly, and Ahmed Elgammal. Digging deep into thelayers of cnns: In search of how cnns achieve view invariance. arXiv preprint arXiv:1508.01983 ,2015.Alexey Dosovitskiy and Thomas Brox. 
Inverting visual representations with convolutional networks.arXiv preprint arXiv:1506.02753 , 2015.Alhussein Fawzi and Pascal Frossard. Manitest: Are classifiers really invariant? In British MachineVision Conference (BMVC) , number EPFL-CONF-210209, 2015.Adam W Harley, Alex Ufkes, and Konstantinos G Derpanis. Evaluation of deep convolutional netsfor document image classification and retrieval. In Document Analysis and Recognition (ICDAR),2015 13th International Conference on , pp. 991–995. IEEE, 2015.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Spatial pyramid pooling in deep con-volutional networks for visual recognition. In European Conference on Computer Vision , pp.346–361. Springer, 2014.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog-nition. arXiv preprint arXiv:1512.03385 , 2015.Andrew G Howard. Some improvements on deep convolutional neural network based image classi-fication. arXiv preprint arXiv:1312.5402 , 2013.Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convo-lutional neural networks. In Advances in neural information processing systems , pp. 1097–1105,2012.Karel Lenc and Andrea Vedaldi. Understanding image representations by measuring their equiv-ariance and equivalence. In Proceedings of the IEEE conference on computer vision and patternrecognition , pp. 991–999, 2015.Yixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, and John Hopcroft. Convergent learning: Dodifferent neural networks learn the same representations? arXiv preprint arXiv:1511.07543 , 2015.Ziwei Liu, Xiaoxiao Li, Ping Luo, Chen-Change Loy, and Xiaoou Tang. Semantic image segmenta-tion via deep parsing network. In Proceedings of the IEEE International Conference on ComputerVision , pp. 1377–1385, 2015.Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of MachineLearning Research , 9(Nov):2579–2605, 2008.Aravindh Mahendran and Andrea Vedaldi. Visualizing deep convolutional neural networks usingnatural pre-images. International Journal of Computer Vision , pp. 1–23, 2016.Anh Nguyen, Jason Yosinski, and Jeff Clune. Multifaceted feature visualization: Uncoveringthe different types of features learned by each neuron in deep neural networks. arXiv preprintarXiv:1602.03616 , 2016.Xingchao Peng, Baochen Sun, Karim Ali, and Kate Saenko. Exploring invariances in deep convo-lutional neural networks using synthetic images. CoRR, abs/1412.7122 , 2(4), 2014.David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche,Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Masteringthe game of go with deep neural networks and tree search. Nature , 529(7587):484–489, 2016.Patrice Y Simard, David Steinkraus, and John C Platt. Best practices for convolutional neuralnetworks applied to visual document analysis. In ICDAR , volume 3, pp. 958–962, 2003.Ren Wu, Shengen Yan, Yi Shan, Qingqing Dang, and Gang Sun. Deep image: Scaling up imagerecognition. arXiv preprint arXiv:1501.02876 , 7(8), 2015.9Under review as a conference paper at ICLR 2017Figure 4: Example instances of the RVL-CDIP dataset.APPENDIXA T RANSFORM DETAILSFor space reasons, exact details and parameterizations of the transforms used to train the CNNswere omitted from the main text, but are included here. Table 1 gives a brief explanation of the 9transforms, but each transform will be explained in detail hereafter. 
Pixels that move into the imagedimensions from outside the image (e.g. pixels rotated in) are set to be intensity 0 (see Figure 6b).A.1 C OLOR JITTERColor Jitter adds a random value to each color channel (RGB or grayscale) of the input image.The random value drawn is separate for each channel, but the same value is applied to each spatiallocation, essentially making each color either brighter or darker. If the addition of the random valuecauses a pixel to go outside the normal [0;255] range, it is truncated back to the range.During CNN training, the random values are drawn from a Gaussian Distribution parameterized bya meanand standard deviation . Four CNNs were trained, with = 0and2f5;10;15;20g.When measuring invariance/equivariance, we apply a deterministic color jitter transformation that isthe same for all input images. We simply adjust the brightness of the image (all color channels tied)by a fixed amount in f2;5;10;15;20;25;30;45g.A.2 C ROPDuring training with Crop transformations, we first resize the input images to a larger size (suchas 240x240, 256x256, 288x288, or 320x320), and then take a 227x227 crop from the larger imgae.The larger the original image, the more detail the CNN gets to see, but also a smaller percentage ofthe original image is captured in the input window. CNNs trained with other transformations havetheir input images resized to 227x227.For measuring invariance/equivariance, the baseline transform for the Crop CNNs is the center crop,and the transforms are crops at other locations. For example, for invariance to upper left cornercrops, we measured the difference in CNN activations between the network applied to the centercrop and the network applied to the upper left crop. We measured 25 spatial locations arranged in a5x5 grid.10Under review as a conference paper at ICLR 2017(a) Original Image (b)= 10 ,= 3:5 (c)= 15 ,= 3:5 (d)= 10 ,= 2(e) Original Image (f)= 10 ,= 3:5 (g)= 15 ,= 3:5 (h)= 10 ,= 2Figure 5: Example Elastic Deformations transforms for which we measured invariance and equiv-ariance in Figure 7c.A.3 E LASTIC DEFORMATIONSElastic Deformations are a way of locally distorting images by applying a smoothed random dis-placement field (Simard et al., 2003). Image transformations can be characterized by a backwardmapping of pixel locations in the output image to locations in the input image. For elastic deforma-tions, first an random displacement field is sampled that would map output pixels to random inputlocations in their local neighborhood. The size of the local neighborhood is controlled by an pa-rameter. This displacement field is then smoothed using Gaussian filtering with a specified . Thiscauses local regions of the displacement field to point in the same direction.For training CNNs, we used a fixed and sampled for each input image. The four CNNs wetrained with Elastic Deformations can be described as (2[0;5];= 2) ,(2[0;10];= 2) ,(2[0;5];= 3) ,(2[0;10];= 3) .When measuring equivariance/invariance (in contrast to training) we apply the same exact transformto every image. For elastic deformations, this means we applied the same displacement field to eachimage. We did this for a number of displacement fields of various parameter settings. As shown inFigure 7c These settings included every combination of 2f5;10;15gand2f2;2:5;3;3:5g.A.4 G AUSSIAN BLURWe used the standard Gaussian Blur transform which replaces each pixel by a weighted average ofthe neighboring pixels, where weight magnitudes are Gaussian wrt spatial distance. 
The shape of theGaussian weighting function is controlled by , which is measured in pixels. We trained two CNNswith Gaussian Blur, where for each input image, we sampled a sigma from a uniform distribution(i.e.2[0;1:5]and2[0;3]).For measuring equivariance/invariance, we used 2[0:5;1;1:5;2;2:5;3;3:5;4;4:5;5].A.5 G AUSSIAN NOISEGaussian Noise is similar to Color Jitter except that we sample different random values for eachspatial location. The strength of the noise is controlled by , which is measured in pixel intensity([0;255] scale). When training the four CNNs, is sampled from a uniform distribution for eachinput image. The four ranges used were [0;5];[0;10];[0;15];[0;20].11Under review as a conference paper at ICLR 2017(a) Original Image (b) Transform 4 (c) Transform 6 (d) Transform 9Figure 6: Example Perspective transforms for which we measured invariance and equivariance inFigure 7f. (a) is the original untransformed images, while (b-d) are ordered in increasing magnitudeof transformation.We measured the invariance/equivariance of Gaussian Noise transforms defined by 2[2:5;5;7:5;10;12:5;15;20;25;30;40].A.6 M IRRORMirroring refers to reflecting the input image over the horizontal or vertical image axes. We trained3 CNNs with mirroring. The first performed horizontal mirroring with probability 0.5. The secondperformed vertical mirroring with probability 0.5. The third performed both types of mirroring withindependent probability 0.5 for a total of 4 combinations of flips.We measures the equivariance/invariance of all 4 combinations of flips.A.7 P ERSPECTIVEPerspective transforms are a class of transforms with the constraint that straight lines in the inputimage remain straight in the output image. One way to parameterize (8 parameters) a perspectivetransform is to specify the output coordinates of the input unit square, so that the unit square ismapped to an arbitrary quadrilateral.When training a CNN, for each image we first sampled from a uniform distribution. Then wesampled the displacement of the coordinates of the unit square from a Gaussian Distribution withmean= 0 and standard deviation . Thus the output coordinates for the upper left corner ofthe unit square are x=N(0;);y=N(0;). For the lower right corner, they would be x=1 +N(0;);y= 1 +N(0;). Then the image is warped according to the perspective transformdefined by the output coordinates of the unit square.The ranges used for training are sigma = [0;0:001];[0;0:002];[0;0:003];[0;:004]. For measuringinvariance/equivariance, we sampled 10 perspective transforms from a range of equally spaced values in increasing magnitude along the x-axis of Figure 7f. Examples of these transforms aregiven in Figure 6.A.8 R OTATIONA Rotation transform is specified by an angle and is always performed about the center of the im-age. During training, we sample from a uniform distribution for each image. We trained four CNNswith rotations with drawn from the following ranges: [5;5];[10;10];[15;15];[20;20]. Formeasuring equivariance/invariance, we used 2f 2:5;5;10;15;20;25;30;45g.A.9 S HEARA Shear transform warps a square image into a parallelogram. We parameterize it by specifying ashear angleand an orientation that is either horizontal or vertical. 
While the original square imagehas corners that are 90 deg , the parallelogram that results has corners with angles of 90 +and90.12Under review as a conference paper at ICLR 2017(a) (b) (c) (d)(e) (f) (g) (h)Figure 7: Invariance and equivariance accuracy measurements for RVL-CDIP (similar to Figure 1).Each plot compares a CNN trained with a particular transformation (blue line for equivariance,green for invariance) to the baseline model (red for equivariance, black for invariance) trained withno transformations.During training, we sample from a uniform distribution for each image. We trained four CNNswith shear with drawn from the following ranges: [5;5];[10;10];[15;15];[20;20]. Eachshear has equal probability of horizontal or vertical orientation. For equivariance/invariance, wemeasured only horizontal shears with 2f 5;10;15;20;25;30g.B A DDITIONAL RESULTSB.1 I NVARIANCE AND EQUIVARIANCEIn this section, we include additional results from the experiments in Section 6.1 that did not fit inthe main text. Figure 1 presents invariance and equivariance measurements for the ILSVRC dataset(similar to Figure 7).We observe several interesting differences to the results on RVL-CDIP. The baseline model forILSVRC is significantly more invariant to vertical mirrors than horizontal mirrors, though the RVL-CDIP baseline performs equally bad on both types of mirrors. Brightness increases affect ILSVRCtrained CNNs more, though this is likely because natural images are more susceptible to imagesaturation because they occupy the full spectrum of pixel intensities, while document images tendto be bi-modal. The RVL-CDIP CNN trained on Gaussian Noise was able to learn invariance, whilethe corresponding ILSVRC CNN did not. However, both the baseline and noise CNN for ILSVRCcan correct for the noise with an equivariant mapping.Figures 8 and 9 show the invariance/equivariance for all 70 CNNs trained with data augmentation.In general, we see an overwhelming trend that training with greater variety of inputs transformsresults in greater invariance and equivariance. The exception seems to be that the magnitude ofcolor jittering does not highly affect the CNN robustness. Early CNN layers can easily becomeinvariant to overall brightness changes by extracting information based on pixel differences, andprior work has shown that later layers encode information mostly by which neurons are non-zero,rather than the magnitude of the neuron activations (Agrawal et al., 2014).For Crop transforms (Figure 8c), we examined 25 evenly spaced crops arranged in a 5x5 grid. Thex-axis of the figure shows these crops ordered in scan-line order. The periodic nature of the graphshows that crops nearer the center of the image yield higher performance. There is virtually nodifference between the invariance and equivariance measurements for the Crop CNNs, showing thatthe high level CNN features trained for classification are not predictive for image extrapolation.This is because the equivariance mapping for Crop transforms uses the CNN activations for someun-centered crop to predict the CNN activations over the center crop.13Under review as a conference paper at ICLR 2017(a) Gaussian Blur(b) Color Jitter(c) Crop(d) Elastic Deformations(e) MirrorFigure 8: Equivariance and Invariance measurements of CNNs for some transforms. The first twocolumns are for RVL-CDIP. The last two columns are for ILSVRC.B.1.1 CNN ACCURACYWe also report the accuracy of each CNN in Table 3. 
While all transforms help improve invariance,we see that some input transformations help performance on the untransformed images, while otherhurt performance. In particular, Crop and Shear transforms improve performance the most, whileElastic Deformations and Gaussian Blur seem to hurt performance most.B.2 C ROSS DATASET INVARIANCE /EQUIVARIANCE MEASUREMENTSIn this section, we describe and give results for a new experiment. While all previous experimentsmeasure invariance/equivariance on the same dataset that the CNN was trained on (though withdifferent splits), here we use a new dataset to measure the invariance/equivariance of CNNs trainedon RVL-CDIP. The new dataset, ANDOC, is also composed of grayscale document images, thoughwhile RVL-CDIP is composed of scanned office documents, ANDOC is composed of digitized14Under review as a conference paper at ICLR 2017(a) Gaussian Noise(b) Perspective(c) Rotation(d) ShearFigure 9: Equivariance and Invariance measurements of of CNNs for some transforms. The first twocolumns are for RVL-CDIP. The last two columns are for ILSVRC.Figure 10: Cross dataset measurements of invariance/equivariance reveal that these properties do ex-tend to data domains beside the one used to train the CNN, though the invariance is slightly weaker.These CNNs were trained on RVL-CDIP and measured using data from a dataset of historical doc-uments. Graphs show 1LL2, so higher is better.historical documents, which differ significantly from modern office documents. As the RVL-CDIPCNNs were not trained to classify ANDOC documents, we resort to measuring the LL2differencein the feature representations.15Under review as a conference paper at ICLR 2017CNN RVL-CDIP ILSVRC CNN RVL-CDIP ILSVRCColor Jitter= 5 88.37 50.31 Crop 240x240 88.82 52.10Color Jitter= 10 88.10 50.20 Crop 256x256 88.83 54.58Color Jitter= 15 88.26 50.15 Crop 288x288 88.00 55.89Color Jitter= 20 88.01 50.26 Crop 320x320 86.47 55.17Elastic= 2;= 5 87.64 48.42 Noise= 5 88.19 50.02Elastic= 2;= 10 87.89 48.37 Noise= 10 88.01 50.00Elastic= 3;= 5 87.91 48.42 Noise= 15 88.14 49.42Elastic= 3;= 10 87.94 48.85 Noise= 20 88.15 49.46Blur= 1:5 87.50 48.95 Mirror Horz. 88.51 52.33Blur= 3 86.82 47.49 Mirror Vert. 88.51 55.13Mirror Horz./Vert. 88.42 53.91Perspective= 0:001 88.51 50.48 Rotation=5 88.57 50.30Perspective= 0:002 88.45 51.12 Rotation=10 88.74 51.12Perspective= 0:003 88.60 52.05 Rotation=15 88.48 51.99Perspective= 0:004 88.55 52.59 Rotation=20 88.49 52.44Shear=5 89.02 51.43 Baseline 1 88.08 50.07Shear=10 89.23 53.62 Baseline 2 88.31 50.14Shear=15 89.33 54.89 Baseline 3 88.33 50.11Shear=20 88.96 54.97Table 3: Accuracy of all CNNs over untransformed images (center crops for Crop transforms).Bolded entries indicate best parameter setting per transform type. For RVL-CDIP, test set accuracyis reported, while validation accuracy is reported for ILSVRC. In general, Shear transforms yield thebest performance. For RVL-CDIP, it appears that accuracy on the original image is most impactedby the type of transform, rather than the particular parameter settings. For ILSVRC, the largerparameter ranges for Crop, Perspective, Rotation, and Shear worked best.(a) RVL-CDIP (b) ILSVRCFigure 11: T-SNE visualization of the pairwise distance matrix for (a) RVL-CDIP and (b) ILSVRC.Best viewed electronically with zoom. Symbols and colors indicate the training transform of thenetwork. 
The size of the markers indicate the relative magnitude of the training transformation.In general, the measured invariance/equivariance is lower for ANDOC than for RVL-CDIP (seeFigure 10 for examples). This makes sense because the CNNs learn to encode discriminative infor-mation about RVL-CDIP documents. However, the invariance and equivariance properties are notlost when switching to a new domain of data. In fact, for Mirror, Rotation, and Shear transforms,the invariance is higher, though this is likely due to less information about the input image beingencoded for the new domain.B.3 R EPRESENTATION DISTANCEIn this section, we include additional results from the experiments in Section 6.2 that did not fitin the main text. Figure 11 shows an example T-SNE embedding of the CNN representations.There appears to be very little cluster structure and the T-SNE embeddings appear to be sensitiveto the random initialization of the embedding vectors. The results from multiple runs of the T-SNE16Under review as a conference paper at ICLR 2017(a) RVL-CDIP (b) ILSVRCFigure 12: Absolute Differences in the one-way distances (Eq. 9) for all pairs of CNNs using LL2distance. Row/Col labels are abbreviations for the transforms used to train the CNN. For context,a typicalLL2representation distance (Eq. 4) is roughly 0.50 for RVL-CDIP and 1.00 for ILSVRC.This means that one-way distance differences of approximately 10% and 5% are observed for RVL-CDIP and ILSVRC respectively.embedding in general do not agree on either which points are outliers or on nearest neighbor pairs.This is likely because representation distances do not seem to follow the triangle inequality axiom.While Eq. 4 yields a symmetric function by averaging two one-way distances, we find that for somepairs of CNNs, there is significant difference between the magnitudes of those one-way distances.In other words,D0(1;2) =minE1[L(E11(x); 2)]minE2[L(1; E22(x))] (9)is large. Figure 12 plots a heat map of Eq. 9 applied pairwise to each CNN for each dataset. Op-timization difficulties are one possible cause of large values in Figure 12. However, a more likelyexplanation is that some CNNs encode unique information about the input that cannot be predictedfrom the representations learned from other CNNs.B.4 I MPROVED EQUIVARIANCETable 4 shows the results of finetuning using Eq. 7 for CNNs trained with different types of dataaugmentation. Overall, finetuning improves 3 of 9 transform types. We believe that performancedoes not improve for some transforms because the dataset naturally contains examples of thosetransforms due to different object poses. This is not the case for RVL-CDIP because the documentimages are always scanned in the same manner.17Under review as a conference paper at ICLR 2017Transform ILSVRC ILSVRC Transform ILSVRC ILSVRCOriginal Finetune Original FinetuneColor Jitter 50.26 50.67 Crop center from 256x256 54.58 54.18brightness +10 50.25 50.62 upper left corner 52.90 52.68brightness -15 49.95 50.63 bottom right corner 53.13 52.66Elastic Deformations 48.37 50.46 Gaussian Noise 49.46 49.88= 2:5;= 5 49.55 49.82= 8 49.26 49.52= 3;= 10 49.44 49.64= 16 48.79 48.93Gaussian Blur 47.49 46.59 Mirror 53.91 53.63= 1 47.12 46.03 Horz. 54.02 51.53= 2 45.94 44.60 Horz. + Vert. 53.70 51.49Perspective 52.59 52.44 Rotation 52.44 53.7053.31 51.99=10 53.88 53.1153.01 52.27= 15 52.66 51.76Shear 54.97 54.84Horz,=10 54.15 52.62Vert,=15 51.85 50.77Table 4: Equivariance measurements for several CNNs on ILSVRC before and after finetuning usingEq. 7. 
The first row of each transform type shows performance over untransformed images. Results are more mixed compared to RVL-CDIP.
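As a companion to Section 5.2, the following Python/NumPy sketch shows how the linear residual equivariance mapping of Eq. 8 could be fit with the hyperparameters stated there (10 epochs of fully online SGD, learning rate 0.001, L2 weight decay $10^{-5}$, momentum 0.9). It is a minimal sketch, not the authors' implementation: the arrays of fc7 features for transformed and untransformed images are assumed to be precomputed, and the random stand-ins in the usage example are purely illustrative.

```python
import numpy as np

def fit_equivariance_map(R_transformed, R_original, epochs=10, lr=0.001,
                         weight_decay=1e-5, momentum=0.9):
    """Fit M_g(r) = ReLU((W + I) r + b) (Eq. 8) by online SGD, minimizing
    ||M_g(phi(gx)) - phi(x)||^2 over paired feature rows."""
    n, d = R_transformed.shape
    W = np.zeros((d, d))                  # residual weights; (W + I) starts as identity
    b = np.zeros(d)
    vW, vb = np.zeros_like(W), np.zeros_like(b)
    for _ in range(epochs):
        for i in np.random.permutation(n):
            r, target = R_transformed[i], R_original[i]
            pre = r + W @ r + b           # (W + I) r + b
            out = np.maximum(pre, 0.0)    # ReLU
            grad_pre = (out - target) * (pre > 0)   # backprop through ReLU
            gW = np.outer(grad_pre, r) + weight_decay * W
            vW = momentum * vW - lr * gW
            vb = momentum * vb - lr * grad_pre
            W += vW
            b += vb
    return W, b

# Toy usage with random stand-ins for fc7 features of original/transformed images.
rng = np.random.default_rng(0)
R_orig = rng.standard_normal((256, 32))
R_trans = R_orig + 0.1 * rng.standard_normal((256, 32))   # mildly "transformed"
W, b = fit_equivariance_map(R_trans, R_orig)
recon = np.maximum(R_trans + R_trans @ W.T + b, 0.0)       # apply learned M_g
print(np.linalg.norm(recon - R_orig) / np.linalg.norm(R_orig))
```

Initializing $W$ at zero makes the map start as the identity (up to the ReLU), which mirrors the residual parameterization's bias towards invariance described in Section 5.2.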
BJ0Ee8cxx
Under review as a conference paper at ICLR 2017HIERARCHICAL MEMORY NETWORKSSarath Chandar1, Sungjin Ahn1, Hugo Larochelle2;4, Pascal Vincent1;4,Gerald Tesauro3, Yoshua Bengio1;41Université de Montréal, Canada.2Twitter, USA.3IBM Watson Research Center, USA.4CIFAR, Canada.ABSTRACTMemory networks are neural networks with an explicit memory component thatcan be both read and written to by the network. The memory is often addressed ina soft way using a softmax function, making end-to-end training with backprop-agation possible. However, this is not computationally scalable for applicationswhich require the network to read from extremely large memories. On the otherhand, it is well known that hard attention mechanisms based on reinforcementlearning are challenging to train successfully. In this paper, we explore a form ofhierarchical memory network, which can be considered as a hybrid between hardand soft attention memory networks. The memory is organized in a hierarchicalstructure such that reading from it is done with less computation than soft attentionover a flat memory, while also being easier to train than hard attention over a flatmemory. Specifically, we propose to incorporate Maximum Inner Product Search(MIPS) in the training and inference procedures for our hierarchical memory net-work. We explore the use of various state-of-the art approximate MIPS techniquesand report results on SimpleQuestions, a challenging large scale factoid questionanswering task.1 I NTRODUCTIONUntil recently, traditional machine learning approaches for challenging tasks such as image caption-ing, object detection, or machine translation have consisted in complex pipelines of algorithms, eachbeing separately tuned for better performance. With the recent success of neural networks and deeplearning research, it has now become possible to train a single model end-to-end, using backprop-agation. Such end-to-end systems often outperform traditional approaches, since the entire modelis directly optimized with respect to the final task at hand. However, simple encode-decode styleneural networks often underperform on knowledge-based reasoning tasks like question-answeringor dialog systems. Indeed, in such cases it is nearly impossible for regular neural networks to storeall the necessary knowledge in their parameters.Neural networks with memory (Graves et al., 2014; Weston et al., 2015b) can deal with knowledgebases by having an external memory component which can be used to explicitly store knowledge.The memory is accessed by reader and writer functions, which are both made differentiable sothat the entire architecture (neural network, reader, writer and memory components) can be trainedend-to-end using backpropagation. Memory-based architectures can also be considered as general-izations of RNNs and LSTMs, where the memory is analogous to recurrent hidden states. Howeverthey are much richer in structure and can handle very long-term dependencies because once a vector(i.e., a memory) is stored, it is copied from time step to time step and can thus stay there for a verylong time (and gradients correspondingly flow back time unhampered).There exists several variants of neural networks with a memory component: Memory Networks (We-ston et al., 2015b), Neural Turing Machines (NTM) (Graves et al., 2014), Dynamic Memory Net-Corresponding author: apsarathchandar@gmail.com1Under review as a conference paper at ICLR 2017works (DMN) (Kumar et al., 2015). 
They all share five major components: memory, input module,reader, writer, and output module.Memory: The memory is an array of cells, each capable of storing a vector. The memory is ofteninitialized with external data (e.g. a database of facts), by filling in its cells with a pre-trained vectorrepresentations of that data.Input module: The input module is to compute a representation of the input that can be used byother modules.Writer: The writer takes the input representation and updates the memory based on it. The writercan be as simple as filling the slots in the memory with input vectors in a sequential way (as oftendone in memory networks). If the memory is bounded, instead of sequential writing, the writer hasto decide where to write and when to rewrite cells (as often done in NTMs).Reader: Given an input and the current state of the memory, the reader retrieves content from thememory, which will then be used by an output module. This often requires comparing the input’srepresentation or a function of the recurrent state with memory cells using some scoring functionsuch as a dot product.Output module: Given the content retrieved by the reader, the output module generates a prediction,which often takes the form of a conditional distribution over multiple labels for the output.For the rest of the paper, we will use the name memory network to describe any model which hasany form of these five components. We would like to highlight that all the components except thememory are learnable. Depending on the application, any of these components can also be fixed. Inthis paper, we will focus on the situation where a network does not write and only reads from thememory.In this paper, we focus on the application of memory networks to large-scale tasks. Specifically, wefocus on large scale factoid question answering. For this problem, given a large set of facts and a nat-ural language question, the goal of the system is to answer the question by retrieving the supportingfact for that question, from which the answer can be derived. Application of memory networks tothis task has been studied by Bordes et al. (2015). However, Bordes et al. (2015) depended on key-word based heuristics to filter the facts to a smaller set which is manageable for training. Howeverheuristics are invariably dataset dependent and we are interested in a more general solution whichcan be used when the facts are of any structure. One can design soft attention retrieval mechanisms,where a convex combination of all the cells is retrieved or design hard attention retrieval mecha-nisms where one or few cells from the memory are retrieved. Soft attention is achieved by usingsoftmax over the memory which makes the reader differentiable and hence learning can be doneusing gradient descent. Hard attention is achieved by using methods like REINFORCE (Williams,1992), which provides a noisy gradient estimate when discrete stochastic decisions are made by amodel.Both soft attention and hard attention have limitations. As the size of the memory grows, softattention using softmax weighting is not scalable. It is computationally very expensive, since itscomplexity is linear in the size of the memory. Also, at initialization, gradients are dispersed somuch that it can reduce the effectiveness of gradient descent. These problems can be alleviated bya hard attention mechanism, for which the training method of choice is REINFORCE. However,REINFORCE can be brittle due to its high variance and existing variance reduction techniques arecomplex. 
Thus, it is rarely used in memory networks (even in cases of a small memory).

In this paper, we propose a new memory selection mechanism based on Maximum Inner Product Search (MIPS) which is both scalable and easy to train. This can be considered as a hybrid of soft and hard attention mechanisms. The key idea is to structure the memory in a hierarchical way such that it is easy to perform MIPS, hence the name Hierarchical Memory Network (HMN). HMNs are scalable at both training and inference time. The main contributions of the paper are as follows:

- We explore hierarchical memory networks, where the memory is organized in a hierarchical fashion, which allows the reader to efficiently access only a subset of the memory.
- While there are several ways to decide which subset to access, we propose to pose memory access as a maximum inner product search (MIPS) problem.
- We empirically show that exact MIPS-based algorithms not only enjoy similar convergence as soft attention models, but can even improve the performance of the memory network.
- Since exact MIPS is as computationally expensive as a full soft attention model, we propose to train the memory networks using approximate MIPS techniques for scalable memory access.
- We empirically show that unlike exact MIPS, approximate MIPS algorithms provide a speedup and scalability of training, though at the cost of some performance.

2 HIERARCHICAL MEMORY NETWORKS

In this section, we describe the proposed Hierarchical Memory Network (HMN). In this paper, HMNs differ from regular memory networks in only two of their components: the memory and the reader.

Memory: Instead of a flat array of cells for the memory structure, HMNs leverage a hierarchical memory structure. Memory cells are organized into groups, and the groups can further be organized into higher-level groups. The choice of memory structure is tightly coupled with the choice of reader, which is essential for fast memory access. We consider three classes of approaches for the memory's structure: hashing-based approaches, tree-based approaches, and clustering-based approaches. This is explained in detail in the next section.

Reader: The reader in the HMN is different from the readers in flat memory networks. Flat memory-based readers use either soft attention over the entire memory or hard attention that retrieves a single cell. While these mechanisms might work with small memories, with HMNs we are more interested in achieving scalability towards very large memories. So instead, HMN readers use soft attention only over a selected subset of the memory. Selecting memory subsets is guided by a maximum inner product search algorithm, which can exploit the hierarchical structure of the organized memory to retrieve the most relevant facts in sub-linear time. The MIPS-based reader is explained in more detail in the next section.

In HMNs, the reader is thus trained to create MIPS queries such that it can retrieve a sufficient set of facts.
While most of the standard applications of MIPS (Ram & Gray, 2012; Bachrach et al., 2014; Shrivastava & Li, 2014) have so far focused on settings where both the query vectors and the database (memory) vectors are precomputed and fixed, memory readers in HMNs learn to do MIPS by updating the input representation such that the result of MIPS retrieval contains the correct fact(s).

3 MEMORY READER WITH K-MIPS ATTENTION

In this section, we describe how the HMN memory reader uses Maximum Inner Product Search (MIPS) during learning and inference.

We begin with a formal definition of K-MIPS. Given a set of points $X = \{x_1, \dots, x_n\}$ and a query vector $q$, our goal is to find

$$\operatorname{argmax}^{(K)}_{i \in X} \; q^\top x_i \quad (1)$$

where $\operatorname{argmax}^{(K)}$ returns the indices of the top-$K$ maximum values. In the case of HMNs, $X$ corresponds to the memory and $q$ corresponds to the vector computed by the input module.

A simple but inefficient solution for K-MIPS involves a linear search over the cells in memory by performing the dot product of $q$ with all the memory cells. While this will return the exact result for K-MIPS, it is too costly to perform when we deal with a large-scale memory. However, in many practical applications, it is often sufficient to have an approximate result for K-MIPS, trading some accuracy for speed-up. There exist several approximate K-MIPS solutions in the literature (Shrivastava & Li, 2014; 2015; Bachrach et al., 2014; Neyshabur & Srebro, 2015).

All the approximate K-MIPS solutions add a form of hierarchical structure to the memory and visit only a subset of the memory cells to find the maximum inner product for a given query. Hashing-based approaches (Shrivastava & Li, 2014; 2015; Neyshabur & Srebro, 2015) hash cells into multiple bins, and given a query they search for K-MIPS cell vectors only in bins that are close to the bin associated with the query. Tree-based approaches (Ram & Gray, 2012; Bachrach et al., 2014) create search trees with cells in the leaves of the tree. Given a query, a path in the tree is followed and MIPS is performed only over the leaf of the chosen path. Clustering-based approaches (Auvolat et al., 2015) cluster cells into multiple clusters (or a hierarchy of clusters) and, given a query, perform MIPS on the centroids of the top few clusters. We refer the reader to Auvolat et al. (2015) for an extensive comparison of various state-of-the-art approaches for approximate K-MIPS.

Our proposal is to exploit this rich approximate K-MIPS literature to achieve scalable training and inference in HMNs. Instead of filtering the memory with heuristics, we propose to organize the memory based on approximate K-MIPS algorithms and then train the reader to learn to perform MIPS. Specifically, consider the following softmax over the memory, which the reader has to perform at every reading step to retrieve a set of relevant candidates:

$$R_{\mathrm{out}} = \mathrm{softmax}\big(h(q)\, M^\top\big) \quad (2)$$

where $h(q) \in \mathbb{R}^d$ is the representation of the query and $M \in \mathbb{R}^{N \times d}$ is the memory, with $N$ being the total number of cells in the memory. We propose to replace this softmax with $\mathrm{softmax}^{(K)}$, which is defined as follows:

$$C = \operatorname{argmax}^{(K)}\big(h(q)\, M^\top\big) \quad (3)$$
$$R_{\mathrm{out}} = \mathrm{softmax}^{(K)}\big(h(q)\, M^\top\big) = \mathrm{softmax}\big(h(q)\, M[C]^\top\big) \quad (4)$$

where $C$ is the set of indices of the top-$K$ MIP candidate cells and $M[C]$ is the sub-matrix of $M$ whose rows are indexed by $C$.
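To make the reader's computation concrete, the following is a minimal numpy sketch of exact K-MIPS attention as in Equations (3)-(4); the function and variable names are illustrative, not taken from the paper's code.

```python
import numpy as np

def k_mips_attention(h_q, M, K):
    """Exact K-MIPS attention (Eqs. 3-4): a softmax restricted to the
    top-K inner-product candidates. h_q: query of shape (d,),
    M: memory matrix of shape (N, d)."""
    scores = M @ h_q                       # inner products h(q) M^T, shape (N,)
    C = np.argpartition(-scores, K)[:K]    # indices of the top-K candidates
    z = scores[C] - scores[C].max()        # stabilized logits over the subset
    p = np.exp(z) / np.exp(z).sum()        # softmax^{(K)} over M[C]
    return C, p
```

A full softmax would exponentiate and normalize all N scores; here only the K candidate scores enter the normalization. Finding the candidates exactly still costs a linear scan, which is precisely what the approximate K-MIPS structures discussed below remove.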
One advantage of using $\mathrm{softmax}^{(K)}$ is that it naturally focuses on the cells that would normally receive the strongest gradients during learning. In a full softmax, the gradients are otherwise more dispersed across cells, given the large number of cells, with many contributing only a small gradient. As our experiments will show, this results in slower training.

One problematic situation when learning with $\mathrm{softmax}^{(K)}$ arises in the initial stages of training, when the K-MIPS reader does not include the correct fact candidate. To avoid this issue, we always include the correct candidate in the top-$K$ candidates retrieved by the K-MIPS algorithm, effectively performing a fully supervised form of learning.

During training, the reader is updated by backpropagation from the output module, through the subset of memory cells. Additionally, the log-likelihood of the correct fact computed using the K-softmax is also maximized. This second supervision helps the reader learn to modify the query such that the maximum inner product of the query with respect to the memory will yield the correct supporting fact in the top-$K$ candidate set.

Until now, we described the exact K-MIPS-based learning framework, which still requires a linear look-up over all memory cells and would be prohibitive for large-scale memories. In such scenarios, we can replace the exact K-MIPS in the training procedure with approximate K-MIPS. This is achieved by deploying a suitable hierarchical memory structure. The same approximate K-MIPS-based reader can be used during the inference stage as well. Of course, approximate K-MIPS algorithms might not return the exact MIPS candidates and will likely hurt performance, but with the benefit of achieving scalability.

While the memory representation is fixed in this paper, updating the memory along with the query representation should improve the likelihood of choosing the correct fact. However, updating the memory will reduce the precision of the approximate K-MIPS algorithms, since all of them assume that the vectors in the memory are static. Designing efficient dynamic K-MIPS should improve the performance of HMNs even further, a challenge that we hope to address in future work.

3.1 READER WITH CLUSTERING-BASED APPROXIMATE K-MIPS

Clustering-based approximate K-MIPS was proposed in Auvolat et al. (2015), where it was shown to outperform various other state-of-the-art data-dependent and data-independent approximate K-MIPS approaches for inference tasks. As we will show in the experiments section, clustering-based MIPS also performs better when used for training HMNs. Hence, we focus our presentation on the clustering-based approach and propose changes that were found to be helpful for learning HMNs.

Following most of the other approximate K-MIPS algorithms, Auvolat et al. (2015) convert MIPS into a Maximum Cosine Similarity Search (MCSS) problem:

$$\operatorname{argmax}^{(K)}_{i \in X} \; \frac{q^\top x_i}{\|q\|\,\|x_i\|} = \operatorname{argmax}^{(K)}_{i \in X} \; \frac{q^\top x_i}{\|x_i\|} \quad (5)$$

When all the data vectors $x_i$ have the same norm, MCSS is equivalent to MIPS. However, it is often restrictive to have this additional constraint. Instead, Auvolat et al. (2015) append additional dimensions to both query and data vectors to convert MIPS to MCSS. In HMN terminology, this corresponds to adding a few more dimensions to the memory cells and input representations.

The algorithm introduces two hyper-parameters, $U < 1$ and $m \in \mathbb{N}$. The first step is to scale all the vectors in the memory by the same factor, such that $\max_i \|x_i\|_2 = U$. We then apply two mappings, $P$ and $Q$, on the memory cells and on the input vector, respectively.
These two mappings simply concatenate $m$ new components to the vectors, making the norms of the data points all roughly the same (Shrivastava & Li, 2015). The mappings are defined as follows:

$$P(x) = \big[\,x;\; 1/2 - \|x\|_2^2;\; 1/2 - \|x\|_2^4;\; \dots;\; 1/2 - \|x\|_2^{2^m}\,\big] \quad (6)$$
$$Q(x) = \big[\,x;\; 0;\; 0;\; \dots;\; 0\,\big] \quad (7)$$

We thus have the following approximation of MIPS by MCSS for any query vector $q$:

$$\operatorname{argmax}^{(K)}_{i} \; q^\top x_i \;\simeq\; \operatorname{argmax}^{(K)}_{i} \; \frac{Q(q)^\top P(x_i)}{\|Q(q)\|_2 \, \|P(x_i)\|_2} \quad (8)$$

Once we convert MIPS to MCSS, we can use spherical K-means (Zhong, 2005) or its hierarchical version to approximate and speed up the cosine similarity search. Once the memory is clustered, every read operation requires only $K$ dot products, where $K$ is the number of cluster centroids. Since this is an approximation, it is error-prone. As we are using this approximation for the learning process, it introduces some bias in the gradients, which can affect the overall performance of the HMN. To alleviate this bias, we propose three simple strategies:

- Instead of using only the top-$K$ candidates for a single read query, we also add the top-$K$ candidates retrieved for every other read query in the mini-batch. This serves two purposes. First, we can do efficient matrix multiplications by leveraging GPUs, since all the K-softmaxes in a mini-batch are over the same set of elements. Second, this also helps to decrease the bias introduced by the approximation error.
- For every read access, instead of only using the top few clusters which have the maximum product with the read query, we also sample some clusters from the rest, based on a probability distribution log-proportional to the dot product with the cluster centroids. This also decreases the bias.
- We can also sample random blocks of memory and add them to the top-$K$ candidates.

We empirically investigate the effect of these variations in Section 5.5.
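As a concrete illustration, here is a small numpy sketch of the augmentation mappings of Equations (6)-(7), assuming the memory vectors have already been rescaled so that max_i ||x_i||_2 = U < 1; the names are illustrative.

```python
import numpy as np

def P(x, m):
    """Memory-side augmentation (Eq. 6): appends 1/2 - ||x||^(2^i) for
    i = 1..m, driving all augmented norms toward the same value."""
    n = np.linalg.norm(x)
    extra = [0.5 - n ** (2 ** (i + 1)) for i in range(m)]
    return np.concatenate([x, extra])

def Q(q, m):
    """Query-side augmentation (Eq. 7): zero-padding to match P's dimension."""
    return np.concatenate([q, np.zeros(m)])

# Ranking memory cells by the cosine similarity of Q(q) and P(x_i) then
# approximates ranking them by the raw inner product q^T x_i (Eq. 8).
```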
4 RELATED WORK

Memory networks were introduced in Weston et al. (2015b) and have so far been applied to comprehension-based question answering (Weston et al., 2015a; Sukhbaatar et al., 2015), large-scale question answering (Bordes et al., 2015), and dialogue systems (Dodge et al., 2015). While Weston et al. (2015b) considered supervised memory networks, in which the correct supporting fact is given during the training stage, Sukhbaatar et al. (2015) introduced semi-supervised memory networks that can learn the supporting fact by themselves. Kumar et al. (2015) and Xiong et al. (2016) introduced Dynamic Memory Networks (DMNs), which can be considered as memory networks with two types of memory: a regular large memory and an episodic memory. Another related class of models is the Neural Turing Machine (Graves et al., 2014), which uses softmax-based soft attention. Later, Zaremba & Sutskever (2015) extended the NTM to hard attention using reinforcement learning. Dodge et al. (2015) and Bordes et al. (2015) alleviate the scalability problem of soft attention by having an initial keyword-based filtering stage, which reduces the number of facts being considered. Our work generalizes this filtering by using MIPS. This is desirable because MIPS can be applied to any modality of data, even when there is no overlap between the words in a question and the words in the facts.

The softmax arises in various situations, and most relevant to this work are scaling methods for large-vocabulary neural language modeling. In neural language modeling, the final layer is a softmax distribution over the next word, and there exist several approaches to achieve scalability. Morin & Bengio (2005) propose a hierarchical softmax based on a prior clustering of the words into a binary, or more generally n-ary, tree that serves as a fixed structure for the learning process of the model. The complexity of training is reduced from O(n) to O(log n). Due to its clustering and tree structure, it resembles the clustering-based MIPS techniques we explore in this paper. However, the approaches differ at a fundamental level. Hierarchical softmax defines the probability of a leaf node as the product of all the probabilities computed by all the intermediate softmaxes on the way to that leaf node. By contrast, an approximate MIPS search imposes no such constraining structure on the probabilistic model, and is better thought of as efficiently searching for the top winners of what amounts to a large ordinary flat softmax. Other methods, such as Noise Contrastive Estimation (Mnih & Gregor, 2014) and Negative Sampling (Mikolov et al., 2013), avoid an expensive normalization constant by sampling negative examples from some marginal distribution. By contrast, our approach approximates the softmax by explicitly including in its negative samples the candidates that would likely have a large softmax value. Jean et al. (2015) introduce an importance sampling approach that considers all the words in a mini-batch as the candidate set. This, in general, might also not include the MIPS candidates with the highest softmax values.

Spring & Shrivastava (2016) is the only work that we know of proposing to use MIPS during learning. It proposes hashing-based MIPS to sort the hidden layer activations and reduce the computation in every layer. However, only a small-scale application was considered, and data-independent methods like hashing will likely suffer as dimensionality increases. Rae et al. (2016) have also independently proposed a model called SAM that uses approximate search methods for memory access in NTM-like architectures. However, our motivation is different. While Rae et al. (2016) focus on architectures where the memory is written by the controller itself, we focus on handling memory access to large external knowledge bases. While both models fix the memory access mechanism (HMN uses MIPS and SAM uses NNS), our controller works in a much more constrained setting. Moreover, our experiments suggest that the performance of SAM could be improved using a clustering-based approach as in our work, instead of the tree/hash-based approaches for memory search used by SAM.

5 EXPERIMENTS

In this section, we report experiments on factoid question answering using hierarchical memory networks. Specifically, we use the SimpleQuestions dataset (Bordes et al., 2015). The aim of these experiments is not to achieve state-of-the-art results on this dataset. Rather, we aim to propose and analyze various approaches to make memory networks more scalable, and to explore the achieved trade-offs between speed and accuracy.

5.1 DATASET

We use SimpleQuestions (Bordes et al., 2015), a large-scale factoid question answering dataset. SimpleQuestions consists of 108,442 natural language questions, each paired with a corresponding fact from Freebase. Each fact is a triple (subject, relation, object), and the answer to the question is always the object. The dataset is divided into training (75,910), validation (10,845), and test (21,687) sets. Unlike Bordes et al. (2015), who additionally considered FB2M (10M facts) or FB5M (12M facts) with keyword-based heuristics for filtering most of the facts for each question, we only use SimpleQuestions, with no keyword-based heuristics.
This allows us to do a direct comparison with the full softmax approach in a reasonable amount of time. Moreover, we would like to highlight that for this dataset, keyword-based filtering is a very efficient heuristic, since all questions have an appropriate source entity with a matching word. Nevertheless, our goal is to design a general-purpose architecture without such strong assumptions on the nature of the data.

5.2 MODEL

Let $V_q$ be the vocabulary of all words in the natural language questions. Let $W_q$ be a $|V_q| \times m$ matrix where each row is some $m$-dimensional embedding for a word in the question vocabulary. This matrix is initialized with random values and learned during training. Given any question, we represent it with a bag-of-words representation by summing the vector representations of its words. For $q = \{w_i\}_{i=1}^{p}$:

$$h(q) = \sum_{i=1}^{p} W_q[w_i]$$

Then, to find the relevant fact from the memory $M$, we call the K-MIPS-based reader module with $h(q)$ as the query. This uses Equations (3) and (4) to compute the output of the reader, $R_{\mathrm{out}}$. The reader is trained by minimizing the Negative Log Likelihood (NLL) of the correct fact:

$$J = -\sum_{i=1}^{N} \log\big(R_{\mathrm{out}}[f_i]\big)$$

where $f_i$ is the index of the correct fact in the memory. We fix the memory embeddings to the TransE (Bordes et al., 2013) embeddings and learn only the question embeddings.

This model is simpler than the one reported in Bordes et al. (2015), so that it is easy to analyze the effect of the various memory reading strategies.

5.3 TRAINING DETAILS

We trained the model with the Adam optimizer (Kingma & Ba, 2014), with a fixed learning rate of 0.001 and mini-batches of size 128. We used 200-dimensional embeddings for the TransE entities, yielding 600-dimensional embeddings for facts by concatenating the embeddings of the subject, relation, and object. We also experimented with summing the entities in the triple instead of concatenating, but we found that it was difficult for the model to differentiate facts this way. The only parameters learned by the HMN model are the question word embeddings. The entity distribution in SimpleQuestions is extremely sparse, and hence, following Bordes et al. (2015), we also add artificial questions for all the facts for which we do not have natural language questions. Unlike Bordes et al. (2015), we do not add any other additional tasks like paraphrase detection to the model, mainly to study the effect of the reader. We stopped training each model when its validation accuracy had consistently decreased for 3 epochs.
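To make the training objective concrete, here is a minimal numpy sketch of the forward computation for one example: the bag-of-words encoding of Section 5.2, the K-candidate softmax of Section 3, and the NLL of the correct fact, with the correct candidate forced into the candidate set as described there. All names are illustrative.

```python
import numpy as np

def reader_nll(word_ids, W_q, M, f_correct, K):
    """NLL of the correct fact under the softmax^{(K)} reader for one
    question. word_ids: indices into W_q; M: (N, d) fact memory (fixed
    TransE embeddings); f_correct: index of the supporting fact."""
    h_q = W_q[word_ids].sum(axis=0)        # h(q): sum of word embeddings
    scores = M @ h_q                       # h(q) M^T
    C = np.argpartition(-scores, K)[:K]    # top-K MIP candidates
    if f_correct not in C:                 # always keep a supervised signal
        C[-1] = f_correct
    z = scores[C] - scores[C].max()
    p = np.exp(z) / np.exp(z).sum()        # softmax^{(K)} over the candidates
    return -np.log(p[np.argmax(C == f_correct)])
```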
5.4 EXACT K-MIPS IMPROVES ACCURACY

In this section, we compare the performance of the full soft attention reader and exact K-MIPS attention readers. Our goal is to verify that K-MIPS attention is in fact a valid and useful attention mechanism, and to see how it fares when compared to full soft attention. For K-MIPS attention, we tried $K \in \{10, 50, 100, 1000\}$. We would like to emphasize that, at training time, along with the $K$ candidates for a particular question, we also add the $K$ candidates for each other question in the mini-batch, so the exact size of the softmax layer is higher than $K$ during training. In Table 1, we report the test performance of memory networks using the soft attention reader and the K-MIPS attention readers, along with the average softmax size during training. From the table, it is clear that the K-MIPS attention readers improve the performance of the network compared to the soft attention reader. In fact, the smaller the value of $K$, the better the performance. This result suggests that it is better to use a K-MIPS layer instead of a softmax layer whenever possible. It is also interesting to see that the convergence of the model is not slowed down by this change in the softmax computation (as shown in Figure 1).

This experiment confirms the usefulness of K-MIPS attention. However, exact K-MIPS has the same complexity as a full softmax. Hence, to scale up training, we need more efficient forms of K-MIPS attention, which is the focus of the next experiment.

Model          Test Acc.   Avg. Softmax Size
Full-softmax   59.5        108442
10-MIPS        62.2        1290
50-MIPS        61.2        6180
100-MIPS       60.6        11928
1000-MIPS      59.6        70941
Clustering     51.5        20006
PCA-Tree       32.4        21108
WTA-Hash       40.2        20008

Table 1: Accuracy on the SQ test set and average size of the memory used. 10-MIPS has high performance while using only a small amount of memory.

Figure 1: Validation curves (validation error vs. epochs) for the full softmax and the 10-, 50-, 100-, and 1000-softmax models. Convergence is not slowed down by the K-softmax.

5.5 APPROXIMATE K-MIPS BASED LEARNING

As mentioned previously, designing faster algorithms for K-MIPS is an active area of research. Auvolat et al. (2015) compared several state-of-the-art data-dependent and data-independent methods for faster approximate K-MIPS, and found that clustering-based MIPS performs significantly better than the other approaches. However, the focus of that comparison was on performance during the inference stage. In HMNs, K-MIPS must be used at both the training and inference stages. To verify whether the same trend holds during the learning stage as well, we compared three different approaches:

Clustering: This was explained in detail in Section 3.

WTA-Hash: Winner Takes All hashing (Vijayanarasimhan et al., 2014) is a hashing-based K-MIPS algorithm which also converts MIPS to MCSS by augmenting additional dimensions to the vectors. This method uses n hash functions, and each hash function does p different random permutations of the vector. The prefix constituted by the first k elements of each permuted vector is then used to construct the hash for the vector.

PCA-Tree: PCA-Tree (Bachrach et al., 2014) is the state-of-the-art tree-based method, which converts MIPS to NNS by vector augmentation. It uses the principal components of the data to construct a balanced binary tree with the data residing in the leaves.

For a fair comparison, we varied the hyper-parameters of each algorithm in such a way that the average speedup is approximately the same. Table 1 shows the performance of all three methods, compared to a full softmax. From the table, it is clear that the clustering-based method performs significantly better than the other two methods. However, all three perform worse than the full softmax.

As a next experiment, we analyze the various strategies proposed in Section 3.1 to reduce the approximation bias of clustering-based K-MIPS (a sketch of how the first two strategies combine follows the list):

Top-K: This strategy picks the vectors in the top K clusters as candidates.

Sample-K: This strategy samples K clusters, without replacement, from a probability distribution based on the dot product of the query with the cluster centroids. When combined with the Top-K strategy, we exclude the clusters already selected by Top-K from the sampling.

Rand-block: This strategy divides the memory into several blocks and uniformly samples a random block as additional candidates.
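Below is a minimal numpy sketch of combining the Top-K and Sample-K strategies for candidate selection; `cluster_members` (a list of index arrays, one per cluster) and the other names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def select_candidates(h_q, centroids, cluster_members, top_k, sample_k, rng):
    """Clustering-based candidate selection: members of the top_k closest
    clusters (Top-K), plus sample_k further clusters drawn without
    replacement, log-proportionally to their centroid scores (Sample-K)."""
    scores = centroids @ h_q                       # query-centroid dot products
    order = np.argsort(-scores)
    top, rest = order[:top_k], order[top_k:]       # Top-K clusters are excluded
    p = np.exp(scores[rest] - scores[rest].max())  # log p proportional to score
    p /= p.sum()
    sampled = rng.choice(rest, size=sample_k, replace=False, p=p)
    chosen = np.concatenate([top, sampled])
    return np.concatenate([cluster_members[c] for c in chosen])
```

Here `rng` is a numpy Generator, e.g. `np.random.default_rng(0)`; the returned indices are the memory rows over which the softmax^{(K)} of Equation (4) is computed.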
We experimented with 1000 clusters and 2000 clusters. While comparing the various training strategies, we made sure that the effective speedup was approximately the same: the number of memory (fact) accesses per query is approximately 20,000 for all the models, yielding a 5x speedup.

Results are given in Table 2. We observe that the best approach is to combine the Top-K and Sample-K strategies, with Rand-block not being beneficial. Interestingly, the worst performances correspond to the cases where the Sample-K strategy is ignored.

                                  1000 clusters         2000 clusters
Top-K   Sample-K   Rand-block   Test Acc.   Epochs    Test Acc.   Epochs
Yes     No         No           50.2        16        51.5        22
No      Yes        No           52.5        68        52.8        63
Yes     Yes        No           52.8        31        53.1        26
Yes     No         Yes          51.8        32        52.3        26
Yes     Yes        Yes          52.5        38        52.7        19

Table 2: Accuracy on the SQ test set and number of epochs to convergence.

6 CONCLUSION

In this paper, we proposed a hierarchical memory network that exploits K-MIPS for its attention-based reader. Unlike soft attention readers, the K-MIPS attention reader scales easily to larger memories. This is achieved by organizing the memory in a hierarchical way. Experiments on the SimpleQuestions dataset demonstrate that exact K-MIPS attention is better than soft attention. However, existing state-of-the-art approximate K-MIPS techniques provide a speedup at the cost of some accuracy. Future research will investigate the design of efficient dynamic K-MIPS algorithms, where the memory can be updated during training. This should reduce the approximation bias and hence improve the overall performance.

REFERENCES

Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, and Yoshua Bengio. Clustering is efficient for approximate maximum inner product search. arXiv preprint arXiv:1507.05910, 2015.

Yoram Bachrach et al. Speeding up the Xbox recommender system using a Euclidean transformation for inner-product spaces. In RecSys '14, pp. 257-264, 2014.

Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In Advances in NIPS, pp. 2787-2795, 2013.

Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015.

Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, and Jason Weston. Evaluating prerequisite qualities for learning end-to-end dialog systems. CoRR, abs/1511.06931, 2015.

Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing Machines. arXiv preprint arXiv:1410.5401, 2014.

Sébastien Jean, KyungHyun Cho, Roland Memisevic, and Yoshua Bengio. On using very large target vocabulary for neural machine translation. In Proceedings of ACL, pp. 1-10, 2015.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.

Ankit Kumar et al. Ask me anything: Dynamic memory networks for natural language processing. CoRR, abs/1506.07285, 2015.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In International Conference on Learning Representations, Workshop Track, 2013.

Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030, 2014.

Frederic Morin and Yoshua Bengio. Hierarchical probabilistic neural network language model. In Robert G. Cowell and Zoubin Ghahramani (eds.), Proceedings of AISTATS, pp. 246-252, 2005.

Behnam Neyshabur and Nathan Srebro.
On symmetric and asymmetric LSHs for inner product search. In Proceedings of the 31st International Conference on Machine Learning, 2015.

Jack W. Rae, Jonathan J. Hunt, Tim Harley, Ivo Danihelka, Andrew Senior, Greg Wayne, Alex Graves, and Timothy P. Lillicrap. Scaling memory-augmented neural networks with sparse reads and writes. In Advances in NIPS, 2016.

Parikshit Ram and Alexander G. Gray. Maximum inner-product search using cone trees. In KDD '12, pp. 931-939, 2012.

Anshumali Shrivastava and Ping Li. Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS). In Advances in Neural Information Processing Systems 27, pp. 2321-2329, 2014.

Anshumali Shrivastava and Ping Li. Improved asymmetric locality sensitive hashing (ALSH) for maximum inner product search (MIPS). In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI), 2015.

Ryan Spring and Anshumali Shrivastava. Scalable and sustainable deep learning via randomized hashing. CoRR, abs/1602.08194, 2016.

Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. arXiv preprint arXiv:1503.08895, 2015.

Sudheendra Vijayanarasimhan, Jon Shlens, Rajat Monga, and Jay Yagnik. Deep networks with large output spaces. arXiv preprint arXiv:1412.7479, 2014.

Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015a.

Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In Proceedings of the International Conference on Learning Representations (ICLR 2015), 2015b.

Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229-256, 1992.

Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. CoRR, abs/1603.01417, 2016.

Wojciech Zaremba and Ilya Sutskever. Reinforcement learning Neural Turing Machines. CoRR, abs/1505.00521, 2015.

Shi Zhong. Efficient online spherical k-means clustering. In Neural Networks, 2005. IJCNN '05. Proceedings. 2005 IEEE International Joint Conference on, volume 5, pp. 3180-3185. IEEE, 2005.
r1YNw6sxg
Published as a conference paper at ICLR 2017

LEARNING VISUAL SERVOING WITH DEEP FEATURES AND FITTED Q-ITERATION

Alex X. Lee (1), Sergey Levine (1), Pieter Abbeel (1,2,3)
(1) UC Berkeley, Department of Electrical Engineering and Computer Sciences. (2) OpenAI. (3) International Computer Science Institute.
{alexlee_gk,svlevine,pabbeel}@cs.berkeley.edu

ABSTRACT

Visual servoing involves choosing actions that move a robot in response to observations from a camera, in order to reach a goal configuration in the world. Standard visual servoing approaches typically rely on manually designed features and analytical dynamics models, which limits their generalization capability and often requires extensive application-specific feature and model engineering. In this work, we study how learned visual features, learned predictive dynamics models, and reinforcement learning can be combined to learn visual servoing mechanisms. We focus on target following, with the goal of designing algorithms that can learn a visual servo using low amounts of data of the target in question, to enable quick adaptation to new targets. Our approach is based on servoing the camera in the space of learned visual features, rather than image pixels or manually designed keypoints. We demonstrate that standard deep features, in our case taken from a model trained for object classification, can be used together with a bilinear predictive model to learn an effective visual servo that is robust to visual variation, changes in viewing angle and appearance, and occlusions. A key component of our approach is to use a sample-efficient fitted Q-iteration algorithm to learn which features are best suited for the task at hand. We show that we can learn an effective visual servo on a complex synthetic car-following benchmark using just 20 training trajectory samples for reinforcement learning. We demonstrate substantial improvement over a conventional approach based on image pixels or hand-designed keypoints, and we show an improvement in sample-efficiency of more than two orders of magnitude over standard model-free deep reinforcement learning algorithms. Videos are available at http://rll.berkeley.edu/visual_servoing.

1 INTRODUCTION

Visual servoing is a classic problem in robotics that requires moving a camera or robot to match a target configuration of visual features or image intensities. Many robot control tasks that combine perception and action can be posed as visual servoing, including navigation (DeSouza & Kak, 2002; Chen et al., 2006), where a robot must follow a desired path; manipulation, where the robot must servo an end-effector or a camera to a target object to grasp or manipulate it (Malis et al., 1999; Corke, 1993; Hashimoto, 1993; Hosoda & Asada, 1994; Kragic & Christensen, 2002); and various other problems, as surveyed in Hutchinson et al. (1996). Most visual servoing methods assume access to good geometric image features (Chaumette & Hutchinson, 2006; Collewet et al., 2008; Caron et al., 2013) and require knowledge of their dynamics, which are typically obtained from domain knowledge about the system. Using such hand-designed features and models prevents exploitation of statistical regularities in the world, and requires manual engineering for each new system.

In this work, we study how learned visual features, learned predictive dynamics models, and reinforcement learning can be combined to learn visual servoing mechanisms.
We focus on target following, with the goal of designing algorithms that can learn a visual servo using low amounts of data of the target in question, so as to be easy and quick to adapt to new targets. Successful target following requires the visual servo to tolerate moderate variation in the appearance of the target, including changes in viewpoint and lighting, as well as occlusions. Learning invariances to all such distractors typically requires a considerable amount of data. However, since a visual servo is typically specific to a particular task, it is desirable to be able to learn the servoing mechanism very quickly, using a minimal amount of data. Prior work has shown that the features learned by large convolutional neural networks on large image datasets, such as ImageNet classification (Deng et al., 2009), tend to be useful for a wide range of other visual tasks (Donahue et al., 2014). We explore whether the usefulness of such features extends to visual servoing.

To answer this question, we propose a visual servoing method that uses pre-trained features, in our case obtained from the VGG network (Simonyan & Zisserman, 2014) trained for ImageNet classification. Besides the visual features, our method uses an estimate of the feature dynamics in visual space, by means of a bilinear model. This allows the visual servo to predict how motion of the robot's camera will affect the perceived feature values. Unfortunately, servoing directly on the high-dimensional features of a pre-trained network is insufficient by itself to impart robustness to the servo: the visual servo must not only be robust to moderate visual variation, but it must also be able to pick out the target of interest (such as a car that the robot is tasked with following) from irrelevant distractor objects. To that end, we propose a sample-efficient fitted Q-iteration procedure that automatically chooses weights for the most relevant visual features. Crucially, the actual servoing mechanism in our approach is extremely simple: it seeks to minimize the Euclidean distance between the weighted feature values at the next time step and the target. The form of the servoing policy in our approach leads to an analytic and tractable linear approximator for the Q-function, which in turn leads to a computationally efficient fitted Q-iteration algorithm. We show that we can learn an effective visual servo on a complex synthetic car-following benchmark using just 20 training trajectory samples for reinforcement learning. We demonstrate substantial improvement over a conventional approach based on image pixels or hand-designed keypoints, and we show an improvement in sample-efficiency of more than two orders of magnitude over standard model-free deep reinforcement learning algorithms.

The environment for the synthetic car-following benchmark is available online as the package CitySim3D (https://github.com/alexlee-gk/citysim3d), and the code to reproduce our method and experiments is also available online (https://github.com/alexlee-gk/visual_dynamics). Supplementary videos of all the test executions are available on the project's website (http://rll.berkeley.edu/visual_servoing).

2 RELATED WORK

Visual servoing is typically (but not always) performed with calibrated cameras and carefully designed visual features. Ideal features for servoing should be stable and discriminative, and much of the work on visual servoing focuses on designing stable and convergent controllers under the assumption that such features are available (Espiau et al., 2002; Mohta et al., 2014; Wilson et al., 1996).
Some visual servoing methods do not require camera calibration (Jagersand et al., 1997; Yoshimi & Allen, 1994), and some recent methods operate directly on image intensities (Caron et al., 2013), but they generally do not use learning to exploit statistical regularities in the world and improve robustness to distractors.

Learning is a relatively recent addition to the repertoire of visual servoing tools. Several methods have been proposed that apply ideas from reinforcement learning to directly acquire visual servoing controllers (Lampe & Riedmiller, 2013; Sadeghzadeh et al., 2015). However, such methods have not been demonstrated under extensive visual variation, and do not make use of state-of-the-art convolutional neural network visual features. Though more standard deep reinforcement learning methods (Lange et al., 2012; Mnih et al., 2013; Levine et al., 2016; Lillicrap et al., 2015) could in principle be applied to directly learn visual servoing policies, such methods tend to require large numbers of samples to learn task-specific behaviors, making them poorly suited for a flexible visual servoing algorithm that can be quickly repurposed to new tasks (e.g., to following a different object).

Instead, we propose an approach that combines learning of predictive models with pre-trained visual features. We use visual features trained for ImageNet (Deng et al., 2009) classification, though any pre-trained features could in principle be applicable to our method, so long as they provide a suitable degree of invariance to visual distractors such as lighting, occlusion, and changes in viewpoint. Using pre-trained features allows us to avoid the need for large amounts of experience, but we must still learn the policy itself. To further accelerate this process, we first acquire a predictive model that allows the visual servo to determine how the visual features will change in response to an action. General video prediction is an active research area, with a number of complex but data-hungry models proposed in recent years (Oh et al., 2015; Watter et al., 2015; Mathieu et al., 2015; Xue et al., 2016; Lotter et al., 2016; Jia et al., 2016; Walker et al., 2016; Vondrick et al., 2016).

However, we observe that convolutional response maps can be interpreted as images and that, under mild assumptions, the dynamics of image pixels during camera motion can be well approximated by means of a bilinear model (Censi & Murray, 2015). We therefore train a relatively simple bilinear model for short-term prediction of visual feature dynamics, which we can use inside a very simple visual servo that seeks to minimize the error between the next predicted feature values and a target image.

Unfortunately, simply training predictive models on top of pre-trained features is insufficient to produce an effective visual servo, since it weights the errors of distractor objects the same as those of the object of interest. We address this challenge by using an efficient Q-iteration algorithm to train the weights on the features to maximize the servo's long-horizon reward.
This method draws on ideas from regularized fitted Q-iteration (Gordon, 1995; Ernst et al., 2005; Farahmand et al., 2009) and neural fitted Q-iteration (Riedmiller, 2005) to develop a sample-efficient algorithm that can directly estimate the expected return of the visual servo without the use of any additional function approximator.

3 PROBLEM STATEMENT

Let $y_t$ be a featurization of the camera's observations $x_t$ and let $y_*$ be some given goal feature map. For the purposes of this work, we define visual servoing as the problem of choosing controls $u_t$ for a fixed number of discrete time steps $t$ so as to minimize the error $\|y_* - y_t\|$.

We use a relatively simple gradient-based servoing policy that uses one-step feature dynamics, $f: \{y_t, u_t\} \to y_{t+1}$. The policy chooses the control that minimizes the distance between the goal feature map and the one-step prediction:

$$\pi(x_t, x_*) = \arg\min_u \big\| y_* - f(y_t, u) \big\|^2 \quad (1)$$

Learning this policy amounts to learning the robot dynamics and the distance metric $\|\cdot\|$.

To learn the robot dynamics, we assume that we have access to a dataset of paired observations and controls $(x_t, u_t, x_{t+1})$. This data is relatively easy to obtain, as it involves collecting a stream of the robot's observations and controls. We use this dataset to learn a general visual dynamics model that can be used for any task.

To learn the distance metric, we assume that the robot interacts with the world and collects tuples of the form $(x_t, u_t, c_t, x_{t+1}, x_*)$. At every time step during learning, the robot observes $x_t$ and takes action $u_t$. After the transition, the robot observes $x_{t+1}$ and receives an immediate cost $c_t$. This cost is task-specific, and it quantifies how good that transition was for achieving the goal. At the beginning of each trajectory, the robot is given a goal observation $x_*$, which is the same throughout the trajectory. We define the goal feature map to be the featurization of the goal observation. We learn the distance metric using reinforcement learning, and we model the environment as a Markov Decision Process (MDP). The state of the MDP is the tuple of the current observation and the episode's target observation, $s_t = (x_t, x_*)$; the action $u_t$ is the discrete-time continuous control of the robot; and the cost function maps the states and action $(s_t, u_t, s_{t+1})$ to a scalar cost $c_t$.

Figure 1: Multiscale bilinear model. The function $h$ maps images $x$ to feature maps $y^{(0)}$, the operator $d$ downsamples the feature maps $y^{(l-1)}$ to $y^{(l)}$, and the bilinear function $f^{(l)}$ predicts the next feature $\hat{y}^{(l)}$. The number of channels for each feature map is $n_c$, regardless of the scale $l$.

Figure 2: Dilated VGG-16 network. The intermediate feature maps drawn in a lighter shade are outputs of max-pooling layers. The feature maps in the conv4 and conv5 blocks are outputs of dilated convolutions with dilation factors of 2 and 4, respectively.

4 VISUAL FEATURES DYNAMICS

We learn a multiscale bilinear model to predict the visual features of the next frame given the current image from the robot's camera and the action of the robot. An overview of the model is shown in Figure 1. The learned dynamics can then be used for visual servoing, as described in Section 5.

4.1 VISUAL FEATURES

We consider both pixels and semantic features for the visual representation. We define the function $h$ to relate the image $x$ to its features $y = h(x)$. Our semantic features are derived from the VGG-16 network (Simonyan & Zisserman, 2014), a convolutional neural network trained for large-scale image recognition on the ImageNet dataset (Deng et al., 2009). Since spatial invariance is undesirable for servoing, we remove some of the max-pooling layers and replace the convolutions that followed them with dilated convolutions, as done by Yu & Koltun (2015). The modified VGG network is shown in Figure 2. We use the model weights of the original VGG-16 network, which are publicly available as a Caffe model (Jia et al., 2014). The features that we use are the outputs of some of the intermediate convolutional layers, which have been downsampled to a 32x32 resolution (if necessary) and standardized with respect to our training set.

We use multiple resolutions of these features for servoing. The idea is that the high-resolution representations have detailed local information about the scene, while the low-resolution representations have more global information available through the image-space gradients. The features at level $l$ of the multiscale pyramid are denoted as $y^{(l)}$. The features at each level are obtained from the features below through a downsampling operator $d(y^{(l-1)}) = y^{(l)}$ that cuts the resolution in half.
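As an illustration of the pyramid construction, here is a minimal numpy sketch; `extract_features` stands in for the dilated-VGG featurizer h (which is not reimplemented here), and 2x2 average pooling plays the role of the downsampling operator d. All names are illustrative assumptions.

```python
import numpy as np

def feature_pyramid(x, extract_features, num_levels):
    """Builds the multiscale features y(0), ..., y(L) of Section 4.1.
    `extract_features` maps an image to a (channels, H, W) response map
    with even spatial dims (e.g., 32x32); each level halves the
    resolution by 2x2 average pooling."""
    y = extract_features(x)                 # y(0) = h(x)
    pyramid = [y]
    for _ in range(num_levels):
        c, h, w = y.shape
        y = y.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))  # d(y(l-1))
        pyramid.append(y)
    return pyramid
```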
4.2 BILINEAR DYNAMICS

The features $y^{(l)}_t$ are used to predict the corresponding level's features $y^{(l)}_{t+1}$ at the next time step, conditioned on the action $u_t$, according to a prediction function $f^{(l)}(y^{(l)}_t, u_t) = \hat{y}^{(l)}_{t+1}$. We use a bilinear model to represent these dynamics, motivated by prior work (Censi & Murray, 2015). In order to servo at different scales, we learn a bilinear dynamics model at each scale. We consider two variants of the bilinear model from previous work, designed to reduce the number of model parameters.

The first variant uses fully connected dynamics as in previous work, but models the dynamics of each channel independently. When semantic features are used, this model interprets the feature maps as abstract images, with spatial information within a channel and different entities or factors of variation across different channels. This could potentially allow the model to handle moving objects, occlusions, and other complex phenomena.

The fully connected bilinear model is quite large, so we propose a bilinear dynamics model that enforces sparsity in the parameters. In particular, we constrain the prediction to depend only on the features that are in its local spatial neighborhood, leading to the following locally connected bilinear model:

$$\hat{y}^{(l)}_{t+1,c} = y^{(l)}_{t,c} + \sum_j \big( W^{(l)}_{c,j} \ast y^{(l)}_{t,c} + B^{(l)}_{c,j} \big)\, u_{t,j} + W^{(l)}_{c,0} \ast y^{(l)}_{t,c} + B^{(l)}_{c,0} \quad (2)$$

The parameters are the 4-dimensional tensors $W^{(l)}_{c,j}$ and the matrices $B^{(l)}_{c,j}$ for each channel $c$, scale $l$, and control coordinate $j$. The last two terms are biases that allow the model to capture action-independent visual changes, such as moving objects. The $\ast$ is the locally connected operator, which is like a convolution but with untied filter weights: with a local neighborhood of $n_f \times n_f$ (analogous to the filter size in convolutions), it is defined as

$$(W \ast y)_{k_h,k_w} = \sum_{i_h = k_h - \lfloor n_f/2 \rfloor}^{k_h + \lfloor n_f/2 \rfloor} \;\; \sum_{i_w = k_w - \lfloor n_f/2 \rfloor}^{k_w + \lfloor n_f/2 \rfloor} W_{k_h, k_w, i_h - k_h, i_w - k_w} \; y_{i_h, i_w}.$$

4.3 TRAINING VISUAL FEATURE DYNAMICS MODELS

The loss that we use for training the bilinear dynamics is the sum of the losses of the predicted features at each level, $\sum_{l=0}^{L} \ell^{(l)}$, where the loss for each level $l$ is the squared $\ell_2$-norm between the predicted features and the actual features of that level, $\ell^{(l)} = \|y^{(l)}_{t+1} - \hat{y}^{(l)}_{t+1}\|^2$.

We optimize for the dynamics while keeping the feature representation fixed. This is a supervised learning problem, which we solve with ADAM (Kingma & Ba, 2014). The training set, consisting of triplets $(x_t, u_t, x_{t+1})$, was obtained by executing a hand-coded policy that moves the robot around the target with some Gaussian noise.
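For concreteness, here is a minimal numpy sketch of the channelwise fully connected variant of the bilinear dynamics; the locally connected variant of Eq. (2) simply replaces the dense matrix products with the untied-weight operator. Shapes and names are illustrative assumptions.

```python
import numpy as np

def bilinear_predict(y_c, u, W, B):
    """Channelwise bilinear dynamics for one channel (cf. Eq. 2).
    y_c: flattened feature map of shape (p,); u: control of shape (J,);
    W: (J+1, p, p) and B: (J+1, p), with index 0 holding the
    action-independent terms W_{c,0}, B_{c,0}."""
    y_next = y_c + W[0] @ y_c + B[0]                  # action-independent part
    for j in range(len(u)):
        y_next += (W[j + 1] @ y_c + B[j + 1]) * u[j]  # term linear in u_j
    return y_next
```

Because the prediction is linear in u for a fixed y_c, the servoing objective in the next section reduces to a least-squares problem in the controls.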
5 LEARNING VISUAL SERVOING WITH REINFORCEMENT LEARNING

We propose to use a multiscale representation of semantic features for servoing. The challenge when introducing multiple scales and multi-channel feature maps for servoing is that the features do not necessarily agree on the optimal action when the goal is unattainable or the robot is far away from the goal. To do well, it is important to use a good weighting of each of the terms in the objective. Since there are many weights, it would be impractically time-consuming to set them by hand, so we resort to learning. We want the weighted one-step lookahead objective to encourage good long-term behavior, so we want this objective to correspond to the state-action value function Q. We therefore propose a method for learning the weights based on fitted Q-iteration.

5.1 SERVOING WITH WEIGHTED MULTISCALE FEATURES

Instead of attempting to build an accurate predictive model for multi-step planning, we use the simple greedy servoing method of Equation (1), where we minimize the error between the target and predicted features at all scales. Typically, only a few objects in the scene are relevant, so the errors of some channels should be penalized more than others. Similarly, features at different scales might need to be weighted differently. Thus, we use a weight $w^{(l)}_c \geq 0$ per channel $c$ and scale $l$:

$$\pi(x_t, x_*) = \arg\min_u \; \sum_c \sum_{l=0}^{L} \frac{w^{(l)}_c}{|y^{(l)}_{*,c}|} \left\| y^{(l)}_{*,c} - f^{(l)}_c\big(y^{(l)}_{t,c}, u\big) \right\|_2^2 \;+\; \sum_j \lambda_j u_j^2 \quad (3)$$

where $|\cdot|$ denotes the cardinality operator and the constant $1/|y^{(l)}_{*,c}|$ normalizes the feature errors by their spatial resolution. We also use a separate weight $\lambda_j$ for each control coordinate $j$. This optimization can be solved efficiently, since the dynamics is linear in the controls (see Appendix A).
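Since the dynamics is linear in u, the objective of Equation (3) is a regularized least-squares problem. Below is a minimal numpy sketch of the closed-form solve, assuming each channel's prediction has been written as f_c(y_t, u) = A_c u + b_c; the names are illustrative, and this ignores any control constraints handled in the paper's Appendix A.

```python
import numpy as np

def servoing_action(A_list, b_list, y_star_list, w, lam):
    """Greedy weighted servoing (Eq. 3) with linearized dynamics.
    A_c: (p_c, J); b_c, y_star_c: (p_c,); w: weights w_c^{(l)} per term;
    lam: (J,) control penalties. Accumulates the normal equations of
    sum_c (w_c/|y*_c|) ||y*_c - (A_c u + b_c)||^2 + sum_j lam_j u_j^2."""
    AtA = np.diag(np.asarray(lam, dtype=float))
    Atb = np.zeros(len(lam))
    for A_c, b_c, y_star_c, w_c in zip(A_list, b_list, y_star_list, w):
        scale = w_c / len(y_star_c)           # the 1/|y*_c| normalization
        AtA += scale * (A_c.T @ A_c)
        Atb += scale * (A_c.T @ (y_star_c - b_c))
    return np.linalg.solve(AtA, Atb)          # minimizing control u
```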
5.2 Q-FUNCTION APPROXIMATION FOR THE WEIGHTED SERVOING POLICY

We choose a Q-value function approximator that can represent the servoing objective, such that the greedy policy with respect to the Q-values results in the policy of Equation (3). In particular, we use a function approximator that is linear in the weight parameters $\theta^\top = [\,w^\top \; \lambda^\top\,]$:

$$Q_{\theta,b}(s_t, u) = \phi(s_t, u)^\top \theta + b, \qquad \phi(s_t, u)^\top = \left[ \left\{ \frac{1}{|y^{(l)}_{*,c}|} \left\| y^{(l)}_{*,c} - f^{(l)}_c\big(y^{(l)}_{t,c}, u\big) \right\|_2^2 \right\}_{c,l}^{\top}, \;\; \left\{ u_j^2 \right\}_j^{\top} \right].$$

We denote the state of the MDP as $s_t = (x_t, x_*)$ and add a bias $b$ to the Q-function. The servoing policy is then simply $\pi(s_t) = \arg\min_u Q_{\theta,b}(s_t, u)$. For reinforcement learning, we optimize the weights $\theta$ but keep the feature representation and its dynamics fixed.

5.3 LEARNING THE Q-FUNCTION WITH FITTED Q-ITERATION

Reinforcement learning methods that learn a Q-function do so by minimizing the Bellman error:

$$\Big\| Q(s_t, u_t) - \Big( c_t + \gamma \min_u Q(s_{t+1}, u) \Big) \Big\|_2^2 \quad (4)$$

In fitted Q-iteration, the agent iteratively gathers a dataset $\{(s_t^{(i)}, u_t^{(i)}, c_t^{(i)}, s_{t+1}^{(i)})\}_{i=1}^{N}$ of $N$ samples according to an exploration policy, and then minimizes the Bellman error using this dataset. We use the term sampling iteration to refer to each iteration of this procedure. At the beginning of each sampling iteration, the current policy with added Gaussian noise is used as the exploration policy.

It is typically hard or unstable to optimize for both Q-functions that appear in the Bellman error of Equation (4), so it is usually optimized by iteratively optimizing the current Q-function while keeping the target Q-function constant. However, we notice that, for a given state, the action that minimizes the Q-values is the same for any non-negative scaling $\alpha$ of $\theta$ and for any bias $b$. Thus, to speed up the optimization of the Q-function, we first set $\theta^{(k-\frac{1}{2})}$ and $b^{(k-\frac{1}{2})}$ by jointly solving for $\alpha$ and $b$ of both the current and target Q-function:

$$\min_{\alpha \geq 0,\; b} \; \frac{1}{N} \sum_{i=1}^{N} \Big( Q_{\alpha\theta^{(k-1)},\, b}\big(s_t^{(i)}, u_t^{(i)}\big) - \Big( c_t^{(i)} + \gamma \min_u Q_{\alpha\theta^{(k-1)},\, b}\big(s_{t+1}^{(i)}, u\big) \Big) \Big)^2 + \nu \big\| \alpha\theta^{(k-1)} \big\|_2^2 \quad (5)$$

This is similar to how, in policy evaluation, state values can be computed by solving a linear system. We regularize the parameters with an $\ell_2$ penalty, weighted by $\nu \geq 0$. We use the term FQI iteration to refer to each iteration $k$ of optimizing the Bellman error, and we use the notation $\theta^{(k-\frac{1}{2})}$ to denote an intermediate step between iterations $(k-1)$ and $(k)$. The parameters can then be updated with $\theta^{(k-\frac{1}{2})} = \alpha^{(k-\frac{1}{2})} \theta^{(k-1)}$. Then, we update $\theta^{(k)}$ and $b^{(k)}$ by optimizing for $\theta$ and $b$ of the current Q-function while keeping the parameters of the target Q-function fixed:

$$\min_{\theta \geq 0,\; b} \; \frac{1}{N} \sum_{i=1}^{N} \Big( Q_{\theta, b}\big(s_t^{(i)}, u_t^{(i)}\big) - \Big( c_t^{(i)} + \gamma \min_u Q_{\theta^{(k-\frac{1}{2})},\, b^{(k-\frac{1}{2})}}\big(s_{t+1}^{(i)}, u\big) \Big) \Big)^2 + \nu \|\theta\|_2^2 \quad (6)$$

A summary of the algorithm used to learn the feature weights is shown in Algorithm 1.

Algorithm 1: FQI with initialization of policy-independent parameters
1: procedure FQI(θ^(0), σ²_exploration, ν)
2:   for s = 1, ..., S do                        ▷ sampling iterations
3:     Gather dataset {(s_t^(i), u_t^(i), c_t^(i), s_{t+1}^(i))}_{i=1}^{N} using the exploration policy (the current policy with Gaussian noise of variance σ²_exploration)
4:     for k = 1, ..., K do                      ▷ FQI iterations
5:       Fit α^(k-1/2) and b^(k-1/2) using (5)
6:       θ^(k-1/2) ← α^(k-1/2) θ^(k-1)
7:       Fit θ^(k) and b^(k) using (6)
8:     θ^(0) ← θ^(K)
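The joint fit of (α, b) in Equation (5) reduces to a tiny regularized least-squares problem, because the greedy action on the right-hand side is unchanged by scaling θ or shifting by b. Here is a minimal numpy sketch under that reading, with γ the discount; the residual decomposition and all names are this sketch's own assumptions, not the paper's code.

```python
import numpy as np

def fqi_scaling_step(phi_t, phi_next_min, c, theta, gamma, nu):
    """One pre-step of FQI (cf. Eq. 5): fit the scaling alpha >= 0 and bias b.
    phi_t[i] = phi(s_t^i, u_t^i); phi_next_min[i] = phi(s_{t+1}^i, u*) at the
    greedy action u*. With a_i = phi_t[i].theta and m_i = phi_next_min[i].theta,
    the residual is alpha*(a_i - gamma*m_i) + b*(1 - gamma) - c_i."""
    a = phi_t @ theta                      # current-Q feature products
    m = phi_next_min @ theta               # target-side feature products
    X = np.stack([a - gamma * m, np.full(len(a), 1.0 - gamma)], axis=1)
    reg = nu * (theta @ theta) * np.diag([1.0, 0.0])   # penalize alpha only
    beta = np.linalg.solve(X.T @ X / len(a) + reg, X.T @ c / len(a))
    alpha, b = max(beta[0], 0.0), beta[1]  # project onto alpha >= 0
    return alpha * theta, b                # theta^(k-1/2), b^(k-1/2)
```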
Figure 3: Cars used to learn the dynamics and the feature weights. They were also used in some of the test experiments.

Figure 4: Novel cars used only in the test experiments. They were never seen during training or validation.

Figure 5: Costs of test executions using various feature dynamics models (average cost per feature dynamics: pixel fully connected, pixel locally connected, and VGG conv1_2 through conv5_3), where the feature weights are optimized with FQI. We test on cars that were used during learning (left plot) and on novel cars that were only used at test time (right plot). The reported values are the mean and standard error across 100 trajectories, of up to 100 time steps each. The policies based on pixel intensities use either fully connected or locally connected dynamics, whereas all the policies based on VGG features use locally connected dynamics. The policies based on deeper VGG features generally achieve better performance, except for the deepest feature representation, VGG conv5_3, which is not as suitable for approximating Q-values. The policies based on pixel intensities and VGG conv5_3 features perform worse on the novel cars. However, VGG features conv1_2 through conv4_3 achieve some degree of generalization on the novel cars.

6 EXPERIMENTS

We evaluate the performance of the model for visual servoing in a simulated environment. The simulated quadcopter is governed by rigid body dynamics. The robot has 4 degrees of freedom, corresponding to translation along three axes and the yaw angle. This simulation is inspired by tasks in which an autonomous quadcopter flies above a city, with the goal of following some target object (e.g., a car).

6.1 LEARNING FEATURE DYNAMICS AND WEIGHTS WITH FQI

The dynamics for each of the features were trained using a dataset of 10000 samples (corresponding to 100 trajectories) with ADAM (Kingma & Ba, 2014). A single dynamics model was learned for each feature representation, for all the training cars (Figure 3). This training set was generated by executing a hand-coded policy that navigates the quadcopter around a car for 100 time steps per trajectory, while the car moves around the city.

We used the proposed FQI algorithm to learn the weightings of the features and the control regularizer. At every sampling iteration, the current policy was executed with Gaussian noise to gather data from 10 trajectories. All the trajectories in our experiments were up to 100 time steps long. The immediate cost received by the agent encodes the error of the target in image coordinates (details in Appendix B). The parameters were then iteratively updated by running K = 10 iterations of FQI. We ran the overall algorithm for only S = 2 sampling iterations and chose the parameters that achieved the best performance on 10 validation trajectories. These validation trajectories were obtained by randomly choosing 10 cars from the set of training cars, randomly sampling initial states, and executing the policy with the parameters of the current iteration. All the experiments share the same set of validation trajectories.

Table 1: Sample observations from test executions in our experiments with the novel cars, and the costs for each trajectory, for different feature dynamics (observation image strips omitted here; per-trajectory costs: pixel, locally connected: 0.95, 6.26, 14.49; VGG conv4_3: 0.38, 0.48, 1.02). We use the weights learned by our FQI algorithm. In each row, we show the observations of every 10 steps and the last one. The first observation of each trajectory is used as the target observation. The trajectories shown here were chosen to reflect different types of behaviors. The servoing policy based on pixel feature dynamics can generally follow cars that can be discriminated based on RGB pixel intensities (e.g., a yellow car with a relatively uniform background). However, it performs poorly when distractor objects appear throughout the execution (e.g., a lamp) or when they appear in the target image (e.g., the crosswalk markings on the road). On the other hand, VGG conv4_3 features are able to discriminate the car from distractor objects and the background, and the feature weights learned by the FQI algorithm are able to leverage this. Additional sample executions with other feature dynamics can be found in Table 3 in the Appendix.

6.2 COMPARISON OF FEATURE REPRESENTATIONS FOR SERVOING

We compare the servoing performance for various feature dynamics models, where the weights are optimized with FQI. We execute the learned policies on 100 test trajectories and report the average cost of the trajectory rollouts in Figure 5. The cost of a single trajectory is the (undiscounted) sum of the costs $c_t$. We test the policies with cars that were seen during training, as well as with a set of novel cars (Figure 4), to evaluate the generalization of the learned dynamics and optimized policies.

The test trajectories were obtained by randomly sampling 100 cars (with replacement) from one of the two sets of cars, and randomly sampling initial states (which are different from the ones used for validation). For consistency and reproducibility, the same sampled cars and initial states were used across all the test experiments, and the same initial states were used for both sets of cars. These test trajectories were never used during the development of the algorithm or for choosing hyperparameters.

From these results, we notice that the policies based on deeper VGG features, up to VGG conv4_3, generally achieve better performance.
However, the deepest feature representation, VGG conv5_3, is not as suitable for approximating Q-values. We hypothesize that this feature might be too spatially invariant and might lack the spatial information necessary to differentiate among different car positions. The policies based on pixel intensities and VGG conv5_3 features perform worse on the novel cars. However, VGG features conv1_2 through conv4_3 achieve some degree of generalization on the novel cars.
We show sample trajectories in Table 1. The policy based on pixel intensities is susceptible to occlusions and distractor objects that appear in the target image or during executions, because distinguishing these occlusions and distractors from the cars cannot be done using RGB features alone.

[Figure 6: bar plot of average cost (0 to 5) per method. The x-axis categories are: ORB feature points IBVS; C-COT visual tracker IBVS; CNN+TRPO (≥20000); unweighted feature dynamics + CEM (1500); feature dynamics + TRPO (≥80); feature dynamics + TRPO (≥2000); ours, feature dynamics + FQI (20). The first three are grouped as "prior methods that do not use learned feature dynamics"; the rest as "methods that use VGG conv4_3 features and their learned locally connected feature dynamics".]

Figure 6: Comparison of costs on test executions of prior methods against our method based on VGG conv4_3 feature dynamics. These costs are from executions with the training cars; the costs are comparable when testing with the novel cars (Table 2). The first two methods use classical image-based visual servoing (IBVS) with feature points from an off-the-shelf keypoint detector and descriptor extractor (ORB features), and with feature points extracted from bounding boxes predicted by a state-of-the-art visual tracker (C-COT tracker), respectively. The third method trains a convolutional neural network (CNN) policy end-to-end with Trust Region Policy Optimization (TRPO). The other methods use the servoing policy based on VGG conv4_3 feature dynamics, either with unweighted features or with weights trained with TRPO for either 2 or 50 iterations. In the case of unweighted features, we learned the weights λ and a single feature weight w with the cross entropy method (CEM). We report the number of training trajectories in parentheses for the methods that require learning. For TRPO, we use a fixed number of training samples per iteration, whereas for CEM and FQI, we use a fixed number of training trajectories per iteration. We use a batch size of 4000 samples for TRPO, which means that at least 40 trajectories were used per iteration (since trajectories can terminate early, i.e. in fewer than 100 time steps).

6.3 COMPARISON OF WEIGHTINGS FROM OTHER OPTIMIZATION METHODS
We compare our policy using conv4_3 feature dynamics, with weights optimized by FQI, against policies that use these dynamics but with either no feature weighting or weights optimized by other algorithms.
For the case of no weighting, we use a single feature weight w but optimize the relative weighting of the controls with the cross entropy method (CEM) (De Boer et al., 2005). For the other cases, we learn the weights with Trust Region Policy Optimization (TRPO) (Schulman et al., 2015). Since the servoing policy is the minimizer of a quadratic objective (Equation (3)), we represent the policy as a neural network that has a matrix inverse operation at the output. We train this network for 2 and 50 sampling iterations, and use a batch size of 4000 samples per iteration.
All of these methods use the same feature representation as ours, the only difference being how the weights w and λ are chosen.
We report the average costs of these methods on the right of Figure 6. In 2 sampling iterations, the policy learned with TRPO does not improve by much, whereas our policy learned with FQI significantly outperforms the other policies. The policy learned with TRPO improves further in 50 iterations; however, the cost incurred by this policy is still about one and a half times the cost of our policy, despite using more than 100 times as many trajectories.

6.4 COMPARISON TO PRIOR METHODS
We also consider other methods that do not use the dynamics-based servoing policy that we propose. We report their average performance on the left of Figure 6.
For one of the prior methods, we train a convolutional neural network (CNN) policy end-to-end with TRPO. The policy is parametrized as a 5-layer CNN, consisting of 2 convolutional and 3 fully-connected layers, with ReLU activations except for the output layer; the convolutional layers use 16 filters (4×4, stride 2) each and the first 2 fully-connected layers use 32 hidden units each. The policy takes in raw pixel intensities and outputs controls.
This policy achieves a modest performance (although still worse than the policies based on conv4_3 feature dynamics), but it requires significantly more training samples than any of the other learning-based methods. We also trained CNN policies that take in extracted VGG features (without any dynamics) as inputs, but they perform worse (see Table 4 in the Appendix). This suggests that, given a policy parametrization that is expressive enough and a large number of training samples, it is better to directly provide the raw pixel-intensity images to the policy instead of extracted VGG features. This is because VGG features are not optimized for this task and their representation loses some information that is useful for servoing.
The other two prior methods use classical image-based visual servoing (IBVS) (Chaumette & Hutchinson, 2006) with respect to Oriented FAST and Rotated BRIEF (ORB) feature points (Rublee et al., 2011), or with respect to feature points extracted from a visual tracker. For the former, the target features consist of only the ORB feature points that belong to the car, and this specifies that the car is relevant for the task. For the tracker-based method, we use the Continuous Convolution Operator Tracker (C-COT) (Danelljan et al., 2016) (the current state-of-the-art visual tracker) to get bounding boxes around the car, and use the four corners of the box as the feature points for servoing. We provide the car's ground truth bounding box of the first frame as an input to the C-COT tracker. For all of the IBVS methods, we provide the ground truth depth values of the feature points, which are used in the algorithm's interaction matrix⁵.
The first method performs poorly, in part because ORB features are not discriminative enough for some of the cars, and the target feature points are sometimes matched to feature points that are not on the car. The tracker-based method achieves a relatively good performance. The gap in performance with respect to our method is in part due to the lack of car dynamics information in the IBVS model, whereas our method implicitly incorporates that in the learned feature dynamics. It is also worth noting that the tracker-based policy runs significantly slower than our method.
The open-source implementation of the C-COT tracker⁶ runs at about 1 Hz, whereas our policy based on conv4_3 features runs at about 16 Hz. Most of the computation time of our method is spent computing features from the VGG network, so there is room for speedups if we use a network that is less computationally demanding.

7 DISCUSSION
Manual design of visual features and dynamics models can limit the applicability of visual servoing approaches. We described an approach that combines learned visual features with learning predictive dynamics models and reinforcement learning to learn visual servoing mechanisms. Our experiments demonstrate that standard deep features, in our case taken from a model trained for object classification, can be used together with a bilinear predictive model to learn an effective visual servo that is robust to visual variation, changes in viewing angle and appearance, and occlusions. For control we propose to learn Q-values, building on fitted Q-iteration, which at execution time allows for one-step lookahead calculations that optimize long term objectives. Our method can learn an effective visual servo on a complex synthetic car following benchmark using just 20 training trajectory samples for reinforcement learning. We demonstrate substantial improvement over a conventional approach based on image pixels or hand-designed keypoints, and we show an improvement in sample-efficiency of more than two orders of magnitude over standard model-free deep reinforcement learning algorithms.

ACKNOWLEDGEMENTS
This research was funded in part by the Army Research Office through the MAST program and the Berkeley DeepDrive consortium. Alex Lee was also supported by the NSF GRFP.

⁵The term interaction matrix, or feature Jacobian, is used in the visual servo literature to denote the Jacobian of the features with respect to the control.
⁶https://github.com/martin-danelljan/Continuous-ConvOp

REFERENCES
Herbert Bay, Tinne Tuytelaars, and Luc Van Gool. SURF: Speeded up robust features. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 404–417. Springer, 2006.
Guillaume Caron, Eric Marchand, and El Mustapha Mouaddib. Photometric visual servoing for omnidirectional cameras. Autonomous Robots, 35(2-3):177–193, 2013.
Andrea Censi and Richard M Murray. Bootstrapping bilinear models of simple vehicles. The International Journal of Robotics Research, 34(8):1087–1113, 2015.
Francois Chaumette and Seth Hutchinson. Visual servo control. I. Basic approaches. IEEE Robotics & Automation Magazine, 13(4):82–90, 2006.
Jian Chen, Warren E Dixon, M Dawson, and Michael McIntyre. Homography-based visual servo tracking control of a wheeled mobile robot. IEEE Transactions on Robotics, 22(2):406–415, 2006.
Christophe Collewet and Eric Marchand. Photometric visual servoing. IEEE Transactions on Robotics, 27(4):828–834, 2011.
Christophe Collewet, Eric Marchand, and Francois Chaumette. Visual servoing set free from image processing. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 81–86. IEEE, 2008.
Peter I Corke. Visual control of robot manipulators – A review. Visual Servoing, 7:1–31, 1993.
Martin Danelljan, Andreas Robinson, Fahad Shahbaz Khan, and Michael Felsberg. Beyond correlation filters: Learning continuous convolution operators for visual tracking. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 472–488. Springer, 2016.
Pieter-Tjerk De Boer, Dirk P Kroese, Shie Mannor, and Reuven Y Rubinstein. A tutorial on the cross-entropy method. Annals of Operations Research, 134(1):19–67, 2005.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248–255. IEEE, 2009.
Guilherme N DeSouza and Avinash C Kak. Vision for mobile robot navigation: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(2):237–267, 2002.
Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In Proceedings of the International Conference on Machine Learning (ICML), volume 32, pp. 647–655, 2014.
Damien Ernst, Pierre Geurts, and Louis Wehenkel. Tree-based batch mode reinforcement learning. Journal of Machine Learning Research, 6(Apr):503–556, 2005.
Bernard Espiau, Francois Chaumette, and Patrick Rives. A new approach to visual servoing in robotics. IEEE Transactions on Robotics and Automation, 8(3):313–326, 2002.
Amir Massoud Farahmand, Mohammad Ghavamzadeh, Csaba Szepesvári, and Shie Mannor. Regularized fitted Q-iteration for planning in continuous-space Markovian decision problems. In American Control Conference, 2009. ACC'09., pp. 725–730. IEEE, 2009.
John T Feddema and Owen Robert Mitchell. Vision-guided servoing with feature-based trajectory generation (for robots). IEEE Transactions on Robotics and Automation, 5(5):691–700, 1989.
Geoffrey J Gordon. Stable function approximation in dynamic programming. In Proceedings of the International Conference on Machine Learning (ICML), 1995.
Koichi Hashimoto. Visual Servoing, volume 7. World Scientific, 1993.
Koh Hosoda and Minoru Asada. Versatile visual servoing without knowledge of true Jacobian. In Intelligent Robots and Systems '94. 'Advanced Robotic Systems and the Real World', IROS '94. Proceedings of the IEEE/RSJ/GI International Conference on, volume 1, pp. 186–193. IEEE, 1994.
Seth Hutchinson, Gregory D Hager, and Peter I Corke. A tutorial on visual servo control. IEEE Transactions on Robotics and Automation, 12(5):651–670, 1996.
Martin Jagersand, Olac Fuentes, and Randal Nelson. Experimental evaluation of uncalibrated visual servoing for precision manipulation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), volume 4, pp. 2874–2880. IEEE, 1997.
Xu Jia, Bert De Brabandere, Tinne Tuytelaars, and Luc V Gool. Dynamic filter networks. In Advances in Neural Information Processing Systems (NIPS), pp. 667–675, 2016.
Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia, pp. 675–678. ACM, 2014.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
Danica Kragic and Henrik I Christensen. Survey on visual servoing for manipulation. Computational Vision and Active Perception Laboratory, Fiskartorpsv, 15, 2002.
Thomas Lampe and Martin Riedmiller. Acquiring visual servoing reaching and grasping skills using neural reinforcement learning. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE, 2013.
Sascha Lange, Martin Riedmiller, and Arne Voigtlander. Autonomous reinforcement learning on raw visual input data in a real world application. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE, 2012.
Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(39):1–40, 2016.
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. CoRR, abs/1509.02971, 2015.
William Lotter, Gabriel Kreiman, and David Cox. Deep predictive coding networks for video prediction and unsupervised learning. CoRR, abs/1605.08104, 2016.
David G Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.
Ezio Malis, Francois Chaumette, and Sylvie Boudet. 2 1/2 D visual servoing. IEEE Transactions on Robotics and Automation, 15(2):238–250, 1999.
Michaël Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. CoRR, abs/1511.05440, 2015.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin A. Riedmiller. Playing Atari with deep reinforcement learning. CoRR, abs/1312.5602, 2013.
Kartik Mohta, Vijay Kumar, and Kostas Daniilidis. Vision-based control of a quadrotor for perching on lines. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 3130–3136. IEEE, 2014.
Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. Action-conditional video prediction using deep networks in Atari games. In Advances in Neural Information Processing Systems (NIPS), pp. 2863–2871, 2015.
Martin Riedmiller. Neural fitted Q iteration – First experiences with a data efficient neural reinforcement learning method. In European Conference on Machine Learning, pp. 317–328. Springer, 2005.
Ethan Rublee, Vincent Rabaud, Kurt Konolige, and Gary Bradski. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 2564–2571. IEEE, 2011.
Mehdi Sadeghzadeh, David Calvert, and Hussein A Abdullah. Self-learning visual servoing of robot manipulator using explanation-based fuzzy neural networks and Q-learning. Journal of Intelligent & Robotic Systems, 78(1):83–104, 2015.
John Schulman, Sergey Levine, Pieter Abbeel, Michael I Jordan, and Philipp Moritz. Trust region policy optimization. In Proceedings of the International Conference on Machine Learning (ICML), pp. 1889–1897, 2015.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics. In Advances in Neural Information Processing Systems (NIPS), pp. 613–621, 2016.
Jacob Walker, Carl Doersch, Abhinav Gupta, and Martial Hebert. An uncertain future: Forecasting from static images using variational autoencoders. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 835–851. Springer, 2016.
Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems (NIPS), pp. 2746–2754, 2015.
Lee E Weiss, Arthur C Sanderson, and Charles P Neuman. Dynamic sensor-based control of robots with visual feedback. IEEE Journal on Robotics and Automation, 3(5):404–417, 1987.
William J Wilson, Carol C Williams Hulls, and Graham S Bell. Relative end-effector control using cartesian position based visual servoing. IEEE Transactions on Robotics and Automation, 12(5):684–696, 1996.
Tianfan Xue, Jiajun Wu, Katherine Bouman, and Bill Freeman. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. In Advances in Neural Information Processing Systems (NIPS), pp. 91–99, 2016.
Billibon H Yoshimi and Peter K Allen. Active, uncalibrated visual servoing. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 156–161. IEEE, 1994.
Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. CoRR, abs/1511.07122, 2015.

A LINEARIZATION OF THE BILINEAR DYNAMICS
The optimization of Equation (3) can be solved efficiently by using a linearization of the dynamics,

$$f_c^{(l)}\big(y_{t,c}^{(l)}, \mathbf{u}\big) = f_c^{(l)}\big(y_{t,c}^{(l)}, \bar{\mathbf{u}}\big) + J_{t,c}^{(l)}\,(\mathbf{u} - \bar{\mathbf{u}}) = f_c^{(l)}\big(y_{t,c}^{(l)}, \mathbf{0}\big) + J_{t,c}^{(l)}\,\mathbf{u}, \qquad (7)$$

where $J_{t,c}^{(l)}$ is the Jacobian matrix with partial derivatives $\frac{\partial f_c^{(l)}}{\partial \mathbf{u}}\big(y_{t,c}^{(l)}, \bar{\mathbf{u}}\big)$ and $\bar{\mathbf{u}}$ is the linearization point. Since the bilinear dynamics are linear with respect to the controls, this linearization is exact and the Jacobian matrix does not depend on $\bar{\mathbf{u}}$. Without loss of generality, we set $\bar{\mathbf{u}} = \mathbf{0}$.
Furthermore, the bilinear dynamics allow the Jacobian matrix to be computed efficiently by simply doing a forward pass through the model. For the locally bilinear dynamics of Equation (2), the j-th column of the Jacobian matrix is given by

$$J_{t,c,j}^{(l)} = \frac{\partial f_c^{(l)}}{\partial u_j}\big(y_{t,c}^{(l)}, \mathbf{0}\big) = W_{c,j}^{(l)}\, y_{t,c}^{(l)} + B_{c,j}^{(l)}. \qquad (8)$$
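As a concrete illustration of Equation (8), the sketch below assembles the Jacobian of a bilinear model column by column, with one forward-pass-like product per control. The dense weight shapes are assumptions chosen for readability; the paper's dynamics are locally connected, which this sketch does not reproduce.

```python
import numpy as np

def bilinear_jacobian(W, B, y):
    """Jacobian of the control-dependent part of f(y, u) w.r.t. u (Eq. 8).

    W: (n_controls, dim_y, dim_y) per-control weight matrices (assumed dense)
    B: (dim_y, n_controls) per-control bias columns
    y: (dim_y,) current feature vector
    Returns J of shape (dim_y, n_controls); exact, since f is linear in u.
    """
    n_controls = W.shape[0]
    # Column j is W_j @ y + B_j, i.e. one forward evaluation per control.
    return np.stack([W[j] @ y + B[:, j] for j in range(n_controls)], axis=1)

# Using the exact linearization f(y, u) = f(y, 0) + J u from Eq. (7):
W = np.zeros((4, 8, 8)); B = np.zeros((8, 4)); y = np.ones(8)
J = bilinear_jacobian(W, B, y)
u = np.array([0.1, 0.0, 0.0, 0.05])
delta = J @ u   # control-dependent part of the predicted next features
```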
B SERVOING COST FUNCTION FOR REINFORCEMENT LEARNING
The goal of reinforcement learning is to find a policy that maximizes the expected sum of rewards, or equivalently, a policy that minimizes the expected sum of costs. The cost should be one that quantifies progress towards the goal. We define the cost function in terms of the position of the target object (in the camera's local frame) after the action has been taken,

$$c(\mathbf{s}_t, \mathbf{u}_t, \mathbf{s}_{t+1}) = \begin{cases} \sqrt{\left(\frac{p_{t+1}^x}{p_{t+1}^z}\right)^2 + \left(\frac{p_{t+1}^y}{p_{t+1}^z}\right)^2 + \left(\frac{1}{p_{t+1}^z} - \frac{1}{p^{z*}}\right)^2}, & \text{if } \|\mathbf{p}_{t+1}\|_2 \geq \tau \text{ and car in FOV} \\ (T - t + 1)\, c(\cdot, \cdot, \mathbf{s}_t), & \text{otherwise,} \end{cases} \qquad (9)$$

where T is the maximum trajectory length. The episode terminates early if the camera is too close to the car (less than a distance τ) or the car's origin is outside the camera's field of view (FOV). The car's position at time t is $\mathbf{p}_t = (p_t^x, p_t^y, p_t^z)$ and the car's target position is $\mathbf{p}^* = (0, 0, p^{z*})$, both in the camera's local frame (the z-direction is forward). Our experiments use T = 100 and τ = 4 m.
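A direct transcription of Equation (9) into code reads as follows; the target depth p^z* is an assumed placeholder value, not a constant from the paper.

```python
import numpy as np

T, TAU = 100, 4.0          # maximum trajectory length and minimum distance
P_Z_STAR = 15.0            # assumed target depth p^z* (placeholder)

def servoing_cost(p_next, t, in_fov, prev_cost):
    """Cost of Eq. (9): p_next = (px, py, pz) is the car position in the camera
    frame after the action; prev_cost is the cost evaluated at the previous state."""
    px, py, pz = p_next
    if np.linalg.norm(p_next) >= TAU and in_fov:
        return np.sqrt((px / pz) ** 2 + (py / pz) ** 2
                       + (1.0 / pz - 1.0 / P_Z_STAR) ** 2)
    # Early termination: charge the previous state's cost for the remaining horizon.
    return (T - t + 1) * prev_cost
```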
C EXPERIMENT DETAILS

C.1 TASK SETUP
The camera is attached to the vehicle slightly in front of the robot's origin, facing down at an angle of π/6 rad, similar to a commercial quadcopter drone. The robot has 4 degrees of freedom, corresponding to translation and yaw angle. Pitch and roll are held fixed.
In our simulations, the quadcopter follows a car that drives at 1 m/s along city roads during training and testing. The quadcopter's speed is limited to within 10 m/s for each translational degree of freedom, and its angular speed is limited to within π/2 rad/s. The simulator runs at 10 Hz. For each trajectory, a car is chosen randomly from a set of cars and placed randomly on one of the roads. The quadcopter is initialized right behind the car, in the desired relative position for following. The image observed at the beginning of the trajectory is used as the goal observation.

C.2 LEARNING FEATURE DYNAMICS
The dynamics of all the features were trained using a dataset of 10000 triplets (x_t, u_t, x_{t+1}). The observations are 128×128 RGB images and the actions are 4-dimensional vectors of real numbers encoding the linear and angular (yaw) velocities. The actions are normalized to between −1 and 1.
The training set was generated from 100 trajectories of a quadcopter following a car around the city with some randomness. Each trajectory was 100 steps long. Only 5 training cars were shown during learning. The generation process of each trajectory is as follows: First, a car is chosen at random from the set of available cars and is randomly placed on one of the roads. Then, the quadcopter is placed at some random position relative to the car's horizontal pose, which is the car's pose rotated so that its vertical axis matches that of the world. This quadcopter position is uniformly sampled in cylindrical coordinates relative to the car's horizontal pose, with heights in the interval 12 m to 18 m and azimuthal angles in the interval −π/2 rad to π/2 rad (where the origin of the azimuthal angle is the back of the car). The radii and yaw angles are initialized so that the car is in the middle of the image. At every time step, the robot takes an action that moves it towards a target pose, with some additive Gaussian noise (σ = 0.2). The target pose is sampled according to the same procedure as the initial pose, and it is sampled once at the beginning of each trajectory.
We try the fully and locally connected dynamics for pixel intensities to better understand the performance trade-offs of assuming locally connected dynamics. We do not use the latter for the semantic features since they are too high-dimensional for the dynamics model to fit in memory. The dynamics models were trained with ADAM using 10000 iterations, a batch size of 32, a learning rate of 0.001, momentums of 0.9 and 0.999, and a weight decay of 0.0005.

(a) Costs when using the set of cars seen during learning.
Feature dynamics | unweighted + CEM (1500) | CEM (3250) | TRPO (≥80) | TRPO (≥2000) | ours, FQI (20)
pixel, FC    | 8.20±0.66 | 7.77±0.66 | 9.56±0.62  | 8.03±0.66 | 7.92±0.67
pixel, LC    | 8.07±0.74 | 7.13±0.74 | 10.11±0.60 | 7.97±0.72 | 7.98±0.77
VGG conv1_2  | 2.22±0.38 | n/a       | 2.06±0.35  | 1.66±0.31 | 1.89±0.32
VGG conv2_2  | 2.40±0.47 | n/a       | 2.42±0.47  | 1.89±0.40 | 1.40±0.29
VGG conv3_3  | 2.91±0.52 | n/a       | 2.87±0.53  | 1.59±0.42 | 1.56±0.40
VGG conv4_3  | 2.70±0.52 | n/a       | 2.57±0.49  | 1.69±0.41 | 1.11±0.29
VGG conv5_3  | 3.68±0.47 | n/a       | 3.69±0.48  | 3.16±0.48 | 2.49±0.35

(b) Costs when using novel cars, none of which were seen during learning.
Feature dynamics | unweighted + CEM (1500) | CEM (3250) | TRPO (≥80) | TRPO (≥2000) | ours, FQI (20)
pixel, FC    | 8.84±0.68 | 8.66±0.70 | 10.01±0.62 | 8.75±0.67 | 9.00±0.70
pixel, LC    | 8.37±0.75 | 7.17±0.75 | 11.29±0.57 | 8.25±0.71 | 8.36±0.79
VGG conv1_2  | 2.03±0.43 | n/a       | 1.79±0.36  | 1.42±0.33 | 1.78±0.37
VGG conv2_2  | 2.01±0.44 | n/a       | 2.00±0.45  | 1.26±0.30 | 1.28±0.30
VGG conv3_3  | 2.03±0.47 | n/a       | 2.08±0.47  | 1.46±0.37 | 1.04±0.31
VGG conv4_3  | 2.40±0.50 | n/a       | 2.57±0.53  | 1.48±0.36 | 0.90±0.26
VGG conv5_3  | 3.31±0.45 | n/a       | 3.55±0.50  | 2.76±0.42 | 2.56±0.41

Table 2: Costs on test executions of the dynamics-based servoing policies for different feature dynamics and weightings of the features. The reported numbers are the mean and standard error across 100 test trajectories, of up to 100 time steps each. We test on executions with the training cars and the novel cars; for consistency, the novel cars follow the same route as the training cars. We compare the performance of policies with unweighted features or with weights learned by other methods. For the case of unweighted feature dynamics, we use the cross entropy method (CEM) to learn the relative weights λ of the controls and the single feature weight w. For the other cases, we learn the weights with CEM, with Trust Region Policy Optimization (TRPO) for either 2 or 50 iterations, or with our proposed FQI algorithm. CEM searches over the full space of policy parameters w and λ, but it was only run for pixel features since it does not scale to high-dimensional problems. We report the number of training trajectories in parentheses. For TRPO, we use a fixed number of training samples per iteration, whereas for CEM and FQI, we use a fixed number of training trajectories per iteration. We use a batch size of 4000 samples for TRPO, which means that at least 40 trajectories were used per iteration, since trajectories can terminate early, i.e. in fewer than 100 time steps.

C.3 LEARNING WEIGHTING OF FEATURE DYNAMICS WITH REINFORCEMENT LEARNING
We use CEM, TRPO and FQI to learn the feature weighting, and report the performance of the learned policies in Table 2. We use the cost function described in Appendix B, a discount factor of γ = 0.9, and trajectories of up to 100 steps. All the algorithms used initial weights of w = 1 and λ = 1, and a Gaussian exploration policy with the current policy as the mean and a fixed standard deviation σ_exploration = 0.2.
For the case of unweighted features, we use CEM to optimize for a single weight w and for the weights λ.
For the case of weighted features, we use CEM to optimize for the full space of parameters, but we only do that for the pixel feature dynamics since CEM does not scale to high-dimensional problems, which is the case for all the VGG features. Each iteration of CEM performs a certain number of noisy evaluations and selects the top 20% for the elite set. The number of noisy evaluations per iteration was 3 times the number of parameters being optimized. Each noisy evaluation used the average sum of costs of 10 trajectory rollouts as its evaluation metric. The parameters of the last iteration were used for the final policy. The policies with unweighted feature dynamics and the policies with pixel feature dynamics were trained for 10 and 25 iterations, respectively.
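For reference, a minimal cross-entropy method matching the description above might look like this; the 20% elite fraction and rollout-averaged evaluation follow the text, while the non-negativity clipping and Gaussian resampling are assumptions, and `rollout_cost` is a hypothetical stand-in for executing the servoing policy and summing its costs.

```python
import numpy as np

def cem(init_params, rollout_cost, iters=25, elite_frac=0.2, init_std=1.0):
    """Cross-entropy method over the policy parameters (e.g. w and lambda)."""
    mean = np.array(init_params, dtype=float)
    std = np.full_like(mean, init_std)
    n_samples = 3 * len(mean)              # 3x the number of parameters per iteration
    for _ in range(iters):
        samples = mean + std * np.random.randn(n_samples, len(mean))
        samples = np.maximum(samples, 0.0)  # weights are constrained to be non-negative
        # Each noisy evaluation averages the summed costs of 10 trajectory rollouts.
        scores = np.array([np.mean([rollout_cost(p) for _ in range(10)])
                           for p in samples])
        elite = samples[np.argsort(scores)[:int(elite_frac * n_samples)]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean
```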
We use TRPO to optimize for the full space of parameters for each of the feature dynamics we consider in this work. We use a Gaussian policy, where the mean is the servoing policy of Equation (3) and the standard deviation is fixed to σ_exploration = 0.2 (i.e. we do not learn the standard deviation). Since the parameters are constrained to be non-negative, we parametrize the TRPO policies with √w and √λ. We use a Gaussian baseline, where the mean is a 5-layer CNN, consisting of 2 convolutional and 3 fully connected layers, and a standard deviation that is initialized to 1. The convolutional layers use 16 filters (4×4, stride 2) each, the first 2 fully-connected layers use 32 hidden units each, and all the layers except for the last one use ReLU activations. The input of the baseline network is the features (either pixel intensities or VGG features) corresponding to the feature dynamics being used. The parameters of the last iteration were used for the final policy. The policies are trained with TRPO for 50 iterations, a batch size of 4000 samples per iteration, and a step size of 0.01.
We use our proposed FQI algorithm to optimize for the weights w, λ, and surpass the other methods in terms of performance on test executions, sample efficiency, and overall computation efficiency⁷. The updates of the inner iteration of our algorithm are computationally efficient; since the data is fixed for a given sampling iteration, we can precompute φ(s_t, u_t) and certain terms of φ(s_{t+1}, ·). The parameters that achieved the best performance on 10 validation trajectories were used for the final policy. The policies are trained with FQI for S = 2 sampling iterations, a batch size of 10 trajectories per sampling iteration, K = 10 inner iterations per sampling iteration, and a regularization coefficient of ν = 0.1. We found that regularization of the parameters was important for the algorithm to converge. We show sample trajectories of the resulting policies in Table 3.

[Table 3 layout: each row shows a feature-dynamics model, observations sampled every 10 steps from two test executions, and the cost of each trajectory.]

Feature dynamics          | Costs of the two shown trajectories
pixel, fully connected    | 24.74, 16.69
pixel, locally connected  | 24.92, 16.47
VGG conv1_2               | 15.91, 1.57
VGG conv2_2               | 7.53, 2.56
VGG conv3_3               | 6.01, 3.76
VGG conv4_3               | 5.94, 4.31
VGG conv5_3               | 15.51, 17.39

Table 3: Sample observations from test executions in our experiments, and the costs for each trajectory, for different feature dynamics. We use the weights learned by our FQI algorithm. This table follows the same format as Table 1. Some of the trajectories were shorter than 100 steps because of the termination condition (e.g. the car is no longer in the image). The first observation of each trajectory is used as the target observation. The trajectories shown here were chosen to reflect different types of behaviors. In the first trajectory, the blue car turns abruptly to the right, making the view significantly different from the target observation. In the second trajectory, a distractor object (i.e. the lamp) shows up in the target image and an occluder object (i.e. the traffic light) appears through the execution. The policies based on deeper VGG features, up to VGG conv4_3, are generally more robust to the appearance changes between the observations and the target observation, which are typically caused by movements of the car, distractor objects, and occlusions.

The FQI algorithm often achieved most of its performance gain after the first iteration. We ran additional sampling iterations of FQI to see if the policies improved further. For each iteration, we evaluated the performance of the policies on 10 validation trajectories. We did the same for the policies trained with TRPO, and we compare the learning curves of both methods in Figure 7.

⁷Our policy based on conv4_3 features takes around 650 s to run K = 10 iterations of FQI for a given batch size of 10 training trajectories.

[Figure 7: two line plots of average cost (log scale) versus number of training samples, with one curve per feature-dynamics model; the left plot's secondary axis marks FQI sampling iterations 0–10, and the right plot's marks TRPO sampling iterations 0–50.]

Figure 7: Costs of validation executions using various feature dynamics models, where the feature weights are optimized with FQI (left plot) or TRPO (right plot). The reported values are the mean and standard error across 10 validation trajectories, of up to 100 time steps each.

(a) Costs when using the set of cars seen during learning.
Observation modality        | Cost
ground truth car position   | 0.59±0.24
raw pixel-intensity images  | 3.23±0.22
VGG conv1_2 features        | 7.45±0.40
VGG conv2_2 features        | 13.38±0.53
VGG conv3_3 features        | 10.02±0.49

(b) Costs when using a new set of cars, none of which were seen during learning.
Observation modality        | Cost
ground truth car position   | 0.59±0.24
raw pixel-intensity images  | 5.20±0.40
VGG conv1_2 features        | 8.35±0.44
VGG conv2_2 features        | 14.01±0.47
VGG conv3_3 features        | 10.51±0.65

Table 4: Costs on test executions of servoing policies that were trained end-to-end with TRPO. These policies take in different observation modalities: ground truth car position or image-based observations. This table follows the same format as Table 2. The mean of the first policy is parametrized as a 3-layer MLP, with tanh non-linearities except for the output layer; the first 2 fully connected layers use 32 hidden units each. For the other policies, each of their means is parametrized as a 5-layer CNN, consisting of 2 convolutional and 3 fully-connected layers, with ReLU non-linearities except for the output layer; the convolutional layers use 16 filters (4×4, stride 2) each and the first 2 fully-connected layers use 32 hidden units each. All the policies are trained with TRPO, a batch size of 4000 samples, 500 iterations, and a step size of 0.01. The car position observations are not affected by the appearance of the cars, so the test performance for that modality is the same regardless of which set of cars is used.

C.4 LEARNING END-TO-END SERVOING POLICIES WITH TRPO
We use TRPO to train end-to-end servoing policies for various observation modalities, and report the performance of the learned policies in Table 4.
The policies are trained with the set of training cars, and tested on both this set and on the set of novel cars. The observation modalities that we consider are ground truth car positions (relative to the quadcopter), images of pixel intensities from the quadcopter's camera, and VGG features extracted from those images. Unlike our method and the other experiments, no feature dynamics are explicitly learned for these experiments.
We use a Gaussian policy, where the mean is either a multi-layer perceptron (MLP) or a convolutional neural net (CNN), and the standard deviation is initialized to 1. We also use a Gaussian baseline, which is parametrized just as the corresponding Gaussian policy (but no parameters are shared between the policy and the baseline). For the policy that takes in car positions, the mean is parametrized as a 3-layer MLP, with tanh non-linearities except for the output layer; the first 2 fully connected layers use 32 hidden units each. For the other policies, each of their means is parametrized as a 5-layer CNN, consisting of 2 convolutional and 3 fully-connected layers, with ReLU non-linearities except for the output layer; the convolutional layers use 16 filters (4×4, stride 2) each and the first 2 fully-connected layers use 32 hidden units each.
The CNN policies would often not converge for several randomly initialized parameters. Thus, at the beginning of training, we tried multiple random seeds until we got a policy that achieved a relatively low cost on validation trajectories, and used the best initialization for training. The MLP policy did not have this problem, so we did not have to try multiple random initializations for it. All the policies are trained with a batch size of 4000 samples, 500 iterations, and a step size of 0.01. The parameters of the last iteration were used for the final policy.

Observation modality (feature points)                           | Cost
corners of bounding box from C-COT tracker (0.75)               | 1.70±0.30
corners of ground truth bounding box (0.75)                     | 0.86±0.25
corners of next frame's bounding box from C-COT tracker (0.65)  | 1.46±0.22
corners of next frame's ground truth bounding box (0.65)        | 0.53±0.05
SIFT feature points (0.30)                                      | 14.47±0.75
SURF feature points (0.60)                                      | 16.37±0.78
ORB feature points (0.30)                                       | 4.41±0.60

Table 5: Costs on test executions when using classical image-based visual servoing (IBVS) with respect to feature points derived from bounding boxes and keypoints derived from hand-engineered features. Since there is no learning involved in this method, we only test with one set of cars: the cars that were used for training in the other methods. This table follows the same format as Table 2. This method has one hyperparameter, which is the gain for the control law. For each feature type, we select the best hyperparameter (shown in parentheses) by validating the policy on 10 validation trajectories for gains between 0.05 and 2, in increments of 0.05. The servoing policies based on bounding box features achieve low cost, and even lower ones if ground truth car dynamics is used. However, servoing with respect to hand-crafted feature points is significantly worse than the other methods.

C.5 CLASSICAL IMAGE-BASED VISUAL SERVOING
Traditional visual servoing techniques (Feddema & Mitchell, 1989; Weiss et al., 1987) use the image-plane coordinates of a set of points for control.
For comparison to our method, we evaluate the servoing performance of feature points derived from bounding boxes and of keypoints derived from hand-engineered features, and report the costs of test executions in Table 5.
We use bounding boxes from the C-COT tracker (Danelljan et al., 2016) (the current state-of-the-art visual tracker) and ground truth bounding boxes from the simulator. The latter is defined as the box that tightly fits around the visible portions of the car. We provide the ground truth bounding box of the first frame to the C-COT tracker to indicate that we want to track the car. We use the four corners of the box as the feature points for servoing, to take into account the position and scale of the car in image coordinates.
We provide the ground truth depth values of the feature points for the interaction matrices. In classical image-based visual servoing, the control law involves the interaction matrix (also known as the feature Jacobian), which is the Jacobian of the points in image space with respect to the camera's control (see Chaumette & Hutchinson (2006) for details). The analytical feature Jacobian used in IBVS assumes that the target points are static in the world frame. This is not true for a moving car, so we consider a variant where the feature Jacobian incorporates the ground truth dynamics of the car. This amounts to adding a non-constant translation bias to the output of the dynamics function, where the translation is the displacement, due to the car's movement, of the 3-dimensional point in the camera's reference frame. Note that this is still not exactly equivalent to the car being static, since the roads have different slopes but the pitch and roll of the quadcopter are constrained to be fixed.
For the hand-crafted features, we consider SIFT (Lowe, 2004), SURF (Bay et al., 2006) and ORB (Rublee et al., 2011) keypoints. We keep only the keypoints of the first frame that belong to the car and use these as the target keypoints. However, we use all the keypoints for the subsequent observations.
The servoing policies based on bounding box features achieve low cost, and even lower ones if ground truth car dynamics is used. However, servoing with respect to hand-crafted feature points is significantly worse than the other methods. This is, in part, because the feature extraction and matching process introduces compounding errors. Similar results were found by Collewet & Marchand (2011), who proposed photometric visual servoing (i.e. servoing with respect to pixel intensities) and showed that it outperforms, by an order of magnitude, classical visual servoing that uses SURF features.
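For background on the control law referenced here, a textbook image-based visual servoing step can be sketched as follows. This is the standard 6-DOF point-feature formulation from Chaumette & Hutchinson (2006), not the 4-DOF variant used in our experiments, and the gain value is a placeholder.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Interaction matrix of one normalized image point (x, y) at depth Z,
    mapping camera velocity (vx, vy, vz, wx, wy, wz) to (xdot, ydot)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_control(points, target_points, depths, gain=0.75):
    """Velocity command u = -gain * L^+ (s - s*), one 2x6 block per feature point."""
    L = np.vstack([point_interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    error = (np.asarray(points) - np.asarray(target_points)).ravel()
    return -gain * np.linalg.pinv(L) @ error
```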
Observation modality (pose)  | Use rotation          | Ignore rotation
car pose                     | (1.55) 0.58±0.25      | (1.90) 0.51±0.25
next frame's car pose        | (1.00) 0.0059±0.0020  | (1.00) 0.0025±0.0017

Table 6: Costs on test executions when using classical position-based visual servoing (PBVS). Since there is no learning involved in this method, we only test with one set of cars: the cars that were used for training in the other methods. This table follows the same format as Table 2. This method has one hyperparameter, which is the gain for the control law. For each condition, we select the best hyperparameter (shown in parentheses) by validating the policy on 10 validation trajectories for gains between 0.05 and 2, in increments of 0.05. These servoing policies, which use ground truth car poses, outperform all the other policies based on images. In addition, the performance is more than two orders of magnitude better if ground truth car dynamics is used.

C.6 CLASSICAL POSITION-BASED VISUAL SERVOING
Position-based visual servoing (PBVS) techniques use poses of a target object for control (see Chaumette & Hutchinson (2006) for details). We evaluate the servoing performance of a few variants, and report the costs of test executions in Table 6.
Similar to our IBVS experiments, we consider a variant that uses the car pose of the next time step as a way to incorporate the ground truth car dynamics into the interaction matrix. Since the cost function is invariant to the orientation of the car, we also consider a variant where the policy only minimizes the translational part of the pose error.
These servoing policies, which use ground truth car poses, outperform all the other policies based on images. In addition, the performance is more than two orders of magnitude better if ground truth car dynamics is used.
Under review as a conference paper at ICLR 2017

CAN AI GENERATE LOVE ADVICE?: TOWARD NEURAL ANSWER GENERATION FOR NON-FACTOID QUESTIONS

Makoto Nakatsuji, Hisashi Ito, Naruhiro Ikeda, Shota Sagara & Akihisa Fujita
NTT Resonant Inc.
{nakatuji, h-ito, nikeda, s-sagara, akihisa}@nttr.co.jp

ABSTRACT
Deep learning methods that extract answers for non-factoid questions from QA sites are seen as critical since they can assist users in reaching their next decisions through conversations with AI systems. The current methods, however, have the following two problems: (1) They cannot understand the ambiguous use of words in the questions, as word usage can strongly depend on the context (e.g. the word "relationship" has quite different meanings in the category of Love advice and in other categories). As a result, the accuracies of their answer selections are not good enough. (2) The current methods can only select from among the answers held by QA sites and cannot generate new ones. Thus, they cannot answer questions that are somewhat different from those stored in QA sites. Our solution, the Neural Answer Construction Model, tackles these problems as it: (1) Incorporates the biases of semantics behind questions (e.g. categories assigned to questions) into word embeddings while also computing them regardless of the semantics. As a result, it can extract answers that suit the contexts of words used in the question as well as follow the common usage of words across semantics. This improves the accuracy of answer selection. (2) Uses biLSTM to compute the embeddings of questions as well as those of the sentences often used to form answers (e.g. sentences representing conclusions or those supplementing the conclusions). It then simultaneously learns the optimum combination of those sentences as well as the closeness between the question and those sentences. As a result, our model can construct an answer that corresponds to the situation that underlies the question; it fills the gap between answer selection and generation and is the first model to move beyond the current simple answer selection model for non-factoid QAs. Evaluations using datasets created for love advice stored in the Japanese QA site, Oshiete goo, indicate that our model achieves 20% higher accuracy in answer creation than the strong baselines. Our model is practical and has already been applied to the love advice service in Oshiete goo.

1 INTRODUCTION
Recently, dialog-based natural language understanding systems such as Apple's Siri, IBM's Watson, Amazon's Echo, and Wolfram Alpha have spread through the market. In those systems, Question Answering (QA) modules are particularly important since people want to know many things in their daily lives. Technically, there are two types of questions in QA systems: factoid questions and non-factoid ones. The former ask, for instance, for the name of a person or a location, such as "What/Who is X?". The latter are more diverse questions which cannot be answered by a short fact. They range from advice on making long distance relationships work well, to requests for opinions on some public issues. Significant progress has been made at answering factoid questions (Wang et al. (2007); Yu et al. (2014)); however, retrieving answers for non-factoid questions from the Web remains a critical challenge in improving QA modules.
The QA community sites such as Yahoo! Answers and Quora can be sources of training data for the non-factoid questions, where the goal is to automatically select the best of the stored candidate answers.

[Figure 1: panel (a) shows example questions and answers with category-specific words highlighted; panel (b) shows the network over the question q, conclusion a_c, and supplement a_s.]

Figure 1: Main ideas: (a) word embeddings with semantics and (b) a neural answer construction.

Recent deep learning methods have been applied to this non-factoid answer selection task using datasets stored in the QA sites, resulting in state-of-the-art performance (Yu et al. (2014); Tan et al. (2015); Qiu & Huang (2015); Feng et al. (2015); Wang & Nyberg (2015); Tan et al. (2016)). They usually compute the closeness between questions and answers by the individual embeddings obtained using a convolutional model. For example, Tan et al. (2016) builds the embeddings of questions and those of answers based on bidirectional long short-term memory (biLSTM) models, and measures their closeness by cosine similarity. It also utilizes an efficient attention mechanism to generate the answer representation following the question context. Their results show that their model can achieve much more accurate results than the strong baseline (Feng et al. (2015)). The current methods, however, have the following two problems when applying them to real applications:
(1) They cannot understand the ambiguous use of words written in the questions, as words are used in quite different ways depending on the context in which they appear (e.g. the word "relationship" used in a question submitted to the "Love advice" category is quite different from the same word submitted to the "Business advice" category). This makes words important for a specific context likely to be disregarded in the following answer selection process. As a result, the answer selection accuracies become weak for real applications.
(2) They can only select from among the answers stored in the QA systems and cannot generate new ones. Thus, they cannot answer questions that are somewhat different from those stored in the QA systems, even though it is important to cope with such differences when answering non-factoid questions (e.g. questions in the "Love advice" category often differ according to the situation and user, even though they share the same topics). Furthermore, the answers selected from QA datasets often contain a large amount of unrelated information. Some other studies have tried to create short answers to the short questions often seen in chat systems (Vinyals & Le (2015); Serban et al. (2015)). Our target, non-factoid questions in QA systems, are, however, much longer and more complicated than those in chat systems. As described in their papers, the above methods, unfortunately, create unsatisfying answers to such non-factoid questions.
To solve the above problems, this paper proposes a neural answer construction model; it fills the gap between answer selection and generation and is the first model to move beyond the current simple answer selection model for non-factoid QAs.
It extends the above-mentioned biLSTM model since it is language independent and free from feature engineering, linguistic tools, or external resources. Our model takes the following two ideas:
(1) Before learning answer creation, it incorporates semantic biases behind questions (e.g. titles or categories assigned to questions) into word vectors while computing the vectors using QA documents stored across semantics. This process emphasizes the words that are important for a certain context. As a result, it can select the answers that suit the contexts of words used in the questions as well as the common usage of words seen across semantics. This improves the accuracies of answer selections. For example, in Fig. 1-(a), there are two questions in the categories "Family" and "Love advice". Words marked with rectangles are category specific (i.e. "son" and "homework" are specifically observed in "Family", while "distance", "relationship", and "lovers" are found in "Love advice"). Our method can emphasize those words. As a result, answers that include the topics "son" and "homework", or the topics "distance", "relationship", and "lovers", will be scored highly for the above questions in the following answer selection task.
(2) The QA module designer first defines the abstract scenario of the answer to be created: the types of sentences that should compose the answer and their occurrence order in the answer (e.g. typical answers in "Love advice" are composed in the order of the sentence types "sympathy", "conclusion", "supplementary for conclusion", and "encouragement"). The sentence candidates can be extracted from the whole answers by applying sentence extraction methods or sentence type classifiers (Schmidt et al. (2014); Zhang et al. (2008); Nishikawa et al. (2010); Chen et al. (2010)). It next simultaneously learns the closeness between questions and sentences that may include answers as well as the combinational optimization of those sentences. Our method also uses an attention mechanism to generate sentence representations according to the prior sentence; this extracts important topics in the sentence and tracks those topics in subsequent sentences. As a result, it can construct answers that have a natural sentence flow whose topics correspond to the questions. Fig. 1-(b) explains the proposed neural network by using examples. Here, the QA module designer first defines the abstract scenario for the answer in the order of "conclusion" and "supplement". Thus, there are three types of inputs: "question", "conclusion", and "supplement". It next runs biLSTMs over those inputs separately; it learns the order of word vectors such that "relationships" often appears next to "distance". It then computes the embedding for the question, that for the conclusion, and that for the supplement by max-pooling over the hidden vectors output by the biLSTMs. Finally, it computes the closeness between question and conclusion, that between question and supplement, and the combinational optimization between conclusion and supplement with the attention mechanism, simultaneously (dotted lines in Fig. 1-(b) represent attention from conclusion to supplement).
We evaluated our method using datasets stored in the Japanese QA site Oshiete goo¹. In particular, our evaluations focus on questions stored in the "Love advice" category since they are representative non-factoid questions: the questions are often complicated and most questions are very long.

¹http://oshiete.goo.ne.jp
The results show that our method outperforms the previous methods, including the method of Tan et al. (2016); our method accurately constructs answers by naturally combining key sentences that are highly close to the question.

2 RELATED WORK
Previous works on answer selection normally require feature engineering, linguistic tools, or external resources. Recent deep learning methods are attractive since they demonstrate superior performance compared to traditional machine learning methods without the above-mentioned tiresome procedures. For example, (Wang & Nyberg (2015); Hu et al. (2014)) construct a joint feature vector on both question and answer and then convert the task into a classification or ranking problem. (Feng et al. (2015); Yu et al. (2014); dos Santos et al. (2015); Qiu & Huang (2015)) learn the question and answer representations and then match them by certain similarity metrics. Recently, Tan et al. (2016) took the latter approach and achieved more accurate results than the current strong baselines (Feng et al. (2015); Bendersky et al. (2011)). They, however, can only select answers and not generate them. Other than the above, recent neural text generation methods (Serban et al. (2015); Vinyals & Le (2015)) can also intrinsically be used for answer generation. Their evaluations showed that they could generate very short answers for factoid questions, but not the longer and more complicated answers demanded by non-factoid questions. Our Neural Answer Construction Model fills the gap between answer selection and generation for non-factoid QAs. It simultaneously learns the closeness between questions and sentences that may include answers as well as the combinational optimization of those sentences. Since the sentences themselves in the answer are short, they can be generated by neural conversation models like that of Vinyals & Le (2015).
As for word embeddings with semantics, some previous methods use the semantics behind words by using semantic lexicons such as WordNet and Freebase (Xu et al. (2014); Bollegala et al. (2016); Faruqui et al. (2015); Johansson & Nieto Piña (2015)). They, however, do not use the semantics behind the question/answer documents, e.g. document categories. Thus, they cannot well catch the contexts in which the words appear in the QA documents. They also require external semantic resources other than QA datasets.

3 PRELIMINARY
Here, we explain QA-LSTM (Tan et al. (2015)), the basic discriminative framework for answer selection based on LSTM, since we base our ideas on its framework.
We first explain the LSTM and introduce the terminology used in this paper. Given an input sequence X = {x(1), x(2), ..., x(N)}, where x(t) is the t-th word vector, the t-th hidden vector h(t) is updated as:

$$i_t = \sigma(W_i x(t) + U_i h(t-1) + b_i)$$
$$f_t = \sigma(W_f x(t) + U_f h(t-1) + b_f)$$
$$o_t = \sigma(W_o x(t) + U_o h(t-1) + b_o)$$
$$\tilde{c}_t = \tanh(W_c x(t) + U_c h(t-1) + b_c)$$
$$c_t = i_t \odot \tilde{c}_t + f_t \odot c_{t-1}$$
$$h(t) = o_t \odot \tanh(c_t)$$

There are three gates (input i_t, forget f_t, and output o_t), and a cell memory vector c_t. σ is the sigmoid function. W ∈ R^{H×N}, U ∈ R^{H×H}, and b ∈ R^{H×1} are the network parameters to be learned.
Single-direction LSTMs are weak in that they fail to make use of the contextual information from the future tokens. BiLSTMs use both the previous and future context by processing the sequence in two directions, and generate two sequences of output vectors.
The output for each token is the concatenation of the two vectors from both directions, i.e. $h(t) = \overrightarrow{h}(t) \parallel \overleftarrow{h}(t)$.
In the QA-LSTM framework, given an input pair (q, a), where q is a question and a is a candidate answer, it first retrieves the word embeddings (WEs) of both q and a. Next, it separately applies a biLSTM over the two sequences of WEs. Then, it generates fixed-sized distributed vector representations o_q for q (or o_a for a) by computing max pooling over all the output vectors and then concatenating the resulting vectors from both directions of the biLSTM. Finally, it uses the cosine similarity cos(o_q, o_a) to score the input (q, a) pair.
It then defines the training objective as the hinge loss:

$$L = \max\{0,\, M - \cos(o_q, o_{a^+}) + \cos(o_q, o_{a^-})\}$$

where o_{a^+} is the output vector for a ground truth answer, o_{a^-} is that for an incorrect answer randomly chosen from the entire answer space, and M is a margin. It treats any question with more than one ground truth as multiple training examples. Finally, batch normalization is performed on the representations before computing the cosine similarity (Ioffe & Szegedy (2015)).
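A compact PyTorch rendering of this scoring scheme may help; the embedding size, hidden width, and margin below are illustrative choices, not the settings used in this paper.

```python
import torch
import torch.nn.functional as F

class QALSTMScorer(torch.nn.Module):
    """biLSTM encoder + max pooling + cosine similarity, in the spirit of QA-LSTM."""
    def __init__(self, vocab_size, emb_dim=100, hidden=141):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab_size, emb_dim)
        self.bilstm = torch.nn.LSTM(emb_dim, hidden, bidirectional=True,
                                    batch_first=True)
        self.norm = torch.nn.BatchNorm1d(2 * hidden)

    def encode(self, tokens):                   # tokens: (batch, seq_len) word ids
        h, _ = self.bilstm(self.emb(tokens))    # (batch, seq_len, 2 * hidden)
        return self.norm(h.max(dim=1).values)   # max pooling over time, then batch norm

    def forward(self, q, a_pos, a_neg, margin=0.1):
        oq, op, on = self.encode(q), self.encode(a_pos), self.encode(a_neg)
        # Hinge loss: push the ground-truth answer above the negative by the margin.
        loss = torch.clamp(margin - F.cosine_similarity(oq, op)
                           + F.cosine_similarity(oq, on), min=0.0)
        return loss.mean()
```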
4.2 NEURAL ANSWER CONSTRUCTION MODEL

Here, we explain our model. We first explain our approach and then the algorithm.

Approach. It takes the following three approaches:

- Design the abstract scenario for the answer: The answer is constructed according to an order of sentence types defined by the designer. For example, there are sentence types such as a sentence that states sympathy with the question, a sentence that states a conclusion to the question, a sentence that supplements the conclusion, and a sentence that states encouragement to the questioner. This is inspired by the automated web service composition framework (Rao & Su, 2005), in which the requester builds an abstract process before the web service composition planning starts. In our setting, the process is the scenario of the answer and the service is a sentence in the scenario. Thus, our method can construct an answer by binding concrete sentences to fit the scenario (a toy sketch of this binding step is given after Algorithm 1 below). For example, the scenario for love advice can be designed as follows: it begins with a sympathy sentence (e.g. “You are struggling too.”), next states a conclusion sentence (e.g. “I think you should make a declaration of love to her as soon as possible.”), then supplements the conclusion with a supplemental sentence (e.g. “If you are too late, she may fall in love with someone else.”), and finally ends with an encouragement sentence (e.g. “Good luck!”).

- Joint neural network to learn sentence selection and combination: Our model computes the combinational optimization among sentences that may form the answer, as well as the closeness between the question and the sentences, within a single neural network. This improves answer sentence selection; our model can avoid cases in which the combination of sentences is not good enough even though the closeness scores between the question and the individual sentences are high. It also makes parameter tuning simpler than in a model that separates the network for sentence selection from the one for sentence combination. This neural network is depicted in Fig. 1-(b). Here, it learns the closeness between “Will distance relationship ruin love?” and “Distance cannot ruin true love”, the closeness between “Will distance relationship ruin love?” and “Distance certainly tests your love.”, and the combination of “Distance cannot ruin true love” and “Distance certainly tests your love.”.

- Attention mechanism to improve the combination of sentences: Our method extracts important topics in the conclusion sentence and emphasizes those topics in the supplemental sentence during the training phase; this is inspired by Tan et al. (2016), who utilize an attention mechanism to generate the answer representation following the question context. As a result, the model can combine conclusions with supplements that follow the contexts written in the conclusion sentences. This makes the story in the created answers very natural. In Fig. 1-(b), our attention mechanism extracts important topics (e.g. the topic representing “distance”) in the conclusion sentence “Distance cannot ruin true love” and emphasizes those topics when computing the representation of the supplement sentence “Distance certainly tests your love.”.

Algorithm 1: A neural answer construction model
Input: pairs of question, conclusion, and supplement, $\{(q, a_c, a_s)\}$.
Output: parameters set by the algorithm.
1: for $n = 1$ to $N$ do
2:   for each pair $(q, a_c, a_s)$ do
3:     Compute $o_q^c$ and $o_c$ by biLSTMs and max pooling.
4:     Compute $o_q^s$ by biLSTM and max pooling.
5:     for each $t$-th hidden vector for the supplement do
6:       Compute $\tilde{h}_s(t)$ by Eq. (1).
7:     end for
8:     Compute $o_s$ by max pooling.
9:     Compute $L$ by Eq. (2).
10:   end for
11: end for
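The following toy sketch shows the scenario-binding idea in isolation: fill each slot of the abstract scenario, in order, with the highest-scoring candidate sentence. It is illustrative only; the function names are hypothetical, and in the actual model the score comes from the trained network described below.

```python
def construct_answer(question, candidates, scenario, score):
    """Bind the best candidate sentence to each slot of the scenario.
    candidates: dict mapping sentence type -> list of sentences.
    score: callable rating a sentence given the question and the
    partially built answer (hypothetical stand-in for the model)."""
    answer = []
    for sentence_type in scenario:
        pool = candidates[sentence_type]
        best = max(pool, key=lambda s: score(question, s, answer))
        answer.append(best)
    return " ".join(answer)

scenario = ["sympathy", "conclusion", "supplement", "encouragement"]
```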
Procedure. The core of the answer is usually the conclusion sentence and its supplemental sentence. Thus, for simplicity, we here explain the procedure of our model for selecting and combining these two sentence types. As the reader can imagine, it is easily extended to four sentence types; in fact, our AI love advice service in oshiete-goo was implemented with four sentence types: sympathy, conclusion, supplement, and encouragement (see the Evaluation section).

The model is illustrated in Fig. 1-(b), in which the input is $(q, a_c, a_s)$, where $q$ is the question, $a_c$ is a candidate conclusion sentence, and $a_s$ is a candidate supplemental sentence. The word embeddings (WEs) for the words in $q$, $a_c$, and $a_s$ are extracted as described in the previous subsection. The procedure of our model is as follows (please also see Algorithm 1):

(1) It iterates procedures (2) to (7) $N$ times (line 1 in the algorithm).

(2) It picks up each pair $(q, a_c, a_s)$ in the dataset (line 2 in the algorithm).

In the following steps (3) and (4), the same biLSTM is applied to both $q$ and $a_c$ to compute the closeness between $q$ and $a_c$. Similarly, the same biLSTM is applied to both $q$ and $a_s$. However, the biLSTM for computing the closeness between $q$ and $a_c$ differs from that between $q$ and $a_s$, since $a_c$ and $a_s$ have different characteristics.

(3) It separately applies a biLSTM over the two sequences of WEs, $q$ and $a_c$, and computes max pooling over the $t$-th hidden vectors for the question, $h_q^c(t)$, and for the conclusion, $h_c(t)$. As a result, it acquires the question embedding $o_q^c$ and the conclusion embedding $o_c$ (line 3 in the algorithm).

(4) It also separately applies a biLSTM over the two sequences of WEs, $q$ and $a_s$, and computes max pooling over the $t$-th hidden vector for the question, $h_q^s(t)$, to acquire the question embedding $o_q^s$ (line 4 in the algorithm). $o_q^s$ differs from $o_q^c$ since, as described above, our method does not share the sub-network used for computing the closeness between $q$ and $a_c$ with the one between $q$ and $a_s$.

(5) It applies the attention mechanism from conclusion to supplement. Specifically, given the output vector of the biLSTM on the supplemental side at time step $t$, $h_s(t)$, and the conclusion embedding $o_c$, the updated vector $\tilde{h}_s(t)$ for each supplement token is computed as follows (line 6 in the algorithm):

$m_{s,c}(t) = \tanh(W_{sm}\, h_s(t) + W_{cm}\, o_c)$  (1)
$s_{s,c}(t) = \exp(w_{mb}^{\top}\, m_{s,c}(t))$
$\tilde{h}_s(t) = h_s(t)\; s_{s,c}(t)$

$W_{sm}$, $W_{cm}$, and $w_{mb}$ are attention parameters. A small sketch of this re-weighting follows.
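Below is a minimal NumPy sketch of Eq. (1), assuming square parameter matrices for simplicity; names and shapes are our assumptions, not the authors' code.

```python
import numpy as np

def attend_supplement(h_s, o_c, W_sm, W_cm, w_mb):
    """Re-weight supplement-side biLSTM outputs by their relevance to
    the conclusion embedding o_c, following Eq. (1).
    h_s: (T, l) supplement hidden vectors; o_c: (l,) conclusion embedding."""
    h_tilde = np.zeros_like(h_s)
    for t in range(h_s.shape[0]):
        m = np.tanh(W_sm @ h_s[t] + W_cm @ o_c)  # joint hidden vector
        s = np.exp(w_mb @ m)                     # scalar relevance weight
        h_tilde[t] = h_s[t] * s                  # emphasize relevant tokens
    return h_tilde  # max pooling over h_tilde then yields o_s
```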
Conceptually, the attention mechanism gives more weight to supplement words that share the important topics of the conclusion sentence.

(6) It computes max pooling over $\tilde{h}_s(t)$ and acquires the supplemental embedding $o_s$ (line 8 in the algorithm).

(7) It computes the closeness between question and conclusion and between question and supplement, as well as the combinational optimization between conclusion and supplement. The training objective is given as (line 9 in the algorithm):

$L = \max\{0,\; M - (\cos(o_q, [o_c^{+}; o_s^{+}]) - \cos(o_q, [o_c^{+}; o_s^{-}]))\}$  (2)
$\;+ \max\{0,\; M - (\cos(o_q, [o_c^{+}; o_s^{+}]) - \cos(o_q, [o_c^{-}; o_s^{+}]))\}$
$\;+ \max\{0,\; (1+k)\,M - (\cos(o_q, [o_c^{+}; o_s^{+}]) - \cos(o_q, [o_c^{-}; o_s^{-}]))\}$
$\;+ \max\{0,\; M - (\cos(o_q, [o_c^{+}; o_s^{-}]) - \cos(o_q, [o_c^{-}; o_s^{-}]))\}$
$\;+ \max\{0,\; M - (\cos(o_q, [o_c^{-}; o_s^{+}]) - \cos(o_q, [o_c^{-}; o_s^{-}]))\}$

where $[y; z]$ is the concatenation of two vectors $y$ and $z$, $o_q$ is $[o_q^c; o_q^s]$, $o^{+}$ is the output vector for a ground-truth sentence, and $o^{-}$ is that for an incorrect sentence randomly chosen from the entire answer space. In the above equation, the first (or second) term is the loss for ranking the pair in which both the question–conclusion pair (q-c) and the question–supplement pair (q-s) are correct above the pair in which q-c (or q-s) is correct but q-s (or q-c) is incorrect. The third term is the loss for ranking the fully correct pair above the pair in which both q-c and q-s are incorrect. The fourth (or fifth) term is the loss for ranking the pair in which q-c (or q-s) is correct but q-s (or q-c) is incorrect above the pair in which both are incorrect. $M$ is a constant margin and $k$ ($0 < k < 1$) is a parameter controlling the margin, so the resulting margin for the third term is larger than those of the other terms. In this way, by considering whether conclusions and supplements are correct or not, this objective optimizes the combination of conclusion and supplement; in addition, it takes the closeness between the question and the conclusion (or supplement) into account through the cosine similarity.

The parameter sets $\{W_i, W_f, W_o, W_c, U_i, U_f, U_o, U_c, b_i, b_f, b_o, b_c\}_c$ for question–conclusion matching, $\{W_i, W_f, W_o, W_c, U_i, U_f, U_o, U_c, b_i, b_f, b_o, b_c\}_s$ for question–supplement matching, and $\{W_{sm}, W_{cm}, w_{mb}\}$ for conclusion–supplement attention are trained during the iterations. After the model is trained, our method uses $\cos(o_q, [o_c; o_s])$ to score an input $(q, a_c, a_s)$ triplet and constructs an answer consisting of a conclusion and its supplement. A small sketch of the objective in Eq. (2) follows.
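This sketch computes the five-term hinge loss of Eq. (2) given precomputed embeddings; the default margin values follow the paper's experimental setup, and everything else (names, shapes) is our assumption.

```python
import numpy as np

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def construction_loss(o_q, oc_pos, os_pos, oc_neg, os_neg, M=0.2, k=1.0):
    """Hinge loss of Eq. (2). o_q = [o_q^c; o_q^s]; oc_*/os_* are
    conclusion/supplement embeddings for correct (+) and sampled (-)
    sentences."""
    def pair(oc, os):
        return cos(o_q, np.concatenate([oc, os]))
    both_pos   = pair(oc_pos, os_pos)  # q-c and q-s both correct
    c_pos_s_neg = pair(oc_pos, os_neg) # conclusion correct, supplement not
    c_neg_s_pos = pair(oc_neg, os_pos) # supplement correct, conclusion not
    both_neg   = pair(oc_neg, os_neg)  # both incorrect
    return (max(0.0, M - (both_pos - c_pos_s_neg))
            + max(0.0, M - (both_pos - c_neg_s_pos))
            + max(0.0, (1 + k) * M - (both_pos - both_neg))  # larger margin
            + max(0.0, M - (c_pos_s_neg - both_neg))
            + max(0.0, M - (c_neg_s_pos - both_neg)))
```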
5 EVALUATION

We used our method to select or construct answers to the questions stored in the “Love advice” category.

Table 1: Comparison of AP for answer selection.

       QA-LSTM    Attentive-LSTM   Semantic-LSTM   Construction   Our method
K=1    0.8472     0.8196           0.8499          0.8816         0.8846
K=3    0.8649     0.844566         0.8734          0.8884         0.8909
K=5    0.8653     0.8418           0.8712          0.8827         0.8845
K=10   0.8603     0.8358           0.8658          0.8618         0.8647

Table 2: Comparison of AP for answer construction.

       QA-LSTM    Attentive-LSTM   Semantic-LSTM   Construction   Our method
K=1    0.3262     0.3235           0.3664          0.3813         0.3901
K=3    0.3753     0.3694           0.4078          0.5278         0.5308
K=5    0.3813     0.3758           0.4133          0.5196         0.5271
K=10   0.3827     0.3777           0.4151          0.4838         0.4763

Table 3: Comparison of human evaluation results.

             (1)    (2)    (3)    (4)
QA-LSTM      57     66     41     36
Our method   116    51     13     20

Table 4: Examples of answers created by QA-LSTM and those by Our method.

Question: I'm afraid to confess my love to her, what should I do?
QA-LSTM: You should wait until you feel excited. If you interact with her indifferently, it will be difficult to develop any relation with her.
Our method: It is better to concentrate on how to confess your love to her. I understand you are struggling since you love her very much.

Question: A guy I like says to me “I like you at home” kiddingly. It may be the ordinary gentleness. Some hope?
QA-LSTM: You don't have to test his love immediately. Unless he likes you, he would not have gone to see a movie with you.
Our method: Yes, there is some hope. You can understand his reaction more easily if your understanding of each other is deeper.

Question: I seldom meet an interesting person. I worry about how to become close to him. Should I approach him positively?
QA-LSTM: Try to select your words correctly. Unless you confess your love to him, it is difficult to convey your emotion to him.
Our method: You should confess your love to him. Unless you confess your love to him, it is difficult to convey your emotion to him.

5.1 DATASET

We evaluated our method using a dataset from the Japanese online QA service oshiete-goo. First, the word embeddings were built by using 189,511 questions and their 771,956 answers stored in 16 categories, including “Love advice”, “Traveling”, and “Health care”. 6,250 title tokens were extracted from the titles. Then, we evaluated the answer selection and construction tasks using a corpus containing about 5,000 question–conclusion–supplement triples. Conclusion and supplement sentences were extracted from answers by human experts. Readers could instead use sentence extraction methods (Schmidt et al., 2014; Zhang et al., 2008; Nishikawa et al., 2010; Chen et al., 2010) or neural conversation models such as that of Vinyals & Le (2015) to semi-automatically extract or generate those sentences.

5.2 COMPARED METHODS

We compared the accuracy of the following five methods:

- QA-LSTM: proposed by Tan et al. (2015).
- Attentive LSTM: introduces an attention mechanism from question to answer and is evaluated as the current best answer selection method (Tan et al., 2016).
- Semantic LSTM: performs answer selection using our word embeddings biased with semantics.
- Construction: performs our proposed answer construction without the attention mechanism.
- Our method: performs our answer construction with the attention mechanism from conclusion to supplement.

5.3 METHODOLOGY AND PARAMETER SETUP

We randomly divided the dataset into two halves, a training set and a prediction set, and conducted two-fold cross-validation. The results shown below are the averages.

For both answer selection and construction, we used Average Precision (AP) over the top-$K$ ranked answers, because the most highly ranked answers matter most to users. If the number of ranked items is $K$, the number of correct answers among the top-$j$ ranked items is $N_j$, and the number of all correct answers (paired with the questions) is $D$, AP is defined as:

$\mathrm{AP} = \frac{1}{D} \sum_{1 \le j \le K} \frac{N_j}{j}$

For answer construction, we checked whether each method could recreate the original answers. As the reader can easily see, this is a much harder task than answer selection, so the AP values are smaller than those for answer selection.

We tried word vectors and QA vectors of different sizes, and finally set the word vector size to 300 and the LSTM output vectors for the biLSTMs to 502. We also tried different margins in the hinge loss function, and fixed the margin $M$ to 0.2 and $k$ to 1.0. The iteration count $N$ was set to 20. For our method, the embeddings for questions, conclusions, and supplements were pretrained by Semantic LSTM before answer construction, since this enhances overall accuracy. We did not use an attention mechanism from question to answer in Semantic LSTM, Construction, or Our method. This is because, as shown in the Results subsection, questions are much longer than answer sentences, so attention from question to answer became noise for sentence selection.

5.4 RESULTS

We now present the results of the evaluations.

Answer selection. We first compare the accuracy of the methods for answer selection. The results are shown in Table 1. QA-LSTM and Attentive LSTM are worse than Semantic-LSTM.
This indicates thatSemantic-LSTM can incorporate semantic information (titles/categories) into word embeddings; itcan emphasize words according to the context they appeared and thus the matching accuracy be-tween question vector and conclusion (supplement) vector was improved. Attentive LSTM is worsethan QA-LSTM as described above. Construction andOur method are better than Semantic-LSTM .This is because they can avoid the combinations of sentences that are not good enough even thoughthe scores of closeness between questions and sentences are high. This implies that, if the com-bination is not good, the selection of answer sentences also tends to be erroneous. Finally, Ourmethod , which provides sophisticated selection/combination strategies, yielded higher accuracy thanthe other methods. It achieved 4.4% higher accuracy than QA-LSTM (QA-LSTM marked 0.8472while Our method marked 0.8846.).Answer Construction We then compared the accuracy of the methods for answer construction.Especially for the answer construction task, the top-1 result is most important since many QA ap-plications show only the top-1 answer. The results are shown in Table 2. There is no answer con-struction mechanism in QA-LSTM ,Attentive-LSTM , and Semantic-LSTM . Thus we simply mergethe conclusion and supplement, each of which has the highest similarity with the question by eachmethod. QA-LSTM andAttentive LSTM are much worse than Semantic-LSTM . This is because thesentences output by Semantic-LSTM are selected by utilizing the words that are emphasized for acontext for “Love advice” (i.e. category and titles). Construction is better than Semantic-LSTM sinceit simultaneously learns the optimum combination of sentences as well as the closeness between thequestion and sentences. Finally, Our method is better than Construction . This is because it wellemploys the attention mechanism to link conclusion and supplement sentences and thus the com-binations of the sentences are more natural than those of Construction .Our method achieved 20%higher accuracy than QA-LSTM (QA-LSTM marked 0.3262 while Our method marked 0.3901.).The computation time for our method was less than two hours. All experiments were performed onNVIDIA TITAN X/Tesla M40 GPUs, and all methods were implemented by Python in the Chainerframework. Thus, our method well suits real applications. In fact, it is already being used in the loveadvice service of Oshiete goo2.Human evaluation The outputs of QA-LSTM andOur method were judged by two human ex-perts. The experts entered the questions, which were not included in our evaluation datasets, to theAI system and rated the created answers based on the following scale: (1) the conclusion and supple-ment sentences as well as their combination were good, (2) the sentences were good in isolation buttheir combination was not good, (3) One of the selections (conclusion or supplement) was good buttheir combination was not good, and (4) both sentences and their combination were not good. Theanswers were judged as good if they satisfied the following two points: (A) the contents of answersentences correspond to the question. (B) the story between conclusion and supplement is natural.The results are shown in Table 3. Table 4presents examples of the questions and answers constructed(they were originally Japanese and translated into English for readability. The questions are sum-marized since the original ones were very long.). The readers can also see Japanese answers fromour service URL presented the above. 
Those results indicate that the experts were much more satis-fied with the outputs of Our method than those by QA-LSTM ; 58 % of the answers created by Ourmethod were classified as (1). This is because, as can be see in Table 4,Our method can naturally2http://oshiete.goo.ne.jp/ai9Under review as a conference paper at ICLR 2017combine the sentences as well as select sentences that match the question. It well coped with thequestions that were somewhat different from those stored in the evaluation dataset.Actually, when the public used our love advice service, it was surprising to find that the 455 answerscreated by the AI whose name is oshi-el (uses Our method ) were judged as Good answers by usersfrom among the 1,492 questions entered from September 6th to November 5th3. The rate of gettingGood answers by oshi-el is twice that of the average human user in oshiete-goo when we focus onusers who answered more than 100 questions in love advice category. Thus, we think this is a goodresult.6 C ONCLUSIONThis is the first study that create answers for non-factoid questions. Our method incorporates thebiases of semantics behind questions into word embeddings to improve the accuracy of answerselection. It then simultaneously learns the optimum combination of answer sentences as well as thecloseness between questions and sentences. Our evaluation shows that our method achieves 20 %higher accuracy in answer construction than the method based on the current best answer selectionmethod. Our model presents an important direction for future studies on answer generation. Sincethe sentences themselves in the answer are short, they can be generated by neural conversationmodels like ( Vinyals & Le (2015 )); this means that our model can be extended to generate completeanswers once the abstract scenario is made.REFERENCESMichael Bendersky, Donald Metzler, and W. Bruce Croft. Parameterized concept weighting in ver-bose queries. In Proc. SIGIR’11 , pp. 605–614, 2011.Danushka Bollegala, Mohammed Alsuhaibani, Takanori Maehara, and Ken ichi Kawarabayashi.Joint word representation learning using a corpus and a semantic lexicon. In Proc. AAAI’16 , pp.2690– 2696, 2016.Bi Chen, Leilei Zhu, Daniel Kifer, and Dongwon Lee. What is an opinion about? exploring politicalstandpoints using opinion scoring model. In Proc. AAAI’10 , pp. 1007–1012, 2010.Cicero dos Santos, Luciano Barbosa, Dasha Bogdanova, and Bianca Zadrozny. Learning hybridrepresentations to retrieve semantically equivalent questions. In Proc. ACL-IJCNLP’15 , pp. 694–699, July 2015.Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith.Retrofitting word vectors to semantic lexicons. In Proc. NAACL HLT’15 , pp. 1606–1615, 2015.Minwei Feng, Bing Xiang, Michael R. Glass, Lidan Wang, and Bowen Zhou. Applying deep learn-ing to answer selection: A study and an open task. CoRR , abs/1508.01585, 2015.Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. Convolutional neural network architecturesfor matching natural language sentences. In Proc. NIPS’14 , pp. 2042–2050. 2014.Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training byreducing internal covariate shift. In Proc. ICML’15 , volume 37, pp. 448–456, 2015.Richard Johansson and Luis Nieto Pi ̃na. Embedding a semantic network in a word space. In Proc.NAACL HLT’15 , pp. 1428–1433, 2015.Quoc V. Le and Tomas Mikolov. Distributed representations of sentences and documents. In Proc.ICML’14 , pp. 
1188–1196, 2014.Hitoshi Nishikawa, Takaaki Hasegawa, Yoshihiro Matsuo, and Genichiro Kikui. Opinion summa-rization with integer linear programming formulation for sentence extraction and ordering. InProc. COLING’10 , pp. 910–918, 2010.3This service started on September 6th, 2016.10Under review as a conference paper at ICLR 2017Xipeng Qiu and Xuanjing Huang. Convolutional neural tensor network architecture for community-based question answering. In Proc. IJCAI’15 , pp. 1305–1311, 2015.Jinghai Rao and Xiaomeng Su. A survey of automated web service composition methods. In Proc.SWSWPC’05 , pp. 43–54, 2005.David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Neurocomputing: Foundations ofresearch. chapter Learning Representations by Back-propagating Errors, pp. 696–699. 1988.Sebastian Schmidt, Steffen Schnitzer, and Christoph Rensing. Domain-independent sentence typeclassification: Examining the scenarios of scientific abstracts and scrum protocols. In Proc. i-KNOW ’14 , pp. 5:1–5:8, 2014.Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau.Hierarchical neural network generative models for movie dialogues. CoRR , abs/1507.04808,2015.Ming Tan, Bing Xiang, and Bowen Zhou. Lstm-based deep learning models for non-factoid answerselection. CoRR , abs/1511.04108, 2015.Ming Tan, C ́ıcero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. Improved representationlearning for question answer matching. In Proc. ACL’16 , pp. 464–473, 2016.Oriol Vinyals and Quoc V. Le. A neural conversational model. CoRR , abs/1506.05869, 2015.Di Wang and Eric Nyberg. A long short-term memory model for answer sentence selection inquestion answering. In Proc. ACL-IJCNLP’15 , pp. 707–712, 2015.Mengqiu Wang, Noah A. Smith, and Teruko Mitamura. What is the jeopardy model? a quasi-synchronous grammar for qa. In Proc. EMNLP-CoNLL’07 , pp. 22–32, 2007.Chang Xu, Yalong Bai, Jiang Bian, Bin Gao, Gang Wang, Xiaoguang Liu, and Tie-Yan Liu. RC-NET: A general framework for incorporating knowledge into word representations. In Proc.CIKM’14 , pp. 1219–1228, 2014.Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. Deep learning for answer sen-tence selection. CoRR , abs/1412.1632, 2014.Jiajun Zhang, Chengqing Zong, and Shoushan Li. Sentence type based reordering model for statisti-cal machine translation. In Proceedings of the 22Nd International Conference on ComputationalLinguistics - Volume 1 , pp. 1089–1096, 2008.11
B1-q5Pqxl
Published as a conference paper at ICLR 2017MACHINE COMPREHENSION USING MATCH -LSTMAND ANSWER POINTERShuohang WangSchool of Information SystemsSingapore Management Universityshwang.2014@phdis.smu.edu.sgJing JiangSchool of Information SystemsSingapore Management Universityjingjiang@smu.edu.sgABSTRACTMachine comprehension of text is an important problem in natural language pro-cessing. A recently released dataset, the Stanford Question Answering Dataset(SQuAD), offers a large number of real questions and their answers created byhumans through crowdsourcing. SQuAD provides a challenging testbed for eval-uating machine comprehension algorithms, partly because compared with previ-ous datasets, in SQuAD the answers do not come from a small set of candidateanswers and they have variable lengths. We propose an end-to-end neural architec-ture for the task. The architecture is based on match-LSTM, a model we proposedpreviously for textual entailment, and Pointer Net, a sequence-to-sequence modelproposed by Vinyals et al. (2015) to constrain the output tokens to be from theinput sequences. We propose two ways of using Pointer Net for our tasks. Ourexperiments show that both of our two models substantially outperform the bestresults obtained by Rajpurkar et al. (2016) using logistic regression and manuallycrafted features. Besides, our boundary model also achieves the best performanceon the MSMARCO dataset (Nguyen et al., 2016).1 I NTRODUCTIONMachine comprehension of text is one of the ultimate goals of natural language processing. Whilethe ability of a machine to understand text can be assessed in many different ways, in recent years,several benchmark datasets have been created to focus on answering questions as a way to evaluatemachine comprehension (Richardson et al., 2013; Hermann et al., 2015; Hill et al., 2016; Westonet al., 2016; Rajpurkar et al., 2016; Nguyen et al., 2016). In this setup, typically the machine is firstpresented with a piece of text such as a news article or a story. The machine is then expected toanswer one or multiple questions related to the text.In most of the benchmark datasets, a question can be treated as a multiple choice question, whosecorrect answer is to be chosen from a set of provided candidate answers (Richardson et al., 2013;Hill et al., 2016). Presumably, questions with more given candidate answers are more challeng-ing. The Stanford Question Answering Dataset (SQuAD) introduced recently by Rajpurkar et al.(2016) contains such more challenging questions whose correct answers can be any sequence oftokens from the given text. Moreover, unlike some other datasets whose questions and answers werecreated automatically in Cloze style (Hermann et al., 2015; Hill et al., 2016), the questions and an-swers in SQuAD were created by humans through crowdsourcing, which makes the dataset morerealistic. Another real dataset, the Human-Generated MAchine Reading COmprehension dataset(MSMARCO) (Nguyen et al., 2016), provided a query together with several related documents col-lected from Bing Index. The answer to the query is generated by human and the answer words cannot only come from the given text.Given these advantages of the SQuAD and MSMARCO datasets, in this paper, we focus on thesenew datasets to study machine comprehension of text. A sample piece of text and three of its asso-ciated questions from SQuAD are shown in Table 1. 
Traditional solutions to this kind of questionanswering tasks rely on NLP pipelines that involve multiple steps of linguistic analyses and featureengineering, including syntactic parsing, named entity recognition, question classification, semanticparsing, etc. Recently, with the advances of applying neural network models in NLP, there has been1Published as a conference paper at ICLR 2017In 1870, Tesla moved to Karlovac, to attend school at the Higher Real Gymnasium , where he wasprofoundly influenced by a math teacher Martin Sekuli ́c. The classes were held in German , as it was aschool within the Austro-Hungarian Military Frontier. Tesla was able to perform integral calculus in hishead, which prompted his teachers to believe that he was cheating. He finished a four-year term in threeyears, graduating in 1873.1. In what language were the classes given? German2. Who was Tesla’s main influence in Karlovac? Martin Sekuli ́c3. Why did Tesla go to Karlovac? attend school at the Higher Real GymnasiumTable 1: A paragraph from Wikipedia and three associated questions together with their answers,taken from the SQuAD dataset. The tokens in bold in the paragraph are our predicted answers whilethe texts next to the questions are the ground truth answers.much interest in building end-to-end neural architectures for various NLP tasks, including severalpieces of work on machine comprehension (Hermann et al., 2015; Hill et al., 2016; Yin et al., 2016;Kadlec et al., 2016; Cui et al., 2016). However, given the properties of previous machine compre-hension datasets, existing end-to-end neural architectures for the task either rely on the candidateanswers (Hill et al., 2016; Yin et al., 2016) or assume that the answer is a single token (Hermannet al., 2015; Kadlec et al., 2016; Cui et al., 2016), which make these methods unsuitable for theSQuAD/MSMARCO dataset. In this paper, we propose a new end-to-end neural architecture toaddress the machine comprehension problem as defined in the SQuAD/MSMARCO dataset. Andfor the MSMARCO dataset, we will only make use of the words in the given text to generate theanswer.Specifically, observing that in the SQuAD/MSMARCO dataset many questions could be entailedfrom some sentences in the original text, we adopt a match-LSTM model that we developed earlierfor textual entailment (Wang & Jiang, 2016) as one layer of our model. We build a bi-directionalmatch-LSTM on the given passage with attentions on the question for each word so that each posi-tion in the paragraph will have a hidden representation reflecting its relation to the question. Thenwe further adopt the Pointer Net (Ptr-Net) model developed by Vinyals et al. (2015) to select thewords in these positions based on the hidden representations built by match-LSTM as an answer.We propose two ways to apply the Ptr-Net model for our task: a sequence model which selects theanswer word by word, and a boundary model which only selects the start and end points of theanswer span. Experiments on the SQuAD dataset show that our two models both outperform thebest performance reported by Rajpurkar et al. (2016). Moreover, using an ensemble of several of ourmodels, we can achieve very competitive performance on SQuAD. 
For the MSMARCO dataset, areal query based problem, our boundary model outperforms our sequence model with a big margin.It also outperforms the golden passage baseline.Our contributions can be summarized as follows: (1) We propose two new end-to-end neural networkmodels for machine comprehension, which combine match-LSTM and Ptr-Net to handle the specialproperties of the SQuAD dataset. To the best of our knowledge, we are the first to propose theboundary model which is more suitable to the SQuAD/MSMARCO tasks. And we are the firstto integrate the attention-based word pair matching into machine comprehension tasks. (2) Wehave achieved the performance of an exact match score of 71.3% and an F1 score of 80.8% on theunseen SQuAD test dataset, which is much better than the feature-engineered solution (Rajpurkaret al., 2016). Our performance is also close to the state of the art on SQuAD, which is 74.8% interms of exact match and 82.2% in terms of F1 collected from the SQuAD Leaderboard1. Besides,our boundary model achieves the state-of-art performance on the MSMARCO dataset with BLUE-1/2/3/4 40.7/33.9/30.6/28.7 and Rouge-L 37.32. (3) Our further visualization of the models revealssome useful insights of the attention mechanism for reasoning the questions. And we also showthat the boundary model can overcome the early stop prediction problem in the sequence model.Besides, we also made our code available online3.1https://rajpurkar.github.io/SQuAD-explorer/2http://www.msmarco.org/leaders.aspx3https://github.com/shuohangwang/SeqMatchSeq2Published as a conference paper at ICLR 2017Figure 1: An overview of our two models. Both models consist of an LSTM preprocessing layer,a match-LSTM layer and an Answer Pointer layer. For each match-LSTM in a particular direction,hqi, which is defined as Hq|i, is computed using the in the corresponding direction, as describedin Eqn. (2)2 M ETHODIn this section, we first briefly review match-LSTM and Pointer Net. These two pieces of existingwork lay the foundation of our method. We then present our end-to-end neural architecture formachine comprehension.2.1 M ATCH -LSTMIn a recent work on learning natural language inference, we proposed a match-LSTM model forpredicting textual entailment (Wang & Jiang, 2016). In textual entailment, two sentences are givenwhere one is a premise and the other is a hypothesis. To predict whether the premise entails thehypothesis, the match-LSTM model goes through the tokens of the hypothesis sequentially. At eachposition of the hypothesis, attention mechanism is used to obtain a weighted vector representationof the premise. This weighted premise is then to be combined with a vector representation of thecurrent token of the hypothesis and fed into an LSTM, which we call the match-LSTM. The match-LSTM essentially sequentially aggregates the matching of the attention-weighted premise to eachtoken of the hypothesis and uses the aggregated matching result to make a final prediction.2.2 P OINTER NETVinyals et al. (2015) proposed a Pointer Network (Ptr-Net) model to solve a special kind of problemswhere we want to generate an output sequence whose tokens must come from the input sequence.Instead of picking an output token from a fixed vocabulary, Ptr-Net uses attention mechanism as apointer to select a position from the input sequence as an output symbol. The pointer mechanismhas inspired some recent work on language processing (Gu et al., 2016; Kadlec et al., 2016). 
Here we adopt Ptr-Net in order to construct answers using tokens from the input text.

2.3 OUR METHOD

Formally, the problem we are trying to solve can be formulated as follows. We are given a piece of text, which we refer to as a passage, and a question related to the passage. The passage is represented by a matrix $P \in \mathbb{R}^{d \times P}$, where $P$ is the length (number of tokens) of the passage and $d$ is the dimensionality of the word embeddings. Similarly, the question is represented by a matrix $Q \in \mathbb{R}^{d \times Q}$, where $Q$ is the length of the question. Our goal is to identify a subsequence of the passage as the answer to the question.

As pointed out earlier, since the output tokens come from the input, we would like to adopt the Pointer Net for this problem. A straightforward way of applying Ptr-Net here is to treat the answer as a sequence of tokens from the input passage while ignoring the fact that these tokens are consecutive in the original passage, because Ptr-Net does not make the consecutivity assumption. Specifically, we represent the answer as a sequence of integers $a = (a_1, a_2, \ldots)$, where each $a_i$ is an integer between 1 and $P$, indicating a certain position in the passage.

Alternatively, if we want to ensure consecutivity, that is, if we want to ensure that we indeed select a subsequence of the passage as the answer, we can use the Ptr-Net to predict only the start and the end of the answer. In this case, the Ptr-Net only needs to select two tokens from the input passage, and all the tokens between these two in the passage are treated as the answer. Specifically, we represent the answer to be predicted as two integers $a = (a_s, a_e)$, where $a_s$ and $a_e$ are integers between 1 and $P$.

We refer to the first setting above as a sequence model and to the second as a boundary model. For either model, we assume that a set of training examples in the form of triplets $\{(P_n, Q_n, a_n)\}_{n=1}^{N}$ is given.

An overview of the two neural network models is shown in Figure 1. Both models consist of three layers: (1) an LSTM preprocessing layer that preprocesses the passage and the question using LSTMs; (2) a match-LSTM layer that tries to match the passage against the question; and (3) an Answer Pointer (Ans-Ptr) layer that uses Ptr-Net to select a set of tokens from the passage as the answer. The difference between the two models lies only in the third layer.

LSTM Preprocessing Layer

The purpose of the LSTM preprocessing layer is to incorporate contextual information into the representation of each token in the passage and the question. We use a standard one-directional LSTM (Hochreiter & Schmidhuber, 1997) to process the passage and the question separately, as shown below:

$H^p = \overrightarrow{\mathrm{LSTM}}(P), \qquad H^q = \overrightarrow{\mathrm{LSTM}}(Q).$  (1)

The resulting matrices $H^p \in \mathbb{R}^{l \times P}$ and $H^q \in \mathbb{R}^{l \times Q}$ are hidden representations of the passage and the question, where $l$ is the dimensionality of the hidden vectors. In other words, the $i$-th column vector $h^p_i$ (or $h^q_i$) in $H^p$ (or $H^q$) represents the $i$-th token in the passage (or the question) together with some contextual information from the left.

Match-LSTM Layer

We apply the match-LSTM model (Wang & Jiang, 2016), originally proposed for textual entailment, to our machine comprehension problem by treating the question as a premise and the passage as a hypothesis. The match-LSTM sequentially goes through the passage.
At position $i$ of the passage, it first uses the standard word-by-word attention mechanism to obtain the attention weight vector $\overrightarrow{\alpha}_i \in \mathbb{R}^{1 \times Q}$ as follows:

$\overrightarrow{G}_i = \tanh\big(W^q H^q + (W^p h^p_i + W^r \overrightarrow{h}^r_{i-1} + b^p) \otimes e_Q\big)$  (2)
$\overrightarrow{\alpha}_i = \mathrm{softmax}\big(w^{\top} \overrightarrow{G}_i + b \otimes e_Q\big)$

where $W^q, W^p, W^r \in \mathbb{R}^{l \times l}$, $b^p, w \in \mathbb{R}^{l \times 1}$, and $b \in \mathbb{R}$ are parameters to be learned, $\overrightarrow{G}_i \in \mathbb{R}^{l \times Q}$ is an intermediate result, $\overrightarrow{h}^r_{i-1} \in \mathbb{R}^{l \times 1}$ is the hidden vector of the one-directional match-LSTM (to be explained below) at position $i-1$, and the outer product $(\cdot \otimes e_Q)$ produces a matrix or row vector by repeating the vector or scalar on the left $Q$ times.

[Footnote 4: For the MSMARCO dataset, $P$ actually consists of several unrelated documents. The previous states of the pre-processing LSTM and the match-LSTM used to compute the first state of each document are set to zero.]

Essentially, the resulting attention weight $\overrightarrow{\alpha}_{i,j}$ indicates the degree of matching between the $i$-th token in the passage and the $j$-th token in the question. Next, we use the attention weight vector $\overrightarrow{\alpha}_i$ to obtain a weighted version of the question and combine it with the current token of the passage to form the vector

$\overrightarrow{z}_i = \big[\,h^p_i \,;\; H^q \overrightarrow{\alpha}_i^{\top}\,\big]$  (3)

where $H^q \in \mathbb{R}^{l \times Q}$, $\overrightarrow{\alpha}_i \in \mathbb{R}^{1 \times Q}$, and $h^p_i \in \mathbb{R}^{l \times 1}$. This vector $\overrightarrow{z}_i$ is fed into a standard one-directional LSTM to form our so-called match-LSTM:

$\overrightarrow{h}^r_i = \overrightarrow{\mathrm{LSTM}}(\overrightarrow{z}_i, \overrightarrow{h}^r_{i-1})$  (4)

where $\overrightarrow{h}^r_i \in \mathbb{R}^{l \times 1}$.

We further build a similar match-LSTM in the reverse direction. The purpose is to obtain a representation that encodes the contexts from both directions for each token in the passage. Let $\overrightarrow{H}^r \in \mathbb{R}^{l \times P}$ represent the hidden states $[\overrightarrow{h}^r_1, \overrightarrow{h}^r_2, \ldots, \overrightarrow{h}^r_P]$ and $\overleftarrow{H}^r \in \mathbb{R}^{l \times P}$ represent $[\overleftarrow{h}^r_1, \overleftarrow{h}^r_2, \ldots, \overleftarrow{h}^r_P]$, the hidden states of the match-LSTM in the reverse direction. We define $H^r \in \mathbb{R}^{2l \times P}$ as the concatenation of the two:

$H^r = \begin{bmatrix} \overrightarrow{H}^r \\ \overleftarrow{H}^r \end{bmatrix}.$  (5)

A sketch of one forward matching step follows.
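The NumPy sketch below covers Eqns. (2)-(4) for a single passage position; parameter names mirror the equations, but the dictionary layout and the abstract `cell` callable are our assumptions, not the released code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def match_lstm_step(h_p_i, H_q, h_r_prev, params, cell):
    """One forward match-LSTM step: attend over the question, concatenate
    the attended question with the passage token, update the state.
    H_q: (l, Q) question states; h_p_i, h_r_prev: (l,) vectors.
    `cell` is any LSTM cell update taking (input, prev_state);
    its internal cell-memory handling is elided here."""
    inner = params["W_p"] @ h_p_i + params["W_r"] @ h_r_prev + params["b_p"]
    G = np.tanh(params["W_q"] @ H_q + inner[:, None])   # (l, Q), Eq. (2)
    alpha = softmax(params["w"] @ G + params["b"])      # (Q,) attention
    z = np.concatenate([h_p_i, H_q @ alpha])            # (2l,), Eq. (3)
    return cell(z, h_r_prev)                            # Eq. (4)
```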
Answer Pointer Layer

The top layer, the Answer Pointer (Ans-Ptr) layer, is motivated by the Pointer Net introduced by Vinyals et al. (2015). This layer uses the sequence $H^r$ as input. Recall that we have two different models: the sequence model produces a sequence of answer tokens, but these tokens may not be consecutive in the original passage; the boundary model produces only the start token and the end token of the answer, and all the tokens between these two in the original passage are then considered the answer. We now explain the two models separately.

The Sequence Model: Recall that in the sequence model, the answer is represented by a sequence of integers $a = (a_1, a_2, \ldots)$ indicating the positions of the selected tokens in the original passage. The Ans-Ptr layer models the generation of these integers sequentially. Because the length of an answer is not fixed, in order to stop generating answer tokens at a certain point, we allow each $a_k$ to take an integer value between 1 and $P+1$, where $P+1$ is a special value indicating the end of the answer. Once $a_k$ is set to $P+1$, the generation of the answer stops.

In order to generate the $k$-th answer token indicated by $a_k$, the attention mechanism is used again to obtain an attention weight vector $\beta_k \in \mathbb{R}^{1 \times (P+1)}$, where $\beta_{k,j}$ ($1 \le j \le P+1$) is the probability of selecting the $j$-th token of the passage as the $k$-th token of the answer, and $\beta_{k,(P+1)}$ is the probability of stopping answer generation at position $k$. $\beta_k$ is modeled as follows:

$F_k = \tanh\big(V \tilde{H}^r + (W^a h^a_{k-1} + b^a) \otimes e_{(P+1)}\big)$  (6)
$\beta_k = \mathrm{softmax}\big(v^{\top} F_k + c \otimes e_{(P+1)}\big)$  (7)

where $\tilde{H}^r \in \mathbb{R}^{2l \times (P+1)}$ is the concatenation of $H^r$ with a zero vector, defined as $\tilde{H}^r = [H^r; 0]$; $V \in \mathbb{R}^{l \times 2l}$, $W^a \in \mathbb{R}^{l \times l}$, $b^a, v \in \mathbb{R}^{l \times 1}$, and $c \in \mathbb{R}$ are parameters to be learned; $F_k \in \mathbb{R}^{l \times (P+1)}$ is an intermediate result; $(\cdot \otimes e_{(P+1)})$ follows the same definition as before; and $h^a_{k-1} \in \mathbb{R}^{l \times 1}$ is the hidden vector at position $k-1$ of an answer LSTM, defined as:

$h^a_k = \overrightarrow{\mathrm{LSTM}}(\tilde{H}^r \beta_k^{\top}, h^a_{k-1}).$  (8)

We can then model the probability of generating the answer sequence as

$p(a \mid H^r) = \prod_k p(a_k \mid a_1, a_2, \ldots, a_{k-1}, H^r)$  (9)

with

$p(a_k = j \mid a_1, a_2, \ldots, a_{k-1}, H^r) = \beta_{k,j}.$  (10)

To train the model, we minimize the following loss function based on the training examples:

$-\sum_{n=1}^{N} \log p(a_n \mid P_n, Q_n).$  (11)

The Boundary Model: The boundary model works very similarly to the sequence model, except that instead of predicting a sequence of indices $a_1, a_2, \ldots$, we only need to predict the two indices $a_s$ and $a_e$. So the main difference from the sequence model is that in the boundary model we do not need to add the zero padding to $H^r$, and the probability of generating an answer is simply modeled as

$p(a \mid H^r) = p(a_s \mid H^r)\; p(a_e \mid a_s, H^r).$  (12)

Since this boundary model could otherwise point to a span covering too many tokens, we manually limit the length of the predicted span and then search for the span with the highest probability $p(a_s)\, p(a_e \mid a_s)$ as the answer; a small sketch of this decoding step follows.
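This sketch shows the span search under a length cap. It is a schematic of the decoding step only; the cap value and the shape of the conditional end-probability table are our assumptions.

```python
import numpy as np

def best_span(p_start, p_end_given_start, max_len=15):
    """Return the (start, end) pair maximizing p(a_s) * p(a_e | a_s)
    among spans of at most max_len tokens.
    p_start: (P,) start probabilities.
    p_end_given_start: (P, P) end probabilities, row s conditioned on
    the start position s."""
    P = len(p_start)
    best, best_prob = (0, 0), 0.0
    for s in range(P):
        for e in range(s, min(s + max_len, P)):
            prob = p_start[s] * p_end_given_start[s, e]
            if prob > best_prob:
                best, best_prob = (s, e), prob
    return best  # token indices of the predicted answer span
```

A brute-force scan like this is quadratic in the cap length per start position but cheap in practice, since max_len is small compared to the passage length.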
3 EXPERIMENTS

In this section, we present our experimental results and perform some analyses to better understand how our models work.

3.1 DATA

We use the Stanford Question Answering Dataset (SQuAD) v1.1 and the human-generated Microsoft MAchine Reading COmprehension (MSMARCO) dataset v1.1 to conduct our experiments. Passages in SQuAD come from 536 articles in Wikipedia covering a wide range of topics. Each passage is a single paragraph from a Wikipedia article, and each passage has around 5 questions associated with it. In total, there are 23,215 passages and 107,785 questions. The data has been split into a training set (with 87,599 question–answer pairs), a development set (with 10,570 question–answer pairs), and a hidden test set.

For the MSMARCO dataset, the questions are user queries issued to the Bing search engine, the context passages are real Web documents, and the answers are human-generated. We select the span that has the highest F1 score with the gold-standard answer for training, and we only predict spans from the passages during evaluation. The data has been split into a training set (82,326 pairs), a development set (10,047 pairs), and a test set (9,650 pairs).

3.2 EXPERIMENT SETTINGS

We first tokenize all the passages, questions, and answers. We use word embeddings from GloVe (Pennington et al., 2014) to initialize the model. Words not found in GloVe are initialized as zero vectors. The word embeddings are not updated during training.

The dimensionality $l$ of the hidden layers is set to 150. We use ADAMAX (Kingma & Ba, 2015) with coefficients $\beta_1 = 0.9$ and $\beta_2 = 0.999$ to optimize the model. Each update is computed over a minibatch of 30 instances. We do not use L2 regularization.

For the SQuAD dataset, performance is measured by two metrics: the percentage of exact matches with the ground-truth answers, and the word-level F1 score comparing the tokens of the predicted answers with those of the ground-truth answers. Note that in the development and test sets each question has around three ground-truth answers; F1 scores with the best-matching answers are used to compute the average F1 score. For the MSMARCO dataset, the metrics in the official MSMARCO evaluation tool are BLEU-1/2/3/4 and Rouge-L, which are widely used in many domains.

3.3 RESULTS

The SQuAD and MSMARCO results of our models, as well as the results of the baselines (Rajpurkar et al., 2016; Yu et al., 2016), are shown in Table 2.

Table 2: Experiment results on the SQuAD and MSMARCO datasets. “LSTM with Ans-Ptr” removes the attention mechanism in the match-LSTM (mLSTM) by using the final state of the question LSTM in place of the weighted sum of all the states. Our best boundary model is the further-tuned model whose ablation study is shown in Table 4. “en” refers to the ensemble method.

                                    SQuAD EM         SQuAD F1         MSMARCO BLEU-1/2/3/4 / Rouge-L
                                    Dev    Test      Dev    Test      Dev & Test
Human                               80.3   77.0      90.5   86.8      - & 46 / - / - / - / 47
Golden Passage                      -      -         -      -         19.6 / 18.8 / 18.1 / 17.5 / 32.3 & -
LR (Rajpurkar et al., 2016)         40.0   40.4      51.0   51.0      -
DCR (Yu et al., 2016)               62.5   62.5      71.2   71.0      -
LSTM with Ans-Ptr (Sequence)        37.7   -         48.5   -         10.3 / 7.2 / 5.6 / 4.6 / 21.6 & -
LSTM with Ans-Ptr (Boundary)        45.2   -         55.3   -         32.0 / 25.3 / 22.2 / 20.4 / 32.3 & -
mLSTM with Ans-Ptr (Sequence)       54.4   -         68.2   -         12.5 / 9.2 / 7.5 / 6.5 / 22.5 & -
mLSTM with Ans-Ptr (Boundary)       63.0   -         72.7   -         32.9 / 26.4 / 23.2 / 21.6 / 33.8 & -
Our best boundary model             67.0   66.9      77.2   77.1      40.1 / 33.3 / 30.1 / 28.2 / 37.2 & 40.7 / 33.9 / 30.6 / 28.7 / 37.3
mLSTM with Ans-Ptr (Boundary+en)    67.6   67.9      76.8   77.0      -
Our best boundary model (en)        71.3   72.6      80.0   80.8      -

Table 3: Statistical analysis on the development datasets. #w: average number of words; P: passage; Q: question; A: answer; raw: raw data from the development set; seq/bou: answers generated by the sequence/boundary models with match-LSTM.

        SQuAD #w in A/Q/P    MSMARCO #w in A/Q/P
raw     3.1 / 11 / 141       16.3 / 6 / 667
seq     2.4 / - / -          6.7 / - / -
bou     3.0 / - / -          15.7 / - / -

Table 4: Ablation study for our best boundary model on the development datasets. Our best model is a further-tuned boundary model obtained by adding “bi-Ans-Ptr” (a bi-directional answer pointer), “deep” (two additional bi-directional LSTM layers between the match-LSTM and Answer Pointer layers), and “elem” (element-wise comparisons between $h^p_i$ and $H^q \overrightarrow{\alpha}_i^{\top}$ added to Eq. (3)). “-pre-LSTM” refers to removing the preprocessing layer.

              SQuAD EM & F1    MSMARCO BLEU-1/2/3/4 & Rouge-L
Best model    67.0 & 77.2      40.1 / 33.3 / 30.1 / 28.2 & 37.2
-bi-Ans-Ptr   66.5 & 76.8      39.9 / 32.8 / 29.6 / 27.9 & 36.7
-deep         65.9 & 75.8      39.6 / 32.6 / 29.4 / 27.4 & 35.9
-elem         65.2 & 75.4      38.1 / 31.4 / 28.3 / 26.5 & 35.5
-pre-LSTM     64.0 & 72.9      39.6 / 32.8 / 29.8 / 27.7 & 36.3

The “LSTM with Ans-Ptr” rows are experiments with the attention mechanism of the match-LSTM ablated; specifically, we use the final representation of the question to replace the weighted sum of the question representations.
Forthe MSMARCO dataset, as the context for each question is consisted of around 10 documents, the“Golden Passage” is to directly use the human labeled document which could answer the questionas the prediction.From the results in Table 2, we can see that the boundary model could clearly outperform the se-quence model in a big margin on both datasets. We hypothesis that the sequence model is more likelyto stop word generation earlier, and the boundary model can somehow overcome this problem. Wehave a statistical analysis on the answers generated by our sequence and boundary models shownin Table 3. We can see that the length of the answers generated by the sequence model is muchshorter than the ground truth. Especially for the MSMARCO task where the answers are usuallymuch longer, the sequence model could only generate 7 words on average, while the ground truthanswers are 16 on average and the boundary model could generate nearly the same number of wordswith the ground truth. Several answers generated by our models are shown in Appendix A. FromTable 2, we can also see that the performance gets poorer by removing the attention mechanism inmatch-LSTM, while for the MSMARCO dataset, the attention mechanism effects less, with no morethan 2 percent reduction in BLEU and Rouge-L scores by attention mechanism ablation.7Published as a conference paper at ICLR 2017Based on the effectiveness of boundary pointer and match-LSTM, we conduct further exploration ofthe boundary model by adding element-wise comparison (hpiHq|i)and(hpiHq|i)into Eqn 3 inmatch-LSTM layer, adding 2 more bi-directional LSTM layers between match-LSTM and Ans-Ptrlayers, and adding bi-directional Ans-Ptr. We show the ablation study of this further tuned model inTable 4. We can see that adding element-wise matching could make the biggest improvement for ourboundary model. We also try to remove the phrase-level representation by removing the pre-processLSTM and using the word-level representations as the inputs of match-LSTM. Interestingly, we findthe phrase-level representation effects little on the MSMARCO task.Overall, we can see that both of our match-LSTM models have clearly outperformed the logis-tic regression model by Rajpurkar et al. (2016), which relies on carefully designed features. Theimprovement of our models over the logistic regression model shows that our end-to-end neural net-work models without much feature engineering are very effective on these tasks and datasets. Ourboundary model also outperformed the DCR model (Yu et al., 2016), which maximizes the proba-bility of the gold standard span from all the candidate spans through a neural network structure.3.4 F URTHER ANALYSESFigure 2: Performance breakdown by answer lengths and question types on SQuAD developmentdataset. Top: Plot (1) shows the performance of our two models (where srefers to the sequencemodel , brefers to the boundary model, and erefers to the ensemble boundary model) over answerswith different lengths. Plot (2) shows the numbers of answers with different lengths. Bottom:Plot (3) shows the performance our the two models on different types of questions. Plot (4) showsthe numbers of different types of questions.To better understand the strengths and weaknesses of our models, we perform some further analysesof the results below.First, we suspect that longer answers are harder to predict. 
To verify this hypothesis, we analysedthe performance in terms of both exact match and F1 score with respect to the answer length on thedevelopment set, as shown in Figure 2. For example, for questions whose answers contain morethan 9 tokens, the F1 score of the boundary model drops to around 55% and the exact match scoredrops to only around 30%, compared to the F1 score and exact match score of close to 72% and67%, respectively, for questions with single-token answers. And that supports our hypothesis.8Published as a conference paper at ICLR 2017Figure 3: Visualization of the attention weights for four questions. The first three questions sharethe same paragraph. The title is the answer predicted by our model.Next, we analyze the performance of our models on different groups of questions, as shown in Fig-ure 2. We use a crude way to split the questions into different groups based on a set of questionwords we have defined, including “what,” “how,” “who,” “when,” “which,” “where,” and “why.”These different question words roughly refer to questions with different types of answers. For ex-ample, “when” questions look for temporal expressions as answers, whereas “where” questionslook for locations as answers. According to the performance on the development dataset, our mod-els work the best for “when” questions. This may be because in this dataset temporal expressionsare relatively easier to recognize. Other groups of questions whose answers are noun phrases, suchas “what” questions, “which” questions and “where” questions, also get relatively better results. Onthe other hand, “why” questions are the hardest to answer. This is not surprising because the answersto “why” questions can be very diverse, and they are not restricted to any certain type of phrases.Finally, we would like to check whether the attention mechanism used in the match-LSTM layeris effective in helping the model locate the answer. We show the attention weights in Figure 3.In the figure the darker the color is the higher the weight is. We can see that some words havebeen well aligned based on the attention weights. For example, the word “German” in the passageis aligned well to the word “language” in the first question, and the model successfully predicts“German” as the answer to the question. For the question word “who” in the second question, the9Published as a conference paper at ICLR 2017word “teacher” actually receives relatively higher attention weight, and the model has predictedthe phrase “Martin Sekulic” after that as the answer, which is correct. For the third question thatstarts with “why”, the attention weights are more evenly distributed and it is not clear which wordshave been aligned to “why”. For the last question, we can see that the word knowledge needed forgenerating the answer can also be detected by match-LSTM. For example, the words “European”,“Parliament”, “Council”, “European” and “Union” have higher attention weights on “governing” inthe question. Even though our models can solve this type of questions, they are still not able to solvethe questions that need multi-sentences reasoning. More answers generated by our models for thequestions related to different kinds of reasoning are shown in Appendix B.4 R ELATED WORKMachine comprehension of text has gained much attention in recent years, and increasingly re-searchers are building data-drive, end-to-end neural network models for the task. 
We will firstreview the recently released datasets and then some end-to-end models on this task.4.1 D ATASETSA number of datasets for studying machine comprehension were created in Cloze style by removinga single token from a sentence in the original corpus, and the task is to predict the missing word.For example, Hermann et al. (2015) created questions in Cloze style from CNN and Daily Mailhighlights. Hill et al. (2016) created the Children’s Book Test dataset, which is based on children’sstories. Cui et al. (2016) released two similar datasets in Chinese, the People Daily dataset and theChildren’s Fairy Tale dataset.Instead of creating questions in Cloze style, a number of other datasets rely on human annotators tocreate real questions. Richardson et al. (2013) created the well-known MCTest dataset and Tapaswiet al. (2016) created the MovieQA dataset. In these datasets, candidate answers are provided foreach question. Similar to these two datasets, the SQuAD dataset (Rajpurkar et al., 2016) was alsocreated by human annotators. Different from the previous two, however, the SQuAD dataset doesnot provide candidate answers, and thus all possible subsequences from the given passage have tobe considered as candidate answers.Besides the datasets above, there are also a few other datasets created for machine comprehension,such as WikiReading dataset (Hewlett et al., 2016) and bAbI dataset (Weston et al., 2016), but theyare quite different from the datasets above in nature.4.2 E ND-TO-END NEURAL NETWORK MODELS FOR MACHINE COMPREHENSIONThere have been a number of studies proposing end-to-end neural network models for machinecomprehension. A common approach is to use recurrent neural networks (RNNs) to process thegiven text and the question in order to predict or generate the answers (Hermann et al., 2015).Attention mechanism is also widely used on top of RNNs in order to match the question with thegiven passage (Hermann et al., 2015; Chen et al., 2016). Given that answers often come from thegiven passage, Pointer Network has been adopted in a few studies in order to copy tokens fromthe given passage as answers (Kadlec et al., 2016; Trischler et al., 2016). Compared with existingwork, we use match-LSTM to match a question and a given passage, and we use Pointer Networkin a different way such that we can generate answers that contain multiple tokens from the givenpassage.Memory Networks (Weston et al., 2015) have also been applied to machine comprehen-sion (Sukhbaatar et al., 2015; Kumar et al., 2016; Hill et al., 2016), but its scalability when ap-plied to a large dataset is still an issue. In this work, we did not consider memory networks for theSQuAD/MSMARCO datasets.The setting of visual question answering (Antol et al., 2015) is quite similar to machine comprehen-sion, while their answers are usually very short. So the sequence order of the word-level attentionrepresentation used to align the figure and the question(Xu & Saenko, 2016; Fukui et al., 2016; Luet al., 2016), are not used in VQA. While our model focus on the word-by-word attention and use10Published as a conference paper at ICLR 2017LSTM to concatenate the aligned pairs and that would be helpful to generate a longer sequence asanswer.5 C ONCLUSIONSIn this paper, We developed two models for the machine comprehension problem defined in theStanford Question Answering (SQuAD) and A Human-Generated MAchine Reading COmprehen-sion (MSMARCO) datasets, both making use of match-LSTM and Pointer Network. 
Experimentson the SQuAD and MSMARCO datasets showed that our second model, the boundary model, couldachieve a performance close to the state-of-the-art performance on the SQuAD dataset and achievedthe state-of-the-art on the MSMARCO dataset. We also show the boundary model could overcomethe early stop prediction problem of the sequence model.In the future, we plan to look further into the different types of questions and focus on those questionswhich currently have low performance, such as the “why’ questions and multi-sentences relatedquestions. We also plan to test how our models could be applied to other machine comprehensiondatasets.6 A CKNOWLEDGMENTSThis research is supported by the National Research Foundation, Prime Ministers Office, Singaporeunder its International Research Centres in Singapore Funding Initiative.We thank Pranav Rajpurkar for testing our model on the hidden test dataset and Percy Liang forhelping us with the Dockerfile for Codalab.REFERENCESStanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zit-nick, and Devi Parikh. Vqa: Visual question answering. In Proceedings of the IEEE InternationalConference on Computer Vision , pp. 2425–2433, 2015.Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the CNN/DailyMail reading comprehension task. In Proceedings of the Conference on Association for Compu-tational Linguistics , 2016.Yiming Cui, Ting Liu, Zhipeng Chen, Shijin Wang, and Guoping Hu. Consensus attention-basedneural networks for chinese reading comprehension. In arXiv preprint arXiv:1607.02250 , 2016.Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach.Multimodal compact bilinear pooling for visual question answering and visual grounding. InProceedings of the Conference on Empirical Methods in Natural Language Processing , 2016.Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. Incorporating copying mechanism insequence-to-sequence learning. In Proceedings of the Conference on Association for Computa-tional Linguistics , 2016.Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, MustafaSuleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Proceedings of theConference on Advances in Neural Information Processing Systems , pp. 1693–1701, 2015.Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han,Matthew Kelcey, and David Berthelot. WIKIREADING: A novel large-scale language under-standing task over wikipedia. In Proceedings of the Conference on Association for ComputationalLinguistics , 2016.Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The Goldilocks principle: Read-ing children’s books with explicit memory representations. In Proceedings of the InternationalConference on Learning Representations , 2016.11Published as a conference paper at ICLR 2017Sepp Hochreiter and J ̈urgen Schmidhuber. Long short-term memory. Neural computation , 9(8):1735–1780, 1997.Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text understanding with theattention sum reader network. In Proceedings of the Conference on Association for ComputationalLinguistics , 2016.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings ofthe International Conference on Learning Representations , 2015.Ankit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter On-druska, Ishaan Gulrajani, and Richard Socher. 
Ask me anything: Dynamic memory networks for natural language processing. In Proceedings of the International Conference on Machine Learning, 2016.

Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. Hierarchical question-image co-attention for visual question answering. In Proceedings of the Conference on Advances in Neural Information Processing Systems, 2016.

Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. MS MARCO: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268, 2016.

Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for word representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2014.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2016.

Matthew Richardson, Christopher JC Burges, and Erin Renshaw. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2013.

Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Proceedings of the Conference on Advances in Neural Information Processing Systems, 2015.

Makarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. MovieQA: Understanding stories in movies through question-answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.

Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the EpiReader. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2016.

Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Proceedings of the Conference on Advances in Neural Information Processing Systems, 2015.

Shuohang Wang and Jing Jiang. Learning natural language inference with LSTM. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics, 2016.

Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In Proceedings of the International Conference on Learning Representations, 2015.

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. In Proceedings of the International Conference on Learning Representations, 2016.

Huijuan Xu and Kate Saenko. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. In Proceedings of the IEEE International Conference on Computer Vision, 2016.

Wenpeng Yin, Sebastian Ebert, and Hinrich Schütze. Attention-based convolutional neural network for machine comprehension. arXiv preprint arXiv:1602.04341, 2016.

Yang Yu, Wei Zhang, Kazi Hasan, Mo Yu, Bing Xiang, and Bowen Zhou. End-to-end answer chunk extraction and ranking for reading comprehension. arXiv preprint arXiv:1610.09996, 2016.

A APPENDIX

We show the predictions of our boundary and sequence models on two cases from two datasets in Table 5.
It can be seen that the sequence model is more likely to predict a shorter sequence, which is the problem of early stop prediction.

(1) Context: As opposed to broadcasts of primetime series, CBS broadcast special episodes of its late night talk shows as its lead-out programs for Super Bowl 50, beginning with a special episode of The Late Show with Stephen Colbert following the game.
Question (Syntactic): What CBS show followed the Super Bowl?
Golden Answer: The Late Show with Stephen Colbert
match-LSTM (Sequence): The Late Show
match-LSTM (Boundary): The Late Show with Stephen Colbert

(2) Context: Urinalysis is a test that evaluates a sample of your urine. Urinalysis is used to detect and assess a wide range of disorders, such as urinary tract infection, kidney disease and diabetes. Urinalysis involves examining the appearance, concentration and content of urine. Abnormal urinalysis results may point to a disease or illness. For example, a urinary tract infection can make urine look cloudy instead of clear. Increased levels of protein in urine can be a sign of kidney disease.
Query: what can urinalysis detect?
Golden Answer: Detect and assess a wide range of disorders, such as urinary tract infection, kidney disease and diabetes.
match-LSTM (Sequence): Urinalysis
match-LSTM (Boundary): Urinalysis is used to detect and assess a wide range of disorders, such as urinary tract infection, kidney disease and diabetes

Table 5: Prediction samples for the sequence and boundary models. The first case is sampled from the SQuAD dataset and the second is sampled from the MSMARCO dataset.

B APPENDIX

We show how four different models work on different types of questions in the SQuAD dataset in Table 6. After analyzing a hundred cases, we see that our models are not able to solve questions that need multi-sentence reasoning, and that the model without the attention mechanism is less able to identify the important key words, as in the third case shown in Table 6.

(1) Context: The Rankine cycle is sometimes referred to as a practical Carnot cycle because, when an efficient turbine is used, the TS diagram begins to resemble the Carnot cycle.
Question (Synonymy): What is the Rankine cycle sometimes called?
Golden Answer: practical Carnot cycle
LSTM (Sequence): Carnot cycle
match-LSTM (Sequence): Carnot cycle
LSTM (Boundary): practical Carnot cycle
match-LSTM (Boundary): Carnot cycle

(2) Context: While the Commission has a monopoly on initiating legislation, the European Parliament and the Council of the European Union have powers of amendment and veto during the legislative process.
Question (Knowledge): Which two governing bodies have legislative veto power?
Golden Answer: the European Parliament and the Council of the European Union
LSTM (Sequence): European Parliament and the Council of the European Union
match-LSTM (Sequence): European Parliament and the Council of the European Union
LSTM (Boundary): European Parliament and the Council of the European Union
match-LSTM (Boundary): European Parliament and the Council of the European Union

(3) Context: Current faculty include the anthropologist Marshall Sahlins, historian Dipesh Chakrabarty, ... Shakespeare scholar David Bevington, and renowned political scientists John Mearsheimer and Robert Pape.
Question (Syntactic): What Shakespeare scholar is currently on the university's faculty?
Golden Answer: David Bevington
LSTM (Sequence): Marshall Sahlins
match-LSTM (Sequence): David Bevington
LSTM (Boundary): Marshall Sahlins
match-LSTM (Boundary): David Bevington

(4) Context: The V&A Theatre & Performance galleries, formerly the Theatre Museum, opened in March 2009. The collections are stored by the V&A, and are available for research, exhibitions and other shows. They hold the UK's biggest national collection of material about live performance in the UK since Shakespeare's day, covering drama, dance, musical theatre, circus, music hall, rock and pop, and most other forms of live entertainment.
Question (Reasoning): What collection does the V&A Theatre & Performance galleries hold?
Golden Answer: material about live performance
LSTM (Sequence): Theatre
match-LSTM (Sequence): the Theatre Museum
LSTM (Boundary): research, exhibitions and other shows
match-LSTM (Boundary): Theatre Museum

(5) Context: Along with giving the offender his "just deserts", achieving crime control via incapacitation and deterrence is a major goal of criminal punishment.
Question (Ambiguous): What is the main goal of criminal punishment of civil disobedients?
Golden Answer: achieving crime control via incapacitation and deterrence
LSTM (Sequence): deterrence
match-LSTM (Sequence): just deserts
LSTM (Boundary): incapacitation and deterrence
match-LSTM (Boundary): incapacitation and deterrence

Table 6: Samples of different reasoning types in the SQuAD dataset. "match-LSTM" refers to "match-LSTM with Ans-Ptr" and "LSTM" refers to "LSTM with Ans-Ptr", the ablation of the attention mechanism in match-LSTM.
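The sequence/boundary contrast in Tables 5 and 6 can be made concrete with a small decoding sketch. The following is a minimal illustration of ours (not the authors' code; the tensor names and the use of NumPy are assumptions): the sequence model keeps emitting passage positions until a special stop symbol wins, which is where early stop prediction truncates answers, while the boundary model only selects a start and an end position, so its answers always form a complete span.

```python
import numpy as np

def decode_sequence(position_probs, stop_idx):
    """Greedy decoding for the sequence model: position_probs has shape
    (max_answer_len, passage_len + 1), where the extra column is the stop
    symbol. Decoding halts as soon as the stop symbol wins, which is why
    the predicted answers tend to be short (early stop prediction)."""
    answer = []
    for step_probs in position_probs:
        k = int(np.argmax(step_probs))
        if k == stop_idx:
            break
        answer.append(k)
    return answer

def decode_boundary(start_probs, end_probs, max_span_len=15):
    """One simple decoding for the boundary model: pick the (start, end)
    pair maximizing the probability product, subject to end >= start."""
    best, best_score = (0, 0), -1.0
    for s, p_s in enumerate(start_probs):
        for e in range(s, min(s + max_span_len, len(end_probs))):
            score = p_s * end_probs[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best  # inclusive token indices into the passage

# Toy usage: a 5-token passage, stop symbol at index 5.
seq = decode_sequence(np.random.dirichlet(np.ones(6), size=4), stop_idx=5)
span = decode_boundary(np.random.dirichlet(np.ones(5)),
                       np.random.dirichlet(np.ones(5)))
```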
LEARNING TO REPEAT: FINE GRAINED ACTION REPETITION FOR DEEP REINFORCEMENT LEARNING

Sahil Sharma, Aravind S. Lakshminarayanan, Balaraman Ravindran
Indian Institute of Technology, Madras
Chennai, 600036, India
{sahil, ravi}@cse.iitm.ac.in
aravindsrinivas@gmail.com

ABSTRACT

Reinforcement Learning algorithms can learn complex behavioral patterns for sequential decision making tasks wherein an agent interacts with an environment and acquires feedback in the form of rewards sampled from it. Traditionally, such algorithms make decisions, i.e., select actions to execute, at every single time step of the agent-environment interactions. In this paper, we propose a novel framework, Fine Grained Action Repetition (FiGAR), which enables the agent to decide the action as well as the time scale of repeating it. FiGAR can be used for improving any Deep Reinforcement Learning algorithm which maintains an explicit policy estimate, by enabling temporal abstractions in the action space. We empirically demonstrate the efficacy of our framework by showing performance improvements on top of three policy search algorithms in different domains: Asynchronous Advantage Actor Critic in the Atari 2600 domain, Trust Region Policy Optimization in the MuJoCo domain and Deep Deterministic Policy Gradients in the TORCS car racing domain.

1 INTRODUCTION

Reinforcement learning (RL) is used to solve goal-directed sequential decision making problems wherein explicit supervision in the form of correct decisions is not provided to the agent, but only evaluative feedback in the form of rewards sampled from the environment. RL algorithms model goal-directed sequential decision making problems as Markov Decision Processes (MDPs) [Sutton & Barto (1998)]. However, for problems with an exponential or continuous state space, tabular RL algorithms that maintain value or policy estimates for every state become infeasible. Therefore, there is a need to be able to generalize decision making to unseen states. Recent advances in representation learning through deep neural networks provide an efficient mechanism for such generalization [LeCun et al. (2015)]. Such a combination of representation learning through deep neural networks with reinforcement learning objectives has shown promising results in many sequential decision making domains such as the Atari 2600 domain [Bellemare et al. (2013); Mnih et al. (2015); Schaul et al. (2015); Mnih et al. (2016)], the MuJoCo simulated physics tasks domain [Todorov et al. (2012); Lillicrap et al. (2015)], the Robosoccer domain [Hausknecht et al. (2016)] and the TORCS domain [Wymann et al. (2000); Mnih et al. (2016)]. Often, MDP settings consist of an agent interacting with the environment at discrete time steps. A common feature shared by all the Deep Reinforcement Learning (DRL) algorithms above is that they repeatedly execute a chosen action for a fixed number of time steps $k$. If $a_t$ represents the action taken at time step $t$, then for the said algorithms

$$a_1 = a_2 = \cdots = a_k, \quad a_{k+1} = a_{k+2} = \cdots = a_{2k},$$

and in general $a_{ik+1} = a_{ik+2} = \cdots = a_{(i+1)k}$ for $i \geq 0$. Action repetition allows these algorithms to compute the action once every $k$ time steps and hence operate at higher speeds, thus achieving real-time performance. This also offers other advantages such as smooth action policies. More importantly, as shown in Lakshminarayanan et al. (2017) and Durugkar et al.
(2016), macro-actions constituting the same action repeated $k$ times can be interpreted as introducing temporal abstractions in the induced policies, thereby enabling transitions between temporally distant advantageous states.

Figure 1: FiGAR induces temporal abstractions in learnt policies ((a) Freeway, (b) Sea Quest). The arrows indicate the action executed between the frames and the numbers depict the number of time steps for which the action was repeated. The thunderbolt corresponds to the firing action. An arrow alongside a thunderbolt corresponds to the action (arrow + fire). In figure (a), the agent learns to execute the down operation (which is equivalent to a no-op in this particular state, in this game) until a traveling car passes by, and then executes temporally elongated actions to complete the task, skillfully avoiding the red car in the 7th frame. In figure (b) the agent catches a glimpse of a pink opponent towards the bottom right in the 2nd frame and executes temporally elongated actions to intercept and kill it (in the 6th frame).

The time scale for action repetition has largely been static in DRL algorithms until now [Mnih et al. (2015; 2016); Schaul et al. (2015)]. Lakshminarayanan et al. (2017) are the first to explore dynamic time scales for action repetition in the DRL setting and show that it leads to significant improvement in performance on a few Atari 2600 games. However, they choose only two time scales and the experiments are limited to a few representative games. Moreover, the method is limited to tasks with a discrete action space.

We propose FiGAR, a framework that enables any DRL algorithm, regardless of whether its action space is continuous or discrete, to learn temporal abstractions in the form of temporally extended macro-actions. FiGAR uses a structured and factored representation of the policy whereby the policy for choosing the action is decoupled from that for the action repetition selection. Note that deciding actions and action repetitions independently enables us to find temporal abstractions without blowing up the action space, unlike Vezhnevets et al. (2016) and Lakshminarayanan et al. (2017). The contribution of this work is twofold. First, we propose a generic extension to DRL algorithms by coming up with a factored policy representation for temporal abstractions (see Figure 1 for sequences of macro-actions learnt in 2 Atari 2600 games). Second, we empirically demonstrate FiGAR's efficiency in improving policy gradient DRL algorithms, with improvements in performance over several domains: 31 Atari 2600 games with Asynchronous Advantage Actor Critic [Mnih et al. (2016)], 5 tasks in the MuJoCo simulated physics tasks domain with Trust Region Policy Optimization [Schulman et al. (2015)] and the TORCS domain with Deep Deterministic Policy Gradients [Lillicrap et al. (2015)].

2 RELATED WORK

Our framework is centered on a very general idea of deciding only when necessary. There have been similar ideas outside the RL domain. For instance, Gu et al.
(2016) and Satija & Pineau (2016) explore real-time neural machine translation, where the action at every time step is to decide, based on the current context, whether or not to output a new token in the target language.

The Transition Point Dynamic Programming (TPDP) algorithm [Buckland & Lawrence (1994)] is a modification of the tabular dynamic programming paradigm that can reduce the learning time and memory required for control of continuous stochastic dynamic systems. This is done by determining a set of transition points in the underlying MDP; the policy changes only at these transition point states. The algorithm learns an optimal set of transition point states by using a variant of Q-Learning to evaluate whether or not to add/delete a particular state from the set of transition points. FiGAR learns the transition points in the underlying MDP on the fly, with generalization across the state space, unlike TPDP, which is tabular and infeasible for large problems.

The Dynamic Frameskip Deep Q-Network [Lakshminarayanan et al. (2017)] proposes to use multiple time scales of action repetition by augmenting the Deep Q-Network (DQN) [Mnih et al. (2015)] with separate streams of the same primitive actions corresponding to each time scale. This way, the time scale of action repetition is dynamically learned. Although this framework leads to a significant improvement in the performance on a few Atari 2600 games, it cannot support many time scales due to the potential explosion of the action space, and it is restricted to discrete action spaces. Durugkar et al. (2016) also explore learning macro-actions composed using the same action repeated for different time scales. However, their framework is limited to discrete action spaces and the performance improvements are not significant.

Learning temporally extended actions and abstractions has been of interest in RL for a long time. Vezhnevets et al. (2016) propose the Strategic Attentive Writer (STRAW) for learning macro-actions and building dynamic action plans directly from reinforcement learning signals. Instead of outputting a single action after each observation, STRAW maintains a multi-step action plan. The agent periodically updates the plan based on observations and commits to the plan between the re-planning steps. Although the STRAW framework represents a more general temporal abstraction than FiGAR, FiGAR should be seen as a framework that can complement STRAW, whereby the decision to repeat could now be hierarchical, at the plan and base-action levels.

FiGAR is a framework with a structured policy representation in which the time scale of execution can be thought of as parameterizing the chosen action. The only other work that explores parameterized policies in DRL is Hausknecht & Stone (2016), where discrete actions are parameterized by continuous values. In our case, discrete/continuous actions are parameterized by discrete values. The state spaces in Atari are also more sophisticated than the kind explored in Hausknecht et al. (2016).

FiGAR is also very naturally connected to the Semi-MDP (SMDP) framework. SMDPs are MDPs with durative actions. The assumption in SMDPs is that actions take some holding time to complete [Duff (1995); Mahadevan et al. (1997); Dietterich (2000)].
Typically, they are modeled with two distributions, one corresponding to the next state transition and the other corresponding to the holding time, which denotes the number of time steps between the current action from the policy and the next action from the policy. The reward over the entire holding time of an action is the credit assigned for picking the action. In our framework, we naturally have durative actions due to the policy structure, where the decision consists of both the choice of the action and the time scale of its execution. Therefore, we convert the original MDP to an SMDP trivially. In fact, we give more structure to the SMDP, because we are clear that we repeat the chosen action during the holding time, while what happens during the holding time is not specified in the SMDP framework. One can think of the part of the policy that outputs the probability distribution over the time scales as a holding time distribution. Therefore, our framework naturally fits into the SMDP definition, with the action repetition rate characterizing the holding time. We also sum up the rewards over the holding time with an appropriate discounting factor, as in an SMDP framework.

3 BACKGROUND

3.1 ASYNCHRONOUS ADVANTAGE ACTOR CRITIC

Actor-critic algorithms execute policy gradient updates by maintaining parametric estimates for the policy $\pi_{\theta_a}(a|s)$ and the value function $V_{\theta_c}(s)$ [Sutton & Barto (1998)]. The value function estimates are used to reduce the variance in the policy gradient updates.

Asynchronous Advantage Actor Critic (A3C) [Mnih et al. (2016)] learns policies based on asynchronous $n$-step returns. The $k$ learner threads execute $k$ copies of the policy asynchronously and the parameter updates are sent to a central parameter server at regular intervals. This ensures that temporal correlations are broken between subsequent updates, since the different threads possibly explore different parts of the state space in parallel. The objective function for policy improvement in A3C is:

$$L(\theta_a) = \log \pi_{\theta_a}(a_t|s_t)\,\big(G_t - V(s_t)\big)$$

where $G_t$ is an estimate of the return at time step $t$. The A3C algorithm uses $n$-step returns for estimating $G_t$, which is a biased estimate of $Q(s_t, a_t)$. Hence one can think of $G_t - V(s_t)$ as an estimate of $A(s_t, a_t)$, the advantage of taking action $a_t$ in state $s_t$. The value function $V_{\theta_c}(s_t)$ is updated by using the $n$-step TD error:

$$L(\theta_c) = \big(\hat{V}(s_t) - V_{\theta_c}(s_t)\big)^2$$

where $\hat{V}(s_t)$ is an estimate of the $n$-step return from the current state. In A3C, $j$-step returns are used, where $j \leq n$ and $n$ is a fixed hyper-parameter. For simplicity, assume that $t \leq n$. Then the definition of $\hat{V}(s_t)$ is:

$$\hat{V}(s_t) = \sum_{j=t}^{n-1} \gamma^{j-t} r_j + \gamma^{n-t} V(s_n)$$

The policy and value functions are parameterized by deep neural networks.
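As a concrete illustration of the $n$-step target above, the following is a minimal sketch (our own illustration, not code from the paper) of computing $\hat{V}(s_t)$ for every step of a rollout segment, given the rewards and the critic's bootstrap value at the segment's end:

```python
def n_step_targets(rewards, bootstrap_value, gamma=0.99):
    """Compute \hat{V}(s_t) for each t in a rollout segment.

    rewards: list [r_0, ..., r_{n-1}] collected under the current policy.
    bootstrap_value: V(s_n), the critic's estimate at the segment's end.
    Returns targets[t] = sum_{j=t}^{n-1} gamma^{j-t} r_j + gamma^{n-t} V(s_n).
    """
    targets = []
    running = bootstrap_value
    # Work backwards so each target reuses the one computed after it.
    for r in reversed(rewards):
        running = r + gamma * running
        targets.append(running)
    return list(reversed(targets))

# Example: a 3-step segment with r = [0, 0, 1] and V(s_3) = 1.0.
print(n_step_targets([0.0, 0.0, 1.0], bootstrap_value=1.0))
# [1.950399, 1.9701, 1.99]
```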
3.2 TRUST REGION POLICY OPTIMIZATION

TRPO [Schulman et al. (2015)] is a policy optimization algorithm. It performs constrained optimization of a surrogate loss function, with theoretical guarantees for monotonic policy improvement. The TRPO surrogate loss function $L$ for a potential next policy $\tilde{\pi}$ is:

$$L_{\pi}(\tilde{\pi}) = \eta(\pi) + \sum_s \rho_{\pi}(s) \sum_a \tilde{\pi}(a|s)\, A_{\pi}(s,a)$$

where $\theta_{old}$ are the parameters of policy $\pi$ and $\tilde{\theta}$ are the parameters of $\tilde{\pi}$. This surrogate loss function is optimized subject to the constraint:

$$D^{max}_{KL}(\pi, \tilde{\pi}) \leq \delta$$

which ensures that policy improvement can be done in non-trivial step sizes and, at the same time, that the new policy does not deviate much from the current policy, due to the KL-divergence constraint.

3.3 DEEP DETERMINISTIC POLICY GRADIENTS

According to the Deterministic Policy Gradient (DPG) Theorem [Lever (2014)], the gradient of the performance objective $J$ of the deterministic policy $\mu_\theta$ in continuous action spaces with respect to the policy parameters $\theta$ is given by:

$$\nabla_\theta J(\mu_\theta) = \int_S \rho^{\mu}(s)\, \nabla_\theta \mu_\theta(s)\, \nabla_a Q^{\mu}(s,a)\big|_{a=\mu_\theta(s)}\, ds = \mathbb{E}_{s \sim \rho^{\mu}}\left[\nabla_\theta \mu_\theta(s)\, \nabla_a Q^{\mu}(s,a)\big|_{a=\mu_\theta(s)}\right] \quad (1)$$

for an appropriately defined performance objective $J$. The DPG model built according to this theorem consists of an actor, which outputs an action vector in the continuous action space, and a critic model $Q(s,a)$, which evaluates the action chosen at a state. The DDPG algorithm [Lillicrap et al. (2015)] extends the DPG algorithm by introducing non-linear neural-network-based function approximators for the actor and critic.

4 FIGAR: FINE GRAINED ACTION REPETITION

FiGAR provides a DRL algorithm with the ability to model temporal abstractions by augmenting it with the ability to predict the number of time steps for which an action chosen for execution is to be repeated. This prediction is conditioned on the current state of the environment.

The FiGAR framework can be used to extend any DRL algorithm (say Z) which maintains an explicit policy. Let Z' denote the extension of Z under FiGAR. Z' has two independent, decoupled policy components: the policy $\pi_{\theta_a}$ for choosing actions and the policy $\pi_{\theta_x}$ for choosing action repetitions.

Algorithm 1 CreateFiGARZ
1: function MAKEFIGAR(DRLAlgorithm Z, ActionRepetitionSet W)
2:   $s_t$ ← state at time $t$
3:   $a_t$ ← action taken in $s_t$ at time $t$
4:   $\pi_{\theta_a}$ ← action policy of Z
5:   $f_{\theta_a}(s_t)$ ← action network for realizing action policy $\pi_{\theta_a}$
6:   $L(\theta_a, s_t, a_t)$ ← Z's objective function for improving $\pi_{\theta_a}$
7:   $\pi_{\theta_x}$ ← construct action repetition policy for FiGAR-Z
8:   $f_{\theta_x}(s_t)$ ← repetition network with output of size $|W|$ for action repetition policy $\pi_{\theta_x}$
9:   $L(\theta_x, s_t, a_t)$ ← $L$ evaluated at $\theta_x$
10:  $T(s_t, a_t)$ ← $L(\theta_x, s_t, a_t) \otimes L(\theta_a, s_t, a_t)$  // Total Loss
11:  return $T$, $f_{\theta_a}$, $f_{\theta_x}$

Algorithm 1 describes the generic framework for deriving a DRL algorithm Z' from algorithm Z. Let W stand for the set of all action repetitions that Z' would be able to perform. In traditional DRL algorithms, $W = \{c\}$, where $c$ is a constant: the action repetition is static and fixed. In FiGAR, the set of action repetitions from which Z' can choose is $W = \{w_1, w_2, \ldots, w_{|W|}\}$. The central idea behind FiGAR is that the objective function used to update the parameters $\theta_a$ of $\pi_{\theta_a}$ maintained by Z will be used to update the parameters $\theta_x$ of the action repetition policy $\pi_{\theta_x}$ of Z' as well (illustrated by the sharing of $L$ in Algorithm 1). In the first sub-section, we describe how Z' operates. In the next two sub-sections, we describe the instantiations of FiGAR extensions for 3 policy gradient DRL algorithms: A3C, TRPO and DDPG.
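To make the factored policy constructed by MAKEFIGAR concrete, the following is a minimal sketch of ours in PyTorch (PyTorch is our choice here, not the paper's; the layer sizes are illustrative). A shared trunk feeds two independent softmax heads, one over the $|A|$ actions and one over the $|W|$ repetition rates:

```python
import torch
import torch.nn as nn

class FiGARPolicy(nn.Module):
    """Factored FiGAR policy: pi_a over actions and pi_x over repetition
    rates share a trunk but have separate output heads, so the output
    layer grows by |A| + |W| rather than |A| * |W|."""

    def __init__(self, obs_dim, num_actions, repetition_set):
        super().__init__()
        self.repetition_set = repetition_set          # e.g. [1, 2, ..., 30]
        self.trunk = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.action_head = nn.Linear(128, num_actions)          # f_{theta_a}
        self.repeat_head = nn.Linear(128, len(repetition_set))  # f_{theta_x}

    def forward(self, obs):
        h = self.trunk(obs)
        return (torch.distributions.Categorical(logits=self.action_head(h)),
                torch.distributions.Categorical(logits=self.repeat_head(h)))

# One action decision: sample an action and how long to repeat it.
policy = FiGARPolicy(obs_dim=8, num_actions=4,
                     repetition_set=list(range(1, 31)))
pi_a, pi_x = policy(torch.randn(1, 8))
a = pi_a.sample()                                # which action
x = policy.repetition_set[pi_x.sample().item()]  # how many steps to repeat
```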
Notethatxj2fw1;w2;;wjWjg, the set of all allowed action repetitions.3. From time step 0untilx0,Z0executesa0.4. At time step x0,Z0again decides, based on current state s1and policy components(a(s1);x(s1)), the tuple of action to execute and the number of times for which to executeit,(a1;x1).5. It can seen that in general if Z0executes action akforxksuccessive time steps, the next action isdecided at time step t=kPi=0xion the basis of (a(sk+1);x(sk+1)), wheresk+1is the stateseen at time step t.4.2 F IGAR-A3CA3C usesfa(sj)andfc(sj)which represent the policy (ajsj)and the value function V(sj)respectively. (ajsj)is a vector of size equal to the action space of the underlying MDP whileV(sj)is a scalar. FiGAR extends the A3C algorithm as follows:1. Withsjdefined as in the previous sub-section, in addition to fa(sj)andfc(sj), FiGAR-A3C defines a neural network fx(sj). This neural network outputs a jWj-dimensional vectorrepresenting the probability distribution over the elements of the set W. The sampled time scalefrom this multinomial distribution decides how long the action decided with fa(sj)is repeated.The actor is now composed of both fa(sj)(action network) and fx(sj)(repetition network).5Published as a conference paper at ICLR 20172. The objective function for the actor is modified to be:L(a;x) = (logfa(ajsj) + logfx(xjsj))A(sj;a;x)whereA(sj;a;x)represents the advantage of executing action aforxtime steps at state sj. Thisimplies that for FiGAR-A3C the combination operator defined in Algorithm 1 is in fact scalaraddition.3. The objective function for the critic is the same except that estimated value function used in thetarget for the critic is changed as:^V(sj) =n1Xk=jykjrk+ynjV(sn)where we define y0= 0;yk=yk1+xk;k1and actionakwas repeated xktimes whenstateskwas encountered. Note that the return used in target is based on ndecision steps , stepsat which a potential change in actions executed takes place. It is not based on ntime steps.Note that point 2above implies that the action space has been extended by jWjand has a dimensionofjAj+jWj. It is only because of this factored representation of the FiGAR policy that the numberof parameters do not blow up. If one were to extend the action space in a naive way by couplingthe actions and the action repetitions, one would end up suffering the kind of action-space blow-up as seen in [Lakshminarayanan et al. (2017); Vezhnevets et al. (2016)] wherein for being able tocontrol with respect to jWjdifferent action repetition levels (or jWj-length policy plans in the caseof STRAW) , one would need to model jAjjWjactions or action-values which would blow up thefinal layer sizejWjtimes.4.3 F IGAR-TRPOAlthoughfa(sj)in A3C is generic enough to output continuous or discrete actions, we considerA3C only for discrete action spaces. Preserving the notation from the previous subsection, wedescribe FiGAR-TRPO where we consider the case of the output generated by the network fa(sj)to beAdimensional with each dimension being independent and describing a continuous valuedaction. The stochastic policy is hence modeled as a multi-variate Gaussian with diagonal co-variancematrix. The parameters of the mean as well as the co-variance matrix are together represented by aand the concatenated mean-covariance vector is represented by the function fa(sj). FiGAR-TRPOis constructed as follows:1. In TRPO,the objective function Lold(~)is constructed based on trajectories drawn according tothe current policy. 
4.3 FIGAR-TRPO

Although $f_{\theta_a}(s_j)$ in A3C is generic enough to output continuous or discrete actions, we consider A3C only for discrete action spaces. Preserving the notation from the previous subsection, we describe FiGAR-TRPO, where we consider the case of the output generated by the network $f_{\theta_a}(s_j)$ being $|A|$-dimensional, with each dimension being independent and describing a continuous-valued action. The stochastic policy is hence modeled as a multi-variate Gaussian with a diagonal covariance matrix. The parameters of the mean as well as the covariance matrix are together represented by $\theta_a$, and the concatenated mean-covariance vector is represented by the function $f_{\theta_a}(s_j)$. FiGAR-TRPO is constructed as follows:

1. In TRPO, the objective function $L_{\theta_{old}}(\tilde{\theta})$ is constructed from trajectories drawn according to the current policy. Hence, for FiGAR-TRPO the objective function is modified to be:
$$L_{\theta_{a,old},\, \theta_{x,old}}(\tilde{\theta}_a) \cdot L_{\theta_{a,old},\, \theta_{x,old}}(\tilde{\theta}_x)^{\beta_{ar}}$$
where $\theta_x$ are the parameters of the sub-network $f_{\theta_x}$ which computes the action repetition distribution. This implies that for FiGAR-TRPO the combination operator $\otimes$ defined in Algorithm 1 is, in some sense, scalar multiplication. $\beta_{ar}$ controls the relative learning rate of the core-policy parameters and the action repetition parameters.
2. The constraint in TRPO corresponding to the KL divergence between the old and new policies is modified to be:
$$D^{max}_{KL}(\pi_{\theta_a}, \tilde{\pi}_{\theta_a}) + \beta_{KL}\, D^{max}_{KL}(\pi_{\theta_x}, \tilde{\pi}_{\theta_x}) \leq \delta$$
where $\pi_{\theta_a}$ denotes the Gaussian distribution over the action to be executed and $\pi_{\theta_x}$ denotes the multinomial softmax-based action repetition probability distribution. $\beta_{KL}$ controls the relative divergence of $\pi_{\theta_x}$ and $\pi_{\theta_a}$ from the corresponding new policies. See Appendix C for an explanation of the loss function used.

4.4 FIGAR-DDPG

In this subsection, we present an extension of DDPG under the FiGAR framework. DDPG consists of $f_{\theta_a}(s_j)$, which denotes a deterministic policy $\mu(s)$ and is a vector of size equal to the action space of the underlying MDP, and $f_{\theta_c}(s_j, a_j)$, which denotes the critic network, whose output is a single number, the estimated state-action value function $Q(s_j, a_j)$. The FiGAR framework extends the DDPG algorithm as follows:

1. $f_{\theta_x}$ is introduced, similar to FiGAR-A3C. This implies that the complete policy for FiGAR-DDPG, $(\pi_{\theta_a}, \pi_{\theta_x})$, is computed by the tuple of neural networks $(f_{\theta_a}, f_{\theta_x})$. Similar to DDPG [Lillicrap et al. (2015)], FiGAR-DDPG has no loss function for the actor; the actor receives gradients from the critic. This is because the actor's proposed policy is directly fed to the critic, and the critic provides the actor with gradients which the proposed policy follows for improvement. In FiGAR-DDPG the total policy is a concatenation of the vectors $\pi_{\theta_a}$ and $\pi_{\theta_x}$. Hence the gradients for the total policy are also simply the concatenation of the gradients for the policies $\pi_{\theta_a}$ and $\pi_{\theta_x}$.
2. To ensure sufficient exploration, the exploration policy for action repetition is an $\epsilon$-greedy version of the behavioral action repetition policy. The action part of the policy, $f_{\theta_a}(s_j)$, continues to use temporally correlated noise for exploration, generated by an Ornstein-Uhlenbeck process (see Lillicrap et al. (2015) for details).
3. The critic is modeled by the equation:
$$f(s_j, a_j, x_j) = f_{\theta_c}\big(s_j,\, f_{\theta_a}(s_j),\, f_{\theta_x}(s_j)\big)$$
As stated above, $f_{\theta_x}$ is learnt by back-propagating the gradients produced by the critic with respect to $f_{\theta_x}$, in exactly the same way that $f_{\theta_a}$ is learnt. A sketch of this wiring is given below.
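The following is a minimal sketch (our own PyTorch illustration, with assumed layer sizes) of the FiGAR-DDPG wiring from point 3 above: the critic scores the concatenation of the observation, the deterministic action vector, and the repetition distribution, so maximizing its output back-propagates gradients into both actor heads.

```python
import torch
import torch.nn as nn

class FiGARDDPGActorCritic(nn.Module):
    """FiGAR-DDPG sketch: deterministic action head f_{theta_a}, softmax
    repetition head f_{theta_x}, and a critic f_{theta_c} scoring all three."""

    def __init__(self, obs_dim, act_dim, num_repeats, hidden=64):
        super().__init__()
        self.action_head = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh())               # f_{theta_a}
        self.repeat_head = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_repeats), nn.Softmax(dim=-1))  # f_{theta_x}
        self.critic = nn.Sequential(                             # f_{theta_c}
            nn.Linear(obs_dim + act_dim + num_repeats, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, obs):
        a, x = self.action_head(obs), self.repeat_head(obs)
        q = self.critic(torch.cat([obs, a, x], dim=-1))
        return a, x, q

model = FiGARDDPGActorCritic(obs_dim=29, act_dim=3, num_repeats=15)
a, x, q = model(torch.randn(2, 29))
# Maximizing q w.r.t. the actor parameters back-propagates critic
# gradients through both the action and the repetition heads.
(-q.mean()).backward()
```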
5 EXPERIMENTAL SETUP AND RESULTS

The experiments are designed to understand the answers to the following questions:

1. For different DRL algorithms, can FiGAR extensions learn to use dynamic action repetition?
2. How does FiGAR impact the performance of the different algorithms on various tasks?
3. Is FiGAR able to learn control with several different kinds of action repetition sets W?

Figure 2: Percentage improvement of FiGAR-A3C over A3C for Atari 2600.

In the next three sub-sections, we experiment with the simplest possible action repetition set $W = \{1, 2, \ldots, |W|\}$. In the fourth sub-section, we understand the effects that changing the action repetition set W has on the policies learnt.

5.1 FIGAR-A3C ON ATARI 2600

This set of experiments was performed with FiGAR-A3C in the Atari 2600 domain. The hyper-parameters were tuned on a subset of games (Beamrider, Breakout, Pong, Seaquest and Space Invaders) and kept constant across all games.

W is perhaps the most important hyper-parameter and reflects our confidence in the ability of a DRL agent to predict the future. Such a choice has to depend on the domain in which the DRL agent is operating. We only wanted to demonstrate the ability of FiGAR to learn temporal abstractions, and hence, instead of tuning for an optimal $|W|$, it was chosen to be 30, arbitrarily. The specific set of time scales we choose is $\{1, 2, 3, \ldots, 30\}$. FiGAR-A3C as well as A3C were trained for 100 million decision steps. They were evaluated in terms of the final policy learnt. Treating the score obtained by the A3C algorithm as the baseline ($b$), we calculated the percentage improvement ($i$) offered by FiGAR-A3C ($f$) as $i = \frac{f - b}{b}$. Figure 2 plots this metric against the game names. The improvement for Enduro and Atlantis is staggering: more than 900× and 35× respectively. Figure 2's y-axis has been clipped at 1000% to make it more presentable. Appendix A contains the experimental details and the raw scores obtained by both methods. Appendix B contains experiments on validating our setup.

Figure 3: Evaluation of action repetition control for Atari 2600. See Appendix B (Table 7) for an expanded version of this figure.

To answer the first question we posed, experiments were conducted to record the percentage of times that a particular action repetition was chosen. Figure 3 presents the action repetition distribution across a selection of games, chosen arbitrarily. The values have been rounded to 2 decimal places and hence do not sum to 1 for each game. Each game was played for 10 episodes, using the same policy used to calculate the average scores in Figure 2.

The two tables in Figure 3 together show that FiGAR-A3C generally prefers lower action repetitions but does come up with temporal abstractions in policy space (especially in games like Pong and Crazy Climber). Some such abstractions have been demonstrated in Figure 1. Such temporal abstractions do not always help general gameplay (Demon Attack). However, as can be seen from Figure 2, FiGAR-A3C outperforms A3C in 26 out of 33 games.

One could potentially think of FiGAR as a deep exploration framework by using the learnt policy $\pi_{\theta_a}$ to predict actions at every time step and completely discarding the action-repetition policy $\pi_{\theta_x}$ at evaluation time. Appendix F contains an empirical argument against such a usage of FiGAR and demonstrates that the temporal abstractions encoded by $\pi_{\theta_x}$ are indeed important for gameplay performance.

5.2 FIGAR-TRPO ON MUJOCO TASKS

In this sub-section we demonstrate that FiGAR-TRPO can learn to solve the MuJoCo simulated physics tasks reasonably successfully. Similar to FiGAR-A3C, $|W|$ is chosen to be 30 arbitrarily.

Table 1: Evaluation of FiGAR on MuJoCo

Domain  FiGAR-TRPO  TRPO
Ant  947.06 (28.35)  -161.93 (1.00)
Hopper  3038.63 (1.00)  3397.58 (1.00)
Inverted Pendulum  1000.00 (1.00)  971.66 (1.00)
Inverted Double Pendulum  8712.46 (1.01)  8327.75 (1.00)
Swimmer  337.48 (10.51)  364.55 (1.00)

The full policy $(f_{\theta_a}, f_{\theta_x})$ is trained jointly. The policies learnt after each TRPO optimization step (details in Appendix C) are compared to the current best-known policy to arrive at the overall best policy. The results in this sub-section are for this best policy. Table 1 compares the performance of TRPO and FiGAR-TRPO.
The number in the brackets is the average action repetition chosen. As can be seen from the table, FiGAR either learns policies which are much faster to execute, albeit at the cost of a slight loss in optimality, or it learns policies similar to the non-repetition case, with performance competitive with the baseline algorithm. This best policy was then evaluated on 100 episodes to arrive at the average scores contained in Table 1. TRPO is a difficult baseline in the MuJoCo tasks domain. On the whole, FiGAR outperforms TRPO in 3 out of 5 domains, although the gains are marginal in most tasks. Appendix C contains the experimental details. A video showing FiGAR-TRPO's learned behavior policies can be found at http://youtu.be/JiaO2tBtH-k.

5.3 FIGAR-DDPG ON TORCS

FiGAR-DDPG was trained and tested on the TORCS domain. $|W|$ was chosen to be 15 arbitrarily. FiGAR-DDPG manages to complete the race task flawlessly and finishes 20 laps of the circuit, after which the simulator stops. The total reward obtained by FiGAR-DDPG was 557929.68, as against 59519.70 obtained by DDPG. We also observed that FiGAR-DDPG learnt policies which were smoother than those learnt by DDPG. A video showing the learned driving behavior of the FiGAR-DDPG agent can be found at https://youtu.be/dX8J-sF-WX4. See Appendix D for experimental and architectural details.

5.4 EFFECT OF THE ACTION REPETITION SET ON FIGAR

This sub-section answers the third question raised at the beginning of this section in the affirmative. We demonstrate that there is nothing sacrosanct about the set of action repetitions $W = \{1, 2, \ldots, 30\}$ on which FiGAR-A3C performed well, and that the good performance carries over to other action repetition sets.

To demonstrate the generality of FiGAR with respect to W, we chose a wide variety of action repetition sets W, and trained and evaluated FiGAR-A3C variants which learn to repeat with respect to their respective action repetition sets. Table 3 describes the various FiGAR variants considered for these experiments in terms of their action repetition set W.

Note that the hyper-parameters of the various variants of FiGAR-A3C were not tuned; rather, the same ones obtained by tuning for FiGAR-30 were used. Table 2 contains a comparison of the raw scores obtained by the various FiGAR-A3C variants against the A3C baseline. It is clear that FiGAR is able to learn over any action repetition set W, and the performance does not fall by much even when hyper-parameters tuned for FiGAR-30 are used for the other variants. Appendix E contains additional graphs showing the evolution of average game scores against the number of training steps, as well as a bar graph visualization of Table 2.

Table 2: Comparison of FiGAR-A3C variants to the A3C baseline for 3 games: Sea Quest, Space Invaders and Asterix.
See Appendix E (Figure 7) for a bar graph visualization of this table.

Variant  Seaquest  Space Invaders  Asterix
FiGAR-50  22904.50  1929.50  7730.00
FiGAR-30-50  17103.60  1828.90  11090.00
FiGAR-P  20005.40  2047.40  10937.00
FiGAR-30  18076.90  2251.95  11949.00
FiGAR-20-30  14683.00  2310.70  8182.00
FiGAR-20  19148.50  1929.50  7730.00
Baseline  2769.40  1268.75  2364.00

Table 3: Description of FiGAR-A3C variants in terms of the action repetition set W.

Name  Description in terms of W
FiGAR-20  W = {1, 2, ..., 19, 20}
FiGAR-30  W = {1, 2, ..., 29, 30}
FiGAR-50  W = {1, 2, ..., 49, 50}
FiGAR-30-50  W = {30 numbers drawn randomly without replacement from {1, 2, ..., 50}}
FiGAR-20-30  W = {20 numbers drawn randomly without replacement from {1, 2, ..., 30}}
FiGAR-P  W = {p | p < 50, p ∈ P (the set of all primes)}

6 CONCLUSION, SHORTCOMINGS AND FUTURE WORK

We propose a light-weight framework (FiGAR) for improving current Deep Reinforcement Learning algorithms for policy optimization, whereby temporal abstractions are learned in the policy space. The framework is generic and applicable to DRL algorithms concerned with policy gradients for continuous as well as discrete action spaces, such as A3C, TRPO and DDPG. FiGAR maintains a structured policy wherein the action probability distribution is augmented with a probability distribution for choosing the time scale of repeating the chosen action. Our results demonstrate that FiGAR can be used to significantly improve current policy gradient and actor-critic algorithms, thereby learning better control policies across several domains by discovering optimal sequences of temporally elongated macro-actions.

Atari, TORCS and MuJoCo represent environments which are largely deterministic, with a minimal degree of stochasticity in the environment dynamics. In such highly deterministic environments we would expect FiGAR agents to build a latent model of the environment dynamics and hence be able to execute large action repetitions without dying. This is exactly what we see in a highly deterministic environment like the game Freeway: Figure 1 (a) demonstrates that the chicken is able to judge the speed of the approaching cars appropriately and cross the road in a manner which takes it to the goal without colliding with the cars, while avoiding them narrowly.

Having said that, the ability to stop an action repetition (or a macro-action) would certainly be very important in general, especially in stochastic environments. In our setup, we do not consider the ability to stop executing a macro-action that the agent has committed to. However, this is a necessary skill in the event of unexpected changes in the environment while executing a chosen macro-action. Thus, stop and start actions for stopping and committing to macro-actions can be added to the basic dynamic time scale setup for more robust policies. We believe this modification could work for more general stochastic worlds like Minecraft, and we leave it for future work.

ACKNOWLEDGMENTS

We used the open source implementation of A3C at https://github.com/miyosuda/async_deep_reinforce. We thank Volodymyr Mnih for giving valuable hyper-parameter information. We thank Aravind Rajeswaran (University of Washington) for very helpful discussions regarding, and feedback on, the MuJoCo domain tasks. The TRPO implementation was a modification of https://github.com/aravindr93/robustRL.
The DDPG implementation wasa modification of https://github.com/yanpanlau/DDPG-Keras-Torcs . We thankILDS ( http://web.iitm.ac.in/ilds/ ) for the compute resources we used for runningA3C experiments.REFERENCESMarc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning envi-ronment: An evaluation platform for general agents. Journal of Artificial Intelligence Research ,pp. 253–279, June 2013.Kenneth M Buckland and Peter D Lawrence. Transition point dynamic programming. Advances inneural information processing systems , pp. 639–639, 1994.Thomas G Dietterich. Hierarchical reinforcement learning with the maxq value function decompo-sition. 2000.Steven J Duff. Reinforcement learning methods for continuous-time markov decision problems.1995.Ishan P Durugkar, Clemens Rosenbaum, Stefan Dernbach, and Sridhar Mahadevan. Deep reinforce-ment learning with macro-actions. arXiv preprint arXiv:1606.04615 , 2016.Jiatao Gu, Graham Neubig, Kyunghyun Cho, and Victor OK Li. Learning to translate in real-timewith neural machine translation. arXiv preprint arXiv:1610.00388 , 2016.Matthew Hausknecht and Peter Stone. Deep reinforcement learning in parametrized action space.4th International Conference on Learning Representations , 2016.Matthew Hausknecht, Prannoy Mupparaju, Sandeep Subramanian, Shivaram Kalyanakrishnan, andPeter Stone. Half field offense: An environment for multiagent learning and ad hoc teamwork. InAAMAS Adaptive Learning Agents (ALA) Workshop , May 2016.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassinghuman-level performance on imagenet classification. In Proceedings of the IEEE InternationalConference on Computer Vision , pp. 1026–1034, 2015.Aravind S. Lakshminarayanan, Sahil Sharma, and Balaraman Ravindran. Dynamic action repetitionfor deep reinforcement learning. AAAI , 2017.Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature , 521(7553):436–444,2015.Guy Lever. Deterministic policy gradient algorithms. 2014.Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa,David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXivpreprint arXiv:1509.02971 , 2015.Sridhar Mahadevan, Nicholas Marchalleck, Tapas K Das, and Abhijit Gosavi. Self-improving fac-tory simulation using continuous-time average-reward reinforcement learning. 1997.V olodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Belle-mare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen,Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wier-stra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning.Nature , February 2015.11Published as a conference paper at ICLR 2017V olodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, TimHarley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcementlearning. In International Conference on Machine Learning , 2016.Harsh Satija and Joelle Pineau. Simultaneous machine translation using deep reinforcement learn-ing. ICML 2016 Workshop on Abstraction in Reinforcement Learning , 2016.Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. 4thInternational Conference on Learning Representations , 2015.John Schulman, Sergey Levine, Philipp Moritz, Michael I Jordan, and Pieter Abbeel. Trust regionpolicy optimization. 
CoRR, abs/1502.05477 , 2015.Richard S. Sutton and Andrew G. Barto. Introduction to reinforcement learning. MIT Press , 1998.Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control.In2012 IEEE/RSJ International Conference on Intelligent Robots and Systems , pp. 5026–5033.IEEE, 2012.Alexander Vezhnevets, V olodymyr Mnih, Simon Osindero, Alex Graves, Oriol Vinyals, John Aga-piou, et al. Strategic attentive writer for learning macro-actions. In Advances in Neural Informa-tion Processing Systems , pp. 3486–3494, 2016.Bernhard Wymann, E Espi ́e, C Guionneau, C Dimitrakakis, R Coulom, and A Sumner. Torcs, theopen racing car simulator. Software available at http://torcs. sourceforge. net , 2000.12Published as a conference paper at ICLR 2017APPENDIX A: E XPERIMENTAL DETAILS FOR FIGAR-A3CEXPERIMENTAL DETAILS AND RESULTSWe used the LSTM-variant of A3C [Mnih et al. (2016)] algorithm for FiGAR-A3C experiments.The async-rmsprop algorithm [Mnih et al. (2016)] was used for updating parameters with the samehyper-parameters as in Mnih et al. (2016). The initial learning rate used was 103and it was linearlyannealed to 0over 100 million steps. The nused inn-step returns was 20. Entropy regularizationwas used to encourage exploration, similar to Mnih et al. (2016). The for entropy regularizationwas found to be 0:02after hyper-parameter tuning, both for the action-policy faand the actionrepetition policy fx.Table 4: Game Playing Experiments on Atari 2600Name FiGAR-A3C A3CAlien 3138.50 (2864.91, 3412.08) 2709.20 (2499.41, 2918.98)Amidar 1465.70 (1406.18, 1525.21) 1028.34 (1003.11, 1053.56)Assault 1936.37 (1855.85, 2016.88) 1857.61 (1787.19, 1928.02)Asterix 11949.00 (11095.62, 12802.37) 2364.00 (2188.12, 2539.87)Atlantis 6330600.00 (6330600.00, 6330600.00) 163660.00 (-46665.38, 373985.38)Bank Heist 3364.60 (3342.10, 3387.09) 1731.40 (1727.94, 1734.85)Beam Rider 2348.78 (2152.19, 2545.36) 2189.96 (2062.89, 2317.02)Bowling 30.09 (29.74, 30.43) 16.88 (15.23, 18.52)Breakout 814.50 (789.97, 839.02) 555.05 (474.89, 635.20)Centipede 3340.35 (3071.70, 3608.99) 3293.33 (2973.14, 3613.51)Chopper command 3147.00 (2851.02, 3442.97) 4969.00 (4513.12, 5424.87)Crazy Climber 154177.00 (148042.35, 160311.64) 166875.00 (161560.18, 172189.81)Demon Attack 7499.30 (7127.85, 7870.74) 26742.75 (22665.02, 30820.47)Enduro 707.80 (599.16, 816.43) 0.77 (0.45, 1.09)Freeway 33.14 (33.01, 33.26) 17.68 (17.41, 17.94)Frostbite 309.60 (308.81, 310.38) 306.80 (304.67, 308.92)Gopher 12845.40 (11641.88, 14048.91) 9360.60 (8683.72, 10037.47)James Bond 478.0 (448.78, 507.21) 285.5 (268.62, 302.37)Kangaroo 48.00 (29.51, 66.48) 26.00 (12.81, 39.18)Koolaid 1669.00 (1583.58, 1754.42) 1136.0 (1065.36, 1206.64)Krull 1316.10 (1223.23, 1408.96) 1025.00 (970.77, 1079.22)Kung Fu Master 40284.00 ( 38207.21, 42360.78) 35717.00 (34288.21, 37145.78)Name this game 1752.60 (1635.77, 1869.42) 12100.80 (11682.64, 12518.95)Phoenix 5106.10 (5056.43, 5155.76) 5384.10 (5178.12, 5590.07)Pong 20.32 (20.17, 20.46) 19.46 (19.32, 19.59)Q-bert 18922.50 (17302.94, 20542.05) 25840.25 (25528.49, 26152.00)Road Runner 22907.00 ( 22283.32, 23530.67) 59540.00 (58835.01, 60244.98)Sea quest 18076.90 (16964.16, 19189.63) 2799.60 (2790.22, 2808.97)Space Invaders 2251.95 (2147.13, 2356.76) 1268.75 (1179.25, 1358.24)Star Gunner 51269.00 (48629.42, 53908.57) 39835.00 (36365.24, 43304.75)Time Pilot 11865.00 (11435.25, 12294.74) 8969.00 (8595.57, 9342.42)Tutankhamun 276.95 (274.22, 279.67) 252.82(241.38, 264.25)Wizard of Wor 6688.00 (5783.48, 7592.51) 
3230.00 (2355.75, 4104.24)

Since the Atari 2600 games tend to be quite complex, jointly learning a factored policy from random weight initializations proved to be less optimal compared to a more stage-wise approach. The approach we followed for training FiGAR-A3C was to first train the networks using the regular A3C objective function. This stage trains the action part of the policy $f_{\theta_a}$ and the value function $f_{\theta_c}$ for a small number of iterations with a fixed action repetition rate (in this stage, gradients are not back-propagated for $f_{\theta_x}$ and all action repetition predictions made are discarded). The next stage was to then train the entire architecture $(f_{\theta_a}, f_{\theta_x}, f_{\theta_c})$ jointly. This kind of non-stationary training objective ensures that we have a good value function estimator $f_{\theta_c}$ and a good action policy estimator $f_{\theta_a}$ before we start training the full policy $(f_{\theta_a}, f_{\theta_x})$ jointly. Every time FiGAR decides to execute action $a_t$ for $x_t$ time steps, we say one step of action selection has been made. Since the number of time steps for which an action is repeated is variable, training time is measured in terms of action selections carried out. The first stage of the training was executed for 20 million (a hyper-parameter we found by grid search) action selections (called steps from here onwards) and the next stage was executed for 80 million steps. In comparison, the baseline ran for 100 million steps (action selections).

Since a large entropy regularization was required to explore both components ($f_{\theta_a}$ and $f_{\theta_x}$) of the policy space, the policies learnt are more diffused than one would like them to be. Evaluation was done after every 1 million steps and followed a strategy similar to $\epsilon$-greedy: with probability $\epsilon = 0.1$, the action and the action repetition were drawn from the output distributions ($f_{\theta_a}$ and $f_{\theta_x}$ respectively), and with probability $1 - \epsilon$ the action (and, independently, the action repetition) with maximum probability was selected. This evaluation was done for 100 episodes or 100000 steps, whichever was smaller, to arrive at an average score.

Table 4 contains the raw scores obtained by the final FiGAR-A3C and A3C policies on 33 Atari 2600 games. The numbers inside the brackets depict the confidence interval at a confidence threshold of 0.95, calculated by averaging scores over 100 episodes. Table 5 contains scores for a competing method, STRAW [Vezhnevets et al. (2016)], which learns temporal abstractions by maintaining action plans, for the subset of games on which both FiGAR and STRAW were trained and tested. Note that the scores obtained by STRAW agents are averages over the top 5 performing replicas. We can infer from Tables 4 and 5 that FiGAR and STRAW are competitive with each other, with FiGAR clearly outperforming STRAW in Breakout and STRAW clearly outperforming FiGAR in Frostbite.

Table 5: Game playing experiments on Atari 2600 by STRAW [Vezhnevets et al. (2016)]

Name  STRAW  STRAW-e
Alien  2626  3230
Amidar  2223  2022
Breakout  344  386
Crazy Climber  143803  153327
Frostbite  4394  8108
Q-bert  20933  23892

Figure 4 demonstrates the evolution of the performance of FiGAR-A3C with training progress. It also contains the corresponding metrics for A3C to facilitate comparisons. In the 100-episode-long evaluation phase, we also keep track of the best episodic score and plot the best episode's score versus time, to get an idea of how far the learnt policy is from the best it could have been.

ARCHITECTURE DETAILS

We used the same low level architecture as Mnih et al.
(2016) which in turn uses the same low levelarchitecture as Mnih et al. (2015), except that the pre-LSTM hidden layer had size 256instead of512as in Mnih et al. (2016). Similar to Mnih et al. (2016) the Actor and Critic share all but onelayer. Hence all but the final layer of fa,fxandfcare the same. Each of the 3networks hasa different final layer with faandfxhaving a softmax-non linearity as output non-linearity, tomodel the multinomial distribution and the fc(critic)’s output being linear.14Published as a conference paper at ICLR 2017Figure 4: Training progress plotted versus time for Atari 2600APPENDIX B: A DDITIONAL EXPERIMENTS FOR ATARI 2600These additional experiments are geared at understanding the repercussions of the evaluation strat-egy chosen by us.THE CHOICE OF WHETHER TO BE GREEDY OR STOCHASTICNote that in Appendix A, we state that for evaluating the policy learnt by the agent, we simplychose to sample from the output probability distributions with probability 0:1and chose the optimalaction/action repetition with probability 0:9. This choice of 0:1might seem rather arbitrary. Hencewe conducted experiments to understand how well the agent performs as we shift more and morefrom choosing the maximal action( 0:1-greedy policy) towards sampling from output distributions(stochastic policy).Figure 5 demonstrates that the performance of FiGAR-A3C does not deteriorate significantly, incomparison to A3C, even if we always sample from policy distributions, for most of the games. Inthe cases that there is a significant deterioration, we believe it is due to the diffused nature of thepolicy distributions (action and action repetition) learnt. Hence, although our choice of evaluationscheme might seem arbitrary, it is in fact reasonable.15Published as a conference paper at ICLR 2017Figure 5: Average performance plotted against the probability with which we sample from finalpolicy distribution for Atari 2600 . Points toward the left side of a sub-graph depict average perfor-mance for a greedy version of a policy and those towards the right side depict performance for thestochastic version of the policy.PERFORMANCE VERSUS SPEED TRADEOFFThe previous discussion leads to a novel way to trade-off game-play performance versus speed.Figure 3 demonstrated that although FiGAR-A3C learns to use temporally elongated macro-actions,it does favor shorter actions for many games. Since the action repetition distribution xis diffused(as will be shown by Table 6), sampling from the distribution should help FiGAR choose largeraction repetition rates probably at the cost of optimality of game play.Table 6 demonstrates that this is exactly what FiGAR does. It was generated by playing 10episodes,or100000 steps, whichever is lesser and recording the fraction of times each action repetition waschosen. The policy used in populating table 6 was the stochastic policy (described in previous sub-section). 
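To make the two evaluation modes precise, here is a minimal sketch (our own illustration) of the $\epsilon$-greedy evaluation strategy described in Appendix A, which interpolates between the 0.1-greedy policy ($\epsilon = 0.1$) and the fully stochastic policy ($\epsilon = 1$). Whether the action and repetition draws share one coin flip or use independent ones is our assumption; the sketch uses independent flips.

```python
import numpy as np

def evaluation_decision(action_probs, repeat_probs, eps, rng):
    """Pick (action, repetition) for evaluation. With probability eps,
    sample from the corresponding output distribution; otherwise take
    its mode. eps = 0.1 gives the 0.1-greedy policy, eps = 1.0 the
    stochastic one."""
    if rng.random() < eps:
        a = rng.choice(len(action_probs), p=action_probs)
    else:
        a = int(np.argmax(action_probs))
    if rng.random() < eps:  # the repetition choice is made independently
        x = rng.choice(len(repeat_probs), p=repeat_probs)
    else:
        x = int(np.argmax(repeat_probs))
    return a, x

rng = np.random.default_rng(0)
a, x = evaluation_decision(np.array([0.7, 0.2, 0.1]),
                           np.array([0.5, 0.3, 0.2]), eps=0.1, rng=rng)
```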
Contrast Table 6 to Table 7 which is an expanded version of Figure 3.16Published as a conference paper at ICLR 2017Table 6: Distribution of Action Repetitions chosen when the policy (both aandx) is completelystochasticName 1-3 4-6 7-9 10-12 13-15 16-18 19-21 22-24 25-27 28-30Alien 0.33 0.15 0.13 0.11 0.13 0.07 0.03 0.02 0.014 0.01Amidar 0.19 0.14 0.10 0.08 0.08 0.07 0.06 0.08 0.09 0.12Assault 0.29 0.26 0.21 0.11 0.04 0.03 0.02 0.01 0.01 0.01Asterix 0.40 0.25 0.15 0.08 0.04 0.04 0.02 0.02 0.01 0.01Atlantis 0.25 0.16 0.11 0.09 0.08 0.06 0.05 0.07 0.06 0.08Bank Heist 0.950 0.04 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00Beam Rider 0.17 0.16 0.14 0.11 0.09 0.07 0.06 0.05 0.06 0.09Bowling 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.91Breakout 0.28 0.20 0.13 0.09 0.06 0.05 0.04 0.05 0.04 0.07Centipede 0.19 0.27 0.34 0.17 0.03 0.00 0.00 0.00 0.00 0.00Chpr Cmd 0.12 0.14 0.11 0.08 0.11 0.12 0.10 0.08 0.08 0.06Crzy Clmbr 0.34 0.06 0.03 0.51 0.02 0.01 0.01 0.01 0.01 0.01Dmn Attk 0.18 0.21 0.16 0.13 0.10 0.08 0.06 0.04 0.03 0.02Enduro 0.66 0.34 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00Pong 0.16 0.15 0.13 0.10 0.10 0.08 0.08 0.07 0.08 0.07Freeway 0.14 0.12 0.11 0.10 0.09 0.09 0.08 0.08 0.08 0.12Frostbite 0.33 0.16 0.08 0.07 0.05 0.03 0.03 0.02 0.07 0.14Gopher 0.41 0.15 0.23 0.07 0.04 0.03 0.02 0.02 0.01 0.01James Bond 0.12 0.11 0.10 0.10 0.10 0.09 0.11 0.09 0.09 0.10Kangaroo 0.10 0.10 0.11 0.10 0.11 0.10 0.10 0.10 0.09 0.09Koolaid 0.14 0.14 0.11 0.11 0.10 0.08 0.08 0.09 0.08 0.07Krull 0.92 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.00 0.00Kung Fu 0.32 0.15 0.10 0.10 0.08 0.06 0.05 0.05 0.05 0.04NTG 0.10 0.10 0.12 0.11 0.10 0.11 0.09 0.10 0.09 0.09Phoenix 0.32 0.15 0.11 0.07 0.06 0.05 0.06 0.06 0.07 0.05Pong 0.15 0.15 0.14 0.10 0.09 0.08 0.07 0.07 0.07 0.08Q-bert 0.40 0.30 0.06 0.03 0.02 0.02 0.01 0.01 0.01 0.14Road Runner 0.99 0.00 0.00 0.00 0.01 0.00 0.00 0.00 0.00 0.00Sea Quest 0.40 0.26 0.10 0.05 0.04 0.04 0.04 0.03 0.02 0.01Spc Invdr 0.33 0.16 0.11 0.07 0.06 0.05 0.04 0.04 0.06 0.09Star Gunner 0.42 0.31 0.14 0.06 0.03 0.01 0.01 0.01 0.01 0.00Time Pilot 0.14 0.16 0.15 0.12 0.09 0.07 0.07 0.06 0.06 0.08Tutankham 0.34 0.18 0.08 0.08 0.07 0.06 0.06 0.05 0.05 0.04Wzd of Wor 0.11 0.11 0.11 0.11 0.12 0.11 0.09 0.09 0.08 0.07Both Figure 3 and Table 7 were created using the 0:1-greedy policy described in previous sub-section. The reason that we compare the stochastic policy with the 0:1-greedy version instead of thefully-greedy version (wherein the optimal action and action repetition is always chosen) is that sucha policy would end up being deterministic would not be good for evaluations.It can hence be seen that FiGAR learns to trade-off optimality of game-play for speed by choosingwhether to sample from policy probability distributions ( aandx) with probability 1and thusbehave stochastically, or behave 0:1-greedily, and sample from the distributions with only a smallprobability. Table 6 can be compared to Figure 3 to understand how stochasticity in final policyaffects action repetition chosen. A clear trend can be seen in all games wherein the stochasticvariant of final policy learns to use longer and longer actions, albeit at a small cost of some loss inthe optimality of game-play (as shown by Figure 5).An expanded version of Figure 3 is presented as Table 7 for comparison with Table 6. 
As explainedin Appendix A, the policy used for populating Table 7 is such that it picks a greedy action (or actionrepetition) with probability 0:9and stochastically samples from output probability distributions withprobability 0:1.17Published as a conference paper at ICLR 2017Table 7: Distribution of Action Repetitions chosen when the policy (both aandx) is0:1-greedyName 1-3 4-6 7-9 10-12 13-15 16-18 19-21 22-24 25-27 28-30Alien 0.50 0.08 0.11 0.07 0.12 0.07 0.02 0.02 0.01 0.01Amidar 0.49 0.08 0.06 0.04 0.04 0.04 0.04 0.07 0.03 0.11Assault 0.45 0.26 0.15 0.06 0.02 0.02 0.02 0.01 0.01 0.01Asterix 0.50 0.33 0.09 0.04 0.01 0.01 0.01 0.00 0.00 0.00Atlantis 0.51 0.07 0.18 0.08 0.02 0.02 0.01 0.02 0.03 0.07Bank Heist 0.96 0.04 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00Beam Rider 0.34 0.31 0.13 0.04 0.05 0.03 0.01 0.02 0.02 0.06Bowling 0.01 0.91 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01Breakout 0.29 0.23 0.12 0.09 0.05 0.04 0.02 0.03 0.03 0.11Centipede 0.02 0.03 0.94 0.02 0.00 0.00 0.00 0.00 0.00 0.00Chpr Cmd 0.29 0.23 0.12 0.03 0.06 0.09 0.06 0.04 0.06 0.03Crzy Clmbr 0.55 0.04 0.01 0.38 0.01 0.01 0.01 0.00 0.00 0.00Dmn Attk 0.16 0.35 0.14 0.12 0.08 0.05 0.05 0.03 0.01 0.02Enduro 0.91 0.09 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00Freeway 0.15 0.18 0.09 0.09 0.06 0.07 0.05 0.06 0.07 0.18Frostbite 0.47 0.20 0.13 0.01 0.03 0.01 0.01 0.00 0.03 0.11Gopher 0.47 0.19 0.21 0.05 0.04 0.01 0.02 0.01 0.00 0.00James Bond 0.28 0.11 0.22 0.08 0.06 0.06 0.05 0.05 0.03 0.06Kangaroo 0.20 0.39 0.27 0.02 0.01 0.04 0.01 0.01 0.01 0.04Koolaid 0.36 0.15 0.19 0.06 0.06 0.06 0.05 0.02 0.03 0.04Krull 0.92 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.00 0.00Kung Fu 0.46 0.10 0.05 0.11 0.08 0.06 0.04 0.03 0.04 0.05NTG 0.01 0.01 0.91 0.01 0.01 0.01 0.01 0.01 0.01 0.01Phoenix 0.44 0.44 0.04 0.02 0.01 0.01 0.01 0.01 0.02 0.01Pong 0.19 0.16 0.13 0.13 0.06 0.09 0.04 0.05 0.07 0.10Q-bert 0.51 0.27 0.05 0.02 0.00 0.00 0.00 0.00 0.01 0.13Road Runner 1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00Sea Quest 0.59 0.19 0.06 0.02 0.02 0.03 0.05 0.03 0.01 0.00Spc Invdrs 0.42 0.18 0.11 0.06 0.04 0.02 0.02 0.02 0.03 0.10Star Gunner 0.59 0.31 0.06 0.02 0.00 0.01 0.00 0.00 0.00 0.00Time Pilot 0.580 0.14 0.11 0.05 0.03 0.01 0.01 0.01 0.02 0.04Tutankham 0.16 0.74 0.02 0.01 0.01 0.01 0.01 0.01 0.01 0.01Wzd of Wor 0.28 0.12 0.08 0.19 0.11 0.08 0.04 0.04 0.04 0.02Table 8 contains the average action repetition chosen in each of the games for the two FiGAR-variants. The same episodes used to populate Table 6 and 7 were used to fill Table 8. 
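Table 8 (below) reports per-game averages over the same evaluation logs; a sketch of that reduction (the layout of `logs` is our assumption):

    def average_repetition(reps):
        # Mean action-repetition length over an evaluation run; higher
        # means fewer decisions per frame, i.e. faster game play.
        return sum(reps) / len(reps)

    def table8_rows(logs):
        # logs: {game: {"stochastic": [reps...], "greedy01": [reps...]}};
        # returns {game: (stochastic_avg, greedy01_avg)} as in Table 8.
        return {game: (average_repetition(d["stochastic"]),
                       average_repetition(d["greedy01"]))
                for game, d in logs.items()}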
It can be seen that in most games, the stochastic variant of the policy learns to play at a higher speed, although this might result in some loss in optimality of game play, as demonstrated in Figure 5.

Table 8: Average Action Repetition comparison between stochastic and greedy policies
Name Stochastic 0.1-Greedy
Alien 8.43 6.87
Amidar 13.77 9.61
Assault 7.14 5.86
Asterix 6.53 4.22
Atlantis 11.68 7.20
Bank Heist 1.65 1.62
Beam Rider 12.47 7.68
Bowling 28.64 5.13
Breakout 10.14 9.93
Centipede 6.84 7.88
Chopper Command 13.76 9.58
Crazy Climber 8.00 5.74
Enduro 2.91 2.69
Demon Attack 10.23 8.59
Freeway 14.62 14.25
Frostbite 11.33 7.69
Gopher 6.68 5.33
James Bond 14.98 10.37
Kangaroo 15.07 7.84
Koolaid 13.66 8.48
Krull 3.83 3.12
Kung Fu Master 10.00 8.53
Name this Game 14.98 9.55
Phoenix 10.31 4.64
Pong 12.99 12.28
Q-bert 2.02 1.76
Road Runner 1.63 1.26
Sea Quest 6.98 5.33
Space Invaders 10.48 8.55
Star Gunner 5.21 3.69
Time Pilot 12.72 5.39
Tutankhamun 9.75 5.73
Wizard of Wor 14.27 9.87

APPENDIX C: EXPERIMENTAL SETUP FOR FIGAR-TRPO

EXPERIMENTAL DETAILS

FiGAR-TRPO and the corresponding baseline algorithm operate on low-dimensional feature-vector observations. The TRPO (and hence FiGAR-TRPO) algorithm operates in two phases. In the first phase (P1), K trajectories are sampled according to the current behavioral policy to construct the surrogate loss function. In the second phase (P2), a policy improvement step is performed by carrying out an optimization step on the surrogate loss function, subject to the KL-divergence constraint on the new policy. In our experiments, 500 such policy improvement steps were performed. K varies with the learning progress; the schedule for the value K takes in the next iteration of P1 is defined linearly in terms of the return in the last iteration of P1. Hence, if the return was large in the previous iteration of P1, a small number of episodes is used to construct the surrogate loss function in the current iteration. The best policy was found by keeping track of the average returns seen during the training phase P1. This policy was then evaluated on 100 episodes to obtain the average score of the TRPO policy learnt. The most important hyper-parameters for FiGAR-TRPO are ar and KL. Using a grid search over the set {0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64, 1.28}, we found the optimal hyper-parameters ar = 1.28 and KL = 0.64. These were tuned on all 5 tasks.

LOSS FUNCTION AND ARCHITECTURE

The tanh non-linearity is used throughout. The mean vector is realized using a 2-hidden-layer neural network (mean network) with hidden layer sizes (128, 64). The standard deviation is realized using a parameter layer (std-dev layer), which parameterizes the standard deviation but does not depend on the input. The concatenation of the output of the mean network and the std-dev layer forms the action policy fa, as described in Section 4. The action repetition function fx is realized using a 2-hidden-layer neural network (act-rep network) similar to the mean network, albeit with smaller hidden layer sizes: (128, 64). However, its output non-linearity is a softmax layer of size 30, as dictated by the value of W. The action repetition network was kept small to ensure that FiGAR-TRPO does not have significantly more parameters than TRPO.
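A minimal PyTorch sketch of the policy just described (module and variable names are ours, not the authors'): a two-hidden-layer mean network with tanh units, a state-independent log-standard-deviation parameter layer, and a separate action-repetition network ending in a softmax of size W = 30.

    import torch
    import torch.nn as nn

    class FigarTRPOPolicy(nn.Module):
        # f_a: Gaussian over continuous actions (mean network + input-
        # independent log-std parameters); f_x: categorical distribution
        # over W action repetitions.
        def __init__(self, obs_dim, act_dim, W=30):
            super().__init__()
            self.mean_net = nn.Sequential(
                nn.Linear(obs_dim, 128), nn.Tanh(),
                nn.Linear(128, 64), nn.Tanh(),
                nn.Linear(64, act_dim))
            # "Parameter layer": the log std-dev does not depend on the input.
            self.log_std = nn.Parameter(torch.zeros(act_dim))
            self.rep_net = nn.Sequential(
                nn.Linear(obs_dim, 128), nn.Tanh(),
                nn.Linear(128, 64), nn.Tanh(),
                nn.Linear(64, W))

        def forward(self, obs):
            mean = self.mean_net(obs)
            std = self.log_std.exp().expand_as(mean)
            rep_probs = torch.softmax(self.rep_net(obs), dim=-1)
            return mean, std, rep_probs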
The mean network, std-dev layer and act-rep network do not share any parameters or layers (see Appendix G for experiments on FiGAR-TRPO with shared layers).

When the Single Path method of construction is followed, the surrogate loss function in TRPO reduces to (Schulman et al., 2015):

L_{θ_old}(θ̃) = E_{s∼ρ_{θ_old}, a∼π_{θ_old}} [ (π_{θ̃}(a|s) / π_{θ_old}(a|s)) · Q_{θ_old}(s, a) ]

where the sampling distribution q is just the old behavioral policy π_{θ_old} (the defining characteristic of the Single-Path method) and ρ_{θ_old} is the improper discounted state visitation distribution.

The surrogate loss function for a factored policy such as that of FiGAR-TRPO is:

L_{θ_{a,old}, θ_{x,old}}(θ_a, θ_x) = E_{s,a,x} [ (π_{θ_a}(a|s) / π_{θ_{a,old}}(a|s)) · (π_{θ_x}(x|s) / π_{θ_{x,old}}(x|s)) · Q_{θ_{a,old}, θ_{x,old}}(s, a, x) ]

where s ∼ ρ_{θ_{a,old}, θ_{x,old}}, a ∼ π_{θ_{a,old}}, x ∼ π_{θ_{x,old}}, and π_{θ_a} = f_a, π_{θ_{a,old}} = f_{a,old}, π_{θ_x} = f_x, π_{θ_{x,old}} = f_{x,old}. This splitting of the probability distributions happens because the action policy f_a and the action-repetition policy f_x are independent probability distributions. The theoretically sound way to realize FiGAR-TRPO is to minimize the loss L_{θ_{a,old}, θ_{x,old}}(θ_a, θ_x). However, we found that in practice, optimizing a relaxed version of the objective function, namely the product

L_{θ_{a,old}, θ_{x,old}}(θ̃_a) · L_{θ_{a,old}, θ_{x,old}}(θ̃_x)^{ar},

works better. This leads to the FiGAR-TRPO objective defined in Section 4.3.

APPENDIX D: EXPERIMENTAL DETAILS FOR FIGAR-DDPG

EXPERIMENTAL DETAILS

The DDPG algorithm also operates on the low-dimensional (29-dimensional) feature-vector observations. The domain consists of 3 continuous actions: acceleration, brake and steering. The W hyper-parameter used in the main experiments was chosen, arbitrarily, to be 15. Unlike Lillicrap et al. (2015), we did not find it useful to use batch normalization, and hence it was not used. However, a replay memory of size 10,000 was used. Target networks were also used, with soft updates applied with τ = 0.001. Since DDPG is an off-policy actor-critic method, we need to ensure that sufficient exploration takes place. The use of an Ornstein-Uhlenbeck process (refer to Lillicrap et al. (2015) for details) ensured that exploration was carried out in the action-policy space. To ensure exploration in the action-repetition policy space, we adopted two strategies. First, an ε-greedy version of the policy was used during training, with ε annealed from 0.2 to 0 over 50,000 training steps. The algorithm was run for 40,000 training steps for the baselines as well as for FiGAR-DDPG. Second, with probability 1 − ε, instead of picking the greedy action repetition, we sampled from the output distribution fx(s).

ARCHITECTURAL DETAILS

Throughout the architecture, the hidden-layer non-linearity used was ReLU. All hidden-layer weights were initialized using the He initialization (He et al., 2015).

The actor network consists of a 2-hidden-layer neural network with hidden sizes (300, 600) (call the second hidden-layer representation h2). We learn two different output layers on top of this common hidden representation. fa was realized by transforming h2 with an output layer of size 3. The output neuron corresponding to the steering action used the tanh non-linearity, whereas those corresponding to acceleration and brake used the sigmoid non-linearity. The fx network was realized by transforming h2 using a softmax output layer of size |W|. The output of the actor network is thus a 3 + |W| = 18 dimensional vector.

The critic network takes as input the state vector (29-dimensional) and the action vector (18-dimensional). The critic is a 3-hidden-layer network of sizes (300, 600, 600). Similar to Lillicrap et al. (2015), actions were not included until the 2nd hidden layer of fc.
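A minimal PyTorch sketch of this critic (our reading of the text, not the authors' code): the 29-dimensional state enters the first hidden layer alone, and the 18-dimensional actor output is concatenated in at the second hidden layer.

    import torch
    import torch.nn as nn

    class FigarDDPGCritic(nn.Module):
        # State (29-d) is processed alone by the first layer; the action
        # vector (3 continuous actions + |W| = 15 repetition probabilities
        # = 18-d) joins at the second hidden layer, as in Lillicrap et al.
        def __init__(self, state_dim=29, action_dim=18):
            super().__init__()
            self.fc1 = nn.Linear(state_dim, 300)
            self.fc2 = nn.Linear(300 + action_dim, 600)
            self.fc3 = nn.Linear(600, 600)
            self.q = nn.Linear(600, 1)   # linear Q-value output

        def forward(self, state, action):
            h = torch.relu(self.fc1(state))
            h = torch.relu(self.fc2(torch.cat([h, action], dim=-1)))
            h = torch.relu(self.fc3(h))
            return self.q(h)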
The final output is linearand is trained using the TD-error objective function, similar to Lillicrap et al. (2015)21Published as a conference paper at ICLR 2017APPENDIX E: D ETAILS FOR FIGAR- VARIANTSFigure 6: Comparison of FiGAR-A3C variants to the A3C baseline for 2 games: Sea Quest andAsterixIt is clear from Figure 6 that even though FiGAR A3C needs to explore in 2separate action-spaces(those of primitive actions and the action repetitions), the training progress is not slowed down as aresult of this exploration, for any FiGAR variant.Figure 7: Comparison of FiGAR-A3C variants to the A3C baseline for 3 games: Sea Quest, SpaceInvaders and Asterix. Game scores have been scaled down by 1000 and rounded to 1decimal place.Table 2 contains final evaluation scores attained by various FiGAR variants. Figure 7 contains a bar-graph visualization of the same table to demonstrate the advantage of all FiGAR variants relative tothe baselines.22Published as a conference paper at ICLR 2017APPENDIX F: I MPORTANCE OF xOne could potentially use FiGAR at evaluation stage (after training has been completed) at an action-repetition rate of 1by picking every action according to aand completely discarding the learntrepetition policy x. Such a FiGAR variant is denoted as FiGAR-wo- x. We demonstrate thatFiGAR-wo-xis worse than FiGAR on most games and hence the temporal abstractions learnt byand encoded in xare indeed non-trivial and important for gameplay performance. Table 9 containsthe comparison between standard FiGAR agent and FiGAR-wo- x. Evaluation scheme is the sameas Appendix A.Table 9: Gameplay performance of FiGAR compared with FiGAR-wo- xName FiGAR FiGAR-wo- xAlien 3138.50 582.17Amidar 1465.70 497.90Assault 1936.37 1551.40Asterix 11949.00 780.00Atlantis 6330600.00 680890.00Bank Heist 3364.60 223.00Beam Rider 2348.78 3732.00Bowling 30.09 0.90Breakout 814.50 321.90Centipede 3340.35 3934.90Chopper Command 3147.00 2730.00Crazy Climber 154177.00 210.00Enduro 707.80 941.10Demon Attack 7499.30 6661.00Freeway 33.14 30.60Frostbite 309.60 308.00Gopher 12845.40 10738.00James Bond 478.0 320.00Kangaroo 48.00 40.00Koolaid 1669.00 2110.00Krull 1316.10 2076.00Kung Fu Master 40284.00 29770.00Name this Game 1752.60 1692.00Phoenix 5106.10 5266.00Pong 20.32 -21.00Road Runner 22907.00 23560.00Sea Quest 18076.90 18324.00Space Invaders 2251.95 1721.00Star Gunner 51269.00 55150.00Time Pilot 11865.00 11810.00Tutankhamun 276.95 182.20Wizard of Wor 6688.00 6160.00We observe that in 24 out of 33 games, xhelps the agent learn temporal abstractions which resultin a significant boost in performance compared to the FiGAR-wo- xagents.23Published as a conference paper at ICLR 2017APPENDIX G: S HARED REPRESENTATION EXPERIMENTS FOR FIGAR-TRPOSection 5:2contains results of experiments on FiGAR-TRPO. Appendix Ccontains the experimen-tal setup for the same. Throughout these experiments on FiGAR-TRPO the policy components faandfxdo not share any representations. This appendix contains experimental results in the settingwherein (faandfx)share all layers except the final one. This agent/network is denoted withthe name FiGAR-shared-TRPO. All the hyper-parameters are the same as those in Appendix Cex-ceptarandKLwhich were obtained through a grid-search similar to appendix C. These weretuned on all the 5tasks. The values for these hyper-parameters that we found to be optimal arear= 1:28andKL= 0:16. 
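The grid search mentioned here (and in Appendix C) can be sketched as follows; train_and_evaluate is a placeholder for one full FiGAR-TRPO run returning the average evaluation score.

    import itertools

    GRID = [0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64, 1.28]

    def grid_search(train_and_evaluate):
        # Exhaustive search over the (ar, KL) hyper-parameter pairs used
        # in Appendices C and G; returns the best-scoring setting.
        best = None
        for ar, kl in itertools.product(GRID, GRID):
            score = train_and_evaluate(ar=ar, kl=kl)
            if best is None or score > best[0]:
                best = (score, ar, kl)
        return best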
The same training and evaluation regime as in Appendix C was used. The performance of the best policy learnt is tabulated in Table 10.

Table 10: Evaluation of FiGAR with shared representations for fa and fx on MuJoCo
Domain FiGAR-TRPO FiGAR-shared-TRPO TRPO
Ant 947.06 (28.35) 1779.72 (7.99) -161.93 (1.00)
Hopper 3038.63 (1.00) 2649.09 (2.07) 3397.58 (1.00)
Inverted Pendulum 1000.00 (1.00) 986.35 (1.00) 971.66 (1.00)
Inverted Double Pendulum 8712.46 (1.01) 9138.85 (1.00) 8327.75 (1.00)
Swimmer 337.48 (10.51) 340.74 (8.02) 364.55 (1.00)

FiGAR-shared-TRPO on the whole does not perform much better than FiGAR-TRPO. In these TRPO experiments, the neural networks we used were rather shallow, at only two hidden layers deep. We therefore believe that sharing layers leads to only small gains in the optimality of the policy learnt.
Under review as a conference paper at ICLR 2017

GROUP SPARSE CNNS FOR QUESTION SENTENCE CLASSIFICATION WITH ANSWER SETS

Mingbo Ma & Liang Huang
Department of EECS, Oregon State University, Corvallis, OR 97331, USA
{mam,liang.huang}@oregonstate.edu

Bing Xiang & Bowen Zhou
IBM Watson Group, T. J. Watson Research Center, IBM, Yorktown Heights, NY 10598, USA
{bingxia,zhou}@us.ibm.com

ABSTRACT
Classifying question sentences into their corresponding categories is an important task with wide applications, for example in many websites' FAQ sections. However, traditional question classification techniques do not fully utilize the well-prepared answer data, which has great potential for improving question representation and could lead to better classification performance. In order to encode answer information into question representation, we first introduce novel group sparse autoencoders, which utilize the group information in the answer set to refine question representation. We then propose a new group sparse convolutional neural network, which naturally learns question representations with respect to their corresponding answers by implanting the group sparse autoencoders into traditional convolutional neural networks. The proposed model shows significant improvements over strong baselines on four datasets.

1 INTRODUCTION
Question classification has applications in question answering (QA), dialog systems, etc., and has become increasingly popular in recent years. Most existing approaches to this problem simply use existing sentence modeling frameworks and treat questions as general sentences, without any special treatment. For example, several recent efforts employ convolutional neural networks (CNNs) to achieve remarkably strong performance in the TREC question classification task as well as in other sentence classification tasks such as sentiment analysis (Kim, 2014; Kalchbrenner et al., 2014; Ma et al., 2015).

We argue, however, that general sentence modeling frameworks neglect several unique properties of question classification not found in other sentence classification tasks (such as sentiment classification or sarcasm detection), which we detail below:

• The categories for most sentence classification tasks are flat and coarse (notable exceptions such as the Reuters Corpus RCV1 (Lewis et al., 2004) notwithstanding), and in many cases even binary (e.g. sarcasm detection). However, question sentences commonly belong to multiple categories, and these categories often have a hierarchical (tree or DAG) structure, such as those from the New York State DMV FAQ section¹ in Fig. 1.

• Question sentences from different categories often share similar information or language patterns. This phenomenon becomes more obvious when categories are hierarchical. Fig. 2 shows one example of questions sharing similar information across different categories. These cross-category shared patterns are not only present in questions but can also be found in the answers corresponding to these questions.

• Another unique characteristic of question classification is the well-prepared answer set, with detailed descriptions or instructions for each corresponding question category. These answer sets generally cover a broader range of vocabulary (than the questions themselves) and carry more distinctive semantic meaning for each class.
We believe there is great potential to enhance the representation of questions with extra information from the corresponding answer sets.

¹http://nysdmv.custhelp.com/app/home

1: Driver License/Permit/Non-Driver ID
a: Apply for original (49 questions)
b: Renew or replace (24 questions)
...
2: Vehicle Registrations and Insurance
a: Buy, sell, or transfer a vehicle (22 questions)
b: Registration and title requirements (42 questions)
...
3: Driving Record / Tickets / Points
...
Figure 1: Examples from the NYDMV FAQ section. There are 8 top-level categories, 47 sub-categories, and 537 questions (388 unique questions; many questions fall into multiple categories).

Category: Finance
Q: How to get a personal loan from the bank?
Category: Education
Q: What are the steps for applying for a student loan?
Figure 2: Examples of questions from two different categories. These questions ask about similar problems even though they are in different classes. Their answers also contain similar information.

To exploit the hierarchical and overlapping structures in question categories and the extra information from answer sets, we consider dictionary learning (Aharon et al., 2005; Roth & Black, 2005; Lee et al., 2007; Candès & Wakin, 2008; Kreutz-Delgado et al., 2003; Rubinstein et al., 2010), a common approach for representing samples drawn from vast, correlated groups with external information. This learning procedure first builds a dictionary with a series of grouped bases. These bases can be initialized randomly or from external data (from the answer set, in our case) and optimized during training through Sparse Group Lasso (SGL) (Simon et al., 2013). Many promising improvements have recently been achieved by such grouped-dictionary learning-based methods (Zhao et al., 2016; Rao et al., 2016). We also showcase some preliminary experiments on question classification with SGL in Section 6; the performance is indeed extraordinary compared with the baselines, but still loses to the CNN-based method. Considering the unique advantages of the SGL-based and CNN-based models, we believe that the performance of question classification would receive another boost if we could put the SGL-based and CNN-based models within the same end-to-end framework. This requires us to design a new neural-based model which behaves similarly to SGL.

Based on the above observations, we first propose novel Group Sparse Autoencoders (GSA). The objectives of GSA and SGL are very similar. The encoding matrix of GSA (like the dictionary in SGL) is grouped into different categories. The bases in different groups can be initialized either randomly or by the sentences in the corresponding answer categories. Each question sentence is then reconstructed by bases from only a few groups. To the best of our knowledge, GSA is the first fully neural network based model with group sparse constraints. GSA can use either linear or nonlinear encoding and decoding, while SGL is restricted to be linear. In order to incorporate the advantages of both GSA and CNNs, we then propose new Group Sparse Convolutional Neural Networks (GSCNNs) by implanting the GSA into CNNs between the convolutional layer and the classification layer. GSCNNs form a jointly trained, end-to-end neural framework for obtaining question representations under group sparse constraints from both the answer and question sets.
Experiments show significant improvements over strong baselines on four datasets.

2 PRELIMINARIES: SPARSE AUTOENCODERS
We first review basic autoencoders and sparse autoencoders to establish the mathematical notation. We then propose our new autoencoder with group sparse constraints in a later section.

2.1 BASIC AUTOENCODERS
As introduced in (Bengio et al., 2007), an autoencoder is an unsupervised neural network which learns hidden representations of input samples. An autoencoder takes an input instance z ∈ R^d and maps it into a hidden space in the form h ∈ R^s through a deterministic mapping function h = Φ_θ(z) = Φ(Wz + b), where θ = {W, b}, W is a d × s projection matrix, and b is the bias term. The projection function Φ can be linear or non-linear (such as the sigmoid). This projection is often called the encoding process. The encoded hidden representation is then mapped back to the original input space to reconstruct a vector ẑ ∈ R^d with the function ẑ = Φ_θ'(h) = Φ(W'h + c), where θ' = {W', c}. The reverse projection matrix W' may optionally be constrained by W' = W^T. This reverse operation is the decoding process, which tries to reconstruct a vector ẑ such that the difference between ẑ and z is as small as possible, by minimizing the average reconstruction error:

J(W, b, c) = argmin_{W,b,c} (1/n) Σ_{i=1}^n L(z^(i), ẑ^(i)) = argmin_{W,b,c} (1/n) Σ_{i=1}^n L(z^(i), Φ_{W^T,c}(Φ_{W,b}(z^(i))))    (1)

where L is a loss function such as the mean squared error L(z, ẑ) = ||z − ẑ||². Depending on the application, this loss function can also be defined as the reconstruction cross-entropy between z and ẑ:

L_C(z, ẑ) = − Σ_{k=1}^d ( z_k log ẑ_k + (1 − z_k) log(1 − ẑ_k) )

When the dimensionality s of the hidden space is smaller than the dimensionality d of the input space, the network is forced to learn a compressed representation of the input. If there is structure or feature correlation in the data, linear autoencoders often end up learning a low-dimensional representation like PCA. Most of the time, autoencoders learn a compressed representation when the number of hidden units s is small. However, when the number of hidden units becomes larger than the dimensionality of the input space, interesting structure can still be discovered by imposing other constraints on the network. The sparsity constraint discussed next is one of them.

2.2 SPARSE AUTOENCODERS
Sparse autoencoders (Ng, 2011; Makhzani & Frey, 2014) show interesting results for visualizing the hidden layers. Recall that h_j^i represents the activation of the j-th hidden unit for a given input z^i. The average activation of hidden unit j (averaged over the training batch) can be defined as:

ρ̂_j = (1/m) Σ_{i=1}^m h_j^i    (2)

where m is the number of samples in the training batch. The goal of sparse autoencoders is to enforce the constraint

ρ̂_j = ρ    (3)

where ρ is the sparsity parameter, which controls how sparse the hidden representation should be. Typically ρ is set to a small value close to zero, so that the activations of the hidden layer must mostly be close to 0.

In order to achieve this objective, an extra penalty term is added to the optimization function, which encourages reconstructing the original input with as few hidden-layer activations as possible. The most commonly used penalty term (Ng, 2011) is:

Σ_{j=1}^s KL(ρ || ρ̂_j) = Σ_{j=1}^s [ ρ log(ρ/ρ̂_j) + (1 − ρ) log((1 − ρ)/(1 − ρ̂_j)) ]    (4)

where s is the number of units in the hidden layer and j is the index of the hidden unit. This penalty term is based on the KL divergence, which measures the difference between two distributions.
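A compact numpy sketch of Eqs. 2-4 (our illustration, with a sigmoid encoder; W, b and the batch Z are placeholders): it computes the batch-averaged activations and the KL-divergence sparsity penalty.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def kl_sparsity_penalty(Z, W, b, rho=0.05):
        # Z: (m, d) batch of inputs; W: (d, s) projection; b: (s,) bias.
        H = sigmoid(Z @ W + b)          # hidden activations, shape (m, s)
        rho_hat = H.mean(axis=0)        # Eq. 2: average activation per unit
        rho_hat = np.clip(rho_hat, 1e-8, 1 - 1e-8)
        # Eq. 4: sum of KL(rho || rho_hat_j) over the s hidden units
        return np.sum(rho * np.log(rho / rho_hat)
                      + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))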
Then the new objective of the sparse autoencoder is defined as:

J_sparse(W, b, c) = J(W, b, c) + α Σ_{j=1}^s KL(ρ || ρ̂_j)    (5)

where J(W, b, c) is defined in Eq. 1, and α controls the weight of the sparsity penalty term. Note that the term ρ̂_j is implicitly controlled by W, b and c. This is one of the differences between sparse autoencoders and sparse coding, which will be discussed in detail in Section 6.

3 GROUP SPARSE AUTOENCODERS
As described above, the sparse autoencoder has an objective similar to sparse coding, which tries to find sparse representations of input samples. Inspired by group sparse lasso (Yuan & Lin, 2006) and sparse group lasso (Simon et al., 2013), we propose novel Group Sparse Autoencoders (GSA) in this paper. Different from sparse autoencoders, in GSA the weight matrix is categorized into different groups, and for a given input, GSA reconstructs the input signal with activations from only a few groups. Similar to the average activation defined in Eq. 2 for sparse autoencoders, in GSA we define the average activation of each group in the hidden layer as:

η̂_p = (1/(mg)) Σ_{i=1}^m Σ_{l=1}^g ||h^i_{p,l}||_2    (6)

where g is the number of hidden units in each group and m is the number of samples in the training batch. η̂_p first sums up all the activations within the p-th group, then averages the p-th group's response across the different samples' hidden activations.

Similar to Eq. 4, we also use the KL divergence to measure the difference between the estimated intra-group activation and the target group sparsity:

Σ_{p=1}^G KL(η || η̂_p) = Σ_{p=1}^G [ η log(η/η̂_p) + (1 − η) log((1 − η)/(1 − η̂_p)) ]    (7)

where G is the number of groups. When we only need inter-group constraints, the loss function of the autoencoder can be defined as:

J_gs(W, b, c) = J(W, b, c) + β Σ_{p=1}^G KL(η || η̂_p)    (8)

In certain cases, inter- and intra-group sparsity are preferred at the same time. The objective can then be defined as:

J_gs(W, b, c) = J(W, b, c) + α Σ_{j=1}^s KL(ρ || ρ̂_j) + β Σ_{p=1}^G KL(η || η̂_p)    (9)

The inter-group sparse autoencoder defined in Eq. 8 has similar functionality to group sparse lasso (Yuan & Lin, 2006). The inter- and intra-group sparse autoencoder defined in Eq. 9 behaves similarly to sparse group lasso (Simon et al., 2013). Different from the sparse coding approach, the encoding and decoding processes here can be nonlinear, while sparse coding is always linear.

Similar to sparse coding, the projection matrix in GSA works like a dictionary which includes all the bases necessary for reconstructing the input signal from the activations in the hidden layer. Different initialization methods for the projection matrix are described in Section 5.
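Extending the previous sketch to the group-level penalty of Eqs. 6-8 (again our illustration; default values are placeholders, and the hidden layer is assumed to be laid out as G contiguous groups of g units):

    import numpy as np

    def group_sparsity_penalty(H, G, eta=0.2):
        # H: (m, s) hidden activations with s = G * g units, group after
        # group. Eq. 6: average magnitude of each group over the batch
        # (the norm of a scalar activation is its absolute value).
        m, s = H.shape
        g = s // G
        eta_hat = np.abs(H.reshape(m, G, g)).mean(axis=(0, 2))
        eta_hat = np.clip(eta_hat, 1e-8, 1 - 1e-8)
        # Eq. 7: KL(eta || eta_hat_p) summed over the G groups
        return np.sum(eta * np.log(eta / eta_hat)
                      + (1 - eta) * np.log((1 - eta) / (1 - eta_hat)))

Adding this penalty (weighted by β) and, optionally, the unit-level penalty from the previous sketch (weighted by α) to the reconstruction loss gives the objectives of Eqs. 8 and 9.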
Figure 3: The input image with the handwritten digit 0 is shown in (a). Figure (b) is the visualization of the projection matrix W; different rows represent different groups of W in Eq. 9, and for each group we only show the first 15 (out of 50) bases. The red numbers on the left side of (b) are the indices of the different groups (10 groups in total). Figure (c) is the projection matrix visualization from a basic autoencoder.

3.1 VISUALIZATION FOR GROUP SPARSE AUTOENCODERS
In order to gain a better understanding of how GSA behaves, we use the MNIST dataset to visualize the internal parameters of GSA. We visualize the projection matrix in Fig. 3 and the corresponding hidden activations in Fig. 4. In our experiments, we use 10,000 samples for training. We set the size of the hidden layer to 500, with 10 different groups for GSA. We set the intra-group sparsity to 0.3 and the inter-group sparsity to 0.2; both penalty weights are set to 1. For comparison, we also train a basic autoencoder on the same 10,000 examples with random noise added to the input signal (a denoising autoencoder (Vincent et al., 2008)) for better hidden information extraction. We add the same 30% random noise to both models. Note that the number of groups in this experiment does not have to be 10; since this is an image dataset of digits, we could use fewer groups to train GSA.

In Fig. 3(b), we find similar patterns within each group. For example, the 8th group in Fig. 3(b) contains different forms of the digit 0, and the 9th group includes different forms of the digit 7. In contrast, it is difficult to discern any meaningful patterns in the projection matrix of the basic autoencoder in Fig. 3(c).

Fig. 4 shows the hidden activations with respect to the input image in Fig. 3(a). From the results, we can tell that most of the hidden-layer activations fall in groups 1, 2, 6 and 8, with the 8th group having the most significant activations. Referring these activations back to the projection matrix visualization in Fig. 3(b), the results are reasonable, since the 8th row has the patterns most similar to the digit 0.

Figure 4: The hidden activations h with respect to the input image in Fig. 3(a). The red numbers correspond to the group indices in Fig. 3(b). These activations come from 10 different groups; the group size here is 50.

GSA can be directly applied to small image data (e.g. the MNIST dataset) for pre-training. However, for tasks which prefer dense, semantic representations (e.g. sentence classification), we still need CNNs to learn the sentence representation automatically. In this scenario, in order to incorporate the advantages of both GSA and CNNs, we propose Group Sparse Convolutional Neural Networks in the following section.

Figure 5: Framework used in our model. We add an extra encoding layer to the CNNs. The sentence representation after the convolutional layer is denoted as z, and W is the projection matrix (functioning as a dictionary) in Eq. 9. The hidden group sparse representation of the question sentence is denoted as h. Different colors in the projection matrix represent different groups. We show W^T instead of W in the figure for better visualization. Darker color in h means larger value, and white means zero.

4 GROUP SPARSE CONVOLUTIONAL NEURAL NETS
Convolutional neural networks (CNNs) were first proposed by (LeCun et al., 1995) in computer vision. For a given image, CNNs apply convolution kernels to a series of continuous regions of the image. This concept was first adapted to NLP by (Collobert et al., 2011). Recently, many CNN-based techniques have achieved great success in sentence modeling and classification (Kim, 2014; Kalchbrenner et al., 2014; Ma et al., 2015). For simplicity, we use the sequential CNNs (Kim, 2014) as our baseline.
Therefore xi;jrefers to concatenated word vector from the i-th word tothe(i+j)-th word in sentence:xi;j=xixi+1 xi+j (10)A convolution operates a filter w2Rneto a window of nwords xi;i+nwith bias term b0described in Eq. 11to produce a new feature.ai=(wxi;i+n+b0) (11)whereis a non-linear activation function such as rectified linear unit (ReLu) or sigmoid function. The filterwis applied to each word in the sentence, generating the feature map a2RL:a= [a1;a2;;aL] (12)whereLis the length of the sentence.The convolution described in Eq. 11 can be regarded as feature detection: more similar patterns will returnhigher activation. In sequential CNNs, max-over-time pooling (Collobert et al., 2011; Kim, 2014) operatesover the feature map to get the maximum activation ^a= maxfagrepresenting the entire feature map. The ideais to detect the strongest activation over time. This pooling strategy also naturally deals with sentence lengthvariations.In order to capture different aspects of patterns, CNNs usually randomly initialize a set of filters with differentsizes and values. Each filter will generate a feature as described above. To take all the features generated by Ndifferent filters into count, we use z= [ ^a1;;^aN]as the final representation.6Under review as a conference paper at ICLR 2017In conventional CNNs, zwill be directly fed into classifiers after the sentence representation is obtained, e.g.fully connected neural networks in (Kim, 2014). There is no easy way for CNNs to explore the possible hiddenrepresentations with interesting underlaying structures.In order to obtains the hidden representations for each sentence representation, we proposed a Group SparseConvolutional Neural Networks (GSCNNs) by placing one extra layer between convolutional layer and classi-fication layer. This extra layer is trying to mimic the functionality of GSA that we introduced in Section 2.Our proposed framework is shown in Fig. 5. The convolutional layer show in Fig. 5 follows the traditionalconvolution process which is described previously. After the convolutional layer, we get zwhich is the featuremap for each sentence. The feature maps zis treated as the feature representation for each sentence. In stead ofdirectly feeding zinto a fully connected neural network for classification, we enforce the group sparse constraintonzlike the group sparse constraint we have on hin Eq. 9. Then, we use the hidden representation hin Eq. 9as new sentence representation. The last step is feeding the hidden representation hinto fully connected neuralnetwork for classification. The parameters W,b, andcin Eq. 9 will also be fine tunned during the last step.In order to improve the robustness of the hidden representation and prevent it from simply learning the iden-tity, we follow the idea of decisioning autoencoders (Vincent et al., 2008) to add random noise (10% in ourexperiments) into z. The training process of our model is similar to the training process in stack autoencoders(Bengio et al., 2007).In order to prevent the co-adaptation of the hidden unites, we employ random dropout on penultimate layer(Hinton et al., 2014). We set the drop out rate as 0:5and learning rate as 0:95by default. In our experiments,training is done through stochastic gradient descent over shuffled mini-batches with the Adadelta update rule(Zeiler, 2012). 
All the settings of the CNNs are the same as the settings in (Kim, 2014).5 E XPERIMENTSSince there has been little effort to use answer sets in question classification, we did not find any well-fitteddatasets which are publicly available. We collected two datasets and use other two well-known datasets in ourexperiments. The statistics of these datasets is summarized in Table 1. The descriptions of each dataset are asfollows:TREC The TREC dataset2is a factoid question classification dataset. The task is to classify each questioninto one of the 6 different question types (Li & Roth, 2002). The reason we include this factoidquestions dataset is to show the effectiveness of the proposed method in an frequently used dataseteven there is no categorized answer sets available.Insurance This is a private dataset which we collected from a car insurance company’s website. Eachquestion is classified into the 319 possible classes with corresponding answer data. All questionswhich belongs to the same category share the same answers. All answers are generated manually.Most questions have multiple assigned labels.DMV dataset We collected this dataset from New York State DMV’s FAQ website. We will make this datapublicly available in the future.Yahoo Ans The Yahoo! Answers dataset (Fleming et al., 2012; Shah & Pomerantz, 2010) is a publiclyavailable dataset.3There are more than 4 million questions with answers. For simplicity reasons, weonly randomly sample 8,871 questions from the complete dataset. There are 27 top level categoriesacross different domains. To make our task more realistic and challenging, we test the proposedmodel with respect to the subcategories and there are 678 classes.Datasets CtCsNdataNtestNans Multi-label ?TREC 6 50 5952 500 - NoInsurance - 319 1580 303 2176 YesDMV 8 47 388 50 2859 YesYahoo Ans 27 678 8871 3027 10365 NoTable 1: Summary of dataset statistics. Ctrepresent the number of top categories, and Csrepresentsthe number of sub-category. Note we only do top level classification on TREC. Ndatais dataset size.Ntestis the size for test set. Nansis the size of answer set.2http://cogcomp.cs.illinois.edu/Data/QA/QC/3http://webscope.sandbox.yahoo.com/catalog.php?datatype=l7Under review as a conference paper at ICLR 2017TREC Insurance DMVYahoo datasetsub top unseenCNNs 93.6 51.2 60 20.8 53.9 47WR 93.8 53.5 62 21.8 54.5 48WQ 94.2 53.8 64 22.1 54.1 48WA - 55.4 66 22.2 55.8 53Table 2: Experiments with four datasets. Baseline is from sequential CNNs. WRmeans the pro-jection matrix is random initialized. WQrepresents the projection matrix is initialized by clusteringthe question sentences. WArepresents the performance of the model whose projection matrix isinitialized by answer set. There are three different settings for Yahoo dataset: classification onsubcategory, classification on top level category and classification on unseen sub-labels.We only compare our model’s performance with CNNs for two following reasons: we consider our “groupsparse” as a modification to the general CNNs for grouped feature selection. This idea is “orthogonal” to anyother CNNs-based models and can be easily applied to them; another reason is, as discussed in Sec. 1, we didnot find any other model which can be used for comparison in soloving question classification task with answersets.The datasets we use in the experiments require the label information for both questions and answers. Besidesthat, similar with websites’ FAQ section, all the questions which belong to the same category share the sameanswer sets. 
Among the above four datasets, only the Insurance and DMV datasets are well-suited to our model; in the Yahoo dataset, questions which fall into the same category have different answers.

The different ways of initializing the projection matrix in Eq. 9 can be summarized as follows:

Random Initialization: when no answer corpus is available, we first randomly initialize N vectors (usually N ≫ s) to represent the answer set. We then cluster these N vectors into G categories with g centroids per category. These centroids from the different categories become the initial bases of the projection matrix W, which is then optimized during training.

Initialization from Questions: instead of randomly initialized vectors, we can also use the question sentences to initialize the projection matrix when the answer set is unavailable. We first pre-train the sentences with CNNs to obtain sentence representations. We then select the top G largest categories in terms of the number of question sentences, and obtain g centroids from each category by k-means. We concatenate these G × g vectors, group after group, to form the projection matrix.

Initialization from Answers: this is the ideal case. We follow the same procedure as above; the only difference is that we pre-train the CNNs on the answer sentences (treated like question sentences) to obtain answer sentence representations.

Note that the projection matrix is updated during training for better classification performance.

For the single-label classification tasks (the TREC and Yahoo datasets), we set the last layer to be a softmax layer, which makes one exclusive choice across all labels. For multi-label classification (the Insurance and DMV datasets), we replace the softmax layer in the CNNs with a sigmoid layer, since a sigmoid layer predicts each category independently, while the softmax function has an exclusive property which induces cross-influence between categories.

All the experimental results are summarized in Table 2. The TREC dataset is factoid question-type classification; we include these experiments to show our performance on a frequently used dataset. The proposed method improves only marginally over the baseline here, because the sentences in the TREC dataset are very short. For the Insurance and DMV datasets, the improvement is significant.

In the experiments on the Yahoo dataset, the improvement is not as significant as on Insurance and DMV. One reason is that questions in the Yahoo dataset are usually very short, sometimes only 2 to 3 words; when the sentences become shorter, the group information becomes harder to encode. Another reason is that the questions in the Yahoo dataset are always single-labeled, and thus cannot fully utilize the benefits of the group sparse properties. Yahoo-top shows the results of top-level category classification: we map the subcategories back to the top categories and obtain the results in Table 2.

Besides the conventional classification tasks, we also test our proposed model in unseen-label experiments, in which a few sub-category labels are not included in the training process. We still hope that our model can correctly classify these unseen sub-category labels into the correct parent category based on the model's sub-category estimation.
In the testing set of Yahoo dataset, we randomly add 100 questions8Under review as a conference paper at ICLR 2017k-NN based Modelvanillak-NN 31.2k-NN + SGL 32.2SVM based Modelvanilla SVM 33.7SVM + SGL 44.5CNNs based Model vanilla CNNs 51.2Table 3: Experiments for two baseline model, k-NN and SVM, for the Insurance dataset.whose labels are unseen in training set. The classification results of Yahoo-unseen in Table 2 are obtainedby mapping the subcategory classification results to top level category and check whether the true label’s topcategory match with predicted label’s parent category. The improvements are remarkable due to the groupinformation encoding.6 D ISCUSSIONThe idea of reforming signal to a sparse representation is first introduced in the domain of compressed sensing(Cand `e & Wakin, 2008) which achieves great success in signal compression, visualization and classificationtask. Especially when dictionary is well trained, the performance usually improves significantly, as shown in(Wang et al., 2010; Yang et al., 2009) for image classification tasks. In Table 3, we test the influence of SparseGroup Lasso (SGL) (Simon et al., 2013) with two baseline methods, k-Nearest Neighbor ( k-NN) and SVM onthe Insurance dataset. We use TF-IDF as feature representation for each question and answer sentence. Wefirst select all the answer sentences from top 20 largest category and then find 10 centroids for each of thesecategories by k-Means. Then we have a dictionary with 200 centroids with 20 groups. We notice there is agreat improvement of performance after we preprocess the original sentence representations with SGL beforewe use SVM. We further test the performance of CNNs on the same dataset, and CNNs outperforms SVM andk-NN even with SGL because of the well trained sentence representation through CNNs. However, for vanillaCNNs, it is not straightforward to embed SGL into the network and still get good representation for sentencessince SGL will break the training error in backpropagation.However, GSA is fully neural network based framework. Our proposed GSA has similar functionalities toSGL (Yuan & Lin, 2006; Simon et al., 2013), as it is shown in Fig. 3 and Fig. 4, but in different approach.Compared with sparse coding approaches which have intense optimizations on both dictionary and coding,GSA’s optimization is based on simple backpropagation. GSA also can be easily placed into any neural networkfor joint training. Another advantage of GSA over sparse coding is that the projection function in GSA canbe linear or non-linear, while sparse coding always learns linear codings.7 C ONCLUSIONS AND FUTURE WORKIn this paper, we first present a novel GSA framework which functions as dictionary learning and sparse codingmodels with inter- and intra- group sparse constraints. We also prove GSA’s learning ability by visualizing theprojection matrix and activations. We further propose a group sparse convolutional neural networks by embed-ding GSA into CNNs. We show that CNNs can benefit from GSA by learning more meaningful representationfrom dictionary.REFERENCESMichal Aharon, Michael Elad, and Alfred Bruckstein. K-svd: Design of dictionaries for sparse representation.InIN: PROCEEDINGS OF SPARS05 , pp. 9–12, 2005.Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. Greedy layer-wise training of deepnetworks. In B. Sch ̈olkopf, J.C. Platt, and T. Hoffman (eds.), Advances in Neural Information Pro-cessing Systems 19 , pp. 153–160. MIT Press, 2007. 
URL http://papers.nips.cc/paper/3048-greedy-layer-wise-training-of-deep-networks.pdf .Emmanuel J. Cand `e and Michael B. Wakin. An Introduction To Compressive Sampling. In Signal ProcessingMagazine, IEEE , volume 25, 2008. URL http://dx.doi.org/10.1109/msp.2007.914731 .R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural language processing(almost) from scratch. volume 12, pp. 2493–2537, 2011.Simon Fleming, Dan Chalmers, and Ian Wakeman. A deniable and efficient question and answer serviceover ad hoc social networks. volume 15, pp. 296–331. Springer Netherlands, 2012. doi: 10.1007/s10791-012-9185-0. URL http://dx.doi.org/10.1007/s10791-012-9185-0 .9Under review as a conference paper at ICLR 2017Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Improvingneural networks by preventing co-adaptation of feature detectors. Journal of Machine Learning Research ,15, 2014.Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A convolutional neural network for modellingsentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics(Volume 1: Long Papers) , pp. 655–665, Baltimore, Maryland, June 2014. Association for ComputationalLinguistics. URL http://www.aclweb.org/anthology/P14-1062 .Yoon Kim. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Confer-ence on Empirical Methods in Natural Language Processing (EMNLP) , pp. 1746–1751, Doha, Qatar, Octo-ber 2014. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/D14-1181 .Kenneth Kreutz-Delgado, Joseph F. Murray, Bhaskar D. Rao, Kjersti Engan, Te-Won Lee, and Terrence J.Sejnowski. Dictionary learning algorithms for sparse representation. 2003.Y . LeCun, L. Jackel, L. Bottou, A. Brunot, C. Cortes, J. Denker, H. Drucker, I. Guyon, U. Mller, E. Sckinger,P. Simard, and V . Vapnik. Comparison of learning algorithms for handwritten digit recognition. In INTER-NATIONAL CONFERENCE ON ARTIFICIAL NEURAL NETWORKS , pp. 53–60, 1995.Honglak Lee, Alexis Battle, Rajat Raina, and Andrew Y . Ng. Efficient sparse coding algorithms. In In NIPS ,pp. 801–808. NIPS, 2007.David D Lewis, Yiming Yang, Tony G Rose, and Fan Li. Rcv1: A new benchmark collection for text catego-rization research. Journal of machine learning research , 5(Apr):361–397, 2004.Xin Li and Dan Roth. Learning question classifiers. In Proceedings of the 19th International Conference onComputational Linguistics - Volume 1 , COLING ’02, pp. 1–7, Stroudsburg, PA, USA, 2002. Association forComputational Linguistics. doi: 10.3115/1072228.1072378. URL http://dx.doi.org/10.3115/1072228.1072378 .Mingbo Ma, Liang Huang, Bing Xiang, and Bowen Zhou. Dependency-based convolutional neural networksfor sentence embedding. In Proceedings of ACL 2015 , 2015.Alireza Makhzani and Brendan Frey. K-sparse autoencoders. In International Conference on Learning Repre-sentations . 2014.Andrew Ng. Sparse autoencoder. 2011. URL https://web.stanford.edu/class/cs294a/sparseAutoencoder.pdf .Nikhil Rao, Robert Nowak, Christopher Cox, and Timothy Rogers. Classification with the sparse group lasso.IEEE Transactions on Signal Processing , 64(2):448–463, 2016.Stefan Roth and Michael J. Black. Fields of experts: A framework for learning image priors. In In CVPR , pp.860–867, 2005.R. Rubinstein, A. M. Bruckstein, and M. Elad. Dictionaries for sparse representation modeling. 2010.Chirag Shah and Jefferey Pomerantz. 
Evaluating and predicting answer quality in community qa. In Pro-ceedings of the 33rd International ACM SIGIR Conference on Research and Development in InformationRetrieval , SIGIR ’10, pp. 411–418, New York, NY , USA, 2010. ACM. ISBN 978-1-4503-0153-4. doi:10.1145/1835449.1835518. URL http://doi.acm.org/10.1145/1835449.1835518 .Noah Simon, Jerome Friedman, Trevor Hastie, and Rob Tibshirani. A sparse-group lasso. 2013.Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composingrobust features with denoising autoencoders. pp. 1096–1103, 2008.Jinjun Wang, Jianchao Yang, Kai Yu, Fengjun Lv, Thomas Huang, and Yihong Gong. Locality-constrained lin-ear coding for image classification. In IN: IEEE CONFERENCE ON COMPUTER VISION AND PATTERNCLASSIFICATOIN , 2010.Jianchao Yang, Kai Yu, Yihong Gong, and Thomas Huang. Linear spatial pyramid matching using sparsecoding for image classification. In in IEEE Conference on Computer Vision and Pattern Recognition(CVPR ,2009.Ming Yuan and Yi Lin. Model selection and estimation in regression with grouped variables. volume 68, pp.49–67, 2006.10Under review as a conference paper at ICLR 2017Mattgew Zeiler. Adadelta: An adaptive learning rate method. Unpublished manuscript :http://arxiv.org/abs/1212.5701 , 2012.Yize Zhao, Matthias Chung, Brent A Johnson, Carlos S Moreno, and Qi Long. Hierarchical feature selectionincorporating known and novel biological information: Identifying genomic features related to prostatecancer recurrence. Journal of the American Statistical Association , (just-accepted), 2016.11
Under review as a conference paper at ICLR 2017DEEPERROR -CORRECTING OUTPUT CODESGuoqiang ZhongDepartment of Computer Science and TechnologyOcean University of Chinagqzhong@ouc.edu.cnYuchen ZhengDepartment of Computer Science and TechnologyOcean University of Chinaouczyc@outlook.comPeng ZhangDepartment of Computer Science and TechnologyOcean University of Chinasdrzbruce@163.comMengqi LiDepartment of International Trade and EconomyOcean University of Chinaenri9615@outlook.comJunyu DongDepartment of Computer Science and TechnologyOcean University of Chinadongjunyu@ouc.edu.cnABSTRACTExisting deep networks are generally initialized with unsupervised methods, suchas random assignments and greedy layerwise pre-training. This may result in thewhole training process (initialization/pre-training + fine-tuning) to be very time-consuming. In this paper, we combine the ideas of ensemble learning and deeplearning, and present a novel deep learning framework called deep error-correctingoutput codes (DeepECOC). DeepECOC are composed of multiple layers of theECOC module, which combines multiple binary classifiers for feature learning.Here, the weights learned for the binary classifiers can be considered as weightsbetween two successive layers, while the outputs of the combined binary classi-fiers as the outputs of a hidden layer. On the one hand, the ECOC modules can belearned using given supervisory information, and on the other hand, based on theternary coding design, the weights can be learned only using part of the trainingdata. Hence, the supervised pre-training of DeepECOC is in general very effectiveand efficient. We have conducted extensive experiments to compare DeepECOCwith traditional ECOC, feature learning and deep learning algorithms on severalbenchmark data sets. The results demonstrate that DeepECOC perform not onlybetter than traditional ECOC and feature learning algorithms, but also state-of-the-art deep learning models in most cases.1 I NTRODUCTIONError correcting output codes (ECOC) are an ensemble learning framework to address multi-classclassification problems (Dietterich & Bakiri, 1995). The work by (Zhong & Liu, 2013) shows thatthe ECOC methods can also be used for feature learning, in either a linear or a nonlinear manner.However, although sophisticated coding and decoding strategies are applied (Escalera et al., 2010;Zhong et al., 2012; Zhong & Cheriet, 2013), the learnability of ECOC is limited by its single-layer structure. Therefore, to exploit the advantages of the ECOC framework, such as supervisedensemble learning and effective coding design, it’s necessary to combine its ideas with that of deeplearning.In recent years, many deep learning models have been proposed to handle various challenging prob-lems. Meantime, desirable performances in many domains have been achieved, such as image clas-sification and detection, document analysis and recognition, natural language processing, and videoanalysis (Hinton & Salakhutdinov, 2006; Krizhevsky et al., 2012; Szegedy et al., 2014; Simonyan &Zisserman, 2014; Zhang et al., 2015; Wang & Ji, 2015; Hong et al., 2015). Among others, (Hinton &1Under review as a conference paper at ICLR 2017Salakhutdinov, 2006) presents the ground-breaking deep autoencoder that learns the weight matricesby pre-training the stacked restricted Boltzmann machines (RBMs) and fine-tuning the weights usinggradient descent. 
It delivers much better representations of data than shallow feature learning algo-rithms, such as principal components analysis (PCA) (Jolliffe, 1986) and latent semantic analysis(LSA) (Deerwester et al., 1990). In order to boost the traditional autoencoder and prevent the “over-fitting” problem, (Vincent et al., 2008) introduces the denosing autoencoder that corrupted the datawith a random noise. Recently, most of the research focuses on deep convolutional neural networks(CNNs) and recurrent neural networks (RNNs), which greatly improves the state-of-the-art in the ar-eas of object recognition, unsegmented handwriting recognition and speech recognition (Krizhevskyet al., 2012; Graves et al., 2009; Sak et al., 2014). However, existing deep networks are generally ini-tialized with unsupervised methods, such as random assignments and greedy layerwise pre-training.In the case of random initialization, to obtain good results, many training data and a long trainingtime are generally used; while in the case of greedy layerwise pre-training, as the whole trainingdata set needs to be used, the pre-training process is very time-consuming and difficult to find astable solution.To overcome the limitations of both traditional ECOC methods and deep learning models, and mean-while, take advantages of both of them, in this paper, we propose a novel deep learning model calleddeep error-correcting output codes (DeepECOC). DeepECOC are composed of multiple stacked E-COC modules, each of which combines multiple binary classifiers for feature learning. Here, theweights learned for the binary classifiers can be considered as weights between two successive lay-ers, while the probabilistic outputs of the combined binary classifiers as the outputs of a hiddenlayer or new representations of data. On the one hand, the ECOC modules can be learned layer bylayer using the given supervisory information, and on the other hand, based on the ternary codingdesign, some classes of data are automatically neglected when training the binary classifiers, suchthat the weights are learned only using part of the training data. Hence, the supervised pre-trainingof DeepECOC is in general very effective and efficient. We have compared DeepECOC with tra-ditional ECOC, feature learning and deep learning algorithms to demonstrate the effectiveness andsuperiority of DeepECOC. The results are reported in Section 4.The rest of this paper is organized as follows: In Section 2, we give a brief overview to related work.In Section 3, we present the proposed model, DeepECOC, in detail. The experimental results arereported in Section 4, while Section 5 concludes this paper with remarks and future work.2 R ELATED WORKTraditional ECOC framework has two steps: coding and decoding. In the coding step, an E-COC matrix is defined or learned from data, and the binary classifiers are trained based on theECOC coding; in the decoding step, the class label is given to a test sample based on a similaritymeasure between codewords and outputs of the binary classifiers. The widely used coding strate-gies include one-versus-all (OneVsAll) (Nilsson, 1965), one-versus-one (OneVsOne) (Hastie et al.,1998), discriminant ECOC (DECOC) (Pujol et al., 2006), ECOC optimizing node embedding (E-COCONE) (Escalera et al., 2006), dense and sparse coding (Escalera et al., 2009; Allwein et al.,2001), and so on. Among them, the OneVsAll, OneVsOne, dense and sparse coding strategies areproblem-independent, whilst the DECOC and ECOCONE are problem-dependent. 
Generally, thelength of the codeword by the OneVsAll, OneVsOne, DECOC and ECOCONE coding designs isrelated to the number of classes, but that by the dense and sparse coding design is relatively flex-ible. In this work, we design the structure of DeepECOC based on the properties of each codingstrategy. The commonly used binary ECOC decoding strategies are the Hamming decoding (Nils-son, 1965) and Euclidean decoding (Hastie et al., 1998). For ternary ECOC decoding strategies, theattenuated Euclidean decoding (Pujol et al., 2008), loss-based decoding (Allwein et al., 2001), andprobabilistic-based decoding (Passerini et al., 2004) are widely used. Currently, the state-of-the-artternary ECOC decoding strategies are the discrete pessimistic beta density distribution decoding andloss-weighted decoding (Escalera et al., 2010). In this work, for the simplicity of back propagation,we directly add a Softmax layer at the top of DeepECOC for the decoding. Note that, althoughmany sophisticated coding and decoding strategies have been proposed in recent years (Escaleraet al., 2010; Zhong et al., 2012; Zhong & Cheriet, 2013), the learnability of ECOC is limited byits single-layer structure. To further exploit the advantages of ECOC, such as supervised ensemblelearning and effective coding design, it’s necessary to combine its ideas with that of deep learning.2Under review as a conference paper at ICLR 20171h2h3h4h1y2y3y4y(a) one-versus-all1h2h3h4h5h6h1y2y3y4y (b) one-versus-oneFigure 1: Two coding matrices encoded with the one-versus-all (binary case) and one-versus-one(ternary case) coding strategies.In the literature of deep learning, there is some work that attempts to construct a deep architecturewith multiple feature learning methods (Hinton & Salakhutdinov, 2006; Trigeorgis et al., 2014; Yuanet al., 2015; Zheng et al., 2015; 2014). For instance, deep autoencoder is built up by RBMs (Hinton& Salakhutdinov, 2006), and deep semi-NMF combines multiple steps of matrix factorization (Tri-georgis et al., 2014). Similarly, deep CNNs and RNNs can also be considered as deep models thatlearn the new representations of data layer by layer (Krizhevsky et al., 2012; Graves et al., 2009; Saket al., 2014). The success of these existing models demonstrate that deep networks are beneficial tothe representation learning tasks, especially for the large scale applications. However, as discussedin the previous section, existing deep learning models are generally initialized with unsupervisedmethods, such as random assignments and greedy layerwise pre-training, which result in a longtraining time of the deep models. In this work, we propose the DeepECOC model, which is basedon the stacked ECOC modules. When pre-training DeepECOC, the ECOC modules can be learnedwith the available supervisory information. Intuitively, as this manner of supervised pre-traininghas deterministic objective, the learned value of the parameters will be very close to the best localminimum on the solution manifold. Experimental results shown in Section 4 also demonstrate thisfact.3 D EEPERROR -CORRECTING OUTPUT CODES (DEEPECOC)In this section, we first introduce the traditional ECOC framework, which is the important buildingblock of DeepECOC. Then we present the learning procedures of DeepECOC in detail.3.1 T HEECOC F RAMEWORKError correcting output codes (ECOC), which combine multiple binary classifiers to solve multi-class classification problems, are an ensemble learning framework. 
3 DEEP ERROR-CORRECTING OUTPUT CODES (DEEPECOC)

In this section, we first introduce the traditional ECOC framework, which is the main building block of DeepECOC, and then present the learning procedure of DeepECOC in detail.

3.1 THE ECOC FRAMEWORK

Error-correcting output codes (ECOC), which combine multiple binary classifiers to solve multi-class classification problems, are an ensemble learning framework. ECOC methods in general consist of two steps: coding and decoding. In the coding step, the ECOC coding matrix M ∈ {−1, 1}^{C×L} (binary case) or M ∈ {−1, 0, 1}^{C×L} (ternary case) is first defined or learned from the training data. Each row of M is the codeword of a class, and each column corresponds to a dichotomizer (binary classifier); L is the length of the codewords (the number of binary classifiers) and C is the number of classes. The symbol '1' indicates the positive class, '−1' the negative class, and '0' that a particular class is not considered by a given classifier. The binary classifiers (dichotomizers) are then trained according to the partition of the classes in the columns of M. Fig. 1 shows two coding matrices encoded with the one-versus-all (binary case) and one-versus-one (ternary case) coding strategies. Each matrix is coded using several dichotomizers for a 4-class problem with respective codewords {y1, ..., y4}. The white grids are coded by 1 (considered as the positive class by the respective dichotomizer h_j), the dark grids by −1 (the negative class), and the gray grids by 0 (classes not considered by the respective dichotomizer h_j). In the decoding step, the labels of test data are predicted from the outputs of the binary classifiers using an adopted decoding strategy.

In order to use the probabilistic outputs of the base classifiers as new representations of the data, we adopt linear support vector machines (linear SVMs) as the binary classifiers (dichotomizers), which solve the quadratic programming problem

min_{w,b,ξ} J(w) = (1/2)‖w‖² + C Σ_{i=1}^N ξ_i
s.t. y_i f(x_i) ≥ 1 − ξ_i, ξ_i ≥ 0, i = 1, ..., N,   (1)

where w and b are the coefficients and bias of the binary classifier, y_i ∈ {+1, −1}, the ξ_i are slack variables, C is a penalty constant, and N is the number of training data. The discriminant function can be expressed as

f(x) = w^T x + b.   (2)

This problem can be solved with

w = Σ_{i=1}^N α_i y_i x_i,   (3)

b = (1/N_SV) Σ_{x_i ∈ SV} (y_i − w^T x_i),   (4)

where the α_i are the non-negative Lagrange multipliers, N_SV is the number of support vectors, and SV is the set of support vectors. The dual form of Problem (1) can be written as

max_α Σ_{i=1}^N α_i − (1/2) Σ_{i,j=1}^N α_i α_j y_i y_j x_i^T x_j = Σ_{i=1}^N α_i − (1/2) Σ_{i,j=1}^N α_i α_j y_i y_j k(x_i, x_j)
s.t. 0 ≤ α_i ≤ C, i = 1, ..., N, Σ_{i=1}^N α_i y_i = 0,   (5)

where k(x_i, x_j) = x_i^T x_j is the linear kernel function and α = {α_1, ..., α_N} is the vector of Lagrange multipliers. Replacing the linear kernel function with a nonlinear kernel, such as the Gaussian kernel

k(x_i, x_j) = exp(−(1/σ)‖x_i − x_j‖²),   (6)

we can learn a nonlinear SVM, where σ is the parameter of the Gaussian kernel function. The discriminant function of an SVM with a nonlinear kernel can be written as

f(x) = Σ_{i=1}^N α_i y_i k(x_i, x) + b.   (7)

Applying a decoding strategy to the outputs of the binary classifiers, the ECOC framework can be used for multi-class learning, while applying the sigmoid function to the values of the discriminant function, ECOC can be used for feature learning (Zhong & Liu, 2013). This is also the foundation of the DeepECOC model.

3.2 DEEPECOC

To combine the advantages of ECOC and deep learning algorithms, we build the DeepECOC architecture as follows:

x --q_D--> x̃ --(W_1, b_1)--> h_1 --(W_2, b_2)--> ··· --(W_{n−1}, b_{n−1})--> h_{n−1} --softmax--> y,   (8)

where the first step partially destroys the clean input x ∈ [0, 1]^d by means of a stochastic mapping x̃ ~ q_D(x̃ | x). In the corrupting process, we set a parameter ν called the denoising rate: for each input x, a fixed number νd of components are chosen at random and their values are forced to 0, while the others are left untouched. This operation makes the model more robust and prevents overfitting in most cases (Vincent et al., 2008).
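In code, this corruption step amounts to the following small sketch (a minimal illustration with our own naming; nu is the denoising rate ν just described):

```python
import numpy as np

def corrupt(x, nu, rng):
    # For each row of x, force a random subset of round(nu * d) components
    # to zero and leave the rest untouched: the stochastic mapping x -> x~.
    x_tilde = x.copy()
    n, d = x.shape
    k = int(round(nu * d))
    for i in range(n):
        x_tilde[i, rng.choice(d, size=k, replace=False)] = 0.0
    return x_tilde

rng = np.random.default_rng(0)
X = rng.random((5, 10))
X_tilde = corrupt(X, nu=0.1, rng=rng)   # one component per row zeroed here
```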
Subsequently, the "corrupted" data are taken as inputs to the DeepECOC model. W_1 and b_1 are the weight matrix and bias vector learned by the first ECOC module. The output of the first hidden layer is

h_1 = s(W_1^T x̃ + b_1),   (9)

where s(·) is the sigmoid activation function, s(x) = 1/(1 + e^{−x}). From the second layer to the (n−1)-th layer, we use stacked ECOC modules to learn the weight matrices and biases, which can be considered as the weights between two successive layers of a deep network. Analogously, we use the output of the (k−1)-th layer as the input of the k-th layer:

h_k = s(W_k^T h_{k−1} + b_k).   (10)

Here, h_k can be viewed both as an activation output and as a new representation of the input datum x. For example, if we adopt the OneVsAll coding strategy for one layer of the ECOC module, we first define the coding matrix M ∈ {−1, 1}^{C×C}, where C is the number of classes. We can then train C SVM classifiers to obtain the weight matrix W = {w_1, ..., w_i, ..., w_C} and the bias b = {b_1, ..., b_i, ..., b_C}, and calculate the output of the first layer using Eq. (9). Repeating this process layer by layer builds the DeepECOC model. Clearly, adopting different coding strategies yields different DeepECOC architectures.

For the last layer of DeepECOC, we employ softmax regression for the multi-class learning. Its cost function is defined as

J(w) = −(1/N) Σ_{i=1}^N Σ_{j=1}^K I(y_i = j) log [ exp(w_j^T h_i^{n−1}) / Σ_{l=1}^K exp(w_l^T h_i^{n−1}) ],   (11)

where I(x) is the indicator function, I(x) = 1 if x is true and I(x) = 0 otherwise, and y_i is the label corresponding to x_i. It is easy to compute the probability that x_i is classified to class j:

p(y_i = j | x_i; w) = exp(w_j^T h_i^{n−1}) / Σ_{l=1}^K exp(w_l^T h_i^{n−1}).   (12)

Taking derivatives, one can show that the gradient of J(w) with respect to w_j is

∇_{w_j} J(w) = −(1/N) Σ_{i=1}^N [ h_i^{n−1} ( I(y_i = j) − p(y_i = j | x_i; w) ) ].   (13)

After the pre-training step, we use back-propagation (Rumelhart et al., 1988) to fine-tune the whole architecture. Moreover, we also employ the "dropout" technique for regularization (Hinton et al., 2012). When a large feedforward neural network is trained on a small training set, dropout generally improves performance on the test set. The basic idea of dropout is that each hidden node is randomly omitted from the network with some fixed probability; viewed differently, dropout is a very efficient way to perform model averaging with neural networks. Through these processes, we finally obtain the DeepECOC model, which is robust and easy to apply to multi-class classification tasks.

Note that, compared to existing deep learning algorithms, DeepECOC has some important advantages. Firstly, unlike previous deep learning algorithms, DeepECOC is built from ECOC modules and pre-trained in a supervised fashion. Secondly, if we adopt ternary coding strategies, then by the natural merit of ECOC the weights can be learned using only part of the training data. Thirdly, in contrast to the learning of the weight matrices in previous deep learning models, the binary classifiers in each ECOC module can be learned in parallel, which may greatly speed up the learning of DeepECOC.
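Putting Eqs. (9)-(10) together, the supervised layer-wise pre-training can be sketched compactly as below. This is our own illustration: scikit-learn's LinearSVC stands in for the base dichotomizers, and the softmax/fine-tuning stage is omitted.

```python
import numpy as np
from sklearn.svm import LinearSVC

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_ecoc_module(H, y, M):
    # One linear SVM per column of the coding matrix M; its (w, b) become one
    # column of the layer's weight matrix and one entry of its bias vector.
    W, b = [], []
    for j in range(M.shape[1]):
        code = M[y, j]                 # -1 / 0 / +1 role of each sample's class
        keep = code != 0               # ternary designs drop the 0-coded classes
        clf = LinearSVC(C=1.0).fit(H[keep], code[keep])
        W.append(clf.coef_.ravel())
        b.append(clf.intercept_[0])
    return np.stack(W, axis=1), np.array(b)

def pretrain_deep_ecoc(X, y, coding_matrices):
    # Stack ECOC modules: h_k = s(W_k^T h_{k-1} + b_k), cf. Eqs. (9)-(10).
    layers, H = [], X
    for M in coding_matrices:
        W, b = train_ecoc_module(H, y, M)
        layers.append((W, b))
        H = sigmoid(H @ W + b)
    return layers, H   # H feeds the softmax layer before fine-tuning
```

With a ternary design, the `keep` mask is what realizes the property that each dichotomizer sees only part of the training data.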
4 EXPERIMENTS

To evaluate the effectiveness of the proposed method, DeepECOC, we conducted four sets of experiments. In the first part, we compared DeepECOC with deep learning models and single-layer ECOC approaches on 16 data sets from the UCI machine learning repository¹. In the second part, we compared DeepECOC with traditional feature learning models, deep learning models and single-layer ECOC approaches on the USPS handwritten digits², and tested DeepECOC with different numbers of hidden layers. In the third part, we used the MNIST handwritten digits³ to further demonstrate the effectiveness of DeepECOC for handwritten digit recognition. Finally, the CIFAR-10 data set⁴ was used to demonstrate the effectiveness of DeepECOC on image classification tasks. For all data sets, the features were normalized to [0, 1]. In the following, we report the experimental results in detail.

¹ http://archive.ics.uci.edu/ml/
² http://www-i6.informatik.rwth-aachen.de/~keysers/usps.html
³ http://yann.lecun.com/exdb/mnist/
⁴ http://www.cs.toronto.edu/~kriz/cifar.html

4.1 CLASSIFICATION ON 16 UCI MACHINE LEARNING REPOSITORY DATA SETS

The details of the UCI data sets are shown in Table 1. In these experiments, we compared DeepECOC with the autoencoder (AE) (Hinton & Salakhutdinov, 2006), the denoising autoencoder (DAE) (Vincent et al., 2008) and single-layer ECOC approaches (Single) (Escalera et al., 2010). We built DeepECOC with the ECOC optimizing node embedding (ECOCONE) coding method (Escalera et al., 2006). Since we initialized ECOCONE with 3 different coding methods, i.e., one-versus-one, one-versus-all and DECOC, DeepECOC has 3 variants.

Table 1: Details of the UCI data sets (T: training samples; A: attributes; C: classes).

Problem      | #T    | #A | #C
Dermatology  | 366   | 34 | 6
Iris         | 150   | 4  | 3
Ecoli        | 336   | 8  | 8
Wine         | 178   | 13 | 3
Glass        | 214   | 9  | 7
Thyroid      | 215   | 5  | 3
Vowel        | 990   | 10 | 11
Balance      | 625   | 4  | 3
Yeast        | 1484  | 8  | 10
Satimage     | 6435  | 36 | 7
Letter       | 20000 | 16 | 26
Pendigits    | 10992 | 16 | 10
Segmentation | 2310  | 19 | 7
Optdigits    | 5620  | 64 | 10
Shuttle      | 14500 | 9  | 7
Vehicle      | 846   | 18 | 4

Table 2: Classification accuracy and standard deviation obtained by DeepECOC and the compared approaches on the 16 UCI data sets. DeepECOC(1)-DeepECOC(3) are the three variants of DeepECOC with the ECOCONE coding design initialized by one-versus-one, one-versus-all and DECOC, respectively.

Problem      | Single | AE            | DAE           | DeepECOC(1)   | DeepECOC(2)   | DeepECOC(3)
Dermatology  | 0.9513 | 0.9429±0.0671 | 0.9674±0.0312 | 0.9702±0.0354 | 0.9779±0.0208 | 0.9747±0.0318
Iris         | 0.9600 | 0.9600±0.0562 | 0.9333±0.0889 | 0.9600±0.0535 | 0.9267±0.1109 | 0.9533±0.0383
Ecoli        | 0.8147 | 0.7725±0.0608 | 0.8000±0.0362 | 0.8529±0.0403 | 0.8824±0.0626 | 0.9118±0.0636
Wine         | 0.9605 | 0.9765±0.0264 | 0.9563±0.0422 | 0.9875±0.0264 | 0.9813±0.0302 | 0.9688±0.0329
Glass        | 0.6762 | 0.6669±0.1032 | 0.6669±0.0715 | 0.7895±0.0788 | 0.7368±0.1140 | 0.7562±0.0879
Thyroid      | 0.9210 | 0.9513±0.0614 | 0.9599±0.0567 | 0.9656±0.0513 | 0.9703±0.0540 | 0.9608±0.0518
Vowel        | 0.7177 | 0.6985±0.0745 | 0.7101±0.0756 | 0.7475±0.0901 | 0.6010±0.0627 | 0.6863±0.0788
Balance      | 0.8222 | 0.8036±0.0320 | 0.8268±0.0548 | 0.9137±0.0412 | 0.8333±0.0318 | 0.9167±0.0312
Yeast        | 0.5217 | 0.5641±0.0346 | 0.5891±0.0272 | 0.5959±0.0599 | 0.5494±0.0434 | 0.5697±0.0462
Satimage     | 0.8537 | 0.8675±0.0528 | 0.8897±0.0304 | 0.8961±0.0480 | 0.8360±0.0390 | 0.9077±0.0555
Letter       | 0.9192 | 0.9234±0.0547 | 0.9381±0.0641 | 0.9532±0.0341 | 0.9247±0.0352 | 0.9501±0.0563
Pendigits    | 0.9801 | 0.9831±0.0123 | 0.9886±0.0034 | 0.9908±0.0031 | 0.9866±0.0107 | 0.9899±0.0075
Segmentation | 0.9701 | 0.9584±0.0317 | 0.9596±0.0211 | 0.9711±0.0286 | 0.9584±0.0163 | 0.9711±0.0233
Optdigits    | 0.9982 | 0.9785±0.0101 | 0.9856±0.0088 | 0.9867±0.0096 | 0.9848±0.0123 | 0.9911±0.0091
Shuttle      | 0.9988 | 0.9953±0.0012 | 0.9976±0.0014 | 0.9988±0.0021 | 0.9983±0.0018 | 0.9993±0.0010
Vehicle      | 0.7315 | 0.6987±0.0521 | 0.7348±0.0454 | 0.7561±0.0480 | 0.6908±0.0432 | 0.7195±0.0148
Mean rank    | 4.0938 | 4.8750        | 3.9375        | 1.7500        | 3.9375        | 2.4063
In addition, the state-of-the-art linear loss-weighted (LLW) decoding strategy was used for ECOCONE. Finally, a structure with 3 hidden layers was adopted for DeepECOC, with a denoising rate of 0.1 and a dropout rate of 0.1:

x --q_D--> x̃ --(W_1, b_1)--> h_1 --(W_2, b_2)--> h_2 --(W_3, b_3)--> h_3 --softmax--> y.   (14)

For the fine-tuning process, we used the stochastic gradient descent algorithm; the learning rates and numbers of epochs for the different data sets are given in Table 3. The architectures of the autoencoder and denoising autoencoder were the same as that of DeepECOC with ECOCONE initialized by one-versus-one. For the single-layer ECOC approaches, we took the best results reported in (Escalera et al., 2010) as the compared results. For all DeepECOC models, we used support vector machines (SVMs) with an RBF kernel as base classifiers, with the SVM parameters set to their defaults (Chang & Lin, 2011).

Table 2 shows the average classification accuracy and standard deviation on the 16 UCI data sets. Except on the Optdigits data set, DeepECOC achieved the best results compared with the autoencoder, the denoising autoencoder and the single-layer ECOC approaches; on Optdigits, DeepECOC achieved a result comparable to the single-layer ECOC approaches. Among the variants, DeepECOC with the ECOCONE coding strategy initialized by one-versus-one obtained the best results on 9 data sets, while the variant initialized by DECOC obtained the best results on 5 data sets. From the mean rank values, we can see that DeepECOC with ECOCONE initialized by one-versus-one or by DECOC clearly surpasses the other compared methods.

Table 3: Learning rate and number of epochs used on the UCI data sets.

Problem      | Learning rate | Epochs
Dermatology  | 0.1  | 2000
Iris         | 0.1  | 400
Ecoli        | 0.1  | 2000
Wine         | 0.1  | 2000
Glass        | 0.01 | 4000
Thyroid      | 0.1  | 800
Vowel        | 0.1  | 4000
Balance      | 0.1  | 4000
Yeast        | 0.01 | 4000
Satimage     | 0.01 | 4000
Letter       | 0.01 | 8000
Pendigits    | 0.01 | 2000
Segmentation | 0.01 | 8000
Optdigits    | 0.01 | 2000
Shuttle      | 0.1  | 2000
Vehicle      | 0.1  | 4000

4.2 CLASSIFICATION ON THE USPS DATA SET

The USPS handwritten digits data set includes 7291 training samples and 2007 test samples from 10 classes; the image size is 16×16 = 256. Our experiments on this data set were divided into two parts. First, we compared DeepECOC with two traditional feature learning models, principal component analysis (PCA) (Jolliffe, 2002) and marginal Fisher analysis (MFA) (Yan et al., 2007), with the autoencoder (AE), denoising autoencoder (DAE), LeNet (LeCun et al., 1998) and PCANet (Chan et al., 2015), and with single-layer ECOC approaches. PCA is an unsupervised method, while MFA is supervised; for MFA, the number of nearest neighbors for constructing the intrinsic graph was set to 5, and that for constructing the penalty graph to 15. For DeepECOC, we again used the 3 coding designs. We used batch gradient descent for the fine-tuning process, with the batch size set to 100, the learning rate to 1, the number of epochs to 40000, and the denoising and dropout rates to 0.1. We again used SVMs with an RBF kernel and default parameters as base classifiers. For the single-layer ECOC approaches, we adopted ECOCONE (initialized by one-versus-one) as the coding design and linear loss-weighted (LLW) decoding. For the LeNet model, we used two convolutional layers, two pooling layers and two fully connected layers.
The kernel size of the convolutional and pooling layers was set to 2×2 with stride 1, the number of nodes of the first layer to 200, the number of epochs to 8000, the initial learning rate to 0.001 with the "inv" learning rate policy, and the momentum to 0.9. For the PCANet model, we used two PCA-filter stages, one binary hashing stage and one stage of blockwise histograms; the filter size, the number of filters and the block size were set to k1 = k2 = 3, L1 = L2 = 4, and 7×7, respectively. The experimental results are shown in Fig. 2(a): DeepECOC with the ECOCONE (initialized by one-versus-one) coding strategy achieved better results than all other methods, including the traditional feature learning models, the existing deep learning methods and the single-layer ECOC approaches.

In the second part, we evaluated DeepECOC with different numbers of hidden layers, from 2 to 6, with the same parameter settings as in the first part. Fig. 2(b) shows the results: DeepECOC obtained the best result with 3 hidden layers. With fewer than 3 hidden layers, performance improves as the number of hidden layers increases; as the number of hidden layers grows beyond 3, performance decreases.

[Figure 2: (a) Classification accuracy on the USPS data set for LeNet, PCANet, AE, DAE, PCA, MFA, single-layer ECOC, and DeepECOC(1)-DeepECOC(3), the three variants of DeepECOC with the ECOCONE coding design initialized by one-versus-one, one-versus-all and DECOC, respectively. (b) Classification accuracy with different numbers of hidden layers (2-6) on the USPS data set.]

4.3 CLASSIFICATION ON THE MNIST DATA SET

The MNIST handwritten digits data set has a training set of 60,000 examples and a test set of 10,000 examples with 784-dimensional features. We designed two architectures for the autoencoder, the denoising autoencoder and DeepECOC. The first architecture was 784-Z1-Z2-Z3-10, where Z_i is the number of hidden neurons determined by the chosen ECOC coding strategy; we designed this architecture so that the autoencoder and denoising autoencoder would have the same structure as DeepECOC. The second architecture was 784-500-500-2000-10, as used in (Hinton & Salakhutdinov, 2006). To adapt DeepECOC to this structure, we used the dense and sparse coding designs, which can control the codeword length; note that the dense and sparse coding designs are totally random and data-independent. The denoising and dropout rates were set to 0.1, the batch size to 100, the learning rate to 0.01, and the number of epochs to 80000. For the LeNet model, we adopted the same parameters as (LeCun et al., 1998). For the PCANet model, we used two PCA-filter stages, one binary hashing stage and one stage of blockwise histograms, with the filter size, the number of filters and the block size set to k1 = k2 = 8, L1 = L2 = 7, and 7×7, respectively.

[Figure 3: Classification accuracy on the MNIST data set for the two architectures: (a) 784-Z1-Z2-Z3-10, comparing AE, DAE, DeepECOC(1)-DeepECOC(3) and single-layer ECOC; (b) 784-500-500-2000-10, comparing LeNet, PCANet, AE, DAE, sparse and dense DeepECOC, and single-layer ECOC.]
Fig. 3(a) and Fig. 3(b) show the experimental results for the two architectures. DeepECOC is comparable with the existing deep learning methods on the second architecture and outperforms them on the first. In addition, DeepECOC with both architectures outperforms the single-layer ECOC approaches.

4.4 CLASSIFICATION ON THE LBP-CIFAR10 DATA SET

The CIFAR-10 data set is a relatively large-scale data set consisting of 60,000 32×32 colour images in 10 classes, with 6000 images per class; there are 50,000 training images and 10,000 test images. To reduce the computational cost, we extracted features from the data using an efficient local binary patterns (LBP) algorithm. Representations with dimensionality 36 and 256 were adopted and the data were normalized to [0, 1] as well, yielding LBP-CIFAR10 (36) and LBP-CIFAR10 (256). We used 3 hidden layers for all deep learning methods, with the learning rate set to 0.1 and the number of epochs to 4000. For the LeNet model, we used two convolutional layers and two fully connected layers without pooling layers; the kernel size was set to 2×2 with stride 1, the number of nodes of the first fully connected layer to 64, the number of epochs to 4000, the initial learning rate to 0.01 with the "inv" learning rate policy, and the momentum to 0.9. For the PCANet model, we used two PCA-filter stages, one binary hashing stage and one stage of blockwise histograms, with the filter size, the number of filters and the block size set to k1 = k2 = 3, L1 = L2 = 4, and 7×7, respectively. The classification accuracies are reported in Table 4.

Table 4: Classification accuracy obtained on the LBP-CIFAR10 data set.

Problem           | AE     | DAE    | LeNet  | PCANet | DeepECOC(1) | DeepECOC(2) | DeepECOC(3)
LBP-CIFAR10 (36)  | 0.3501 | 0.3678 | 0.3256 | 0.2569 | 0.5089      | 0.4517      | 0.4752
LBP-CIFAR10 (256) | 0.4352 | 0.4587 | 0.3221 | 0.2569 | 0.5588      | 0.4589      | 0.5224

From Table 4, it is easy to see that DeepECOC achieved the best results; in particular, DeepECOC with the ECOCONE (initialized by one-versus-one) coding strategy achieved better results than the autoencoder, the denoising autoencoder, LeNet and PCANet. Hence, we can conclude that DeepECOC is a general model that can handle different real-world applications and achieves desirable results in most cases.

5 CONCLUSION

In this paper, we propose a novel deep learning model called deep error-correcting output codes (DeepECOC). DeepECOC extends traditional ECOC algorithms to a deep architecture and, at the same time, brings new elements to the deep learning area, such as supervised initialization and the automatic neglect of part of the data during network training. Extensive experiments on 16 data sets from the UCI machine learning repository, the USPS and MNIST handwritten digits, and the CIFAR-10 data set demonstrate the superiority of DeepECOC over traditional ECOC, feature learning and deep learning methods. In future work, we will further explore the learnability of DeepECOC on large-scale applications.

REFERENCES

E. L. Allwein, R. E. Schapire, and Y. Singer. Reducing multiclass to binary: A unifying approach for margin classifiers. The Journal of Machine Learning Research, 1:113-141, 2001.

T.-H. Chan, K. Jia, S. Gao, J. Lu, Z. Zeng, and Y. Ma. PCANet: A simple deep learning baseline for image classification? Image Processing, IEEE Transactions on, 24(12):5017-5032, 2015.
C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2(3):27, 2011.

S. Deerwester, S. Dumais, T. Landauer, G. Furnas, and R. Harshman. Indexing by latent semantic analysis. JASIS, 41(6):391-407, 1990.

T. G. Dietterich and G. Bakiri. Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research, pp. 263-286, 1995.

S. Escalera, O. Pujol, and P. Radeva. ECOC-ONE: A novel coding and decoding strategy. In ICPR, volume 3, pp. 578-581, 2006.

S. Escalera, O. Pujol, and P. Radeva. Separability of ternary codes for sparse designs of error-correcting output codes. Pattern Recognition Letters, 30(3):285-297, 2009.

S. Escalera, O. Pujol, and P. Radeva. On the decoding process in ternary error-correcting output codes. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 32(1):120-134, 2010.

A. Graves, M. Liwicki, S. Fernandez, R. Bertolami, H. Bunke, and J. Schmidhuber. A novel connectionist system for unconstrained handwriting recognition. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 31(5):855-868, 2009.

T. Hastie, R. Tibshirani, et al. Classification by pairwise coupling. The Annals of Statistics, 26(2):451-471, 1998.

G. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.

G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.

S. Hong, T. You, S. Kwak, and B. Han. Online tracking by learning discriminative saliency map with convolutional neural network. In ICML, 2015.

I. Jolliffe. Principal Component Analysis. New York: Springer-Verlag, 1986.

I. Jolliffe. Principal Component Analysis. Wiley Online Library, 2002.

A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pp. 1106-1114, 2012.

Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

N. J. Nilsson. Learning Machines. McGraw-Hill, 1965.

A. Passerini, M. Pontil, and P. Frasconi. New results on error correcting output codes of kernel machines. Neural Networks, IEEE Transactions on, 15(1):45-54, 2004.

O. Pujol, P. Radeva, and J. Vitria. Discriminant ECOC: A heuristic method for application dependent design of error correcting output codes. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 28(6):1007-1012, 2006.

O. Pujol, S. Escalera, and P. Radeva. An incremental node embedding technique for error correcting output codes. Pattern Recognition, 41(2):713-725, 2008.

D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Cognitive Modeling, 5:3, 1988.

H. Sak, A. Senior, and F. Beaufays. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In INTERSPEECH, pp. 338-342, 2014.

K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. CoRR, abs/1409.4842, 2014.

G. Trigeorgis, K. Bousmalis, S. Zafeiriou, and B. Schuller. A deep semi-NMF model for learning hidden representations. In ICML, pp. 1692-1700, 2014.
P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust features with denoising autoencoders. In ICML, pp. 1096-1103, 2008.

X. Wang and Q. Ji. Video event recognition with deep hierarchical context model. In CVPR, pp. 4418-4427, 2015.

S. Yan, D. Xu, B. Zhang, H.-J. Zhang, Q. Yang, and S. Lin. Graph embedding and extensions: A general framework for dimensionality reduction. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 29(1):40-51, 2007.

Y. Yuan, L. Mou, and X. Lu. Scene recognition by manifold regularized deep learning architecture. Neural Networks and Learning Systems, IEEE Transactions on, 26(10):2222-2233, 2015.

X. Zhang, J. Zhao, and Y. LeCun. Character-level convolutional networks for text classification. In NIPS, pp. 649-657, 2015.

Y. Zheng, G. Zhong, J. Liu, X. Cai, and J. Dong. Visual texture perception with feature learning models and deep architectures. In Pattern Recognition, pp. 401-410. Springer, 2014.

Y. Zheng, Y. Cai, G. Zhong, Y. Chherawala, Y. Shi, and J. Dong. Stretching deep architectures for text recognition. In ICDAR, pp. 236-240, 2015.

G. Zhong and M. Cheriet. Adaptive error-correcting output codes. In IJCAI, 2013.

G. Zhong and C.-L. Liu. Error-correcting output codes based ensemble feature extraction. Pattern Recognition, 46(4):1091-1100, 2013.

G. Zhong, K. Huang, and C.-L. Liu. Joint learning of error-correcting output codes and dichotomizers from data. Neural Computing and Applications, 21(4):715-724, 2012.
INEFFICIENCY OF STOCHASTIC GRADIENT DESCENT WITH LARGER MINI-BATCHES (AND MORE LEARNERS)

Onkar Bhardwaj & Guojing Cong
IBM T. J. Watson Research Center
Yorktown Heights, NY 10598, USA
{obhardw,gcong}@us.ibm.com

ABSTRACT

Stochastic Gradient Descent (SGD) and its variants are among the most important optimization algorithms used in large-scale machine learning. The mini-batch version of stochastic gradient descent is often used in practice to take advantage of hardware parallelism. In this work, we analyze the effect of mini-batch size on SGD convergence for general non-convex objective functions. Building on past analyses, we justify mathematically that there can often be a large difference between the convergence guarantees provided by small and large mini-batches (given that each instance processes an equal number of training samples), while providing experimental evidence for the same. Moving to distributed settings, we show that an analogous effect holds for the popular Asynchronous Stochastic Gradient Descent (ASGD): there can be a large difference between convergence guarantees as the number of learners increases, given that the cumulative number of training samples processed remains the same. Thus a similar inherent inefficiency is introduced in the convergence behavior whenever we attempt to exploit parallelism, either by increasing the mini-batch size or by increasing the number of learners.

1 INTRODUCTION

Stochastic gradient descent (SGD) and its parallel variants form the backbone of most popular deep learning applications. Consequently, there has been significant interest in investigating their convergence properties. SGD has been shown to satisfy an asymptotic convergence rate of O(1/S) for convex objective functions [Nemirovski et al. (2009)] and an asymptotic convergence rate of O(1/√S) for general non-convex objective functions, with mini-batch size 1 in [Ghadimi & Lan (2013)] or with arbitrary mini-batch sizes in [Dekel et al. (2012)].

Although SGD converges asymptotically at the same rate irrespective of mini-batch size, it has been reported that convergence is often slower for large mini-batch sizes; for example, see Wilson & Martinez (2003) for detailed graphical illustrations of the effect of increasing batch size, or Bottou (2010) for comments on the trade-offs of mini-batching. In this work, we are interested in using theoretical analysis to justify such practical observations. In particular, we show the following:

- We consider general non-convex objective functions and show that, prior to reaching the asymptotic regime, SGD convergence can become much slower with a larger mini-batch size, assuming a constant learning rate (this is inferred from the difference in the convergence rate guarantees of Theorem 2). To evaluate the convergence rate guarantee we use the average gradient norm, since we consider general non-convex objectives. As a consequence of slower convergence, the number of training samples required to attain a given convergence guarantee (in terms of average gradient norm) increases as the mini-batch size increases. We build the analysis on the framework of Ghadimi & Lan (2013).

- Further, we investigate Asynchronous Stochastic Gradient Descent (ASGD), one of the most popular asynchronous variants of SGD [Dean et al. (2012); Li et al. (2014a); Chilimbi et al. (2014)]. Recently, Lian et al. (2015) extended the SGD convergence results to ASGD and showed that it converges asymptotically at a rate of O(1/√S).
In our analysis we show that, prior to the asymptotic regime, ASGD convergence can become much slower as the number of learners grows, in terms of the average gradient norm attained after cumulatively processing a fixed number of training samples (this slow-down is inferred from the difference in the convergence guarantees of Theorem 4).

This suggests that there is an inherent limit on harnessing parallelism with SGD, either by increasing the mini-batch size or by increasing the number of learners, even when we do not account for overheads such as communication cost. The differences in convergence behavior caused by increasing the mini-batch size for SGD and by increasing the number of learners for ASGD turn out to be similar (see Theorem 2, Theorem 4 and the discussion at the end of Section 4).

For the rest of the paper, we use the following notation. Let F(x, z_i) denote a non-convex function of a parameter vector x and a training sample z_i selected from the training set {z_1, z_2, ..., z_n}. Our aim is to find a parameter vector that minimizes the objective function f(x) = E_z F(x, z). Towards this, we use mini-batch SGD, where in the kth iteration we select a random mini-batch z_k = {z_k^1, z_k^2, ..., z_k^M} of size M and perform the update

x_{k+1} = x_k − γ G(x_k, z_k) = x_k − γ Σ_{i=1}^M G(x_k, z_k^i).   (1)

Here γ denotes the learning rate and G(x_k, z_k) = Σ_{i=1}^M G(x_k, z_k^i), where G(x_k, z_k^i) denotes the stochastic gradient of the objective function f(x) evaluated on the training sample z_k^i ∈ z_k (a minimal code sketch of this update follows the assumptions below). We define D_f := f(x_1) − f(x*), where x_1 is the initial parameter vector and x* is the local optimum towards which SGD proceeds. We also denote by S the total number of training samples to be processed.

Additionally, we make the following standard assumptions, see e.g. [Lian et al. (2015); Ghadimi & Lan (2013)]:

A.1 Unbiased estimator: we assume that the expectation of G(x, z) equals the true value of the gradient, i.e., E_z G(x, z) = ∇f(x) for all x.

A.2 Bounded variance: we assume that there exists a constant σ² such that E_z(‖G(x, z) − ∇f(x)‖²) ≤ σ² for all x.

A.3 Lipschitzian gradient: we assume f(x) to satisfy Lipschitzian smoothness, i.e., there exists a constant L such that ‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖ for all x, y.
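A minimal code sketch of the update in Eq. (1) under this notation (our own toy illustration; the quadratic example objective and all names are assumptions):

```python
import numpy as np

def sgd(grad, x0, data, gamma, M, K, rng):
    # K updates of Eq. (1): x_{k+1} = x_k - gamma * sum_i G(x_k, z_k^i),
    # where each z_k is a random mini-batch of M training samples.
    x = x0.copy()
    for _ in range(K):
        batch = data[rng.choice(len(data), size=M, replace=False)]
        x -= gamma * sum(grad(x, z) for z in batch)
    return x

# Toy example: f(x) = E_z ||x - z||^2 / 2, so G(x, z) = x - z.
rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 5))
x_hat = sgd(lambda x, z: x - z, np.zeros(5), data, gamma=0.01, M=16, K=500, rng=rng)
```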
The paper is organized as follows: Section 2 discusses related work. We then analyze the impact of mini-batch size on SGD in Section 3 and extend the analysis to ASGD in Section 4. We provide experimental evidence for our analysis in Section 5 and conclude by discussing future directions in Section 6.

2 RELATED WORK

In recent years, there have been several works analyzing the convergence properties of SGD and its variants. In Nemirovski et al. (2009), SGD has been shown to have a convergence rate of O(1/S) for convex objective functions, where S is the number of samples seen; this rate is in terms of the distance of the objective function from its optimal value. When the objective functions are non-convex, as is the case with most deep learning applications, the convergence rate of SGD in terms of the average gradient norm has been shown to be O(1/√S) asymptotically by Ghadimi & Lan (2013). The results in Dekel et al. (2012) can be interpreted as showing that this convergence rate also applies to mini-batches of size M, where S now takes the form MK with K being the number of mini-batches processed.

Among the distributed variants of SGD, ASGD has been the most popular [Dean et al. (2012); Li et al. (2014a); Chilimbi et al. (2014)]. In practice it has been observed that ASGD often converges more slowly as the number of learners increases [Seide et al. (2014); Chan & Lane (2014); Dean et al. (2012)]. Although these works did not ignore communication overhead, in Section 4 we investigate the inherent inefficiency of ASGD even without communication overhead costs. In Lian et al. (2015), it was proved that in the asymptotic regime the convergence rate of O(1/√S) extends to ASGD when the "age" of the updates is bounded by the number of learners.

There exist several other sequential and distributed variants of SGD. For example, SGD with variance reduction techniques to mitigate the effects of gradient variance is discussed in [Johnson & Zhang (2013); Xiao & Zhang (2014)]. SGD with coordinate descent/ascent and its distributed variants has been studied in [Hsieh et al. (2008); Richtárik & Takáč (2013); Fercoq et al. (2014); Konečný et al. (2014); Qu & Richtárik (2014); Liu et al. (2015); Jaggi et al. (2014); Nesterov (2012)]. The convergence properties of asynchronous stochastic coordinate descent are analyzed in Liu & Wright (2015). More recently, Meng et al. (2016) studied combining variance reduction, randomized block coordinate descent [Richtárik & Takáč (2014)] and Nesterov's acceleration method [Nesterov (2013)], and analyzed the theoretical properties of the combination.

There have also been several recent works that attempt to mitigate the degrading convergence for large mini-batches (e.g., see [Li et al. (2014b)], where in each mini-batch a regularized objective function is optimized to compute the updated parameter vector), works that select the mini-batch size dynamically for better performance, e.g., [Byrd et al. (2012); Tan et al. (2016); De et al. (2016)], and works that attempt to improve SGD performance by intelligent selection of training samples, e.g., [Needell et al. (2014); Bouchard et al. (2015)].

3 THE IMPACT OF MINI-BATCH SIZE ON SGD

In this section, we build on the SGD convergence analysis of Ghadimi & Lan (2013). In particular, we consider the convergence guarantees from Theorem 2.1 in Ghadimi & Lan (2013), restricted to a constant learning rate, but first modify their analysis to allow mini-batches of arbitrary size. Building on this, we show that SGD with smaller mini-batches can have better convergence guarantees than with larger mini-batches. As a consequence, we observe that for larger mini-batches, a larger number of samples is needed for the convergence guarantee to fall below a given threshold.

Lemma 1. With mini-batches of size M and a constant learning rate γ, after K iterations of SGD we have

(1/K) Σ_{k=1}^K E(‖∇f(x_k)‖²) ≤ (1 − γLM/2)^{−1} ( D_f/(Sγ) + Lγσ²/2 ).   (2)

Proof outline. Using the Lipschitzian-gradient property, we get

f(x_{k+1}) ≤ f(x_k) + ⟨∇f(x_k), x_{k+1} − x_k⟩ + (L/2)‖x_{k+1} − x_k‖²
         = f(x_k) − γ_k ⟨∇f(x_k), Σ_{i=1}^M G(x_k, z_k^i)⟩ + (γ_k² L/2) ‖Σ_{i=1}^M G(x_k, z_k^i)‖².   (3)

Let us define δ_k^i = G(x_k, z_k^i) − ∇f(x_k) and δ_k = Σ_{i=1}^M δ_k^i. Using these, the above can be rewritten as

f(x_{k+1}) ≤ f(x_k) − γ_k ⟨∇f(x_k), δ_k + M∇f(x_k)⟩ + (γ_k² L/2) ‖δ_k + M∇f(x_k)‖²,

which implies

f(x_{k+1}) ≤ f(x_k) − γ_k ⟨∇f(x_k), δ_k⟩ − Mγ_k ‖∇f(x_k)‖² + (γ_k² L/2) ( ‖δ_k‖² + 2M ⟨δ_k, ∇f(x_k)⟩ + M² ‖∇f(x_k)‖² ).

The rest of the proof adds such inequalities over the K updates and bounds ‖δ_k‖² using Assumption A.2 and ⟨δ_k, ∇f(x_k)⟩ using Assumption A.1 from Section 1.
Finally, we use f(x_K) − f(x*) ≤ D_f and rearrange the terms to get the desired bound.

In the following theorem, we justify that, given a fixed number of samples S to be processed, SGD with smaller mini-batches can have better convergence guarantees than with larger mini-batches. Note that γ̄ used in the theorem statement is a measure of (the square root of) the number of training samples processed.

Theorem 2. Let γ̄ := √(Sσ²/(2LD_f)) and let 4Ml ≤ γ̄ ≤ Mh/4. Then the convergence guarantee of SGD with mini-batch size Mh after processing S training samples can be worse than the convergence guarantee with mini-batch size Ml by a factor of 2Mh/(√2 γ̄ + Ml).

Proof outline. For a fixed number S of training samples to be processed, we minimize the right-hand side of Equation 2 to find the best convergence guarantee supported by Lemma 1. Let γ = c √(D_f/(SLσ²)), where c is a scalar multiplier. Substituting this into Equation 2 and after some algebraic manipulations, we get

(1/K) Σ_{k=1}^K E(‖∇f(x_k)‖²) ≤ ( 1/c + c/2 ) ( 1 − cM/(2γ̄) )^{−1} √( D_f L σ² / S ).   (4)

By applying simple calculus, it can be shown that the value c* of c which minimizes the right-hand side of the above equation is given by

c* = (M/γ̄) ( −1 + √( 1 + 2γ̄²/M² ) ).   (5)

In the appendix, we show that for M = Ml ≤ γ̄/4 we have c* ≈ √2 − Ml/γ̄, and consequently the coefficient of √(D_f L σ²/S) in Equation 4 evaluates to approximately √2 + Ml/γ̄; whereas for M = Mh ≥ 4γ̄, we show that c* ≈ γ̄/Mh and the coefficient of √(D_f L σ²/S) in Equation 4 evaluates to approximately 2Mh/γ̄. Combining these observations, we get that the convergence guarantees for Ml and Mh can differ by a factor of 2Mh/(√2 γ̄ + Ml) after processing S training samples. See the appendix for the complete proof.

Note that, while it may be possible to show in more generality that smaller mini-batches converge faster than larger mini-batches, we used 4Ml ≤ γ̄ ≤ Mh/4 in Theorem 2 for the sake of simplicity. Also, although Theorem 2 theoretically justifies the faster convergence of smaller mini-batches, the exact factor by which bigger mini-batches are worse can vary in practice; our purpose here is to give theoretical support to the practical observation that smaller mini-batches are faster.

Theorem 3. The number of samples that needs to be processed for SGD to achieve a given convergence guarantee increases as the mini-batch size increases.

Proof. For the same values of γ and S, the bound in Equation 2 becomes worse (i.e., larger) as M increases, because for a fixed γ the quantity 1 − γLM/2 decreases as M increases. Consequently, for a given S, the best convergence guarantee (i.e., the smallest bound on the average gradient norm) attainable by varying γ becomes worse (i.e., larger) as M increases. Thus, to reach the same convergence guarantee, SGD must process more training samples as the mini-batch size increases.
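The gap promised by Theorem 2 is easy to check numerically straight from Equations 4 and 5. The snippet below is our own illustration (gbar stands for γ̄, and the particular values of γ̄, Ml, Mh are arbitrary):

```python
import numpy as np

def coeff(M, gbar):
    # Optimal multiplier c* from Eq. (5), then the coefficient of
    # sqrt(D_f * L * sigma^2 / S) on the right-hand side of Eq. (4).
    c = (M / gbar) * (-1.0 + np.sqrt(1.0 + 2.0 * gbar**2 / M**2))
    return (1.0 / c + c / 2.0) / (1.0 - c * M / (2.0 * gbar))

gbar = 256.0                   # gbar grows like sqrt(S); value chosen arbitrarily
Ml, Mh = 16.0, 4096.0          # satisfy 4*Ml <= gbar <= Mh/4
print(coeff(Mh, gbar) / coeff(Ml, gbar))       # ~21.7
print(2.0 * Mh / (np.sqrt(2.0) * gbar + Ml))   # Theorem 2 factor, also ~21.7
```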
Each learner asynchronously repeats the following:Pull: Get the parameters from the server.4Under review as a conference paper at ICLR 2017Compute: Compute the gradient with respect to randomly selected mini-batch (i.e., a cer-tain number of samples from the dataset).Push and update: Communicate the gradient to the server. Server then updates the pa-rameters by subtracting this newly communicated gradient multiplied by the learning rate.We assume that the update performed by the server is atomic, i.e., the server does not send or receiveparameters while it updates the parameters. Now we express kthupdate step of the ASGD algorithmin terms of our notation. Note that for kthupdate, the partial gradient computed by a learner can bewith respect to an older parameter vector. This is because while computing the partial gradient, theparameter vector could have been updated because of the partial gradients sent in by other learners.Letxkbe the parameter vector used by a learner to compute the partial gradient to be used in kthupdate. Then the equation for kthupdate of ASGD becomes:xk+1=xkG(xk;z) (6)Lian et al. (2015) showed that when the age of the updates is bounded by the number of learnersN, then ASGD asymptotically converges with a rate of O(1=pS)whereSis the cumulative numberof training samples processed. From Theorem 1 in Lian et al. (2015), the convergence rate guarantee(expressed in the terms of average gradient norm) for ASGD with Nlearners after processing Kupdates becomesPKk=1E(krf(xk)k2)K2DfMK+2L+ 22L2MN2(7)s.t.LM + 2L2M2N221 (8)The terms independent of the number of updates Kin Equation 7 indicate that with a constantlearning rate, there is a limit on how close the algorithm can reach to the optimum without loweringthe learning rate. Although asymptotically, it can be shown that Equation 7-8 lead to O(1=pS)convergence (see Lian et al. (2015)), we now investigate the convergence behavior prior to such aregime. We have the following theorem about the effect of increasing the number of learners onASGD convergence guarantee:Theorem 4. LetN > 1be the number of learners and let =qK2MLD fN, then the optimalASGD convergence rate guarantee for 1learner and Nlearners can differ by a factor of approxi-matelyN.The proof of above theorem is in the same spirit as that of Theorem 2 and can be found in theappendix. Note that without asynchronous nature of ASGD, the analysis for synchronous distributedSGD would be the same as the analysis for SGD from Section 3. This is because synchronous SGD,where each of the Nlearners compute the gradient for a random mini-batch of size M, equivalentlyrepresents SGD with mini-batch size MN . The asynchronous nature of ASGD introduces extrafactors to be taken into account such as the “age” of the updates (i.e., the situation where the gradientreturned by a learner may have been computed by an older parameter vector).Theorem 5. For a constant mini-batch size M, the number of samples needs to be processed inorder to achieve the same convergence guarantee increases as the number of learners increases.Proof outline. The range of permissible by Equation 8 becomes smaller as Nincreases. Rest ofthe proof combines the observations that the minimum attained by the convergence guarantee byEquation 7 must become worse if the range of decreases and Nincreases. For complete proof,please see the appendix.Discussion: From Theorem 2, for sequential SGD, there could be a difference of 2Mh=(p2+Ml)between the convergence guarantee of mini-batch sizes Ml,Mhwith 4MlMh=4. 
Lian et al. (2015) showed that when the age of the updates is bounded by the number of learners N, ASGD converges asymptotically at a rate of O(1/√S), where S is the cumulative number of training samples processed. From Theorem 1 in Lian et al. (2015), the convergence guarantee (expressed in terms of the average gradient norm) for ASGD with N learners after K updates becomes

(1/K) Σ_{k=1}^K E(‖∇f(x_k)‖²) ≤ 2D_f/(γMK) + σ²γL + 2σ²L²MNγ²   (7)

subject to

γLM + 2γ²L²M²N² ≤ 1.   (8)

The terms independent of the number of updates K in Equation 7 indicate that, with a constant learning rate, there is a limit on how close the algorithm can get to the optimum without lowering the learning rate. Although it can be shown that Equations 7-8 lead to O(1/√S) convergence asymptotically (see Lian et al. (2015)), we now investigate the convergence behavior prior to such a regime. We have the following theorem on the effect of the number of learners on the ASGD convergence guarantee:

Theorem 4. Let N > 1 be the number of learners and let γ̄ := √(Kσ²/(2MLD_f)) satisfy 16 ≤ γ̄ ≤ N. Then the optimal ASGD convergence rate guarantees for 1 learner and for N learners can differ by a factor of approximately √2N/γ̄.

The proof of the above theorem is in the same spirit as that of Theorem 2 and can be found in the appendix. Note that without the asynchrony of ASGD, the analysis of synchronous distributed SGD would be the same as the SGD analysis from Section 3: synchronous SGD, where each of the N learners computes the gradient of a random mini-batch of size M, is equivalent to SGD with mini-batch size MN. The asynchrony of ASGD introduces extra factors to be taken into account, such as the "age" of the updates (i.e., the situation where the gradient returned by a learner was computed with an older parameter vector).

Theorem 5. For a constant mini-batch size M, the number of samples that needs to be processed to achieve a given convergence guarantee increases as the number of learners increases.

Proof outline. The range of γ permitted by Equation 8 shrinks as N increases. The rest of the proof combines the observations that the minimum attained by the convergence guarantee of Equation 7 must become worse if the range of γ shrinks while N increases. For the complete proof, please see the appendix.

Discussion. From Theorem 2, for sequential SGD there can be a difference of 2Mh/(√2 γ̄ + Ml) between the convergence guarantees of mini-batch sizes Ml and Mh with 4Ml ≤ γ̄ ≤ Mh/4. Assuming Ml to be far smaller than γ̄, this factor becomes approximately √2 Mh/γ̄. This is comparable with the difference of approximately √2 N/γ̄ between the ASGD convergence guarantees for 1 learner and for N learners. Although the exact numerical multipliers may differ depending on the tightness of the original convergence bounds, this points to the similarity between the slow-down caused by bigger mini-batch sizes with SGD and that caused by a larger number of learners with ASGD. At a high level, ASGD with a larger number of learners (with bounded age/staleness of the updates) can be thought of as SGD with some effective mini-batch size, which may depend on the number of learners as well as on the age/staleness of the updates.

5 EXPERIMENTS

Experiment setup: We carry out our experiments on the CIFAR-10 dataset [cif (accessed January 11, 2016a)], which contains 50,000 training samples and 10,000 test samples, each associated with 1 out of 10 possible labels. For CIFAR-10, our aim is to predict the correct labels of the input images. For our experiments, we train the convolutional neural network shown in Table 1, which is taken from [cif (accessed January 11, 2016b)]. It is a fairly standard convolutional network design consisting of a series of convolutional layers interspersed with max-pooling layers; the convolutional layer outputs are filtered with rectified linear units before max-pooling is applied. Additionally, it uses dropout layers, which act as regularization [Srivastava et al. (2014)]. At the end, it has a fully connected layer with 10 outputs (equal to the number of labels). The number of parameters to be learned for the CIFAR-10 network is about 0.5 million.

Table 1: Convolutional neural network for CIFAR-10. For convolutional layers, nfeat denotes the number of input feature maps and nkern denotes the number of kernels.

Input: mini-batch of M RGB images
Convolution: (nfeat, nkern, height, width) = (3, 64, 5, 5); ReLU; max-pooling (2, 2); dropout (prob. 0.5)
Convolution: (nfeat, nkern, height, width) = (64, 128, 3, 3); ReLU; max-pooling (2, 2); dropout (prob. 0.5)
Convolution: (nfeat, nkern, height, width) = (128, 256, 3, 3); ReLU; max-pooling (2, 2); dropout (prob. 0.5)
Convolution: (nfeat, nkern, height, width) = (256, 128, 2, 2); ReLU; max-pooling (2, 2); dropout (prob. 0.5)
Fully connected layer: 128 × 10
Cross-entropy error
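For concreteness, a PyTorch rendering of the Table 1 network (the paper's implementation is in Torch/Lua; the padding values below are our assumption, chosen only to keep the spatial dimensions valid for 32×32 inputs):

```python
import torch
import torch.nn as nn

def block(in_ch, out_ch, k, p):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=p),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Dropout(0.5),
    )

model = nn.Sequential(
    block(3, 64, 5, 2),     # 32x32 -> 16x16
    block(64, 128, 3, 1),   # 16x16 -> 8x8
    block(128, 256, 3, 1),  # 8x8 -> 4x4
    block(256, 128, 2, 0),  # 4x4 -> 1x1
    nn.Flatten(),
    nn.Linear(128, 10),     # 10 labels; train with nn.CrossEntropyLoss()
)

logits = model(torch.zeros(4, 3, 32, 32))   # shape (4, 10)
```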
We use the cross-entropy between the input labels and the predicted labels in the final output layer, i.e., F(x, z) (see Section 1 for the notation) is the cross-entropy error and f(x) is the average cross-entropy error over all training samples. The neural network was implemented in Torch. The target platform for our experiments is a Magma system with 16 GPUs connected to an IBM Power8 host. In our ASGD Downpour implementation, the learners run on the GPUs, while the parameter server runs on the CPU.

We ran our experiments for 100 epochs, where by an epoch we mean a complete pass over the training data. We chose the learning rate to be 0.01, as it was seen to perform well at the end of our experiments with respect to test accuracy (i.e., classification accuracy on the test data). For the ASGD experiments, we randomly partitioned the training data among all N learners at the beginning of each epoch, and we measured the test accuracy at the end of each epoch.

[Figure 1: SGD experiments with CIFAR-10 (test accuracy vs. epochs for mini-batch sizes M = 16, 32, 64, 128): convergence becomes slower as the mini-batch size M increases.]

[Figure 2: ASGD convergence for CIFAR-10 (test accuracy vs. epochs for N = 1, 2, 8, 16 learners): convergence becomes slower as the number of learners N increases.]

See Figure 1 for the results of our SGD experiments: as the mini-batch size increases, the test accuracy converges more slowly with respect to the number of epochs. See Figure 2 for the results of our ASGD experiments: again, as the number of learners increases, convergence of the test accuracy becomes slower.

These observations agree with our justifications from Sections 3 and 4. Moreover, they show the similarity between the slow-down caused by increasing the mini-batch size with SGD and that caused by increasing the number of learners with ASGD. Thus, exploiting parallelism, either by increasing the mini-batch size or by increasing the number of learners, introduces an inherent inefficiency in the convergence behavior, even after disregarding overheads such as communication time.

6 CONCLUSION AND FUTURE DIRECTIONS

In this paper, we theoretically justified the faster convergence (in terms of the average gradient norm attained after processing a fixed number of samples) of SGD with small mini-batches, and of ASGD with a smaller number of learners. This indicates that there is an inherent inefficiency in the speed-up obtained by parallelizing gradient descent methods. It would be interesting to see whether such a conclusion holds for more advanced update methods than vanilla SGD, for example methods using momentum and its variants.

REFERENCES

Cifar10 dataset. https://www.cs.toronto.edu/~kriz/cifar.html, accessed January 11, 2016a.

Cifar10 model. https://github.com/eladhoffer/ConvNet-torch/blob/master/Models/Model.lua, accessed January 11, 2016b.

Léon Bottou. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010, pp. 177-186. Springer, 2010.

Guillaume Bouchard, Théo Trouillon, Julien Perez, and Adrien Gaidon. Accelerating stochastic gradient descent via online learning to sample. arXiv preprint arXiv:1506.09016, 2015.

Richard H. Byrd, Gillian M. Chin, Jorge Nocedal, and Yuchen Wu. Sample size selection in optimization methods for machine learning. Mathematical Programming, 134(1):127-155, 2012.

William Chan and Ian Lane. Distributed asynchronous optimization of convolutional neural networks. In INTERSPEECH, pp. 1073-1077, 2014.

Trishul Chilimbi, Yutaka Suzue, Johnson Apacible, and Karthik Kalyanaraman. Project Adam: Building an efficient and scalable deep learning training system. In 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI 14), pp. 571-582, 2014.

Soham De, Abhay Yadav, David Jacobs, and Tom Goldstein. Big batch SGD: Automated inference using adaptive batch sizes.
arXiv preprint arXiv:1610.05792 , 2016.Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior,Paul Tucker, Ke Yang, Quoc V Le, et al. Large scale distributed deep networks. In Advances inneural information processing systems , pp. 1223–1231, 2012.Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, and Lin Xiao. Optimal distributed online predictionusing mini-batches. Journal of Machine Learning Research , 13(Jan):165–202, 2012.Olivier Fercoq, Zheng Qu, Peter Richt ́arik, and Martin Tak ́aˇc. Fast distributed coordinate descentfor non-strongly convex losses. In 2014 IEEE International Workshop on Machine Learning forSignal Processing (MLSP) , pp. 1–6. IEEE, 2014.Saeed Ghadimi and Guanghui Lan. Stochastic first-and zeroth-order methods for nonconvex stochas-tic programming. SIAM Journal on Optimization , 23(4):2341–2368, 2013.Cho-Jui Hsieh, Kai-Wei Chang, Chih-Jen Lin, S Sathiya Keerthi, and Sellamanickam Sundarara-jan. A dual coordinate descent method for large-scale linear svm. In Proceedings of the 25thinternational conference on Machine learning , pp. 408–415. ACM, 2008.Martin Jaggi, Virginia Smith, Martin Takac, Jonathan Terhorst, Sanjay Krishnan, Thomas Hofmann,and Michael I Jordan. Communication-efficient distributed dual coordinate ascent. In Advancesin Neural Information Processing Systems , pp. 3068–3076, 2014.Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variancereduction. In Advances in Neural Information Processing Systems , pp. 315–323, 2013.Jakub Kone ˇcn`y, Zheng Qu, and Peter Richt ́arik. Semi-stochastic coordinate descent. arXiv preprintarXiv:1412.6293 , 2014.Mu Li, David G Andersen, Jun Woo Park, Alexander J Smola, Amr Ahmed, Vanja Josifovski,James Long, Eugene J Shekita, and Bor-Yiing Su. Scaling distributed machine learning with theparameter server. In 11th USENIX Symposium on Operating Systems Design and Implementation(OSDI 14) , pp. 583–598, 2014a.Mu Li, Tong Zhang, Yuqiang Chen, and Alexander J Smola. Efficient mini-batch training forstochastic optimization. In Proceedings of the 20th ACM SIGKDD international conference onKnowledge discovery and data mining , pp. 661–670. ACM, 2014b.8Under review as a conference paper at ICLR 2017Xiangru Lian, Yijun Huang, Yuncheng Li, and Ji Liu. Asynchronous parallel stochastic gradient fornonconvex optimization. In Advances in Neural Information Processing Systems , pp. 2737–2745,2015.Ji Liu and Stephen J Wright. Asynchronous stochastic coordinate descent: Parallelism and conver-gence properties. SIAM Journal on Optimization , 25(1):351–376, 2015.Ji Liu, Stephen J Wright, Christopher Re, Victor Bittorf, and Srikrishna Sridhar. An asynchronousparallel stochastic coordinate descent algorithm. Journal of Machine Learning Research , 16(285-322):1–5, 2015.Qi Meng, Wei Chen, Jingcheng Yu, Taifeng Wang, Zhi-Ming Ma, and Tie-Yan Liu. Asynchronousaccelerated stochastic gradient descent. In Proceedings of the 25th international joint conferenceon Artificial Intelligence , 2016.Deanna Needell, Rachel Ward, and Nati Srebro. Stochastic gradient descent, weighted sampling,and the randomized kaczmarz algorithm. In Advances in Neural Information Processing Systems ,pp. 1017–1025, 2014.Arkadi Nemirovski, Anatoli Juditsky, Guanghui Lan, and Alexander Shapiro. Robust stochasticapproximation approach to stochastic programming. SIAM Journal on optimization , 19(4):1574–1609, 2009.Yu Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. 
SIAM Journal on Optimization, 22(2):341-362, 2012.

Yurii Nesterov. Introductory Lectures on Convex Optimization: A Basic Course, volume 87. Springer Science & Business Media, 2013.

Zheng Qu and Peter Richtárik. Coordinate descent with arbitrary sampling I: Algorithms and complexity. arXiv preprint arXiv:1412.8060, 2014.

Peter Richtárik and Martin Takáč. Distributed coordinate descent method for learning with big data. 2013.

Peter Richtárik and Martin Takáč. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Mathematical Programming, 144(1-2):1-38, 2014.

Frank Seide, Hao Fu, Jasha Droppo, Gang Li, and Dong Yu. On parallelizability of stochastic gradient descent for speech DNNs. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 235-239. IEEE, 2014.

Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958, 2014.

Conghui Tan, Shiqian Ma, Yu-Hong Dai, and Yuqiu Qian. Barzilai-Borwein step size for stochastic gradient descent. arXiv preprint arXiv:1605.04131, 2016.

D. Randall Wilson and Tony R. Martinez. The general inefficiency of batch training for gradient descent learning. Neural Networks, 16(10):1429-1451, 2003.

Lin Xiao and Tong Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057-2075, 2014.

A APPENDIX

Lemma 1. With mini-batches of size M and a constant learning rate γ, after K iterations of SGD we have

(1/K) Σ_{k=1}^K E(‖∇f(x_k)‖²) ≤ (1 − γLM/2)^{−1} ( D_f/(Sγ) + Lγσ²/2 ).   (2)

Proof. Using the Lipschitzian-gradient property, we get

f(x_{k+1}) ≤ f(x_k) + ⟨∇f(x_k), x_{k+1} − x_k⟩ + (L/2)‖x_{k+1} − x_k‖²
         = f(x_k) − γ_k ⟨∇f(x_k), Σ_{i=1}^M G(x_k, z_k^i)⟩ + (γ_k² L/2) ‖Σ_{i=1}^M G(x_k, z_k^i)‖².   (9)

Let us define δ_k^i = G(x_k, z_k^i) − ∇f(x_k) and δ_k = Σ_{i=1}^M δ_k^i. Using these, the above can be rewritten as

f(x_{k+1}) ≤ f(x_k) − γ_k ⟨∇f(x_k), δ_k + M∇f(x_k)⟩ + (γ_k² L/2) ‖δ_k + M∇f(x_k)‖²,

which implies

f(x_{k+1}) ≤ f(x_k) − γ_k ⟨∇f(x_k), δ_k⟩ − Mγ_k ‖∇f(x_k)‖² + (γ_k² L/2) ( ‖δ_k‖² + 2M ⟨δ_k, ∇f(x_k)⟩ + M² ‖∇f(x_k)‖² ).

Generating such inequalities over K mini-batches and adding them, we get

−D_f ≤ f(x_{K+1}) − f(x_1) ≤ Σ_{k=1}^K [ −γ_k ⟨∇f(x_k), δ_k⟩ − Mγ_k ‖∇f(x_k)‖² + (γ_k² L/2) ( ‖δ_k‖² + 2M ⟨δ_k, ∇f(x_k)⟩ + M² ‖∇f(x_k)‖² ) ].

Simple rearrangement of terms gives us

Σ_{k=1}^K ( Mγ_k − γ_k² LM²/2 ) ‖∇f(x_k)‖² ≤ D_f + Σ_{k=1}^K [ −γ_k ⟨∇f(x_k), δ_k⟩ + (γ_k² L/2) ( ‖δ_k‖² + 2M ⟨δ_k, ∇f(x_k)⟩ ) ].   (10)

Now we observe that E(‖δ_k‖²) = E(‖Σ_{i=1}^M δ_k^i‖²) = Σ_{i=1}^M E(‖δ_k^i‖²) ≤ Mσ², using Assumption A.2 and the fact that the δ_k^i are independent and zero-mean. From the assumption that the stochastic gradient is an unbiased estimator of the true gradient (Assumption A.1), we also know that E(⟨δ_k, ∇f(x_k)⟩) = Σ_{i=1}^M E(⟨δ_k^i, ∇f(x_k)⟩) = ⟨Σ_{i=1}^M E(δ_k^i), ∇f(x_k)⟩ = 0. Taking the expectation of both sides of Equation 10 with respect to the randomly selected samples and using these observations, we get

Σ_{k=1}^K ( Mγ_k − γ_k² LM²/2 ) E(‖∇f(x_k)‖²) ≤ D_f + (LMσ²/2) Σ_{k=1}^K γ_k².

The above equation is equivalent to Equation 2.11 of Ghadimi & Lan (2013), modified to allow arbitrary mini-batch sizes. Restricting ourselves to a constant learning rate and simplifying the above (with S = MK), we get

(1/K) Σ_{k=1}^K E(‖∇f(x_k)‖²) ≤ (1 − γLM/2)^{−1} ( D_f/(Sγ) + Lγσ²/2 ).   (11)

Theorem 2. Let γ̄ := √(Sσ²/(2LD_f)) and let 4Ml ≤ γ̄ ≤ Mh/4. Then the convergence guarantee of SGD with mini-batch size Mh after processing S training samples can be worse than the convergence guarantee with mini-batch size Ml by a factor of 2Mh/(√2 γ̄ + Ml).

Proof. For a fixed number S of training samples to be processed, we minimize the right-hand side of Equation 2 to find the best convergence guarantee supported by Lemma 1. Let γ = c √(D_f/(SLσ²)), where c is a scalar multiplier. Substituting this into Equation 2 and after some algebraic manipulations, we get

(1/K) Σ_{k=1}^K E(‖∇f(x_k)‖²) ≤ ( 1/c + c/2 ) ( 1 − cM/(2γ̄) )^{−1} √( D_f L σ² / S ).   (12)

By applying simple calculus, it can be shown that the value c* of c minimizing the right-hand side is

c* = (M/γ̄) ( −1 + √( 1 + 2γ̄²/M² ) ).   (13)

Consider the case M = Ml ≤ γ̄/4 and denote c* by c_l. With M = Ml ≤ γ̄/4, we have √(1 + 2γ̄²/Ml²) ≈ √2 γ̄/Ml, and consequently c_l ≈ √2 − Ml/γ̄. Further, since Ml is small compared to γ̄, we can write the following using standard first-order approximations:

( 1 − c_l Ml/(2γ̄) )^{−1} ≈ ( 1 − Ml/(√2 γ̄) )^{−1} ≈ 1 + Ml/(√2 γ̄),   (14)

1/c_l = 1/(√2 − Ml/γ̄) ≈ (1/√2)( 1 + Ml/(√2 γ̄) ) ≈ 1/√2 + Ml/(2γ̄),   (15)

c_l/2 ≈ 1/√2 − Ml/(2γ̄).   (16)

Using Equations 14-16, the coefficient of √(D_f L σ²/S) in Equation 12 evaluates to approximately √2 + Ml/γ̄.

Consider now M = Mh ≥ 4γ̄ and denote c* by c_h. We have √(1 + 2γ̄²/Mh²) ≈ 1 + γ̄²/Mh², using the approximation √(1 + ε) ≈ 1 + ε/2 for small ε; thus c_h ≈ γ̄/Mh. Since γ̄ is much smaller than Mh, we can approximate 1/c_h + c_h/2 ≈ 1/c_h ≈ Mh/γ̄, and we also have 1 − c_h Mh/(2γ̄) ≈ 1/2. Thus the coefficient of √(D_f L σ²/S) in Equation 12 evaluates to approximately 2Mh/γ̄.

Combining the above observations, we get that the convergence guarantees for Ml and Mh can differ by a factor of 2Mh/(√2 γ̄ + Ml) after processing S training samples.

Theorem 4. Let N > 1 be the number of learners and let γ̄ := √(Kσ²/(2MLD_f)) satisfy 16 ≤ γ̄ ≤ N. Then the optimal ASGD convergence rate guarantees for 1 learner and for N learners can differ by a factor of approximately √2N/γ̄.

Proof. From the definition of γ̄, we can write γ = c √(D_f/(MKLσ²)) for a scalar multiplier c. Substituting this into Equation 7, we get

(1/K) Σ_{k=1}^K E(‖∇f(x_k)‖²) ≤ ( 2/c + c + √2 N c²/γ̄ ) √( D_f L σ² / (MK) ).   (17)

From the definition of γ̄ we have K = 2γ̄² M L D_f / σ², and hence √(D_f L σ²/(MK)) = σ²/(√2 γ̄ M). Using this in the above equation, we get

(1/K) Σ_{k=1}^K E(‖∇f(x_k)‖²) ≤ ( 2/c + c + √2 N c²/γ̄ ) σ²/(√2 γ̄ M).   (18)

Similarly, given γ = c √(D_f/(MKLσ²)), the condition in Equation 8 can be expressed as

c/(√2 γ̄) + N²c²/γ̄² ≤ 1  ⇒  2N²c² + √2 γ̄ c − 2γ̄² ≤ 0.

Since the learning rate (and hence c) is always positive, the above gives

0 ≤ c ≤ (√2 γ̄/(4N²)) ( −1 + √(1 + 8N²) ).

Thus finding the optimal learning rate (within the regime of Equations 7 and 8) is equivalent to solving

minimize ( 2/c + c + √2 N c²/γ̄ ) σ²/(√2 γ̄ M)   (19)

subject to 0 ≤ c ≤ (√2 γ̄/(4N²)) ( −1 + √(1 + 8N²) ).   (20)

By solving the above optimization, we can investigate how much the convergence guarantee changes as the number of learners increases. In particular, we look at the difference between the guarantees for 1 learner and for N0 learners, where γ̄ ≤ N0. Taking the derivative of Equation 19 with respect to c and setting it to 0 gives

(2√2 N/γ̄) c³ + c² − 2 = 0.   (21)

Let c_1 and c_{N0} denote the solutions of the above equation for 1 and N0 learners, respectively. Notice that for N = 1 and γ̄ ≥ 16, the square term dominates in Equation 21 and c_1 ≈ √2 (which also satisfies the constraint in Equation 20). Thus we get

for N = 1: (1/K) Σ_{k=1}^K E(‖∇f(x_k)‖²) ≲ 2√2 · σ²/(√2 γ̄ M).   (22)

However, for N = N0 with 16 ≤ γ̄ ≤ N0, the cubic term dominates in Equation 21, so the value of c satisfying Equation 21 is approximately ∛(γ̄/(√2 N0)). The upper bound in Equation 20 for large N0 becomes approximately c ≲ γ̄/(√2 N0); for the range of γ̄ under consideration, this is smaller than ∛(γ̄/(√2 N0)), so we get c_{N0} ≈ γ̄/(√2 N0). Hence, for 16 ≤ γ̄ ≤ N0, Equation 18 becomes

for N = N0: (1/K) Σ_{k=1}^K E(‖∇f(x_k)‖²) ≲ (2√2 N0/γ̄) · σ²/(√2 γ̄ M).   (23)

Comparing Equations 22 and 23, we see that the ASGD convergence guarantees for N = 1 and N = N0 learners can differ by a factor of approximately √2 N0/γ̄ for 16 ≤ γ̄ ≤ N0.
H1GEvHcee
Under review as a conference paper at ICLR 2017

ANNEALING GAUSSIAN INTO RELU: A NEW SAMPLING STRATEGY FOR LEAKY-RELU RBM

Chun-Liang Li, Siamak Ravanbakhsh, Barnabás Póczos
Department of Machine Learning
Carnegie Mellon University
Pittsburgh, PA 15213, USA
{chunlial,mravanba,bapoczos}@cs.cmu.edu

ABSTRACT
The Restricted Boltzmann Machine (RBM) is a bipartite graphical model that is used as the building block in energy-based deep generative models. Owing to its numerical stability and the quantifiability of its likelihood, the RBM is commonly used with Bernoulli units. Here, we consider an alternative member of the exponential-family RBM with leaky rectified linear units, called leaky RBM. We first study the joint and marginal distributions of the leaky RBM under different leakiness, which leads to an interesting interpretation of the leaky RBM model as a union of truncated Gaussian distributions. We then propose a simple yet efficient method for sampling from this model, where the basic idea is to anneal the leakiness rather than the energy, i.e., start from a fully Gaussian/linear unit and gradually decrease the leakiness over iterations. This serves as an alternative to annealing the temperature parameter and enables a numerical estimation of the likelihood that is more efficient and far more accurate than the commonly used annealed importance sampling (AIS). We further demonstrate that the proposed sampling algorithm enjoys relatively faster mixing than the contrastive divergence algorithm, which improves the training procedure without any additional computational cost.

1 INTRODUCTION
In this paper, we are interested in deep generative models. One may naively classify these models into a family of directed deep generative models trainable by back-propagation (e.g., Kingma & Welling, 2013; Goodfellow et al., 2014), and deep energy-based models, such as the deep belief network (Hinton et al., 2006) and the deep Boltzmann machine (Salakhutdinov & Hinton, 2009). The building block of deep energy-based models is a bipartite graphical model called the restricted Boltzmann machine (RBM). The RBM consists of two layers, visible and hidden. The resulting graphical model can account for higher-order interactions among the visible units (visible layer) through the hidden units (hidden layer). It also makes inference easier, since there are no interactions between the variables within each layer.

The conventional RBM uses Bernoulli units for both the hidden and visible units (Smolensky, 1986). One extension uses Gaussian visible units to model general natural images (Freund & Haussler, 1994). For hidden units, we can also generalize Bernoulli units to the exponential family (Welling et al., 2004; Ravanbakhsh et al., 2016).

Nair & Hinton (2010) propose a variation using the Rectified Linear Unit (ReLU) for the hidden layer with a heuristic sampling procedure, which has promising performance in terms of reconstruction error and classification accuracy. Unfortunately, due to its lack of strict monotonicity, the ReLU RBM does not fit within the framework of exponential-family RBMs (Ravanbakhsh et al., 2016). Instead, we study the leaky-ReLU RBM (leaky RBM) in this work and address two important issues: i) a better training (sampling) algorithm for ReLU RBM, and ii) a better quantification of leaky RBM, i.e., evaluation of its performance in terms of likelihood.

We study some of the fundamental properties of leaky RBM, including its joint and marginal distributions (Section 2).
By analyzing these distributions, we show that the leaky RBM is a union of truncated Gaussian distributions. We also show that training leaky RBM involves underlying positive-definite constraints; because of this, the training can diverge if these constraints are not satisfied. This is an issue that was previously ignored in ReLU RBM, as it was mainly used for pre-training rather than generative modeling.

Our contribution in this paper is three-fold: I) we systematically identify and address model constraints in leaky RBM (Section 3); II) for the training of leaky RBM, we propose a meta algorithm for sampling, which anneals the leakiness during the Gibbs sampling procedure (Section 3), and empirically show that it can boost contrastive divergence with faster mixing (Section 5); III) we demonstrate the power of the proposed sampling algorithm on estimating the partition function. In particular, comparison on several benchmark datasets shows that the proposed method outperforms the conventional AIS (Salakhutdinov & Murray, 2008) in terms of efficiency and accuracy (Section 4). Moreover, we provide an incentive for using leaky RBM by showing that leaky-ReLU hidden units perform better than Bernoulli units in terms of the model log-likelihood (Section 4).

2 RESTRICTED BOLTZMANN MACHINE AND RELU
The Boltzmann distribution is defined as $p(x)=e^{-E(x)}/Z$, where $Z=\sum_x e^{-E(x)}$ is the partition function. The Restricted Boltzmann Machine (RBM) is a Boltzmann distribution with a bipartite structure. It is also the building block of many deep models (e.g., Hinton et al., 2006; Salakhutdinov & Hinton, 2009; Lee et al., 2009), which are widely used in numerous applications (Bengio, 2009). The conventional Bernoulli RBM models the joint probability $p(v,h)$ of the visible units $v\in\{0,1\}^I$ and the hidden units $h\in\{0,1\}^J$ as $p(v,h)\propto\exp(-E(v,h))$, where
$$E(v,h)=-a^\top v-v^\top Wh-b^\top h.$$
The parameters are $a\in\mathbb{R}^I$, $b\in\mathbb{R}^J$ and $W\in\mathbb{R}^{I\times J}$. We can derive the conditional probabilities as
$$p(v_i=1\mid h)=\sigma\Big(\sum_{j=1}^{J}W_{ij}h_j+a_i\Big)\quad\text{and}\quad p(h_j=1\mid v)=\sigma\Big(\sum_{i=1}^{I}W_{ij}v_i+b_j\Big), \qquad (1)$$
where $\sigma(x)=(1+e^{-x})^{-1}$ is the sigmoid function.

One extension of the Bernoulli RBM replaces the binary visible units by linear units $v\in\mathbb{R}^I$ with independent Gaussian noise. The energy function in this case is given by
$$E(v,h)=\sum_{i=1}^{I}\frac{(v_i-a_i)^2}{2\sigma_i^2}-\sum_{i=1}^{I}\sum_{j=1}^{J}\frac{v_i}{\sigma_i}W_{ij}h_j-b^\top h.$$
To simplify the notation, we assume normalized data so that $a_i$ and $\sigma_i$ are no longer required. The energy function accordingly simplifies to $E(v,h)=\frac{\|v\|^2}{2}-v^\top Wh-b^\top h$. (Note that this elimination does not influence the discussion, and one can easily extend all the results in this paper to the model that includes $a_i$ and $\sigma_i$.) The conditional distributions are as follows:
$$p(v_i\mid h)=\mathcal{N}\Big(\sum_{j=1}^{J}W_{ij}h_j,\,1\Big)\quad\text{and}\quad p(h_j=1\mid v)=\sigma\Big(\sum_{i=1}^{I}W_{ij}v_i+b_j\Big), \qquad (2)$$
where $\mathcal{N}(\mu,V)$ is a Gaussian distribution with mean $\mu$ and variance $V$. To simplify the notation, in the following we define $\eta_j=\sum_{i=1}^{I}W_{ij}v_i+b_j$, that is, $\eta_j$ is the input to the $j$-th hidden unit, and we similarly define $\nu_i=\sum_{j=1}^{J}W_{ij}h_j+a_i$. Using this notation, the conditionals in (2) are $p(v_i\mid\nu_i)=\mathcal{N}(\nu_i,1)$ and $p(h_j=1\mid\eta_j)=\sigma(\eta_j)$.

2.1 RELU RBM WITH CONTINUOUS VISIBLE UNITS
From (1) and (2), we can see that the mean of $p(h_j\mid v)$ is the nonlinearity of the hidden unit applied at $\eta_j=\sum_{i=1}^{I}W_{ij}v_i+b_j$; e.g., the mean of the Bernoulli unit is the sigmoid function. From this perspective, we can extend the sigmoid function to other functions and thus allow the RBM to have more expressive power (Ravanbakhsh et al., 2016).
In particular, it would be interesting to use the rectified linear unit (ReLU) nonlinearity, $f(\eta_j)=\max(0,\eta_j)$, for generative modeling.

Nair & Hinton (2010) use an RBM with Gaussian visible units and ReLU hidden activation functions for pre-training. They suggest sampling from $\max(0,\eta_j+\mathcal{N}(0,\sigma(\eta_j)))$ for conditional sampling of the hidden units (compare to (2)). However, this sampling heuristic does not suggest the parametric form of the joint ReLU-Gaussian distribution. This also means we cannot evaluate it using methods such as annealed importance sampling that require access to this parametric form. In fact, only strictly monotonic activation functions yield feasible joint and conditional distributions in the exponential-family RBM, and ReLU is not strictly monotonic (Ravanbakhsh et al., 2016). Similar activation functions that are monotonic are Softplus, $f(\eta_j)=\log(1+e^{\eta_j})$, and leaky ReLU (Maas et al., 2013), defined as $f(\eta_j)=\max(c\eta_j,\eta_j)$, where $c\in(0,1)$ is the leakiness parameter. In contrast to the ReLU RBM, the joint parametric forms of these two distributions are available. However, the energy (the logarithm of the joint probability) in the case of the Softplus activation function contains a polylogarithmic term that requires evaluation of an infinite series; see Table 1 in Ravanbakhsh et al. (2016). For this reason, we focus here on the leaky-ReLU activation function.

By Ravanbakhsh et al. (2016), the conditional probability of the activation, assuming the nonlinearity $f(\eta_j)$, is generally defined as $p(h_j\mid v)=\exp\big(-D_f(\eta_j\|h_j)+g(h_j)\big)$, where $D_f(\eta_j\|h_j)$ is the Bregman divergence associated with $f$, and $g(h_j)$ is the base (or carrier) measure in the exponential family which ensures the distribution is well-defined. The Bregman divergence, for a strictly monotonic function $f$, is $D_f(\eta_j\|h_j)=-\eta_jh_j+F(\eta_j)+F^*(h_j)$, where $F$ with $\frac{d}{d\eta_j}F(\eta_j)=f(\eta_j)$ is the anti-derivative (integral) of $f$, and $F^*$ is the anti-derivative of $f^{-1}$ (i.e., $f^{-1}(f(\eta))=\eta$). Note that due to the strict monotonicity of $f$, $f^{-1}$ is well-defined, and $F$ and $F^*$ are commonly referred to as conjugate duals.

Considering the leaky-ReLU activation function $f(\eta)=\max(c\eta,\eta)$ and using this formalism, the conditional distributions of the hidden units in the leaky RBM simplify to (see Appendix A.1 for details)
$$p(h_j\mid v)=\begin{cases}\mathcal{N}(\eta_j,1), & \eta_j>0\\ \mathcal{N}(c\eta_j,c), & \eta_j\le0.\end{cases} \qquad (3)$$
Since the visible units use the identity function, the corresponding conditional distribution is a Gaussian,
$$p(v_i\mid h)=\mathcal{N}\Big(\sum_{j=1}^{J}W_{ij}h_j,\,1\Big), \qquad (4)$$
which can also be written as $p(v_i\mid h)=\exp\big(-D_{\tilde f}(\nu_i\|v_i)+g(v_i)\big)$, where $\nu_i=\sum_{j=1}^{J}W_{ij}h_j$, $\tilde f(\nu_i)=\nu_i$, $D_{\tilde f}(\nu_i\|v_i)=\frac{(\nu_i-v_i)^2}{2}$ and $g(v_i)=-\log\sqrt{2\pi}$.

Having these two conditional distributions is enough for training a leaky RBM model using contrastive divergence (Hinton, 2002) or other alternatives (e.g., Tieleman, 2008; Tieleman & Hinton, 2009).
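Both conditionals above are cheap to sample, since each is a (scaled) Gaussian. The following minimal NumPy sketch of one block-Gibbs sweep is our own illustration under the conditionals (3) and (4); the dimensions and parameter values are arbitrary placeholders.

```python
import numpy as np

# Minimal sketch of one block-Gibbs sweep for a leaky-ReLU RBM using the
# conditionals (3) and (4); names and values are ours, not the authors'.
rng = np.random.default_rng(0)
I, J, c = 8, 4, 0.01                       # visible dim, hidden dim, leakiness
W = 0.1 * rng.standard_normal((I, J))
b = np.zeros(J)

def sample_h_given_v(v):
    eta = v @ W + b                        # eta_j = sum_i W_ij v_i + b_j
    pos = eta > 0
    mean = np.where(pos, eta, c * eta)     # N(eta_j, 1) or N(c*eta_j, c)
    std = np.where(pos, 1.0, np.sqrt(c))
    return mean + std * rng.standard_normal(J)

def sample_v_given_h(h):
    return W @ h + rng.standard_normal(I)  # N(sum_j W_ij h_j, 1)

v = rng.standard_normal(I)
for _ in range(100):                       # a short Gibbs chain
    h = sample_h_given_v(v)
    v = sample_v_given_h(h)
print(v)
```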
3 TRAINING AND SAMPLING FROM LEAKY RBM
Given the conditional distributions $p(v\mid h)$ and $p(h\mid v)$, the joint distribution $p(v,h)$ from the general treatment for MRF models (Yang et al., 2012; Ravanbakhsh et al., 2016) is
$$p(v,h)\propto\exp\Big(v^\top Wh-\sum_{i=1}^{I}\big(\tilde F^*(v_i)+g(v_i)\big)-\sum_{j=1}^{J}\big(F^*(h_j)+g(h_j)\big)\Big), \qquad (5)$$
where $\tilde F^*(v_i)$ and $F^*(h_j)$ are anti-derivatives of the inverses of the activation functions $\tilde f(v_i)$ and $f(h_j)$ for visible units $v_i$ and hidden units $h_j$, respectively (see Section 2.1). Assuming $f(\eta_j)=\max(c\eta_j,\eta_j)$ and $\tilde f(\nu_i)=\nu_i$ in the leaky-ReLU RBM, the joint distribution above becomes (see Appendix A.2 for details)
$$p(v,h)\propto\exp\Big(v^\top Wh-\frac{\|v\|^2}{2}-\sum_{\eta_j>0}\Big(\frac{h_j^2}{2}+\log\sqrt{2\pi}\Big)-\sum_{\eta_j\le0}\Big(\frac{h_j^2}{2c}+\log\sqrt{2\pi c}\Big)+b^\top h\Big),$$
and the corresponding visible marginal distribution is
$$p(v)\propto\exp\Big(-\frac12v^\top\Big(I-\sum_{\eta_j>0}W_jW_j^\top-c\sum_{\eta_j\le0}W_jW_j^\top\Big)v+\sum_{\eta_j>0}b_jW_j^\top v+c\sum_{\eta_j\le0}b_jW_j^\top v\Big), \qquad (6)$$
where $W_j$ is the $j$-th column of $W$.

Figure 1: A two-dimensional example with 3 hidden units. [Panels show the hyperplanes $W_1,W_2,W_3$ and the regions $R_1$ through $R_7$ they induce.]
Figure 2: A one-dimensional example of truncated Gaussian distributions with different variances.
Figure 3: A three-dimensional example with 3 hidden units, where the $W_j$ are orthogonal to each other.

3.1 LEAKY RBM AS A UNION OF TRUNCATED GAUSSIAN DISTRIBUTIONS
From (6) we see that the marginal probability is determined by the affine constraints $\eta_j>0$ or $\eta_j\le0$ for all hidden units $j$. By combinatorics, these constraints divide $\mathbb{R}^I$ (the visible domain) into at most $M=\sum_{i=0}^{I}\binom{J}{i}$ convex regions $R_1,\dots,R_M$. An example with $I=2$ and $J=3$ is shown in Figure 1. If $I>J$, then we have at most $2^J$ regions.

We discuss the two types of regions. For bounded regions, such as $R_1$ in Figure 1, the integral of (6) is also bounded, which results in a valid distribution. Before we discuss the unbounded case, we define $\Theta=I-\sum_{j=1}^{J}d_jW_jW_j^\top$, where $d_j=\mathbb{1}_{\eta_j>0}+c\,\mathbb{1}_{\eta_j\le0}$. For an unbounded region, if $\Theta\in\mathbb{R}^{I\times I}$ is a positive definite (PD) matrix, then the probability density is proportional to a multivariate Gaussian distribution with mean $\mu=\Theta^{-1}\sum_{j=1}^{J}d_jb_jW_j$ and precision matrix $\Theta$ (covariance matrix $\Theta^{-1}$), but restricted to an affine-constrained region. Therefore, the distribution of each unbounded region can be treated as a truncated Gaussian distribution, and the marginal distribution can be treated as a union of truncated Gaussian distributions. Note that leaky RBM is different from Su et al. (2017), which uses a single truncated Gaussian distribution to model the joint (conditional) distributions and requires approximate and more complicated sampling algorithms for the truncated Gaussian distribution, while leaky RBM only requires sampling from Gaussian distributions.

On the other hand, if $\Theta$ is not PD, and the region $R_i$ contains the eigenvectors with negative eigenvalues of $\Theta$, the integral of (6) over $R_i$ is divergent (infinite), which cannot result in a valid probability distribution. In practice, with this type of parameter, when we do Gibbs sampling on the conditional distributions, the sampling will diverge. However, it is infeasible to check exponentially many regions for each gradient update.

Theorem 1. If $I-WW^\top$ is positive definite, then $I-\sum_{j}d_jW_jW_j^\top$ is also positive definite, for all $d_j\in[0,1]$.

The proof is shown in Appendix B. From Theorem 1 we can see that if the constraint $I-WW^\top$ is PD, then one can guarantee that the distribution of every region is a valid truncated Gaussian distribution. Therefore, we introduce the following projection step for $W$ after each gradient update:
$$\underset{\tilde W}{\arg\min}\;\|W-\tilde W\|_F^2\quad\text{s.t.}\quad I-\tilde W\tilde W^\top\succeq0 \qquad (7)$$

Theorem 2. The above projection step (7) can be done by shrinking the singular values of $W$ to be less than 1.

The proof is shown in Appendix C. The training algorithm of the leaky RBM is shown in Algorithm 1. By using the projection step (7), we can treat the leaky RBM as a union of truncated Gaussian distributions, which uses weight vectors to divide the space of visible units into several regions and uses a truncated Gaussian distribution to model each region.
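Theorem 2 makes the projection step (7) easy to implement: take an SVD and clip the singular values at 1. The sketch below is our own illustration; the small slack below 1 is our choice, to keep $I-WW^\top$ strictly positive definite in floating point.

```python
import numpy as np

# Minimal sketch of the projection step (7) via Theorem 2: clip the singular
# values of W so that I - W W^T stays positive definite.
def project(W, margin=1e-3):
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.minimum(S, 1.0 - margin)) @ Vt

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))            # generally violates the constraint
W = project(W)
eigs = np.linalg.eigvalsh(np.eye(8) - W @ W.T)
print("min eigenvalue of I - W W^T:", eigs.min())  # > 0 after projection
```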
Note that the leaky RBM model is different from Su et al. (2016), which uses a truncated Gaussian distribution to model the conditional distribution $p(h\mid v)$ instead of the marginal distribution.

The empirical study of the divergent values and of the necessity of the projection step is given in Appendix D. Without the projection step, when we run Gibbs sampling from the model for several iterations, the sampled values diverge, because the model does not have a valid marginal distribution $p(v)$. This also implies that we cannot train leaky RBM with larger CD steps without projection, which would result in divergent gradients. A detailed discussion is given in Appendix D.

Algorithm 1 Training Leaky RBM
  for t = 1, ..., T do
    Estimate the gradient g by CD or another algorithm using (13) and (4), where θ = {W, a, b}.
    θ^(t) ← θ^(t−1) + α g, where α is the learning rate.
    Project W^(t) by (7).
  end for

3.2 SAMPLING FROM LEAKY-RELU RBM
Gibbs sampling is the core procedure for RBM, including training, inference, and estimating the partition function (Fischer & Igel, 2012; Tieleman, 2008; Salakhutdinov & Murray, 2008). For every task, we start by initializing $v$ randomly from an arbitrary distribution $q$, and we iteratively sample from the conditional distributions. Gibbs sampling guarantees that the procedure reaches the stationary distribution in the long run for any initial distribution $q$. However, if $q$ is close to the target distribution $p$, the number of iterations needed to achieve the stationary distribution can be significantly shortened.

If we set the leakiness $c$ to 1, then (6) becomes a simple multivariate Gaussian distribution $\mathcal{N}\big((I-WW^\top)^{-1}Wb,\ (I-WW^\top)^{-1}\big)$, which can be easily sampled without Gibbs sampling. Moreover, the projection step (7) guarantees that it is a valid Gaussian distribution. We then decrease the leakiness by a small amount $\zeta$, and use samples from the multivariate Gaussian distribution at $c=1$ as the initialization for Gibbs sampling. Note that the distribution of each region is a truncated Gaussian distribution. When we decrease the leakiness by only a small amount, the resulting distribution is a "similar" truncated Gaussian distribution with more concentrated density. From this observation, we can expect the original multivariate Gaussian distribution to serve as a good initialization. A one-dimensional example is shown in Figure 2. We then repeat this procedure until we reach the target leakiness. The algorithm can be seen as annealing the leakiness during the Gibbs sampling procedure. The meta algorithm is shown in Algorithm 2. Next, we show that the proposed sampling algorithm helps both partition function estimation and the training of leaky RBM.

Algorithm 2 Meta Algorithm for Sampling from Leaky RBM
  Sample v from N((I − WW^⊤)^{−1}Wb, (I − WW^⊤)^{−1})
  ζ = (1 − c)/T and c′ = 1
  for t = 1, ..., T do
    Decrease c′ ← c′ − ζ and perform Gibbs sampling using (13) and (4) with leakiness c′
  end for
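A minimal NumPy sketch of Algorithm 2 follows; it is our own illustration, assuming the conditionals (13) and (4), with arbitrary dimensions and parameters. It draws an exact sample from the $c=1$ Gaussian and then lowers the leakiness by $\zeta$ per Gibbs sweep.

```python
import numpy as np

# Minimal sketch of Algorithm 2 (annealing the leakiness); names are ours.
rng = np.random.default_rng(0)
I, J, c_target, T = 8, 4, 0.01, 20
W = 0.1 * rng.standard_normal((I, J))
b = np.zeros(J)

# Exact sample from p(v) at c = 1: N(Sigma W b, Sigma), Sigma = (I - W W^T)^-1.
Sigma = np.linalg.inv(np.eye(I) - W @ W.T)
v = rng.multivariate_normal(Sigma @ W @ b, Sigma)

zeta = (1.0 - c_target) / T
c = 1.0
for _ in range(T):
    c -= zeta                              # anneal the leakiness
    eta = v @ W + b
    pos = eta > 0
    h = (np.where(pos, eta, c * eta)
         + np.where(pos, 1.0, np.sqrt(c)) * rng.standard_normal(J))
    v = W @ h + rng.standard_normal(I)
print(v)
```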
4 PARTITION FUNCTION ESTIMATION
It is known that estimating the partition function of an RBM is intractable (Salakhutdinov & Murray, 2008). Existing approaches, including Salakhutdinov & Murray (2008); Grosse et al. (2013); Liu et al. (2015); Carlson et al. (2016), focus on using sampling to approximate the partition function of the conventional Bernoulli RBM rather than of RBMs with Gaussian visible units and non-Bernoulli hidden units. In this paper, we focus on extending the classic annealed importance sampling (AIS) algorithm (Salakhutdinov & Murray, 2008) to leaky RBM.

Assume that we want to estimate the partition function $Z$ of $p(v)$, with $p(v)=p^*(v)/Z$ and $p^*(v)=\sum_h\exp(-E(v,h))$. Salakhutdinov & Murray (2008) start from an initial distribution $p_0(v)\propto\sum_h\exp(-E_0(v,h))$, for which computing the partition function $Z_0$ of $p_0(v)$ is tractable and from which we can draw samples. They then use a "geometric path" to anneal the intermediate distributions as $p_k(v)\propto p_k^*(v)=\sum_h\exp\big(-\beta_kE_0(v,h)-(1-\beta_k)E(v,h)\big)$, where the $\beta_k$ form a grid from 1 to 0. If we let $\beta_0=1$, we can draw samples $v_k$ from $p_k(v)$ by using samples $v_{k-1}$ from $p_{k-1}(v)$ for $k\ge1$ via Gibbs sampling. The partition function is then estimated via $\hat Z=\frac{Z_0}{M}\sum_{i=1}^{M}\omega^{(i)}$, where
$$\omega^{(i)}=\frac{p_1^*(v_0^{(i)})}{p_0^*(v_0^{(i)})}\,\frac{p_2^*(v_1^{(i)})}{p_1^*(v_1^{(i)})}\cdots\frac{p_{K-1}^*(v_{K-2}^{(i)})}{p_{K-2}^*(v_{K-2}^{(i)})}\,\frac{p_K^*(v_{K-1}^{(i)})}{p_{K-1}^*(v_{K-1}^{(i)})},\quad\text{and}\quad\beta_K=0.$$
Salakhutdinov & Murray (2008) use an initial distribution with independent visible units and without hidden units. We consider the application of AIS to the leaky-ReLU case with $E_0(v,h)=\frac{\|v\|^2}{2}$, which results in a multivariate Gaussian distribution $p_0(v)$. Compared with the meta algorithm shown in Algorithm 2, which anneals between leakiness values, AIS anneals between energy functions.

4.1 STUDY ON TOY EXAMPLES
As discussed in Section 3.1, a leaky RBM with $J$ hidden units is a union of $2^J$ truncated Gaussian distributions. Here we perform a study of the leaky RBM with a small number of hidden units. Since the number of hidden units is small in this example, we can integrate out all possible configurations of $h$. However, integrating a truncated Gaussian distribution with general affine constraints does not have an analytical solution, and several approximations have been developed (e.g., Pakman & Paninski, 2014). To compare our results with the exact partition function, we consider a special case of the following form:
$$p(v)\propto\exp\Big(-\frac12v^\top\Big(I-\sum_{\eta_j>0}W_jW_j^\top-c\sum_{\eta_j\le0}W_jW_j^\top\Big)v\Big). \qquad (8)$$
Compared to (6), this is equivalent to the setting $b=0$. Geometrically, every $W_j$ passes through the origin. We further impose the additional constraint $W_i\perp W_j,\ \forall i\ne j$. Therefore, we divide the whole space into $2^J$ equally-sized regions. A three-dimensional example is shown in Figure 3. The partition function of this special case then has the analytical form
$$Z=\frac{1}{2^J}\sum_{d_j\in\{1,c\},\,\forall j}(2\pi)^{\frac I2}\left|I-\sum_{j=1}^{J}d_jW_jW_j^\top\right|^{-\frac12}.$$
We randomly initialize $W$ and use the SVD to make its columns orthogonal. Also, we scale $\|W_j\|$ to satisfy $I-WW^\top\succ0$. The leakiness parameter is set to 0.01.
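For small $J$ the analytic form above can be evaluated directly by enumerating the $2^J$ region types. The following sketch is our own illustration (with arbitrary, smaller dimensions than in the experiments) and mirrors the construction used here: a random $W$ with orthogonalized columns, scaled so that $I-WW^\top$ is PD.

```python
import numpy as np
from itertools import product

# Minimal sketch of the analytic partition function for the special case (8)
# with b = 0 and mutually orthogonal columns W_j: enumerate the 2^J region
# types d_j in {1, c} and sum the Gaussian normalizers. Values are ours.
rng = np.random.default_rng(0)
I, J, c = 16, 5, 0.01
W, _ = np.linalg.qr(rng.standard_normal((I, J)))  # orthonormal columns
W *= 0.9                                          # scale so I - W W^T is PD

log_terms = []
for d in product([1.0, c], repeat=J):
    Theta = np.eye(I) - sum(dj * np.outer(W[:, j], W[:, j])
                            for j, dj in enumerate(d))
    _, logdet = np.linalg.slogdet(Theta)
    log_terms.append(0.5 * I * np.log(2 * np.pi) - 0.5 * logdet
                     - J * np.log(2.0))
print("log Z =", np.logaddexp.reduce(log_terms))
```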
For Salakhutdinov & Murray (2008) (AIS-Energy), we use $10^5$ particles with $10^5$ intermediate distributions. For the proposed method (AIS-Leaky), we use only $10^4$ particles with $10^3$ intermediate distributions. In this small problem we study the cases where the model has 5, 10, 20 and 30 hidden units and 3072 visible units. The true log partition function $\log Z$ is shown in Table 1, and the differences between $\log Z$ and the estimates given by the two algorithms are shown in Table 2.

Table 1: The true log partition function for leaky-ReLU RBM with different numbers of hidden units.
                          J=5       J=10      J=20      J=30
Log partition function    2825.48   2827.98   2832.98   2837.99

Table 2: The difference between the true log partition function and the estimates of the two algorithms, with standard deviations.
             J=5          J=10         J=20         J=30
AIS-Energy   1.76±0.011   3.56±0.039   7.95±0.363   9.60±0.229
AIS-Leaky    0.02±0.001   0.04±0.002   0.08±0.003   0.13±0.004

From Table 2, we observe that AIS-Leaky has significantly better and more stable estimates than AIS-Energy, and this gap increases as we increase the number of hidden units. AIS-Leaky achieves this with orders-of-magnitude less computation; e.g., here it uses about 0.1% of the resources used by conventional AIS. For example, when we increase $J$ from 5 to 30, the bias (difference) of AIS-Leaky only increases from 0.02 to 0.13; however, the bias of AIS-Energy increases from 1.76 to 9.6. We further study the implicit connection between the proposed AIS-Leaky and AIS-Energy in Appendix E, which shows that AIS-Leaky is a special case of AIS-Energy under certain conditions.

4.2 COMPARISON BETWEEN LEAKY-RELU RBM AND BERNOULLI-GAUSSIAN RBM
It is known that the reconstruction error is not a proper approximation of the likelihood (Hinton, 2012). One commonly adopted way to compare generative models is to sample from the model and visualize the images to check their quality. However, Theis et al. (2016) show that better visualization does not imply better likelihood. Also, a single-layer model cannot adequately model complicated natural images (the result for the Bernoulli-Gaussian RBM has been shown in Ranzato & Hinton (2010)), which makes visual comparison difficult (Appendix F has a few visualization results).

Fortunately, our accurate estimate of the partition function for leaky RBM can produce a reliable quantitative estimate of the representational power of leaky RBM. We compare against the Bernoulli-Gaussian RBM, which has Bernoulli hidden units and Gaussian visible units. (Our GPU implementation with gnumpy and cudamat can reproduce the results of http://www.cs.toronto.edu/~tang/code/GaussianRBM.m.) We trained both models with CD-20 (contrastive divergence run for 20 steps) and momentum. For both models, we used 500 hidden units. We initialized $W$ by sampling from Unif(0, 0.01), with $a=0$, $b=0$ and $\sigma=1$. The momentum parameter was 0.9, and the batch size was set to 100. We tuned the learning rate between $10^{-1}$ and $10^{-6}$. We studied two benchmark datasets, CIFAR10 and SVHN. The data was normalized to have zero mean and standard deviation 1 for each pixel. The log-likelihood results are reported in Table 3.

Table 3: The log-likelihood performance of Bernoulli-Gaussian RBM and leaky RBM.
                          CIFAR-10   SVHN
Bernoulli-Gaussian RBM    -2548.3    -2284.2
Leaky-ReLU RBM            -1031.1    -182.4

From Table 3, leaky RBM outperforms the Bernoulli-Gaussian RBM significantly. The unsatisfactory performance of the Bernoulli-Gaussian RBM may be in part due to the optimization procedure: if we tune the decay schedule of the learning rate for each dataset in an ad-hoc way, we observe that the performance of the Bernoulli-Gaussian RBM can be improved by about 300 nats for both datasets. Also, increasing the number of CD steps brings a slight improvement. Another possibility is bad mixing during the CD iterations; more advanced algorithms (Tieleman, 2008; Tieleman & Hinton, 2009) may help. Although Nair & Hinton (2010) demonstrate the power of ReLU in terms of reconstruction error and classification accuracy, this does not imply superior generative capability. Our study confirms that leaky RBM can have much better generative performance than the Bernoulli-Gaussian RBM.

5 BETTER MIXING BY ANNEALING THE LEAKINESS
In this section, we show that the idea of annealing the leakiness also benefits the mixing of Gibbs sampling in other settings. A common procedure for comparing sampling methods for RBM is visualization. Here, we are interested in more quantitative metrics and in the practical benefits of improved sampling.
For this, we consider optimization performance as the evaluation metric. The gradient of the log-likelihood function $\mathcal{L}(\theta\mid v_{\text{data}})$ of general RBM models is
$$\frac{\partial\mathcal{L}(\theta\mid v_{\text{data}})}{\partial\theta}=\mathbb{E}_{h\mid v_{\text{data}}}\Big[-\frac{\partial E(v,h)}{\partial\theta}\Big]-\mathbb{E}_{v,h}\Big[-\frac{\partial E(v,h)}{\partial\theta}\Big]. \qquad (9)$$
Since the second expectation in (9) is usually intractable, different approximation algorithms are used (Fischer & Igel, 2012).

In this section, we compare two gradient approximation procedures. The baselines are the conventional contrastive divergence (CD) (Hinton, 2002) and persistent contrastive divergence (PCD) (Tieleman, 2008). The second method uses Algorithm 2 (Leaky) with the same number of mixing steps as CD. The experimental setup is the same as that of Section 4.

Figure 4: Training leaky RBM with different sampling algorithms (CD, Mix, Leaky, PCD): (a) SVHN; (b) CIFAR10. [Log-likelihood as a function of training iterations.]

The results are shown in Figure 4. The proposed sampling procedure is slightly better than typical CD steps. The reason is that we anneal the leakiness for only 20 steps, whereas obtaining an accurate estimate requires thousands of steps, as shown in Section 4 for partition function estimation; the estimated gradient is therefore still inaccurate. Nevertheless, it still outperforms the conventional CD algorithm. On the other hand, unlike the binary RBM case shown in Tieleman (2008), PCD does not outperform CD with 20 mixing steps for leaky RBM.

The drawback of Algorithm 2 is that sampling $v$ from $\mathcal{N}\big((I-WW^\top)^{-1}Wb,\ (I-WW^\top)^{-1}\big)$ requires computing the mean, the covariance, and the Cholesky decomposition of the covariance matrix in every iteration, which is computationally expensive. We therefore study a mixed algorithm that combines CD with the idea of annealing the leakiness. The mixed algorithm replaces sampling from $\mathcal{N}\big((I-WW^\top)^{-1}Wb,\ (I-WW^\top)^{-1}\big)$ with sampling from the empirical data distribution. The resulting Mix algorithm is almost the same as the CD algorithm, except that it anneals the leakiness over the iterations as in Algorithm 2. The results of the Mix algorithm are also shown in Figure 4.

The Mix algorithm is slightly worse than the original Leaky algorithm, but it also outperforms the conventional CD algorithm without additional computational cost. The comparison in terms of CPU time is shown in Appendix F. Annealing the leakiness helps the Mix algorithm explore different modes of the distribution, thereby improving training. The idea could also be combined with more advanced algorithms (Tieleman, 2008; Tieleman & Hinton, 2009). (We also studied a PCD extension of the proposed sampling algorithm; however, its performance is not as stable as CD.)
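For reference, here is a minimal sketch of the plain CD-style update implied by Eq. (9) for the leaky RBM. It is our own illustration, not the authors' code: biases are fixed at zero, the data is a random stand-in, and all hyperparameters are arbitrary. The Leaky and Mix variants described above would replace the plain Gibbs sweeps with the annealed schedule of Algorithm 2.

```python
import numpy as np

# Minimal sketch of a CD-k weight update following the two-expectation form
# of Eq. (9), specialized to the leaky RBM conditionals (13)/(4).
rng = np.random.default_rng(0)
I, J, c, k, lr = 8, 4, 0.01, 20, 1e-3
W = 0.01 * rng.standard_normal((I, J))
b = np.zeros(J)

def mean_h(v):                             # E[h | v] under Eq. (13)
    eta = v @ W + b
    return np.where(eta > 0, eta, c * eta)

def gibbs(v):                              # one sweep with Eqs. (13)/(4)
    eta = v @ W + b
    pos = eta > 0
    h = (np.where(pos, eta, c * eta)
         + np.where(pos, 1.0, np.sqrt(c)) * rng.standard_normal(eta.shape))
    return h @ W.T + rng.standard_normal(v.shape)

v_data = rng.standard_normal((64, I))      # a stand-in mini-batch
v_model = v_data.copy()
for _ in range(k):
    v_model = gibbs(v_model)
pos_stat = v_data.T @ mean_h(v_data) / len(v_data)
neg_stat = v_model.T @ mean_h(v_model) / len(v_model)
W += lr * (pos_stat - neg_stat)            # gradient ascent on log-likelihood
# ...followed by the projection step (7) on W.
```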
6 CONCLUSION
In this paper, we study the properties of the exponential-family distribution produced by leaky RBM. This study relates the leaky RBM model to truncated Gaussian distributions and reveals an underlying positive-definite constraint in training leaky RBM. We further propose a meta sampling algorithm, which anneals the leakiness during the Gibbs sampling procedure. We first demonstrate that the proposed sampling algorithm is significantly more effective and efficient at estimating the partition function than the conventional AIS algorithm. Second, we show that the proposed sampling algorithm has comparatively better mixing properties (compared to CD). A few directions are worth further study; in particular, we are investigating how to speed up the naive projection step, either by using a barrier function as in Hsieh et al. (2011), or by eliminating the need for projection altogether by artificially bounding the domain via additional constraints.

REFERENCES
Y. Bengio. Learning deep architectures for AI. Found. Trends Mach. Learn., 2009.
Jörg Bornschein and Yoshua Bengio. Reweighted wake-sleep. In ICLR, 2015.
Y. Burda, R. B. Grosse, and R. Salakhutdinov. Accurate and conservative estimates of MRF log-likelihood using reverse annealing. In AISTATS, 2015.
D. E. Carlson, P. Stinson, A. Pakman, and L. Paninski. Partition functions from Rao-Blackwellized tempered sampling. In ICML, 2016.
KyungHyun Cho, Tapani Raiko, and Alexander Ilin. Enhanced gradient for training restricted Boltzmann machines. Neural Computation, 2013.
A. Fischer and C. Igel. An introduction to restricted Boltzmann machines. In CIARP, 2012.
Y. Freund and D. Haussler. Unsupervised learning of distributions on binary vectors using two layer networks. Technical report, 1994.
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
R. B. Grosse, C. J. Maddison, and R. Salakhutdinov. Annealing between distributions by averaging moments. In NIPS, 2013.
G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 2002.
G. E. Hinton. A practical guide to training restricted Boltzmann machines. In Neural Networks: Tricks of the Trade (2nd ed.), 2012.
G. E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 2006.
C.-J. Hsieh, M. A. Sustik, I. S. Dhillon, and P. Ravikumar. Sparse inverse covariance matrix estimation using quadratic approximation. In NIPS, 2011.
D. P. Kingma and M. Welling. Auto-encoding variational Bayes. CoRR, 2013.
H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML, 2009.
Q. Liu, J. Peng, A. Ihler, and J. Fisher III. Estimating the partition function by discriminance sampling. In UAI, 2015.
A. L. Maas, A. Y. Hannun, and A. Y. Ng. Rectifier nonlinearities improve neural network acoustic models. In ICML Workshop on Deep Learning for Audio, Speech, and Language Processing, 2013.
V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, 2010.
A. Pakman and L. Paninski. Exact Hamiltonian Monte Carlo for truncated multivariate Gaussians. Journal of Computational and Graphical Statistics, 2014.
N. Parikh and S. Boyd. Proximal algorithms. Found. Trends Optim., 2014.
M. Ranzato and G. E. Hinton. Modeling pixel means and covariances using factorized third-order Boltzmann machines. In CVPR, 2010.
S. Ravanbakhsh, B. Póczos, J. G. Schneider, D. Schuurmans, and R. Greiner. Stochastic neural networks with monotonic activation functions. In AISTATS, 2016.
R. Salakhutdinov and G. Hinton. Deep Boltzmann machines. In AISTATS, 2009.
R. Salakhutdinov and I. Murray. On the quantitative analysis of deep belief networks. In ICML, 2008.
P. Smolensky. Parallel distributed processing: Explorations in the microstructure of cognition, vol. 1. 1986.
Q. Su, X. Liao, C. Chen, and L. Carin. Nonlinear statistical learning with truncated Gaussian graphical models. In ICML, 2016.
Qinliang Su, Xuejun Liao, Chunyuan Li, Zhe Gan, and Lawrence Carin. Unsupervised learning with truncated Gaussian graphical models. In AAAI, 2017.
L. Theis, A. van den Oord, and M. Bethge. A note on the evaluation of generative models. In ICLR, 2016.
T. Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In ICML, 2008.
T. Tieleman and G. E. Hinton. Using fast weights to improve persistent contrastive divergence. In ICML, 2009.
M. Welling, M. Rosen-Zvi, and G. E. Hinton. Exponential family harmoniums with an application to information retrieval. In NIPS, 2004.
E. Yang, P. Ravikumar, G. I. Allen, and Z. Liu. Graphical models via generalized linear models. In NIPS, 2012.

A DERIVATION OF LEAKY (RELU) RBM

A.1 CONDITIONAL DISTRIBUTIONS
For leaky RBM, the activation function of the hidden units is defined as $f(\eta_j)=\max(c\eta_j,\eta_j)$, where $c\in(0,1)$ and $\eta_j=\sum_{i=1}^{I}W_{ij}v_i+b_j$. The inverse function of $f$ is $f^{-1}(h_j)=\min(h_j,h_j/c)$. Therefore, the anti-derivatives are
$$F(\eta_j)=\begin{cases}\frac12\eta_j^2, & \eta_j>0\\ \frac{c}{2}\eta_j^2, & \text{else}\end{cases} \qquad (10)$$
and
$$F^*(h_j)=\begin{cases}\frac12h_j^2, & h_j>0\\ \frac{1}{2c}h_j^2, & \text{else.}\end{cases} \qquad (11)$$
The activation function of the Gaussian visible units can be treated as the linear unit $\tilde f(\nu_i)=\nu_i$, where $\nu_i=\sum_{j=1}^{J}W_{ij}h_j$. Following similar steps to the derivation of $F$ and $F^*$, we get the anti-derivatives $\tilde F(\nu_i)=\frac12\nu_i^2$ and $\tilde F^*(v_i)=\frac12v_i^2$.

From Ravanbakhsh et al. (2016), the conditional distribution is defined as
$$p(h_j\mid\eta_j)=\exp\big(\eta_jh_j-F(\eta_j)-F^*(h_j)+g(h_j)\big) \qquad (12)$$
By plugging $F$ and $F^*$ into (12), we get the conditional distribution for the leaky RBM:
$$p(h_j\mid v)=\begin{cases}\mathcal{N}(\eta_j,1)\ \text{with}\ g(h_j)=-\log\sqrt{2\pi}, & \eta_j>0\\ \mathcal{N}(c\eta_j,c)\ \text{with}\ g(h_j)=-\log\sqrt{2\pi c}, & \eta_j\le0.\end{cases} \qquad (13)$$
Similarly, we have $p(v_i\mid\nu_i)=\mathcal{N}(\nu_i,1)$ with $g(v_i)=-\log\sqrt{2\pi}$.

A.2 JOINT AND MARGINAL DISTRIBUTIONS
Given the conditional distributions $p(v\mid h)$ and $p(h\mid v)$, the joint distribution $p(v,h)$ from the general treatment for MRF models given by Yang et al. (2012) is
$$p(v,h)\propto\exp\Big(v^\top Wh-\sum_{i=1}^{I}\big(\tilde F^*(v_i)+g(v_i)\big)-\sum_{j=1}^{J}\big(F^*(h_j)+g(h_j)\big)\Big). \qquad (14)$$
By plugging $F^*$, $\tilde F^*$ and $g$ from Section A.1 into (14), we have
$$p(v,h)\propto\exp\Big(v^\top Wh-\frac{\|v\|^2}{2}-\sum_{\eta_j>0}\Big(\frac{h_j^2}{2}+\log\sqrt{2\pi}\Big)-\sum_{\eta_j\le0}\Big(\frac{h_j^2}{2c}+\log\sqrt{2\pi c}\Big)+b^\top h\Big).$$
Then the marginal distribution is
$$p(v)\propto\int_h p(v,h)\,dh\propto\exp\Big(-\frac{\|v\|^2}{2}\Big)\prod_{\eta_j>0}\int\frac{1}{\sqrt{2\pi}}\exp\Big(-\frac{h_j^2}{2}+\eta_jh_j\Big)dh_j\;\prod_{\eta_j\le0}\int\frac{1}{\sqrt{2\pi c}}\exp\Big(-\frac{h_j^2}{2c}+\eta_jh_j\Big)dh_j$$
$$\propto\exp\Big(-\frac{\|v\|^2}{2}\Big)\prod_{\eta_j>0}\exp\Big(\frac{\eta_j^2}{2}\Big)\prod_{\eta_j\le0}\exp\Big(\frac{c\eta_j^2}{2}\Big)$$
$$\propto\exp\Big(-\frac12v^\top\Big(I-\sum_{\eta_j>0}W_jW_j^\top-c\sum_{\eta_j\le0}W_jW_j^\top\Big)v+\sum_{\eta_j>0}b_jW_j^\top v+c\sum_{\eta_j\le0}b_jW_j^\top v\Big).$$

B PROOF OF THEOREM 1
Proof. Since $WW^\top-\sum_jd_jW_jW_j^\top=\sum_j(1-d_j)W_jW_j^\top\succeq0$, we have $WW^\top\succeq\sum_jd_jW_jW_j^\top$. Therefore, $I-\sum_jd_jW_jW_j^\top\succeq I-WW^\top\succ0$.

C PROOF OF THEOREM 2
Proof. Let the SVD decompositions of $W$ and $\tilde W$ be $W=USV^\top$ and $\tilde W=\tilde U\tilde S\tilde V^\top$. Then we have
$$\|W-\tilde W\|_F^2=\|USV^\top-\tilde U\tilde S\tilde V^\top\|_F^2\ge\sum_{i=1}^{I}(S_{ii}-\tilde S_{ii})^2, \qquad (15)$$
and the constraint $I-\tilde W\tilde W^\top\succeq0$ can be rewritten as $0\le\tilde S_{ii}\le1,\ \forall i$. The transformed problem has a Lasso-like formulation, and we can solve it by $\tilde S_{ii}=\min(S_{ii},1)$ (Parikh & Boyd, 2014). Also, the lower bound $\sum_{i=1}^{I}(S_{ii}-\tilde S_{ii})^2$ in (15) becomes tight when we set $\tilde U=U$ and $\tilde V=V$, which completes the proof.
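As a quick numerical sanity check of Theorem 1 (our own illustration, with arbitrary dimensions): scale $W$ so that $I-WW^\top$ is PD, then verify positive definiteness of $I-\sum_jd_jW_jW_j^\top$ for many random coefficient vectors $d\in[0,1]^J$.

```python
import numpy as np

# Minimal numeric check of Theorem 1: if I - W W^T is PD, then
# I - sum_j d_j W_j W_j^T is PD for any d_j in [0, 1].
rng = np.random.default_rng(0)
I_dim, J = 10, 6
W = rng.standard_normal((I_dim, J))
W *= 0.95 / np.linalg.norm(W, 2)           # force largest singular value < 1

for _ in range(1000):
    d = rng.uniform(0.0, 1.0, size=J)
    Theta = np.eye(I_dim) - (W * d) @ W.T  # = I - sum_j d_j W_j W_j^T
    assert np.linalg.eigvalsh(Theta).min() > 0
print("Theorem 1 held for all sampled d in [0,1]^J")
```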
D NECESSITY OF THE PROJECTION STEP
We conduct a short comparison to demonstrate that the projection step is necessary for the leaky RBM on generative tasks. We train two leaky RBMs as follows. The first model is trained with the same settings as in Section 4; we use the convergence of the log-likelihood as the stopping criterion. The second model is trained by CD-1 with weight decay and without the projection step; we stop training when the reconstruction error is less than $10^{-2}$. After training these two models, we run Gibbs sampling with 1000 independent chains for several steps and record the average value of the visible units. Note that the visible units are normalized to zero mean. The results on SVHN and CIFAR10 are shown in Figure 5.

Figure 5: Divergence results on the two datasets: (a) SVHN; (b) CIFAR10. [Average value of the visible units (log scale) over Gibbs sampling iterations, for weight decay versus projection.]

From Figure 5, the model trained with weight decay and without the projection step suffers from diverging values. This confirms the study in Section 3.1. It also implies that we cannot train leaky RBM with larger CD steps when we do not project; otherwise we would have divergent gradients. Therefore, the projection is necessary for training leaky RBM for generative purposes. However, we also observe that the projection step is not necessary for classification and reconstruction tasks. The reason may be the independence of different evaluation criteria (Hinton, 2012; Theis et al., 2016) or other implicit reasons yet to be studied.

E EQUIVALENCE BETWEEN ANNEALING THE ENERGY AND ANNEALING THE LEAKINESS
We analyze the performance gap between AIS-Leaky and AIS-Energy. One major difference is the initial distribution. The intermediate marginal distribution of AIS-Energy has the following form:
$$p_k(v)\propto\exp\Big(-\frac12v^\top\Big(I-(1-\beta_k)\sum_{\eta_j>0}W_jW_j^\top-(1-\beta_k)\,c\sum_{\eta_j\le0}W_jW_j^\top\Big)v\Big). \qquad (16)$$
Here we eliminate the bias terms $b$ for simplicity. Compared with Algorithm 2, (16) not only anneals the leakiness, via the term $(1-\beta_k)\,c\sum_{\eta_j\le0}W_jW_j^\top$ for $\eta_j\le0$, but also anneals the term $(1-\beta_k)\sum_{\eta_j>0}W_jW_j^\top$ for $\eta_j>0$, which brings more bias to the estimation. In other words, AIS-Leaky is a one-sided leakiness annealing, while AIS-Energy is a two-sided leakiness annealing method.

To address the higher-bias problem of AIS-Energy, we replace the initial distribution with the one used in Algorithm 2. By elementary calculation, the intermediate marginal distribution becomes
$$p_k(v)\propto\exp\Big(-\frac12v^\top\Big(I-\sum_{\eta_j>0}W_jW_j^\top-\big(\beta_k+(1-\beta_k)c\big)\sum_{\eta_j\le0}W_jW_j^\top\Big)v\Big), \qquad (17)$$
which recovers the proposed Algorithm 2. From this analysis, we understand AIS-Leaky as a special case of conventional AIS-Energy with a better initialization, inspired by the study in Section 3. Also, through this connection between AIS-Energy and AIS-Leaky, we note that AIS-Leaky can be combined with other extensions of AIS (Grosse et al., 2013; Burda et al., 2015) as well.

F MORE EXPERIMENTAL RESULTS FOR SAMPLING

F.1 SAMPLED IMAGES
We show sampled images from leaky RBM trained on the CIFAR10 and SVHN datasets. We randomly initialize 20 chains and run Gibbs sampling for 1000 iterations; the sampled results are shown in Figure 6. The results show that a single-layer RBM does not adequately model CIFAR10 and SVHN when compared to multilayer models. Similar results for a single-layer Bernoulli-Gaussian RBM from Ranzato & Hinton (2010) (in gray scale) are shown in Figure 7. Therefore, we instead focus on the quantitative evaluation of the log-likelihood in Table 3.

Figure 6: Sampled images from leaky RBM: (a) SVHN; (b) CIFAR10.
Figure 7: Sampled images in gray scale from a Bernoulli-Gaussian RBM trained on CIFAR10 (Ranzato & Hinton, 2010).

F.2 COMPUTATIONAL TIME OF DIFFERENT SAMPLING STRATEGIES
The comparison in terms of CPU time of the different sampling algorithms discussed in Section 5 is shown in Figure 8. Note that the complexities of CD and Mix are almost the same; Mix only needs a few more constant-time steps, which are negligible compared with the sampling steps. Leaky is more time-consuming because of computing and decomposing the covariance matrix, as discussed in Section 5.
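The one-sided versus two-sided annealing contrast can be stated compactly in code. The sketch below is our own illustration: it builds the precision matrices of the intermediate distributions (16) and (17) for a fixed region type and checks that both paths reach the same target at $\beta_k=0$.

```python
import numpy as np

# Minimal sketch contrasting the precision matrices of (16) and (17).
# d encodes the region type (True where eta_j > 0); values are ours.
def theta_energy(W, d, c, beta):           # two-sided annealing, Eq. (16)
    S_pos, S_neg = W[:, d] @ W[:, d].T, W[:, ~d] @ W[:, ~d].T
    return np.eye(W.shape[0]) - (1 - beta) * S_pos - (1 - beta) * c * S_neg

def theta_leaky(W, d, c, beta):            # one-sided annealing, Eq. (17)
    S_pos, S_neg = W[:, d] @ W[:, d].T, W[:, ~d] @ W[:, ~d].T
    return np.eye(W.shape[0]) - S_pos - (beta + (1 - beta) * c) * S_neg

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 4))
W *= 0.9 / np.linalg.norm(W, 2)            # keep I - W W^T positive definite
d = np.array([True, True, False, False])
# At beta = 1, (17) is the c = 1 Gaussian used by Algorithm 2; at beta = 0,
# both paths coincide with the target distribution:
print(np.allclose(theta_energy(W, d, 0.01, 0.0), theta_leaky(W, d, 0.01, 0.0)))
```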
We also report the execution time of each step of the algorithms in Table 4.

Table 4: The execution time (s) of each step of the algorithms (1000 iterations).
                                  SVHN     CIFAR10
Inner loop of CD                  96.16    96.15
Inner loop of annealing leaky     96.17    96.16
Projection                        438.56   439.47

Figure 8: Training leaky RBM with different sampling algorithms (CD, Mix, Leaky) as a function of running time: (a) SVHN; (b) CIFAR10.

F.3 STUDY ON RELU-BERNOULLI RBM
We study the idea of annealing the leakiness on the RBM model with leaky-ReLU hidden units and Bernoulli visible units. We create toy datasets with 20, 25 and 30 visible units, as shown in Figure 9. The small datasets allow exact computation of the partition function. For each dataset, we sample 60,000 images for training and 10,000 images for testing. We use 100 hidden units and PCD to train the model. The log-likelihood results are shown in Table 5.

Figure 9: Toy datasets with different numbers of visible units: (a) I=20; (b) I=25; (c) I=30.

Table 5: The log-likelihood and true partition function for ReLU-Bernoulli RBM with different numbers of visible units.
                          I=20        I=25        I=30
Log-likelihood            -5.660821   -6.846937   -8.448907
Log partition function    21.626300   22.363024   27.937846

Compared to the Gaussian visible unit case studied in Section 3, where $p(v)$ is a multivariate Gaussian distribution when $c=1$, the partition function of $p(v)$ in the ReLU-Bernoulli case does not have an analytical form when $c=1$. Therefore, we use the following two-stage alternative. We first run the standard AIS algorithm, which anneals the energy, up to the distribution with leakiness $c=1$. We then switch to annealing the leakiness from 1 to the target value. For the typical AIS algorithm (AIS-Energy), we use $10^4$ chains with $2\times10^4$ intermediate distributions. For the proposed two-stage algorithm (AIS-Leaky), we use $10^4$ chains with $10^4$ intermediate distributions for annealing to $c=1$ and another $10^4$ distributions for annealing the leakiness. The results are shown in Table 6.

Table 6: The difference between the true partition function and the estimates of the two algorithms, with standard deviations.
             I=20         I=25         I=30
AIS-Energy   46.7±1.26    60.9±1.29    48.2±1.18
AIS-Leaky    0.04±0.002   0.04±0.003   0.10±0.05

In Table 6, the standard AIS algorithm (AIS-Energy) has unsatisfactory performance. We also show the performance of AIS for estimating the partition function of models with different leakiness on Toy20, using $10^4$ independent chains and $2\times10^4$ intermediate distributions; the results are shown in Table 7. From Table 7, we observe that AIS performs worse when the leakiness is closer to 0. Although we observed that increasing the number of chains and intermediate distributions can improve the performance, the improvements are limited. The study demonstrates that when the non-linearity of the distribution increases (i.e., the leakiness value $c$ decreases), the standard AIS cannot effectively estimate the partition function within feasible computational time. On the other hand, it also confirms that the proposed idea, annealing the leakiness, can serve as an effective building block for such algorithms without increasing the algorithmic complexity. Note that the unsatisfactory performance of AIS may be addressed by Grosse et al. (2013). Following Appendix E, the two-stage algorithm used here can also be improved by applying Grosse et al. (2013).
Table 7: The difference (with standard deviation) between the true partition function and the estimates of AIS-Energy under different leakiness.
             c=1           c=0.9        c=0.5        c=0.1        c=0.01
AIS-Energy   0.001±0.001   0.32±0.001   3.69±0.015   19.18±0.26   46.7±1.26

F.3.1 MNIST AND CALTECH DATASETS
We study the MNIST and Caltech 101 Silhouettes datasets with 500 hidden units and train the model with CD-25. The results are shown in Tables 8 and 9. The leaky RBM is better than the conventional Bernoulli RBM and some deep models on the MNIST data. Although leaky RBM does not outperform Su et al. (2017), it enjoys the advantage of a simpler sampling procedure (Gaussian versus truncated Gaussian distributions) in the binary visible unit case.

Table 8: The test log-likelihood results on MNIST.
Model                                    Dim                    Test lld
RBM (Salakhutdinov & Murray, 2008)       500                    -86.3
SBN (Bornschein & Bengio, 2015)          10-100-200-300-400     -85.4
DBN (Salakhutdinov & Murray, 2008)       2000-500               -86.2
Truncated Gaussian (Su et al., 2017)     500                    -83.2
Leaky RBM                                500                    -84.5

Table 9: The test log-likelihood results on Caltech 101 Silhouettes.
Model                                    Dim                Test lld
RBM (Cho et al., 2013)                   500                -114.7
RBM (Cho et al., 2013)                   4000               -107.7
SBN (Bornschein & Bengio, 2015)          10-100-200-300     -113.3
Truncated Gaussian (Su et al., 2017)     500                -105.1
Leaky RBM                                500                -107.6
r1nTpv9eg
Published as a conference paper at ICLR 2017

LEARNING TO PERFORM PHYSICS EXPERIMENTS VIA DEEP REINFORCEMENT LEARNING

Misha Denil¹, Pulkit Agrawal², Tejas D. Kulkarni¹, Tom Erez¹, Peter Battaglia¹, Nando de Freitas¹,³
¹DeepMind  ²University of California Berkeley  ³Canadian Institute for Advanced Research
{mdenil,tkulkarni,etom,peterbattaglia,nandodefreitas}@google.com
pulkitag@berkeley.edu

ABSTRACT
When encountering novel objects, humans are able to infer a wide range of physical properties, such as mass, friction and deformability, by interacting with them in a goal-driven way. This process of active interaction is in the same spirit as a scientist performing experiments to discover hidden facts. Recent advances in artificial intelligence have yielded machines that can achieve superhuman performance in Go, Atari, natural language processing, and complex control problems; however, it is not clear that these systems can rival the scientific intuition of even a young child. In this work we introduce a basic set of tasks that require agents to estimate properties such as mass and cohesion of objects in an interactive simulated environment where they can manipulate the objects and observe the consequences. We found that deep reinforcement learning methods can learn to perform the experiments necessary to discover such hidden properties. By systematically manipulating the problem difficulty and the cost incurred by the agent for performing experiments, we found that agents learn different strategies that balance the cost of gathering information against the cost of making mistakes in different situations. We also compare our learned experimentation policies to randomized baselines and show that the learned policies lead to better predictions.

1 INTRODUCTION
Our work is inspired by empirical findings and theories in psychology indicating that infant learning and thinking is similar to that of adult scientists (Gopnik, 2012). One important view in developmental science is that babies are endowed with a small number of separable systems of core knowledge for reasoning about objects, actions, number, space, and possibly social interactions (Spelke & Kinzler, 2007). The object core system, covering aspects such as cohesion, continuity, and contact, enables babies and other animals to solve object-related tasks such as reasoning about occlusion and predicting how objects behave.

Core knowledge research has motivated the development of methods that endow agents with physics priors and perception modules so as to infer intrinsic physical properties rapidly from data (Battaglia et al., 2013; Wu et al., 2015; 2016; Stewart & Ermon, 2016). For instance, using physics engines and mental simulation, it becomes possible to infer quantities such as mass from visual input (Hamrick et al., 2016; Wu et al., 2015).

In the early stages of life, infants spend a lot of time interacting with objects in a seemingly random manner (Smith & Gasser, 2005). They interact with objects in multiple ways, including throwing, pushing, pulling, breaking, and biting. It is quite possible that this process of actively engaging with objects and watching the consequences of their actions helps infants understand different physical properties of objects which cannot be observed directly using their sensory systems. It seems infants run a series of "physical" experiments to enhance their knowledge about the world (Gopnik, 2012).
The act of performing an experiment is useful both for quickly adapting an agent's policy to a new environment and for understanding object properties in a holistic manner. Despite impressive advances in artificial intelligence that have led to superhuman performance in Go, Atari and natural language processing, it is still unclear whether the systems behind these advances can rival the scientific intuition of even a small child.

While we draw inspiration from child development, it must be emphasized that our purpose is not to provide an account of learning and thinking in humans, but rather to explore how similar types of understanding might be learned by artificial agents in a grounded way. To this end we show that we can build agents that learn to experiment so as to learn representations that are informative about physical properties of objects, using deep reinforcement learning. The act of conducting an experiment involves the agent having a belief about the world, which it then updates by observing the consequences of actions it performs.

We investigate the ability of agents to learn to perform experiments to infer object properties through two environments: Which is Heavier and Towers. In the Which is Heavier environment, the agent is able to apply forces to blocks and must infer which of the blocks is the heaviest. In the Towers environment, the agent's task is to infer how many rigid bodies a tower is composed of by knocking it down. Unlike Wu et al. (2015), we assume that the agent has no prior knowledge about the physical properties of objects, or about the laws of physics, and hence must interact with the objects in order to learn to answer questions about these properties.

Our results indicate that in the Which is Heavier environment our agents learn experimentation strategies that are similar to those we would expect from an algorithm designed with knowledge of the underlying structure of the environment. In the Towers environment we show that our agents learn a closed-loop policy that can adapt to a varying time scale. In both environments we show that when using the learned interaction policies, agents are more accurate and often take less time to produce correct answers than when following randomized interaction policies.

2 WHAT IS THIS PAPER ABOUT?
This is an unusual paper in that it does not present a new model or propose a new algorithm. There is a reinforcement learning task at the core of each of our experiments, but the algorithm and models we use to solve it are not new, and many other existing approaches should be expected to perform equally well if substituted in the same setting.

This paper is a step towards agents that understand objects and intuitive reasoning in physical worlds. Our best AI agents currently fail on simple control tasks and simple games, such as Montezuma's Revenge, because when they look at a screen that has a ladder, a key and a skull, they do not immediately know that keys open doors, that skulls are probably hazardous and best avoided, that ladders allow us to defy gravity, and so on. The understanding of physics, relations and objects enables children to solve seemingly simple problems that our best existing AI agents do not come close to beginning to solve.

Endowing our agents with knowledge of objects would help enormously with planning, reasoning and exploration, and yet doing so is far from trivial. What is an object?
It turns out this question does not have a straightforward answer, and this paper is based around the idea that staring at a thing is not enough to understand what it is.

Children understand their world by engaging with it: poking something to find that it is soft, tasting it to discover it is delicious, or hitting it to see if it falls down. Much of the knowledge people have of the world is the result of interaction. Vision or open-loop perception alone is not enough.

This paper introduces tasks where we can evaluate the ability of agents to learn about these "hidden" properties of objects. This requires environments where the tasks depend on these properties (otherwise the agents have no incentive to learn about them) and also a way to probe for this understanding in agents that complete the tasks.

Previous approaches to this problem have relied either on explicit knowledge of the underlying structure of the environment (e.g. hard-wired physical laws) or on exploiting correlations between material appearance and physical properties (see Section 7 for much more detail). One of the contributions of this paper is to show that our agents can still learn about properties of objects even when the connection between material appearance and physical properties is broken. This setting allows us to show that our agents are not merely learning that blocks are heavy; they are learning how to check if blocks are heavy.

None of the previous approaches gives a complete account of how agents could come to understand the physical properties of the world around them. Specifying a model manually is difficult to scale, to generalize and to ground in perception. Making predictions from visual properties alone will fail to distinguish between objects that look similar, and it will certainly be unable to distinguish between a sack full of rocks and a sack full of tennis balls.

3 ANSWERING QUESTIONS THROUGH INTERACTION
We pose the problem of experimentation as that of answering questions about non-visual properties of objects present in the environment. We design environments that ask questions about these properties by providing rewards when the agent is able to infer them correctly, and we train agents to answer these questions using reinforcement learning.

We design environments that follow a three-phase structure (a minimal code sketch of this loop follows the list):

Interaction. Initially there is an exploration phase, where the agent is free to interact with the environment and gather information.
Labeling. The interaction phase ends when the agent produces a labeling action, through which it communicates its answer to the implicit question posed by the environment.
Reward. When the agent produces a labeling action, the environment responds with a reward, positive for a correct answer and negative for an incorrect one, and the episode terminates. The episode terminates automatically with a negative reward if the agent does not produce a labeling action before a maximum time limit is reached.
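The sketch below is our own illustration of this episode protocol; `env` and `agent` are placeholders with an assumed gym-like interface, and the split between interaction and labeling actions is handled inside the environment, not specified by the paper.

```python
def run_episode(env, agent, max_steps=100):
    """Roll out one three-phase episode: interact, label, receive reward."""
    obs = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = agent.act(obs)           # interaction or labeling action
        obs, reward, done = env.step(action)
        total_reward += reward
        if done:                          # a labeling action (or a timeout)
            break                         # ends the episode
    return total_reward
```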
Crucially, the transition between interaction and labeling does not happen at a fixed time, but is initiated by the agent. This is achieved by providing the agent with the ability to produce either an interaction action or a labeling action at every time step. This allows the agent to decide when enough information has been gathered, and forces it to balance the trade-off between answering now given its current knowledge, or delaying its answer to gather more information.

The optimal trade-off between information gathering and the risk of answering incorrectly depends on two factors. The first factor is the difficulty of the question, and the second is the cost of information. The difficulty is environment-specific and is addressed later when we describe the environments. The cost of information can be controlled generically by varying the discount factor during learning. A small discount factor places less emphasis on future rewards and encourages the agent to answer as quickly as possible. On the other hand, a large discount factor encourages the agent to spend more time gathering information in order to increase the likelihood of choosing the correct answer.

Our use of "questions" and "answers" differs from how these terms are used elsewhere in the literature. Sutton et al. (2011) talk about a value function as a question, and the agent provides an answer in the form of an approximation of the value. The answer incorporates the agent's knowledge, and the match between the actual value and the agent's approximation grounds what it means for this knowledge to be accurate.

In our usage the environment (or episode) itself is the question, and answers come in the form of labeling actions. In each episode there is a correct answer whose semantics are grounded in the sign of the reward function, and the accuracy of an agent's knowledge is assessed by the frequency with which it is able to choose the correct answer.

Using reward (rather than value) to ground our semantics gives us a straightforward way to ask questions that do not depend on the agent's behavior. For example, we can easily ask the question "Which block is heaviest?" without making the question contingent on a particular information acquisition strategy.

4 AGENT ARCHITECTURE AND TRAINING
We use the same basic agent architecture and training procedure for all of our experiments, making only minimal modifications in order to adapt the agents to different observation spaces and actuators. For all experiments we train recurrent agents using an LSTM with 100 hidden units. When working from features, we feed the observations into the LSTM directly. When training from pixels, we first scale the observations to 84x84 pixels and feed them through three convolutional layers, each followed by a ReLU non-linearity. The three layers have 32, 64 and 64 square filters with sizes 8, 4 and 3, which are applied at strides of 4, 2 and 1, respectively. We train the agents using Asynchronous Advantage Actor-Critic (Mnih et al., 2016), and we ensure that the unroll length is always greater than the timeout length, so that the agent network is unrolled over the entirety of each episode.
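A minimal PyTorch sketch of the pixel agent follows. The convolution sizes, strides and LSTM width come from the text; the actor/critic heads, the RGB input and the exact wiring are our own assumptions, so this is an illustration rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class PixelAgent(nn.Module):
    """Conv stack (32/64/64 filters, sizes 8/4/3, strides 4/2/1) into a
    100-unit LSTM; policy and value heads are assumed, not from the paper."""
    def __init__(self, n_actions):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.lstm = nn.LSTM(64 * 7 * 7, 100, batch_first=True)
        self.policy = nn.Linear(100, n_actions)   # actor head (assumed)
        self.value = nn.Linear(100, 1)             # critic head (assumed)

    def forward(self, frames, state=None):
        # frames: (batch, time, 3, 84, 84); unrolled over the whole episode
        b, t = frames.shape[:2]
        feats = self.encoder(frames.reshape(b * t, 3, 84, 84)).reshape(b, t, -1)
        out, state = self.lstm(feats, state)
        return self.policy(out), self.value(out), state

logits, values, state = PixelAgent(8)(torch.zeros(1, 5, 3, 84, 84))
```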
5 WHICH IS HEAVIER
The Which is Heavier environment is designed to ask a question about the relative masses of different objects in a scene. We assign masses to objects in a way that is uncorrelated with their appearance, in order to ensure that the task is not solvable without interaction.

5.1 ENVIRONMENT
Figure 1: Left: Diagram of the Which is Heavier environment. Blocks are always arranged in a line, but the mass of the different blocks changes from episode to episode. Right: Mass gap distributions for the different settings of β used in the experiments.

The environment is diagrammed in the left panel of Figure 1. It consists of four blocks, which are constrained to move only vertically. The blocks are always the same size, but vary in mass between episodes. The agent's strength (i.e. the magnitude of the force it can apply) remains constant between episodes.

The question to answer in this environment is which of the four blocks is the heaviest. Since the mass of each block is randomly assigned in each episode, the agent must poke the blocks and observe how they respond in order to make this determination. Assigning masses randomly ensures that it is not possible to solve this task from vision (or features) alone, since the appearance and identity of each block imparts no information about its mass in the current episode. The only way to obtain information about the masses of the blocks is to interact with them and watch how they respond.

The Which is Heavier environment is designed to encode a latent bandit problem through a "physical" lens. Each block corresponds to an arm of the bandit, and the reward obtained by pulling each arm is proportional to the mass of the block. Identifying the heaviest block can then be seen as a best-arm identification problem (Audibert & Bubeck, 2010). Best-arm identification is a well-studied problem in experimental design, and an understanding of how an optimal solution to the latent bandit should behave is used to guide our analysis of the agents we train on this task.

It is important to emphasize that we cannot simply apply standard bandit algorithms here, because we impose a much higher level of prior ignorance on our algorithms than that setting allows. Bandit algorithms assume that rewards are observed directly, whereas our agents observe mass through its role in dynamics (and, in the case of learning from pixels, through the lens of vision as well). To maintain a bandit setting one could imagine parameterizing this transformation from reward to observation, and perhaps even learning the mapping as well; however, doing so requires explicitly acknowledging the mapping in the design of the learning algorithm, which we avoid doing. Moreover, acknowledging this mapping in any way requires the a priori recognition of the existence of the latent bandit structure. From the perspective of our learning algorithm, the mere existence of such a structure also lies beyond the veil of ignorance.

Controlling the distribution of masses allows us to control the difficulty of this task. In particular, by controlling the size of the mass gap between the two heaviest blocks, we can make the task more or less difficult.

Figure 2: Learning curves for a typical agent trained on the Which is Heavier environment at varying difficulty settings. The y-axes show the probability of the agent producing the correct answer before the episode times out. Each plot shows the top 50% of agents started from 10 random seeds with identical hyperparameter settings. The light lines show learning curves from individual agents, and the dark lines show the median performance across the displayed runs for each difficulty. Left: Agents trained from features. Right: Agents trained from pixels.
We distinguish between problem-level and instance-level difficulty for this domain. Instance-level difficulty refers to the size of the mass gap in a single episode. If the mass gap is small it is harder to determine which block is heaviest, and we say that one episode is more difficult than another by comparing their mass gaps. Problem-level difficulty refers to the shape of the generating distribution of mass gaps (e.g., as shown in the right panel of Figure 1). A distribution that puts more mass on configurations that have a small mass gap will tend to generate more episodes that are difficult at the instance level, and we say that one distribution is more difficult than another if it is more likely to generate instances with small mass gaps. We control the problem-level difficulty through β, but we incorporate both problem- and instance-level difficulty in our analysis.

We set the episode length limit to 100 steps in this environment, which is much longer than a typical episode of a successfully trained agent.

5.2 ACTUATORS

The obvious choice for actuation in physical domains is some kind of arm- or hand-based manipulator. However, controlling an arm or hand is quite challenging on its own, requiring a fair amount of dexterity on the part of the agent. The manipulation problem, while very interesting in its own right, is orthogonal to our goals in this work. We therefore avoid the problem of learning dexterous manipulation by providing the agent with a much simpler form of actuation.

We call the actuation strategy for this environment direct actuation, which allows the agent to apply forces to the different blocks directly. At every time step the agent outputs one of eight possible actions, decoded as in the sketch below. The first four actions each apply a vertical force of fixed magnitude to the center of mass of one of the four blocks, respectively. The remaining actions are labeling actions and correspond to the agent's selection of which block is the heaviest.
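A minimal decoding of this discrete action space might look as follows; the function names, the environment interface, and the force magnitude are our own illustrative choices, not taken from the paper:

FORCE = 10.0  # fixed force magnitude; an assumption for illustration

def apply_action(env, action):
    """Decode one of the 8 direct-actuation actions.

    Actions 0-3 push block `action` vertically with a fixed force;
    actions 4-7 are labeling actions naming block `action - 4` as
    heaviest, which ends the episode.
    """
    if action < 4:
        env.apply_vertical_force(block=action, force=FORCE)  # hypothetical API
        return None                 # episode continues
    return action - 4               # index of the block labeled heaviest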
5.3 EXPERIMENTS

Our first experiment is a sanity check showing that we can train agents successfully on the Which is Heavier environment using both features and pixels. This experiment is designed simply to show that our task is solvable, and to illustrate that by changing the problem difficulty we can make the task very hard.

We present two additional experiments showing how varying difficulty leads to differentiated behavior, both at the problem level and at the instance level. In both cases knowledge of the latent bandit problem allows us to make predictions about how an experimenting agent should behave, and our experiments are designed to show that qualitatively correct behavior is obtained by our agents in spite of their a priori ignorance of the underlying bandit problem.

We show that as we increase the problem difficulty the learned policies transition from guessing immediately when a heavy block is found to strongly preferring to poke all blocks before making a decision. This corresponds to the observation that if it is unlikely for more than one arm to give high reward, then any high-reward arm is likely to be best.

We also observe that our agents can adapt their behavior to the difficulty of individual problem instances. We show that a single agent will tend to spend longer gathering information when the particular problem instance is more difficult. This corresponds to the observation that when the two best arms have similar reward, more information is required to accurately distinguish them.

Finally, we conduct an experiment comparing our learned information-gathering policies to a randomized baseline method. This experiment shows that agents more reliably produce the correct label by following their learned interaction policies than by observing the environment being driven by random actions.

Success in learning. For this experiment we trained several agents at three different difficulties corresponding to β ∈ {3, 5, 10}. For each problem difficulty we trained agents on feature observations, which include the z coordinate of each of the four blocks, and also on raw pixels, providing an 84×84 pixel RGB rendering of the scene to the agent. Representative learning curves for each condition are shown in Figure 2. The curves are smoothed over time and show a running estimate of the probability of success, rather than showing the reward directly.

The agents do not reach perfect performance on this task, with more difficult problems plateauing at progressively lower performance. This can be explained by looking at the distributions of instance-level difficulties generated by different settings of β, shown in the right panel of Figure 1. For higher difficulties (lower values of β) there is a substantial probability of generating problem instances where the mass gap is near 0, which makes distinguishing between the two heaviest blocks very difficult.

Population strategy differentiation. For this experiment we trained agents at three different difficulties corresponding to β ∈ {3, 5, 10}, all using a discount factor of γ = 0.95, which corresponds to a relatively high cost of gathering information. We trained three agents for each difficulty and show results aggregated across the different replicas.

After training, each agent was run for 10,000 steps under the same conditions it was exposed to during training. We record the number and length of episodes executed during the testing period, as well as the outcome of each episode. Episodes are terminated by timeout after 100 steps, but the vast majority of episodes are terminated in fewer than 30 steps by the agent producing a label. Since episodes vary in length, not all agents complete the same number of episodes during testing.

The left plot in Figure 3 shows histograms of the episode lengths broken down by task difficulty. The dashed vertical line indicates an episode length of four interaction steps, which is the minimum number of actions required for the agents to interact with every block.
At a task difficulty of β = 10 the agents appear to learn simply to search for a single heavy block (which can be found with an average of two interactions). However, at a task difficulty of β = 3 we see a strong bias away from terminating the episode before taking at least four exploratory actions.

Individual strategy differentiation. For this experiment we trained agents using the same three task difficulties as in the previous experiment, but with an increased discount factor of γ = 0.99. This decreases the cost of exploration and encourages the agents to gather more information before producing a label, leading to longer episodes.

After training, each agent was run for 100,000 steps under the same conditions it was exposed to during training. We record the length of each episode, as well as the mass gap between the two heaviest blocks in each episode. In the same way that we use the distribution of mass gaps as a measure of task difficulty, we can use the mass gap in a single episode as a measure of the difficulty of that specific problem instance. We again exclude from the analysis the very small proportion of episodes that terminate by timeout.

The right plots in Figure 3 show the relationship between the mass gap and episode length across the testing runs of two different agents. From these plots we can see how a single agent has learned to adapt its behavior based on the difficulty of a single problem instance. Although the variance is high, there is a clear correlation between the mass gap and the length of the episodes. This behavior reflects what we would expect from a solution to the latent bandit problem: more information is required to identify the best arm when the second-best arm is nearly as good.

Figure 3: Left: Histograms of episode lengths for different task difficulty (β) settings. There is a transition from β = 10, where the agents answer eagerly as soon as they find a heavy block, to β = 3, where the agents are more conservative about answering before they have acted enough to poke all the blocks at least once. Right: Episode lengths as a function of the normalized mass gap. Units on the x-axes are scaled to the range of possible masses, and the y-axis shows the number of steps before the agent takes a labeling action. The black dots show individual episodes; the red line shows a linear trend fit by OLS, and error bars show a histogram estimate of standard deviations. Each plot shows the testing episodes of a single trained agent.

Randomized interaction. For this experiment we trained several agents using both feature and pixel observations at the same three task difficulties with a discount of γ = 0.95. In total we trained six sets of agents for this experiment.

After training, each agent was run for 10,000 steps under the same conditions used during training. We record the outcome of each episode, as well as the number of steps taken by each agent before it chooses a label. For each agent we repeat the experiment using both the agent's learned interaction policy and a randomized interaction policy.

The randomized interaction policy is obtained as follows: at each step the agent chooses a candidate action using its learned policy. If the candidate action is a labeling action then it is passed to the environment unchanged (and the episode terminates). However, if the candidate action is an interaction action then we replace the agent's action with a new interaction action chosen uniformly at random from the available action set. When following the randomized interaction policy the agent has no control over the information-gathering process, but still controls when each episode ends and what label is chosen.
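This baseline is easy to state precisely in code. Below is an illustrative sketch (our own; the policy interface is assumed, and we take actions 0-3 to be interactions and the rest to be labels, as in the direct-actuation setup):

import numpy as np

NUM_INTERACTIONS = 4  # actions 0-3 interact; the rest are labeling actions

def randomized_interaction_action(policy, obs, rng=np.random):
    """Randomized baseline: keep the agent's labeling decisions, but
    replace any interaction action with a uniformly random one."""
    candidate = policy.act(obs)              # the agent's proposed action
    if candidate >= NUM_INTERACTIONS:
        return candidate                     # labeling action: pass through
    return rng.randint(NUM_INTERACTIONS)     # random interaction instead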
Figure 4 compares the learned interaction policies to the randomized interaction baselines. The results show that the effect on episode length is small, with no consistent bias towards longer or shorter episodes across difficulties and observation types. However, the learned interaction policies produce more accurate labels across all permutations.

Figure 4: Comparison between agents in the Which is Heavier environment following their learned interaction policies vs. the randomized interaction policy baseline. The x-axes show difficulty-observation combinations (e.g., 10-F is difficulty 10 with feature observations and 3-P is difficulty 3 with pixel observations). Left: Episode lengths when gathering information using the different interaction policies. Right: Probability of choosing the correct label under different conditions (episodes terminating in timeout have been excluded). The dashed line shows chance performance.

6 TOWERS

The Towers environment is designed to ask agents to count the number of cohesive rigid bodies in a scene. The environment is designed so that in its initial configuration it is not possible to determine the number of rigid bodies from vision or features alone.

6.1 ENVIRONMENT

The environment is diagrammed in the left panel of Figure 5. It consists of a tower of five blocks which can move freely in three dimensions. The initial block tower is always in the same configuration, but in each episode we bolt together different subsets of the blocks to form larger rigid bodies, as shown in the figure.

The question to answer in this environment is how many rigid bodies are formed from the primitive blocks. Since the assignment of blocks to rigid bodies is randomized in each episode, and the binding forces are invisible, the agent must poke the tower and observe how it falls down in order to determine how many rigid bodies it is composed of. We parameterize the environment in such a way that the distribution over the number of separate bodies in the tower is uniform. This ensures that there is no single action strategy that achieves high reward.
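One simple way to realize this parameterization, sketched below under our own assumptions (the paper does not specify the exact sampling procedure), is to first draw the number of rigid bodies uniformly and then glue contiguous blocks together into groups of matching sizes:

import numpy as np

def sample_rigid_groups(num_blocks=5, rng=np.random):
    """Partition a stack of blocks into contiguous rigid bodies such
    that the number of bodies is uniform on {1, ..., num_blocks}.

    Returns a list of (start, end) index ranges; blocks within a range
    are bolted together. This is one possible scheme, not necessarily
    the one used in the paper.
    """
    k = rng.randint(1, num_blocks + 1)  # number of rigid bodies, uniform
    # Choose k-1 distinct split points between adjacent blocks.
    splits = rng.choice(num_blocks - 1, size=k - 1, replace=False)
    cuts = [0] + sorted(int(s) + 1 for s in splits) + [num_blocks]
    return [(cuts[i], cuts[i + 1]) for i in range(k)]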
6.2 ACTUATORS

In the Towers environment we used two actuators: direct actuation, which is similar to the Which is Heavier environment, and the fist actuator, described below. In the case of direct actuation, the agent outputs one of 25 actions. At every time step, the agent can apply a force of fixed magnitude in the +x, -x, +y, or -y direction to one of the five blocks. If two blocks are glued together, both blocks move under the effect of the force. We use towers of five blocks, which results in 20 different force actions. The remaining five actions are labeling actions that are used by the agent to indicate the number of distinct rigid bodies in the tower.

The fist is a large spherical object that the agent actuates by setting its velocity in a 2D horizontal plane. Unlike direct actuation, the agent cannot apply forces directly to the objects that constitute the tower, but can only manipulate them by pushing or hitting them with the fist. At every time step the agent outputs one of nine actions. The first four actions set the velocity of the fist to a constant magnitude in the +x, -x, +y, and -y directions, respectively. The remaining five actions are labeling actions that are used by the agent to indicate the number of distinct rigid bodies in the tower.

In order to investigate whether the agent learns a strategy of stopping after a fixed number of time steps or whether it integrates sensory information in a non-trivial manner, we used a notion of "control time step". The idea of the control time step is similar to that of action repeats: if the physics simulation time step is 0.025 s and the control time step is 0.1 s, then the same action is repeated 4 times. We use an episode timeout for both actuator types; for the direct actuators the timeout is 26 steps.
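The control time step is simply an action repeat over the underlying physics simulation. A minimal sketch, with our own interface names (env.physics_step is a hypothetical single-physics-step API):

PHYSICS_DT = 0.025  # physics simulation time step, in seconds

def control_step(env, action, control_dt=0.1):
    """Apply one agent action for an entire control time step by
    repeating it over the underlying physics steps (here 0.1 / 0.025
    = 4 repeats). Shrinking control_dt at test time slows the world
    down from the agent's perspective."""
    repeats = int(round(control_dt / PHYSICS_DT))
    obs = reward = done = None
    for _ in range(repeats):
        obs, reward, done = env.physics_step(action)  # hypothetical API
        if done:
            break
    return obs, reward, done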
Figure 5: Top: Example trajectory of a block tower being knocked down using the fist actuator. Left: Diagram of the hidden structure of the Towers environment. The tower on the left is composed of five blocks, but could decompose into rigid bodies in any of several ways that can only be distinguished by interacting with the tower. Right: Behavior of a single trained agent using the fist actuator when varying the control time step. The x-axis shows different control time step lengths (the training condition is 0.1). The blue line shows the probability of the agent correctly identifying the number of bodies. The red line shows the median episode length (in seconds), with error bars showing 95% confidence intervals computed over 50 episodes. The shaded region shows ±1 control time step around the median.

6.3 EXPERIMENTS

Our first experiment is again intended to show that we can train agents in this environment. We show simply that the task is solvable by our agents using both types of actuation.

The second experiment shows that the agents learn to wait for an observation where they can identify the number of rigid bodies before producing an answer. This is designed to show that the agents find a closed-loop strategy for counting the number of rigid bodies. An alternative hypothesis would be that agents learn to wait for (approximately) the same number of steps each time and then take their best guess.

Our third experiment compares the learned policy to a randomized interaction policy and shows that agents are able to determine the correct number of bodies in the tower more quickly and more reliably when using their learned policy to gather information.

Success in learning. For this experiment we trained several agents on the Towers environment using different pairings of actuators and perception. The feature observations include the 3D position of each primitive block, and when training from raw pixels we provide an 84×84 pixel RGB rendering of the scene as the agent observation. Figure 6 shows learning curves for each combination of actuator and observation type.

In all cases we obtain agents that solve the task nearly perfectly, although when training from pixels we find that the range of hyperparameters that train successfully is narrower than when training from features. Interestingly, the fist actuator leads to the fastest learning, in spite of the fact that the agent must manipulate the blocks indirectly through the fist. One possible explanation is that the fist can affect multiple blocks in one action step, whereas under direct actuation only one block can be affected per time step.

Waiting for information. For this experiment we trained an agent with pixel observations and the fist actuator on the Towers task with a control time step of 0.1 seconds, and we examine its behavior at test time with a smaller delay between actions. Reducing the control time step means that, from the agent's perspective, time has been slowed down. Moving the fist a fixed distance takes longer, as does waiting for the block tower to collapse once it has been hit.

After training, the agent was run for 10,000 steps for a range of different control time steps. We record the outcome of each episode, as well as the number of steps taken by the agent before it chooses a label. None of the test episodes terminate by timeout, so we include all of them in the analysis.

The plot in Figure 5 shows the probability of answering correctly, as well as the median length of each episode measured in seconds. In terms of absolute performance we see a small drop compared to the training setting, where the agent is essentially perfect, but performance remains good even for substantially smaller control time steps than were used during training.

Figure 6: Learning curves for agents trained on the Towers environment under different conditions. The y-axes show the probability of the agent producing the correct answer before the episode times out. The different plots show different pairings of observations and actuators, as indicated in the plot titles. Each plot shows the top 50% of runs from 10 random seeds with identical hyperparameter settings. The black lines show learning curves from individual agents, and the red lines show the median performance of the displayed runs.

We also observe that episodes with different control time steps take approximately the same amount of real time across the majority of the tested range. This corresponds to a large change in episode length as measured by the number of agent actions, since with a control time step of 0.01 the agent must execute 10× as many actions to cover the same amount of real time as with the control time step used during training. From this we can infer that the agent has learned to wait for an informative observation before producing a label, as opposed to a simpler degenerate strategy of waiting a fixed number of steps before answering.

Randomized interaction. For this experiment we trained several agents for each combination of actuator and observation type, and we examine their behavior when observing an environment driven by a random interaction policy. The randomized interaction policy is identical to the randomized baseline used in the Which is Heavier environment.

After training, each agent was run for 10,000 steps. We record the outcome of each episode, as well as the number of steps taken by the agent before it chooses a label. For each agent we repeat the experiment using both the agent's learned interaction policy and the randomized interaction policy.

Figure 7 compares the learned interaction policies to the randomized interaction baselines.
The results show that the agents tend to produce labels more quickly when following their learned interaction policies, and also that the labels they produce in this way are much more accurate.

Figure 7: Comparison between agents in the Towers environment following their learned interaction policies vs. the randomized interaction policy baseline. The x-axes show different observation-actuator combinations (e.g., D-F is Direct-Features and F-P is Fist-Pixels). Left: Episode lengths when gathering information using the different interaction policies. Right: Probability of choosing the correct label under different conditions (episodes terminating in timeout have been excluded). The dashed line shows chance performance.

7 RELATED WORK

Deep learning techniques in conjunction with vast labeled datasets have yielded powerful models for image classification (Krizhevsky et al., 2012; He et al., 2016) and speech recognition (Hinton et al., 2012). In recent years, as we have approached human-level performance on these tasks, there has been strong interest in the computer vision field in moving beyond semantic classification, to tasks that require a deeper and more nuanced understanding of the world.

Inspired by developmental studies (Smith & Gasser, 2005), some recent works have focused on learning representations by predicting physical embodiment quantities such as ego-motion (Agrawal et al., 2015; Jayaraman & Grauman, 2015) instead of symbolic labels. Extending the realm of things-to-be-predicted to include quantities beyond class labels, such as viewer-centric parameters (Doersch et al., 2015) or the poses of humans within a scene (Delaitre et al., 2012; Fouhey et al., 2014), has been shown to improve the quality of feature learning and scene understanding. Researchers have also looked at cross-modal learning, for example synthesizing sounds from visual images (Owens et al., 2015), using summary statistics of audio to learn features for object recognition (Owens et al., 2016), or image colorization (Zhang et al., 2016).

Inverting the prediction tower, another line of work has focused on learning about the visual world by synthesizing, rather than analyzing, images. Major cornerstones of recent work in this area include the Variational Autoencoders of Kingma & Welling (2014) and the Generative Adversarial Networks of Goodfellow et al. (2014); more recently, autoregressive models have been very successful (van den Oord et al., 2016).

Building on models of single-image synthesis, there have been many works on predicting the evolution of video frames over time (Ranzato et al., 2014; Srivastava et al., 2015; van den Oord et al., 2016). Xue et al. (2016) approach this problem by designing a variational autoencoder architecture that uses the latent stochastic units of the VAE to make choices about the direction of motion of objects, and generates future frames conditioned on these choices.

A different form of uncertainty in video prediction can arise from the effect of actions taken by an agent. In environments with deterministic dynamics (where the possibility of "known unknowns" can, in principle, be eliminated), very accurate action-conditional predictions of future frames can be made (Oh et al., 2015).
Introducing actions into the prediction process amounts to learning a latent forward dynamics model, which can be exploited to plan actions to achieve novel goals (Watter et al., 2015; Assael et al., 2015; Fragkiadaki et al., 2016). In these works, frame synthesis plays the role of a regularizer, preventing collapse of the feature space in which the dynamics model lives.

Agrawal et al. (2016) break the dependency between frame synthesis and dynamics learning by replacing frame synthesis with an inverse dynamics model. The forward model plays the same role as in the earlier works, but here feature space collapse is prevented by ensuring that the model can decode actions from pairs of time-adjacent images. Several works, including Agrawal et al. (2016) and Assael et al. (2015) mentioned above, but also Pinto et al. (2016), Pinto & Gupta (2016), and Levine et al. (2016), have gone further in coupling feature learning and dynamics. The learned dynamics models can be used for control not only after learning but also during the learning process, in order to collect data in a more targeted way, which has been shown to improve the speed and quality of learning in robot manipulation tasks.

A key challenge of learning from dynamics is collecting the appropriate data. An ingenious solution to this is to import real-world data into a physics engine and simulate the application of forces in order to generate ground-truth data. This is the approach taken by Mottaghi et al. (2016), who generate an "interactable" dataset of scenes, which they use to produce a static dataset of image and force pairs, along with the ground-truth trajectory of a target object in response to the application of the indicated force.

When the purpose is learning an intuitive understanding of dynamics, it is possible to do interesting work with entirely synthetic data (Fragkiadaki et al., 2016; Lerer et al., 2016). Lerer et al. (2016) show that convolutional networks can learn to make judgments about the stability of synthetic block towers based on a single image of the tower. They also show that their model trained on synthetic data is able to generalize to make accurate judgments about photographs of similar block towers built in the real world.

Making intuitive judgments about block towers has been extensively studied in the psychophysics literature. There is substantial evidence connecting the behavior of human judgments to inference over an explicit latent physics model (Hegarty, 2004; Hamrick et al., 2011; Battaglia et al., 2013). Humans can infer mass by watching movies of complex rigid body dynamics (Hamrick et al., 2016).

A major component of the above line of work is analysis by synthesis, in which understanding of a physical process is obtained by learning to invert it. Observations are assumed to be generated from an explicitly parameterized generative model of the true physical process, and provide constraints for an inference process run over the parameters of this model. The analysis-by-synthesis approach has been extremely influential due to its power to explain human judgments and generalization patterns in a variety of situations (Lake et al., 2015).

Galileo (Wu et al., 2015) is a particularly relevant instance of tying together analysis by synthesis and deep learning for understanding dynamics. This system first infers the physical parameters (mass and friction coefficient) of a variety of blocks by watching videos of them sliding down slopes and colliding with other blocks.
This stage of the system uses an off-the-shelf object tracker to ground inference over the parameters of a physical simulator, and the inference is achieved by matching simulated and observed block trajectories. The inferred physical parameters are used to train a deep network to predict the physical parameters from the initial frame of video. At test time the system is evaluated by using the deep network to infer the physical parameters of new blocks, which can be fed into the physics engine and used to answer questions about behaviors not observed at training time.

Physics 101 (Wu et al., 2016) is an extension of Galileo that more fully embraces deep learning. Instead of using a first pass of analysis by synthesis to infer physical parameters based on observations, a deep network is trained to regress the output of an object tracker directly, and the relevant physical laws are encoded directly into the architecture of the model. The authors show that they can use latent intrinsic physical properties inferred in this way to make novel predictions. The approach of encoding physical models as architecture constraints has also been proposed by Stewart & Ermon (2016).

Many of the works discussed thus far, including Galileo and Physics 101, are restricted to passive sensing. Pinto et al. (2016), Pinto & Gupta (2016), Agrawal et al. (2016), and Levine et al. (2016) are exceptions, because they learn their models using a sequential greedy data-collection bootstrapping strategy. Active sensing appears to be an important aspect of visual object learning in toddlers, as argued by Bambach et al. (2016), providing motivation for the approach presented here.

In computer vision, it is well known that recognition performance can be improved by moving so as to acquire new views of an object or scene. Jayaraman & Grauman (2016), for example, apply deep reinforcement learning to construct an agent that chooses how to acquire new views of an object so as to classify it into a semantic category, and their related work section surveys many other efforts in active vision.

While Jayaraman & Grauman (2016) and others share deep reinforcement learning and active sensing in common with our work, their goal is to learn a policy that can be applied to images to make decisions based on vision. In contrast, the goal in this paper is to study how agents learn to experiment continually so as to learn representations that answer questions about intrinsic properties of objects. In particular, our focus is on tasks that can only be solved by interaction and not by vision alone.

8 CONCLUSION AND FUTURE DIRECTIONS

Despite recent advances in artificial intelligence, machines still lack a common-sense understanding of our physical world. There has been impressive progress in recognizing objects, segmenting object boundaries and even describing visual scenes with natural language. However, these tasks are not enough for machines to infer physical properties of objects such as mass, friction or deformability.

We introduce a deep reinforcement learning agent that actively interacts with physical objects to infer their hidden properties. Our approach is inspired by findings from the developmental psychology literature indicating that infants spend a lot of their early time experimenting with objects through random exploration (Smith & Gasser, 2005; Gopnik, 2012; Spelke & Kinzler, 2007).
By letting our agents conduct physical experiments in an interactive simulated environment, they learn to manipulate objects and observe the consequences in order to infer hidden object properties. We demonstrate the efficacy of our approach on two important physical understanding tasks, inferring mass and counting the number of objects under strong visual ambiguities. Our empirical findings suggest that our agents learn different strategies for these tasks that balance the cost of gathering information against the cost of making mistakes in different situations.

Scientists and children are able not only to probe the environment to discover things about it, but also to leverage their findings to answer new questions. In this paper we have shown that agents can be trained to gather knowledge to answer questions about hidden properties, but we have not addressed the larger issue of theory building, or transfer of this information. Given agents that can make judgments about mass and numerosity, how can they be enticed to leverage this knowledge to solve new tasks?

Another important aspect of understanding through interaction is that the shape of the interactions influences behavior. We touched on this in the Towers environment, where we looked at two different actuation styles, but there is much more to be done here. Thinking along these lines leads naturally to exploring tool use. We showed that agents can make judgments about object mass by hitting the objects, but could we train an agent to make similar judgments using a scale?

Finally, we have made no attempt in this work to optimize data efficiency, but learning physical properties from fewer samples is an important direction to pursue.

ACKNOWLEDGMENTS

We would like to thank Matt Hoffman for several enlightening discussions about bandits. We would also like to thank the ICLR reviewers, whose helpful feedback allowed us to greatly improve the paper.

REFERENCES

Pulkit Agrawal, João Carreira, and Jitendra Malik. Learning to see by moving. In IEEE International Conference on Computer Vision, pp. 37-45, 2015.

Pulkit Agrawal, Ashvin Nair, Pieter Abbeel, and Jitendra Malik. Learning to poke by poking: Experiential learning of intuitive physics. In Neural Information Processing Systems, 2016.

John-Alexander M Assael, Niklas Wahlström, Thomas B Schön, and Marc Peter Deisenroth. Data-efficient learning of feedback policies from image pixels using deep dynamical models. arXiv preprint arXiv:1510.02173, 2015.

Jean-Yves Audibert and Sébastien Bubeck. Best arm identification in multi-armed bandits. In Conference on Learning Theory, 2010.

Sven Bambach, David J Crandall, Linda B Smith, and Chen Yu. Active viewing in toddlers facilitates visual object learning: An egocentric vision approach. CogSci, 2016.

Peter W Battaglia, Jessica B Hamrick, and Joshua B Tenenbaum. Simulation as an engine of physical scene understanding. Proceedings of the National Academy of Sciences, 110(45):18327-18332, 2013.

Vincent Delaitre, David F Fouhey, Ivan Laptev, Josef Sivic, Abhinav Gupta, and Alexei A Efros. Scene semantics from long-term observation of people. In European Conference on Computer Vision, pp. 284-298. Springer, 2012.

Carl Doersch, Abhinav Gupta, and Alexei A Efros. Unsupervised visual representation learning by context prediction. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1422-1430, 2015.

David F Fouhey, Vincent Delaitre, Abhinav Gupta, Alexei A Efros, Ivan Laptev, and Josef Sivic. People watching: Human actions as a cue for single view geometry. International Journal of Computer Vision, 110(3):259-274, 2014.

Katerina Fragkiadaki, Pulkit Agrawal, Sergey Levine, and Jitendra Malik. Learning visual predictive models of physics for playing billiards. ICLR, 2016.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Alison Gopnik. Scientific thinking in young children: Theoretical advances, empirical research, and policy implications. Science, 337(6102):1623-1627, 2012.

Jessica Hamrick, Peter Battaglia, and Joshua B Tenenbaum. Internal physics models guide probabilistic judgments about object dynamics. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society, pp. 1545-1550. Cognitive Science Society, Austin, TX, 2011.

Jessica B Hamrick, Peter W Battaglia, Thomas L Griffiths, and Joshua B Tenenbaum. Inferring mass in complex scenes by mental simulation. Cognition, 157:61-76, 2016.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Conference on Computer Vision and Pattern Recognition, 2016.

Mary Hegarty. Mechanical reasoning by mental simulation. Trends in Cognitive Sciences, 8(6):280-285, 2004.

Geoffrey E. Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, and Brian Kingsbury. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82-97, 2012.

Dinesh Jayaraman and Kristen Grauman. Learning image representations tied to ego-motion. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1413-1421, 2015.

Dinesh Jayaraman and Kristen Grauman. Look-ahead before you leap: End-to-end active recognition by forecasting the effect of motion. In European Conference on Computer Vision, pp. 489-505, 2016.

Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. In International Conference on Learning Representations, 2014.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.

Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015.

Adam Lerer, Sam Gross, and Rob Fergus. Learning physical intuition of block towers by example. In International Conference on Machine Learning, pp. 430-438, 2016.

Sergey Levine, Peter Pastor, Alex Krizhevsky, and Deirdre Quillen. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. arXiv preprint arXiv:1603.02199, 2016.

Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.

Roozbeh Mottaghi, Mohammad Rastegari, Abhinav Gupta, and Ali Farhadi. "What happens if..." Learning to predict the effect of forces in images. In European Conference on Computer Vision, pp. 269-285, 2016.

Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard Lewis, and Satinder Singh. Action-conditional video prediction using deep networks in Atari games. In Neural Information Processing Systems, pp. 2863-2871, 2015.

Andrew Owens, Phillip Isola, Josh McDermott, Antonio Torralba, Edward H Adelson, and William T Freeman. Visually indicated sounds. arXiv preprint arXiv:1512.08512, 2015.

Andrew Owens, Jiajun Wu, Josh H McDermott, William T Freeman, and Antonio Torralba. Ambient sound provides supervision for visual learning. In European Conference on Computer Vision, pp. 801-816. Springer, 2016.

Lerrel Pinto and Abhinav Gupta. Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. In IEEE International Conference on Robotics and Automation, pp. 3406-3413, 2016.

Lerrel Pinto, Dhiraj Gandhi, Yuanfeng Han, Yong-Lae Park, and Abhinav Gupta. The curious robot: Learning visual representations via physical interactions. In European Conference on Computer Vision, pp. 3-18, 2016.

Marc'Aurelio Ranzato, Arthur Szlam, Joan Bruna, Michael Mathieu, Ronan Collobert, and Sumit Chopra. Video (language) modeling: A baseline for generative models of natural videos. arXiv preprint arXiv:1412.6604, 2014.

Linda Smith and Michael Gasser. The development of embodied cognition: Six lessons from babies. Artificial Life, 11(1-2):13-29, 2005.

Elizabeth S Spelke and Katherine D Kinzler. Core knowledge. Developmental Science, 10(1):89-96, 2007.

Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised learning of video representations using LSTMs. CoRR, abs/1502.04681, 2015.

Russell Stewart and Stefano Ermon. Label-free supervision of neural networks with physics and domain knowledge. arXiv preprint arXiv:1609.05566, 2016.

Richard S Sutton, Joseph Modayil, Michael Delp, Thomas Degris, Patrick M Pilarski, Adam White, and Doina Precup. Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In The 10th International Conference on Autonomous Agents and Multiagent Systems, pp. 761-768. International Foundation for Autonomous Agents and Multiagent Systems, 2011.

Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.

Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems, pp. 2746-2754, 2015.

Jiajun Wu, Ilker Yildirim, Joseph J Lim, Bill Freeman, and Josh Tenenbaum. Galileo: Perceiving physical object properties by integrating a physics engine with deep learning. In Neural Information Processing Systems, 2015.

Jiajun Wu, Joseph J. Lim, Hongyi Zhang, Joshua B. Tenenbaum, and William T. Freeman. Physics 101: Learning physical object properties from unlabeled videos. In British Machine Vision Conference, 2016.

Tianfan Xue, Jiajun Wu, Katherine L. Bouman, and William T. Freeman. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. arXiv preprint arXiv:1607.02586, 2016.

Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. arXiv preprint arXiv:1603.08511, 2016.
S1j4RqYxg
Under review as a conference paper at ICLR 2017

EFFICIENT CALCULATION OF POLYNOMIAL FEATURES ON SPARSE MATRICES

Andrew Nystrom (awnystrom@gmail.com) and John Hughes (jfh@cs.brown.edu)
Andrew Nystrom is now at Google. The authors contributed equally important and fundamental aspects of this work.

ABSTRACT

We provide an algorithm for polynomial feature expansion that both operates on and produces a compressed sparse row matrix without any densification. For a vector of dimension $D$, density $d$, and degree $k$, the algorithm has time complexity $O(d^k D^k)$, where $k$ is the polynomial-feature order; this is an improvement by a factor of $d^k$ over the standard method.

1 INTRODUCTION

Polynomial feature expansion has long been used in statistics to approximate nonlinear functions (Gergonne, 1974; Smith, 1918). The compressed sparse row (CSR) matrix format is a widely used data structure to hold design matrices for statistics and machine learning applications. However, polynomial expansions are typically not performed directly on sparse CSR matrices, nor on any sparse matrix format for that matter, without intermediate densification steps. This densification not only adds extra overhead, but wastefully computes combinations of features that have a product of zero, which are then discarded during conversion into a sparse format.

We provide an algorithm that allows CSR matrices to be the input of a polynomial feature expansion without any densification. The algorithm leverages the CSR format to compute only products of features that result in nonzero values. This exploits the sparsity of the data to achieve an improved time complexity of $O(d^k D^k)$ on each vector of the matrix, where $k$ is the degree of the expansion, $D$ is the dimensionality, and $d$ is the density. The standard algorithm has time complexity $O(D^k)$. Since $0 \le d \le 1$, our algorithm is a significant improvement. While the algorithm we describe uses CSR matrices, it could be modified to operate on other sparse formats.

2 PRELIMINARIES

Matrices are denoted by uppercase bold letters thus: $A$. The $i$-th row of $A$ is written $a_i$. All vectors are written in bold, and $a$, with no subscript, is a vector.

A compressed sparse row (CSR) representation of an $r$-row matrix $A$ consists of three vectors, $c$, $d$, and $p$, and a single number: the number of columns of $A$. The vectors $c$ and $d$ contain the same number of elements and hold the column indices and data values, respectively, of all nonzero elements of $A$. The vector $p$ has $r + 1$ entries. The values in $p$ index both $c$ and $d$: the $i$-th entry $p_i$ of $p$ tells where the data describing the nonzero columns of $a_i$ are within the other two vectors. Specifically, $c_{p_i : p_{i+1}}$ contains the column indices of those entries, and $d_{p_i : p_{i+1}}$ contains the entries themselves. Since only nonzero elements of each row are held, the overall number of columns of $A$ must also be stored, since it cannot be derived from the other data.
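As a concrete illustration (our own, not from the paper), here is a small matrix and its CSR vectors in the notation above:

# A = [[0, 2, 0, 4],
#      [0, 0, 0, 0],
#      [5, 0, 6, 0]]    (r = 3 rows, 4 columns)
c = [1, 3, 0, 2]        # column indices of the nonzero entries
d = [2, 4, 5, 6]        # the nonzero values themselves
p = [0, 2, 2, 4]        # r + 1 row pointers into c and d
num_cols = 4            # stored separately; not derivable from c, d, p

# Row i's nonzeros sit at columns c[p[i]:p[i+1]] with values d[p[i]:p[i+1]].
# E.g. row 0 has entries at columns [1, 3] with values [2, 4], and the empty
# row 1 contributes nothing (p[1] == p[2]).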
Scalars, vectors, and matrices are often referenced with the superscript $k$. This is not to be interpreted as an exponent, but as an indication that the object is the analogous aspect of the one that precedes it, in its polynomial expansion form. For example, $c^2$ is the vector that holds the column indices of nonzero values in the CSR representation of $A$'s quadratic feature expansion.

For simplicity in the presentation, we work with polynomial expansions of degree 2, but continue to use the exponent $k$ to show how the ideas apply in the general case. We do provide an algorithm for third-degree expansions, and we derive the big-O time complexity of the general case. We have also developed an algorithm for second- and third-degree interaction features (combinations without repetition), which can be found in the implementation.

3 MOTIVATION

In this section, we present a strawman algorithm for computing polynomial feature expansions on dense matrices. We then modify the algorithm slightly to operate on a CSR matrix, in order to expose its infeasibility in that context. We then show how the algorithm would be feasible with an added component, which we derive in the following section.

3.1 DENSE EXPANSION ALGORITHM

A natural way to calculate polynomial features for a matrix $A$ is to walk down its rows and, for each row, take products of all $k$-combinations of elements. To determine into which column of $A^k_i$ the products of elements in $A_i$ belong, a simple counter can be set to zero for each row of $A$ and incremented after each polynomial feature is generated. This counter gives the column of $A^k_i$ into which each expansion feature belongs.

SECOND ORDER (k = 2) DENSE POLYNOMIAL EXPANSION ALGORITHM(A)
 1  N = row count of A
 2  D = column count of A
 3  A^k = empty N x D^2 matrix
 4  for i = 0 to N - 1
 5      c_p = 0
 6      for j1 = 0 to D - 1
 7          for j2 = j1 to D - 1
 8              A^k[i][c_p] = A[i][j1] * A[i][j2]
 9              c_p = c_p + 1

3.2 IMPERFECT CSR EXPANSION ALGORITHM

Now consider how this algorithm might be modified to accept a CSR matrix. Instead of walking directly down rows of $A$, we will walk down sections of $c$ and $d$ partitioned by $p$, and instead of inserting polynomial features into $A^k$, we will insert column numbers into $c^k$ and data elements into $d^k$.

INCOMPLETE SECOND ORDER (k = 2) CSR POLYNOMIAL EXPANSION ALGORITHM(A)
 1  N = row count of A
 2  p^k = vector of size N + 1
 3  p^k[0] = 0
 4  nnz^k = 0
 5  for i = 0 to N - 1
 6      i_start = p[i]
 7      i_stop = p[i+1]
 8      c_i = c[i_start : i_stop]
 9      nnz^k_i = |c_i| (|c_i| + 1) / 2    # number of products with repetition
10      nnz^k = nnz^k + nnz^k_i
11      p^k[i+1] = p^k[i] + nnz^k_i
    # Build up the elements of c^k and d^k
12  c^k = vector of size nnz^k
13  d^k = vector of size nnz^k
14  n = 0
15  for i = 0 to N - 1
16      i_start = p[i]
17      i_stop = p[i+1]
18      c_i = c[i_start : i_stop]
19      d_i = d[i_start : i_stop]
20      for c1 = 0 to |c_i| - 1
21          for c2 = c1 to |c_i| - 1
22              d^k[n] = d_i[c1] * d_i[c2]
23              c^k[n] = ?
24              n = n + 1

The crux of the problem is at line 23. Given the arbitrary columns involved in a polynomial feature of $A_i$, we need to determine the corresponding column of $A^k_i$. We cannot simply reset a counter for each row as we did in the dense algorithm, because only columns corresponding to nonzero values are stored. Any time a column that would have held a zero value is implicitly skipped, the counter would err.

To develop a general algorithm, we require a mapping from columns of $A$ to a column of $A^k$. If there are $D$ columns of $A$ and $D^k$ columns of $A^k$, this can be accomplished by a bijective mapping of the following form:

$(j_0, j_1, \ldots, j_{k-1}) \mapsto p_{j_0 j_1 \cdots j_{k-1}} \in \{0, 1, \ldots, D^k - 1\}$    (1)

such that $0 \le j_0 \le j_1 \le \cdots \le j_{k-1} < D$, where $(j_0, j_1, \ldots, j_{k-1})$ are elements of $c$ and $p_{j_0 j_1 \cdots j_{k-1}}$ is an element of $c^k$.
4 CONSTRUCTION OF MAPPING

Within this section, $i$, $j$, and $k$ denote column indices. For the second-degree case, we seek a map from matrix indices $(i, j)$ (with $0 \le i < j < D$) to numbers $f(i, j)$ with $0 \le f(i, j) < D(D-1)/2$, one that follows the pattern indicated by

    [ x  0  1  3 ]
    [ x  x  2  4 ]
    [ x  x  x  5 ]    (2)
    [ x  x  x  x ]

where the entry in row $i$, column $j$, displays the value $f(i, j)$. We let $T_2(n) = \frac{1}{2} n(n+1)$ be the $n$-th triangular number; then in Equation 2, column $j$ (for $j > 0$) contains entries $e$ with $T_2(j-1) \le e < T_2(j)$; the entry in the $i$-th row is just $i + T_2(j-1)$. Thus we have $f(i, j) = i + T_2(j-1) = \frac{1}{2}(2i + j^2 - j)$. For instance, in column $j = 2$ in our example (the third column), the entry in row $i = 1$ is $i + T_2(j-1) = 1 + 1 = 2$. With one-based indexing in both the domain and codomain, the formula above becomes $f_1(i, j) = \frac{1}{2}(2i + j^2 - 3j + 2)$.

For polynomial features, we seek a similar map $g$, one that also handles the case $i = j$. In this case, a similar analysis yields $g(i, j) = i + T_2(j) = \frac{1}{2}(2i + j^2 + j)$.

To handle three-way interactions, we need to map triples of indices in a 3-index array to a flat list, and similarly for higher-order interactions. For this, we need the tetrahedral numbers $T_3(n) = \sum_{i=1}^{n} T_2(i) = \frac{1}{6}(n^3 + 3n^2 + 2n)$.

For three indices $i, j, k$, with $0 \le i < j < k < D$, we have a similar recurrence. Calling the mapping $h$, we have

$h(i, j, k) = i + T_2(j-1) + T_3(k-2)$;    (3)

if we define $T_1(i) = i$, then this has the very regular form

$h(i, j, k) = T_1(i) + T_2(j-1) + T_3(k-2)$,    (4)

and from this the generalization to higher dimensions is straightforward. The formulas for "higher triangular numbers", i.e., those defined by

$T_k(n) = \sum_{i=1}^{n} T_{k-1}(i)$    (5)

for $k > 1$, can be determined inductively. The explicit formula for 3-way interactions, with zero-based indexing, is

$h(i, j, k) = 1 + (i - 1) + \frac{(j-1)j}{2} + \frac{(k-2)^3 + 3(k-2)^2 + 2(k-2)}{6}$.    (6)

5 FINAL CSR EXPANSION ALGORITHM

With the mapping from columns of $A$ to a column of $A^k$, we can now write the final form of the innermost loop of the algorithm from Section 3.2. Let the mapping for $k = 2$ be denoted $h_2$. Then the innermost loop becomes:

for c2 = c1 to |c_i| - 1
    j0 = c_i[c1]
    j1 = c_i[c2]
    c_p = h_2(j0, j1)
    d^k[n] = d_i[c1] * d_i[c2]
    c^k[n] = c_p
    n = n + 1

The algorithm can be generalized to higher degrees by simply adding more nested loops, using higher-order mappings, modifying the output dimensionality, and adjusting the counting of nonzero polynomial features in line 9.
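The triangular-number mappings are easy to implement and sanity-check. Below is an illustrative Python version (our own code) of $T_2$, $T_3$, $f$, $g$, and $h$ from this section, together with a brute-force check that $f$ enumerates the strictly upper-triangular pattern of Equation 2:

def T2(n):
    """n-th triangular number."""
    return n * (n + 1) // 2

def T3(n):
    """n-th tetrahedral number."""
    return n * (n + 1) * (n + 2) // 6

def f(i, j):
    """Interaction features (i < j), zero-based: the pattern of Eq. 2."""
    return i + T2(j - 1)

def g(i, j):
    """Polynomial features (i <= j), zero-based."""
    return i + T2(j)

def h(i, j, k):
    """Three-way interaction features (i < j < k), zero-based."""
    return i + T2(j - 1) + T3(k - 2)

# Brute-force check that f is a bijection onto {0, ..., D(D-1)/2 - 1}.
D = 6
values = [f(i, j) for j in range(D) for i in range(j)]
assert sorted(values) == list(range(D * (D - 1) // 2))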
6 TIME COMPLEXITY

6.1 ANALYTICAL

Calculating $k$-degree polynomial features via our method for a vector of dimensionality $D$ and density $d$ requires $\binom{dD}{k}$ products (taken with repetition). The complexity of the algorithm, for fixed $k \ll dD$, is therefore

$O\left(\binom{dD + k - 1}{k}\right) = O\left(\frac{(dD + k - 1)!}{k! \, (dD - 1)!}\right)$    (8)
$= O\left(\frac{(dD + k - 1)(dD + k - 2) \cdots (dD)}{k!}\right)$    (9)
$= O\big((dD + k - 1)(dD + k - 2) \cdots (dD)\big)$ for $k \ll dD$    (10)
$= O(d^k D^k)$.    (11)

6.2 EMPIRICAL

To demonstrate how our algorithm scales with the density of a matrix, we compare it to the traditional polynomial expansion algorithm in the popular machine learning library scikit-learn (Pedregosa et al., 2011) on the task of generating second-degree polynomial expansions. Matrices of size 100 x 5000 were randomly generated with densities of 0.2, 0.4, 0.6, 0.8, and 1.0. Thirty matrices of each density were randomly generated, and the mean times (gray) of each algorithm were plotted. The red or blue band around the mean marks the third standard deviation from the mean. The time to densify the input to the standard algorithm was not counted.

The standard algorithm's runtime stays constant no matter the density of the matrix. This is because it does not avoid products that result in zero, but simply multiplies all second-order combinations of features. Our algorithm scales quadratically with respect to the density. If the task were third-degree expansions rather than second, the plot would show cubic scaling.

The fact that our algorithm is approximately 6.5 times faster than the scikit-learn algorithm on 100 x 5000 matrices that are entirely dense is likely a language implementation difference. What matters is that the time of our algorithm increases quadratically with respect to the density, in accordance with the big-O analysis.

Figure 1: Our algorithm (bottom) scales with the density of a matrix, unlike the traditional polynomial feature expansion method (top). The task was a second-degree expansion, which is why the time of our algorithm scales quadratically with the density.

7 CONCLUSION

We have developed an algorithm for performing polynomial feature expansions on CSR matrices that scales polynomially with respect to the density of the matrix. The areas within machine learning that this work touches are not en vogue, but they are workhorses of industry, and every improvement in core representations has an impact across a broad range of applications.

REFERENCES

J.D. Gergonne. The application of the method of least squares to the interpolation of sequences. Historia Mathematica, 1(4):439-447, 1974.

F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.

Kirstine Smith. On the standard deviations of adjusted and interpolated values of an observed polynomial function and its constants and the guidance they give towards a proper choice of the distribution of observations. Biometrika, 12(1/2):1-85, 1918.
HJ1JBJ5gl
Under review as a conference paper at ICLR 2017

REPRESENTING INFERENTIAL UNCERTAINTY IN DEEP NEURAL NETWORKS THROUGH SAMPLING

Patrick McClure & Nikolaus Kriegeskorte
MRC Cognition and Brain Sciences Unit
University of Cambridge
Cambridge, UK
{patrick.mcclure, nikolaus.kriegeskorte}@mrc-cbu.cam.ac.uk

ABSTRACT

As deep neural networks (DNNs) are applied to increasingly challenging problems, they will need to be able to represent their own uncertainty. Modelling uncertainty is one of the key features of Bayesian methods. Bayesian DNNs that use dropout-based variational distributions and scale to complex tasks have recently been proposed. We evaluate Bayesian DNNs trained with Bernoulli or Gaussian multiplicative masking of either the units (dropout) or the weights (dropconnect). We compare these Bayesian DNNs' ability to represent their uncertainty about their outputs through sampling during inference. We tested the calibration of these Bayesian fully connected and convolutional DNNs on two visual inference tasks (MNIST and CIFAR-10). By adding different levels of Gaussian noise to the test images in z-score space, we assessed how these DNNs represented their uncertainty about regions of input space not covered by the training set. These Bayesian DNNs represented their own uncertainty more accurately than traditional DNNs with a softmax output. We find that sampling of weights, whether Gaussian or Bernoulli, led to a more accurate representation of uncertainty compared to sampling of units. However, sampling units using either Gaussian or Bernoulli dropout led to increased convolutional neural network (CNN) classification accuracy. Based on these findings we use both Bernoulli dropout and Gaussian dropconnect concurrently, which approximates the use of a spike-and-slab variational distribution. We find that networks with spike-and-slab sampling combine the advantages of the other methods: they classify with high accuracy and robustly represent the uncertainty of their classifications for all tested architectures.

1 INTRODUCTION

Deep neural networks (DNNs), particularly convolutional neural networks (CNNs), have recently been used to solve complex perceptual and decision tasks (Krizhevsky et al., 2012; Mnih et al., 2015; Silver et al., 2016). However, these networks fail to model the uncertainty of their predictions or actions. Although many networks deterministically map an input to a probabilistic prediction, they do not model the uncertainty of that mapping. In contrast, Bayesian neural networks (NNs) attempt to learn a distribution over their parameters, thereby offering uncertainty estimates for their outputs (MacKay, 1992; Neal, 2012). However, these methods do not scale well due to the difficulty of computing the posterior of a network's parameters.

One type of method for sampling from the posteriors of these networks is Hamiltonian Monte Carlo (HMC) (Neal, 2012). These techniques use the gradient information calculated using backpropagation to perform Markov chain Monte Carlo (MCMC) sampling by randomly walking through parameter space; stochastic gradient Langevin dynamics (SGLD) has also been proposed for this purpose.

Approximate methods, in particular variational inference, have been used to make Bayesian NNs more tractable (Hinton & Van Camp, 1993; Barber & Bishop, 1998; Graves, 2011; Blundell et al., 2015). Due in large part to the fact that these methods substantially increase the number of parameters in a network, they have not been applied to large DNNs, such as CNNs.
Gal & Ghahramani1Under review as a conference paper at ICLR 2017(2016) and Kingma et al. (2015) bypassed this issue by developing Bayesian CNNs using dropout(Srivastava et al., 2014). Dropout is a widely used regularization technique where units are droppedout of a network with a probability pduring training and the output of all unit are multiplied by pduring inference. A similar technique is dropconnect (Wan et al., 2013), which drops network con-nections instead of units. Gal & Ghahramani (2015) detailed how dropping units was equivalent tosampling weights from a Bernoulli-based variational distribution and that in order to make a DNNwith dropout Bayesian, sampling should be used during both training and inference. Monte-Carlo(MC) sampling at inference allows a DNN to efficiently model a distribution over its outputs. Onelimitation of the Bayesian dropout method is that it does not model the uncertatiniy of each networkparameter. The uncertainty of a DNN can then be calculated using this probability distribution.In addition to Bernoulli and Gaussian distributions, there has also been work done using spike-an-slab distributions (Louizos, 2015), a combination of the two, which has been shown to increase thequality of linear regression (Ishwaran & Rao, 2005). Interestingly, dropout and dropconnect can beseen as approximations to spike-and-slab distributions for units and weights, respectively (Louizos,2015; Gal, 2016; Li et al., 2016). Dropout- and dropconnect-based variational DNNs are dependenton the dropout probability, which is often used as a hyperparameter. However, work has been doneon automatically learning the dropout probability during training for dropconnect (Louizos, 2015)using spike-and-slab distributions and Gaussian dropout (Kingma et al., 2015).In this paper, we investigate how using MC sampling to model uncertainty affects a network’s prob-abilistic predictions. Specifically, we test if using MC sampling improves the calibration of theprobabilistic predictions made by Bayesian DNNs with softmax output layers. We used variationaldistributions based on dropout and dropconnect with either Bernoulli or Gaussian sampling duringboth training and inference. Additonally, we propose a formulation of a spike-and-slab variationaldistribution based on Bernoulli dropout and Gaussian dropconnect. We find that the spike-and-slabnetworks robustly represented their uncertainty like Bayesian dropconnect networks and have theincreased CNN classification accuracy of Bayesian dropout networks. Each of these variationaldistributions scale extremely well and make the results of this work applicable to a large range ofstate-of-the-art DNNs.2 M ETHODS2.1 B AYESIAN NEURAL NETWORKSArtificial neural networks (NNs) can be trained using Bayesian learning by finding the maximuma posteriori (MAP) weights given the training data ( Dtrain ) and a prior over the weight matrix W(p(W)):maxWp(WjDtrain) =maxWp(DtrainjW)p(W) (1)This is usually done by minimizing the mean squared error (MSE) or cross entropy error for ei-ther regression or classification, respectively, while using L2 regularization, which corresponds toa Gaussian prior over the weights. 
At inference, the probability of the test data ($\mathcal{D}_{test}$) is then calculated using only the maximum likelihood estimate (MLE) of the weights ($W$):

$$p(\mathcal{D}_{test} \mid W) \quad (2)$$

However, ideally the full posterior distribution over the weights would be learned instead of just the MLE:

$$p(W \mid \mathcal{D}_{train}) = \frac{p(\mathcal{D}_{train} \mid W)\,p(W)}{p(\mathcal{D}_{train})} \quad (3)$$

This can be intractable due both to the difficulty of calculating $p(\mathcal{D}_{train})$ and to the difficulty of calculating the joint distribution of a large number of parameters. Instead, $p(W \mid \mathcal{D}_{train})$ can be approximated using a variational distribution $q(W)$. This distribution is constructed to allow for easy generation of samples. Using variational inference, $q(W)$ is learned by minimizing:

$$-\int \log p(\mathcal{D}_{train} \mid W)\,q(W)\,dW + \mathrm{KL}(q(W)\,\|\,p(W)) \quad (4)$$

Monte-Carlo (MC) sampling can then be used to estimate the probability of test data using $q(W)$:

$$p(\mathcal{D}_{test}) \approx \frac{1}{n}\sum_{i=1}^{n} p(\mathcal{D}_{test} \mid \hat{W}_i) \quad \text{where } \hat{W}_i \sim q(W) \quad (5)$$

Figure 1: A visualization of the different variational distributions on a simple neural network.

2.2 VARIATIONAL DISTRIBUTIONS

The number and continuous nature of the parameters in DNNs make sampling from the entire distribution of possible weight matrices computationally challenging. However, variational distributions can make sampling easier. In deep learning, the most common sampling method is dropout with Bernoulli variables. However, dropconnect, which independently samples a Bernoulli variable for each weight, and Gaussian weights have also been used. A visualization of the different methods is shown in Figure 1. All of these methods can be formulated as variational distributions where weights are sampled by element-wise multiplying the variational parameters $V$, an $n \times n$ connection matrix with an element for each connection between the $n$ units in the network, by a mask $\hat{M}$, which is sampled from some probability distribution. Mathematically, this can be written as:

$$\hat{W} = V \circ \hat{M} \quad \text{where } \hat{M} \sim p(M) \quad (6)$$

From this perspective, the difference between dropout and dropconnect, as well as between the Bernoulli and Gaussian methods, is simply the probability distribution used to generate the mask sample, $\hat{M}$ (Figure 2).

Figure 2: An illustration of sampling network weights using the different variational distributions.

2.2.1 BERNOULLI DROPCONNECT & DROPOUT

Bernoulli distributions are simple distributions which return 1 with probability $p$ and 0 with probability $(1-p)$. In Bernoulli dropconnect, each element of the mask is sampled independently, so $\hat{m}_{i,j} \sim Bernoulli(p)$. This sets $\hat{w}_{i,j}$ to $v_{i,j}$ with probability $p$ and to 0 with probability $(1-p)$. In dropout, however, the weights are not sampled independently. Instead, one Bernoulli variable is sampled for each row of the weight matrix, so $\hat{m}_{i,\cdot} \sim Bernoulli(p)$.

2.2.2 GAUSSIAN DROPCONNECT & DROPOUT

In Gaussian dropconnect and dropout, the elements of the mask are sampled from normal distributions. This corresponds to sampling $\hat{w}_{i,j}$ from a Gaussian distribution centred at the variational parameter $v_{i,j}$. Srivastava et al. (2014) proposed using a Gaussian distribution with a mean of 1 and a variance of $\sigma_{dc}^2 = (1-p)/p$, which matches the mean and variance of dropout when training-time scaling is used. In Gaussian dropconnect, each element of the mask is sampled independently, which results in $\hat{m}_{i,j} \sim N(1, \sigma_{dc}^2)$. In Gaussian dropout, each element in a row shares the same random variable, so $\hat{m}_{i,\cdot} \sim N(1, \sigma_{dc}^2)$.
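As a concrete illustration of Eq. 6, the following NumPy sketch samples the four mask types described above (function and variable names are ours; following the text, row $i$ of the matrix collects the weights of unit $i$, so dropout ties one variable to each row):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mask(shape, method, p=0.5):
    """Sample a multiplicative mask M-hat for Eq. 6 (W-hat = V o M-hat)."""
    n_units, n_in = shape
    sigma_dc = np.sqrt((1.0 - p) / p)       # std of N(1, (1 - p)/p)
    if method == "bernoulli_dropconnect":   # independent Bernoulli per weight
        return rng.binomial(1, p, size=shape).astype(float)
    if method == "bernoulli_dropout":       # one Bernoulli per row (unit)
        return np.repeat(rng.binomial(1, p, size=(n_units, 1)), n_in, axis=1).astype(float)
    if method == "gaussian_dropconnect":    # independent Gaussian per weight
        return rng.normal(1.0, sigma_dc, size=shape)
    if method == "gaussian_dropout":        # one Gaussian per row (unit)
        return np.repeat(rng.normal(1.0, sigma_dc, size=(n_units, 1)), n_in, axis=1)
    raise ValueError(method)

V = rng.normal(size=(4, 3))                 # variational parameters (mean weights)
W_hat = V * sample_mask(V.shape, "gaussian_dropconnect")
```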
2.2.3 SPIKE-AND-SLAB DROPOUT

A spike-and-slab distribution is the normalized linear combination of a "spike" of probability mass at zero and a "slab" consisting of a Gaussian distribution. This distribution returns a 0 with probability $p_{spike}$ or a random sample from a Gaussian distribution $N(\mu_{slab}, \sigma_{slab}^2)$. We propose concurrently using Bernoulli dropout and Gaussian dropconnect to approximate the use of a spike-and-slab variational distribution by optimizing a lower bound of the objective function (see Appendix A). In this formulation, $\hat{m}_{i,j} \sim b_{i,\cdot}\,N(1, \sigma_{dc}^2)$, where $b_{i,\cdot} \sim Bern(p_{do})$ for each mask row and $\sigma_{dc}^2 = (1 - p_{dc})/p_{dc}$. As in Bernoulli dropout, each row of the mask $M$, $m_{i,\cdot}$, is multiplied by 0 with probability $(1 - p_{do})$; otherwise, each element in that row is multiplied by a value independently sampled from a Gaussian distribution, as in Gaussian dropconnect. During non-sampling inference, spike-and-slab dropout uses the mean weight values and, per Bernoulli dropout, multiplies unit outputs by $p_{do}$. This differs from the work done by Louizos (2015) and Gal (2016) in that they used additive Gaussian noise and learned separate means and variances for each weight. In contrast, we define the variance as a function of the learned weight mean $v_{i,j}$. Tying the variance of a weight to its magnitude makes it only beneficial to learn large weights if they are robust to variance (Wang & Manning, 2013). Although we treat $p_{do}$ and $p_{dc}$ as hyperparameters, thereby reducing the space of variational distributions we optimize over, similar methods could potentially learn these during training (Louizos, 2015; Kingma et al., 2015; Gal, 2016).
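A minimal sketch of the proposed spike-and-slab mask, combining a row-wise Bernoulli spike with an element-wise Gaussian slab (naming is ours; $p_{do}$ and $p_{dc}$ are the dropout and dropconnect keep probabilities):

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_and_slab_mask(shape, p_do=0.75, p_dc=0.5):
    """m_ij = b_i * N(1, sigma_dc^2) with b_i ~ Bern(p_do), per Section 2.2.3."""
    n_units, n_in = shape
    sigma_dc = np.sqrt((1.0 - p_dc) / p_dc)
    b = rng.binomial(1, p_do, size=(n_units, 1)).astype(float)  # spike: drop whole rows
    slab = rng.normal(1.0, sigma_dc, size=shape)                # slab: per-weight Gaussian
    return b * slab

# Non-sampling inference instead uses the mean weights V and scales
# unit outputs by p_do, as described above.
```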
Figure 3: Examples of MNIST images with added Gaussian noise with varying standard deviations.

3 RESULTS

In this paper, we investigate how using MC sampling affects a DNN's ability to represent the uncertainty of its probabilistic predictions. To test this, we trained several networks differing only in whether no sampling was performed (baseline DNN and DNN with L2 regularization), sampling was only performed during training (dropout and dropconnect), or sampling was performed both during training and inference (MC dropout and MC dropconnect). We used the MNIST and CIFAR-10 datasets to train networks that sampled from the different variational distributions using the above methods.

For these DNNs, we compared the test classification error, the uncertainty of the softmax output, and the calibration of the softmax output for each type of sampling and variational distribution. The test classification error shows how well the probability distribution learned by each DNN models the data. The uncertainty shows how the probability distribution learned by each DNN is distributed across classes. A low entropy means that the probability mass is primarily located at a few labels, and a high entropy means that the probability mass is distributed across many labels. The calibration shows how well the probability distribution learned by the DNN models its own uncertainty. We evaluated how calibrated a prediction was by the following procedure: (1) We binned test-set predictions by predicted probability. (2) We calculated the percentage of predictions in each predicted-probability bin that correctly predicted a target label. Perfect calibration means that targets predicted with probability z are correct in z times 100% of the cases. We therefore (3) calculated the mean squared calibration error (i.e. the mean across bins of the squared deviations between the bin-mean predicted probability and the proportion of correct predictions in that bin). We evaluated these three measures for the trained networks on the MNIST and CIFAR-10 test sets with noise sampled from Gaussian distributions with varying standard deviations (Figure 3). This tested how well each network's uncertainty was modelled for the test sets and for the regions of input space not seen in the training set.
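The three-step procedure above can be written compactly as follows (a sketch under one plausible reading — binning the top-class probability into equal-width bins; the paper does not state the bin count, so `n_bins` is our assumption):

```python
import numpy as np

def calibration_mse(probs, labels, n_bins=10):
    """Mean squared calibration error: (1) bin predictions by predicted
    probability, (2) compute the fraction correct per bin, (3) average
    the squared gap between bin-mean confidence and that fraction."""
    probs = np.asarray(probs)                          # (N, n_classes) softmax outputs
    conf = probs.max(axis=1)                           # predicted probability
    correct = probs.argmax(axis=1) == np.asarray(labels)
    bin_idx = np.minimum((conf * n_bins).astype(int), n_bins - 1)
    sq_errs = [(conf[bin_idx == b].mean() - correct[bin_idx == b].mean()) ** 2
               for b in range(n_bins) if np.any(bin_idx == b)]
    return float(np.mean(sq_errs))
```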
For dropout and dropconnect, p was set to 0.5, which corresponds to the best value for regularizing a linear layer (Baldi & Sadowski, 2013). In practice, however, different values for p have been used (Srivastava et al., 2014). We found that 0.5 was a robust choice for the different networks, measures, and sampling methods we used. The one exception was the dropconnect parameter used for the spike-and-slab distributions, where 0.5 made learning difficult due to the variance during training. Through validation, we found that using larger spike-and-slab probabilities (0.75 for the fully connected networks and 0.9 for the convolutional networks) allowed the networks to fit the training data better while still maintaining good generalization.

3.1 MNIST

We trained two groups of DNNs, one with a fully connected (FC) architecture and one with a convolutional architecture, on digit classification using the MNIST dataset (LeCun et al., 1998). This set contains 60,000 training images and 10,000 testing images. No data augmentation was used.

3.1.1 FULLY CONNECTED NEURAL NETWORKS

First, we trained DNNs with two FC hidden layers, each with 800 units and ReLU non-linearities. For the L2-regularized network, an L2 coefficient of 1e-5 was used for all weights. For the dropout methods, unit sampling was performed after each FC layer. For the dropconnect methods, every weight was sampled. The classification errors of the FC networks on the MNIST test set are shown in Table 1. Sampling during learning significantly increased accuracy in comparison to the baseline NNs, with the dropconnect-based networks being the most accurate. MC sampling at inference did not significantly increase accuracy. We found that Gaussian dropconnect and spike-and-slab dropout had the best accuracy.

Table 1: MNIST test error for the trained fully connected neural networks with and without Monte-Carlo (MC) sampling using 100 samples.

    Method                       Mean Error (%)   Std. Dev.
    NN                           1.68             -
    NN+L2                        1.64             -
    Bernoulli DropConnect        1.33             -
    MC Bernoulli DropConnect     1.30             0.04
    Gaussian DropConnect         1.24             -
    MC Gaussian DropConnect      1.27             0.03
    Bernoulli Dropout            1.45             -
    MC Bernoulli Dropout         1.42             0.03
    Gaussian Dropout             1.36             -
    MC Gaussian Dropout          1.37             0.03
    Spike-and-Slab Dropout       1.23             -
    MC Spike-and-Slab Dropout    1.23             0.03

The classification error, uncertainty, and calibration of the learned probability distributions of each FC network for varying levels of noise are shown in Figure 4. While not improving accuracy, MC sampling led to networks that better represent their own uncertainty. As the noise in the test set was increased, the uncertainty of the networks with MC sampling greatly increased, especially when compared to networks with no sampling at inference. This resulted in better-calibrated FC networks for all levels of noise.

Figure 4: The MNIST test classification error, entropy, and calibration of the predictions of the fully connected networks: NN, NN+L2, Bernoulli DropConnect (BDC) with and without Monte-Carlo (MC) sampling, Gaussian DropConnect (GDC) with and without MC sampling, Bernoulli Dropout (BDO) with and without MC sampling, Gaussian Dropout (GDO) with and without MC sampling, and spike-and-slab Dropout (SSD) with and without MC sampling.

Figure 5: The calibration curves for the MNIST test set with and without Gaussian noise of the softmax outputs of the fully connected networks: NN, NN+L2, BDC, GDC, BDO, GDO, and SSD, each with and without MC sampling.

The calibration curves show that sampling only during training, especially when using only dropout, led to overconfidence through placing too much probability mass on the most predicted label (Figure 5). In particular, sampling only during training resulted in under-confidence for low predicted probabilities and over-confidence for high predicted probabilities. By distributing probability mass over several labels, the DNNs that sampled at inference better represented the uncertainty of their predictions.
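For reference, MC sampling at inference (Eq. 5) amounts to averaging the softmax outputs of repeated stochastic forward passes; the predictive entropy of that average is the uncertainty measure reported in the figures. A minimal sketch, where `stochastic_forward` is a placeholder for any network that re-samples its masks on every call:

```python
import numpy as np

def mc_predict(stochastic_forward, x, n_samples=100):
    """Average softmax outputs over n_samples stochastic forward passes
    (Eq. 5) and return the entropy of the averaged prediction."""
    probs = np.mean([stochastic_forward(x) for _ in range(n_samples)], axis=0)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=-1)
    return probs, entropy
```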
Figure 6: The MNIST test classification error, entropy, and calibration of the predictions of the convolutional networks: CNN, CNN+L2, Bernoulli DropConnect (BDC) with and without Monte-Carlo (MC) sampling, Gaussian DropConnect (GDC) with and without MC sampling, Bernoulli Dropout (BDO) with and without MC sampling, Gaussian Dropout (GDO) with and without MC sampling, and spike-and-slab Dropout (SSD) with and without MC sampling.

Figure 7: The calibration curves for the MNIST test set with and without Gaussian noise of the softmax outputs of the convolutional networks: CNN, CNN+L2, BDC, GDC, BDO, GDO, and SSD, each with and without MC sampling.

3.1.2 CONVOLUTIONAL NEURAL NETWORKS

We also trained CNNs on MNIST. Every network had two convolutional layers and a fully connected layer (see Appendix B for details). For the L2-regularized network, an L2 coefficient of 1e-5 was used for all weights. For Bernoulli and Gaussian dropout, dropout was performed after each convolutional layer and after the FC layer. For Bernoulli and Gaussian dropconnect, every weight was sampled. The classification error of the CNNs on the MNIST test set is shown in Table 2. Sampling during training significantly increased the accuracy for all of the networks, but especially for the Gaussian dropout network. However, unlike for the FC networks, the dropout-based methods were more accurate than the dropconnect-based methods. Also unlike for the FC networks, spike-and-slab had accuracies more similar to Bernoulli dropout, which classified more accurately than Gaussian dropconnect. MC sampling during inference did not significantly increase the accuracy of the networks.

Table 2: MNIST test error for the trained convolutional neural networks (CNNs) with and without Monte-Carlo (MC) sampling using 100 samples.

    Method                       Mean Error (%)   Error Std. Dev.
    CNN                          0.70             -
    CNN+L2                       0.70             -
    Bernoulli DropConnect        0.59             -
    MC Bernoulli DropConnect     0.59             0.02
    Gaussian DropConnect         0.49             -
    MC Gaussian DropConnect      0.49             0.01
    Bernoulli Dropout            0.45             -
    MC Bernoulli Dropout         0.46             0.01
    Gaussian Dropout             0.38             -
    MC Gaussian Dropout          0.37             0.01
    Spike-and-Slab Dropout       0.43             -
    MC Spike-and-Slab Dropout    0.44             0.01

The classification error, uncertainty, and calibration of the learned probability distributions of each network for varying levels of noise are shown in Figure 6. As with the FC networks, MC sampling at inference greatly increased the CNNs' ability to estimate their own uncertainty, particularly for inputs that differ from the training set. MC sampling led to increased entropy as inputs became more noisy, which resulted in better calibration. In particular, this was true of both the Bernoulli and Gaussian dropconnect networks, which very accurately represented their uncertainty even for highly noisy inputs. The spike-and-slab CNN had similarly robust calibration.
The calibration curves show that not using MC sampling at inference led to networks that were under-confident when making low-probability predictions and over-confident when making high-probability predictions (Figure 7).

3.2 CIFAR-10

We trained large CNNs on natural image classification using the CIFAR-10 dataset, which contains 50,000 training images and 10,000 testing images (Krizhevsky & Hinton, 2009). The CNNs had 13 convolutional layers followed by a fully connected layer (see Appendix B for details). For L2 regularization, an L2 coefficient of 5e-4 was used for all weights. For the dropout networks, dropout was used after each convolutional layer, but before the non-linearities. For the dropconnect networks, all weights were sampled. During training, random horizontal flipping was used. The classification error of the CNNs on the CIFAR-10 test set is shown in Table 3. For each variational distribution, MC sampling significantly increased test accuracy. Also, the networks that used dropout, including spike-and-slab, had significantly higher accuracies than the networks that only used dropconnect.

Table 3: CIFAR-10 test error for the trained convolutional neural networks (CNNs) with and without Monte-Carlo (MC) sampling using 100 samples.

    Method                       Mean Error (%)   Error Std. Dev.
    CNN                          19.63            -
    CNN+L2                       19.44            -
    Bernoulli DropConnect        17.64            -
    MC Bernoulli DropConnect     17.29            0.05
    Gaussian DropConnect         16.00            -
    MC Gaussian DropConnect      15.63            0.04
    Bernoulli Dropout            37.47            -
    MC Bernoulli Dropout         10.19            0.06
    Gaussian Dropout             24.10            -
    MC Gaussian Dropout          9.29             0.10
    Spike-and-Slab Dropout       18.05            -
    MC Spike-and-Slab Dropout    10.44            0.03

Figure 8: The CIFAR-10 test classification error, entropy, and calibration of the predictions of the convolutional neural networks: CNN, CNN+L2, Bernoulli DropConnect (BDC) with and without Monte-Carlo (MC) sampling, Gaussian DropConnect (GDC) with and without MC sampling, Bernoulli Dropout (BDO) with and without MC sampling, Gaussian Dropout (GDO) with and without MC sampling, and spike-and-slab Dropout (SSD) with and without MC sampling.

The classification error, uncertainty, and calibration of the learned probability distributions of each network for varying levels of noise are shown in Figure 8. One of the major differences between the CIFAR-10 and the MNIST results was that using the layer-wise expectation for dropout did not produce good models, regardless of what variational distribution was used. Instead, the standard test-time dropout methods led to relatively inaccurate networks with very high output entropy even when no input noise was used.
This agrees with the results reported by Gal & Ghahramani (2015), who also found that using dropout at every layer can reduce accuracy if MC sampling is not used. However, these results differ from those of Srivastava et al. (2014). In our experience, deeper networks with higher regularization (e.g. Bernoulli dropout probabilities closer to 0.5) result in traditional dropout inference performing significantly worse than MC dropout. As for the MNIST networks, MC sampling at inference overall greatly increased the CIFAR-10-trained CNNs' ability to estimate their own uncertainty when no or little noise was added to the test images.

The classification accuracies and the ability to model uncertainty of the networks with dropconnect sampling were far more robust to noise than those of the networks with only dropout. However, the MC dropconnect networks were significantly less accurate than the MC dropout networks on the CIFAR-10 test set when no noise was added. Networks that used traditional dropout inference instead of sampling were consistently uncertain, regardless of the noise. These networks have worse calibration than the MC dropout networks at low levels of noise but better calibration than the MC dropout networks at high levels of noise, because they always had high uncertainty. For CIFAR-10, not using MC sampling resulted in networks that were generally over-confident when making predictions (Figure 9). However, this was not true for the non-sampling dropout networks when no input noise was used. In that case, the networks were highly under-confident.

Figure 9: The calibration curves for the CIFAR-10 test set with and without Gaussian noise of the softmax outputs of the convolutional neural networks: CNN, CNN+L2, BDC, GDC, BDO, GDO, and SSD, each with and without MC sampling.

4 DISCUSSION

In this paper, we investigated the ability of MC sampling to improve a DNN's representation of its own uncertainty. We did this by training Bayesian DNNs with either multiplicative masking of the weights (dropconnect) or of the units (dropout) using Bernoulli, Gaussian, or spike-and-slab sampling. Based on the results, we draw the following main conclusions:

1. Sampling during both learning and inference improved a network's ability to represent its own uncertainty.
MC sampling at inference improved the calibration of a network's predictions. Overall, this improvement was particularly large for inputs from outside the training set, which traditional models classified with high confidence despite not being trained on similar inputs.

2. Sampling weights independently led to networks that best represented their own uncertainty.
For all the network architectures and datasets tested, using dropconnect sampling at training and inference resulted in the best-calibrated networks overall. This was true regardless of whether dropconnect sampling led to the most accurate network.
This is in contrast to CNNs with Gaussian dropout sampling, which were significantly the most accurate, but also the worst calibrated, of the networks with sampling during both training and inference.

3. Sampling weights independently led to the most accurate FC networks, but sampling units led to the most accurate CNNs.
For the FC networks, using dropconnect, particularly with Gaussian sampling, resulted in the most accurate networks. However, using dropout led to the most accurate CNNs. A potential cause of this is the large correlation in the information contained by adjacent elements in an image, which are often covered by the same convolutional kernel. This could mean that sampling the weights of a kernel does not provide as much regularization as the dropout methods.

4. Sampling using both Bernoulli dropout and Gaussian dropconnect led to accurate and well-calibrated networks.
Using spike-and-slab dropout, which combines Bernoulli dropout and Gaussian dropconnect, resulted in networks that performed well for all architectures. Spike-and-slab networks had accuracies similar to Bernoulli dropout or Gaussian dropconnect, depending on which performed better for a given architecture and task: Gaussian dropconnect for FC networks and Bernoulli dropout for CNNs. Spike-and-slab networks were also robustly well calibrated, similar to the other dropconnect methods.

These scalable methods for improving a network's representation of its own uncertainty are widely applicable, since most DNNs already use dropout and obtaining uncertainty estimates only requires using MC sampling at inference. We plan to further investigate the use of different variational distributions. We also plan to evaluate the use of dropout and dropconnect sampling on large recurrent neural networks. Our results suggest that sampling at inference allows DNNs to efficiently represent their own uncertainty, an essential part of real-world perception and decision making.

ACKNOWLEDGMENTS

We would like to thank Yarin Gal and Sergii Strelchuk for their helpful discussions regarding the manuscript. This research was funded by the Cambridge Commonwealth, European & International Trust, the UK Medical Research Council (Program MC-A060-5PR20), and a European Research Council Starting Grant (ERC-2010-StG 261352).

REFERENCES

Pierre Baldi and Peter J Sadowski. Understanding dropout. In Advances in Neural Information Processing Systems, pp. 2814–2822, 2013.

David Barber and Christopher M Bishop. Ensemble learning in Bayesian neural networks. NATO ASI Series F Computer and Systems Sciences, 168:215–238, 1998.

Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural network. In Proceedings of The 32nd International Conference on Machine Learning, pp. 1613–1622, 2015.

Yarin Gal. Uncertainty in Deep Learning. PhD thesis, University of Cambridge, 2016.

Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Insights and applications. In Deep Learning Workshop, ICML, 2015.

Yarin Gal and Zoubin Ghahramani. Bayesian convolutional neural networks with Bernoulli approximate variational inference. In 4th International Conference on Learning Representations (ICLR) workshop track, 2016.

Alex Graves. Practical variational inference for neural networks. In Advances in Neural Information Processing Systems, pp. 2348–2356, 2011.

Geoffrey E Hinton and Drew Van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory, pp. 5–13. ACM, 1993.
Hemant Ishwaran and J Sunil Rao. Spike and slab variable selection: frequentist and Bayesian strategies. Annals of Statistics, pp. 730–773, 2005.

Diederik P Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameterization trick. In Advances in Neural Information Processing Systems, pp. 2575–2583, 2015.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images, 2009.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

Chunyuan Li, Andrew Stevens, Changyou Chen, Yunchen Pu, Zhe Gan, and Lawrence Carin. Learning weight uncertainty with stochastic gradient MCMC for shape classification. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.

Christos Louizos. Smart regularization of deep architectures. Master's thesis, University of Amsterdam, 2015.

David JC MacKay. A practical Bayesian framework for backpropagation networks. Neural Computation, 4(3):448–472, 1992.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.

Radford M Neal. Bayesian learning for neural networks, volume 118. Springer Science & Business Media, 2012.

David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.

Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.

Li Wan, Matthew Zeiler, Sixin Zhang, Yann L Cun, and Rob Fergus. Regularization of neural networks using dropconnect. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pp. 1058–1066, 2013.

Sida Wang and Christopher Manning. Fast dropout training. In Proceedings of the 30th International Conference on Machine Learning (ICML), pp. 118–126, 2013.

A DERIVATION OF APPROXIMATE SPIKE-AND-SLAB DROPOUT

For Bayesian inference:

$$p(\mathcal{D}_{test} \mid \mathcal{D}_{train}) = \int p(\mathcal{D}_{test} \mid W)\,p(W \mid \mathcal{D}_{train})\,dW \quad (A.1)$$

Using variational inference:

$$p(\mathcal{D}_{test}) = \int p(\mathcal{D}_{test} \mid W)\,q(W)\,dW \quad (A.2)$$

where the variational distribution $q(W)$ is learned by maximizing the log-evidence lower bound:

$$\log p(\mathcal{D}_{train}) \geq \int \log p(\mathcal{D}_{train} \mid W)\,q(W)\,dW - \mathrm{KL}(q(W)\,\|\,p(W)) \quad (A.3)$$

For spike-and-slab dropout, as when using Bernoulli dropout, $W = B \circ V$ where $b_{i,\cdot} \sim Bern(p_{do})$, so if we assume independence between the random variables $B$ and $V$:

$$\log p(\mathcal{D}_{train}) \geq \sum_{B} \int_{V} \log p(\mathcal{D}_{train} \mid B, V)\,q(B)\,q(V)\,dV\,dB - \mathrm{KL}(q(B)\,\|\,p(B)) - \mathrm{KL}(q(V)\,\|\,p(V)) \quad (A.4)$$

For a spike-and-slab distribution, each element of $V$ is independently sampled from a Gaussian distribution, $N(v_{i,j}, \sigma^2_{v_{i,j}})$.
As in Gaussian dropconnect, $\sigma^2_{v_{i,j}} = \alpha\,v_{i,j}^2$. $V$ can be sampled using the "reparameterization trick":

$$\hat{v}_{i,j} \sim N(v_{i,j}, \sigma^2_{v_{i,j}}) = g(v_{i,j}, \epsilon_{i,j}) = v_{i,j} + \sqrt{\alpha}\,v_{i,j}\,\epsilon_{i,j} \quad (A.5)$$

where $\epsilon \sim N(0, 1)$, $\alpha = (1 - p_{dc})/p_{dc}$, and $p_{dc}$ is the dropconnect keep probability.

This leads to:

$$\log p(\mathcal{D}_{train}) \geq \sum_{B} \int_{\epsilon} \log p(\mathcal{D}_{train} \mid B, V)\,q(\epsilon)\,q(B)\,d\epsilon\,dB - \mathrm{KL}(q(B)\,\|\,p(B)) - \mathrm{KL}(q(V)\,\|\,p(V)) \quad (A.6)$$

This results in the following minimization objective function:

$$\mathcal{L}_V := -\sum_{B} \int_{\epsilon} \log p(\mathcal{D}_{train} \mid B, V)\,q(\epsilon)\,q(B)\,d\epsilon\,dB + \mathrm{KL}(q(B)\,\|\,p(B)) + \mathrm{KL}(q(V)\,\|\,p(V)) \quad (A.7)$$

Using $Bern(p_{do})$ as a prior for $B$ leads to a constant KLD of zero. Using a prior of $N(0, \sigma_p^2)$ for each element of $V$ leads to the following:

$$\mathrm{KL}(q(v_{i,j})\,\|\,p(v_{i,j})) = \frac{(v_{i,j} - 0)^2}{2\sigma_p^2} + \log\frac{\sigma_p}{\sqrt{\alpha}\,v_{i,j}} + \frac{\alpha\,v_{i,j}^2}{2\sigma_p^2} - \frac{1}{2} \quad (A.8)$$

By using L2 regularization, we are optimizing a lower bound of the KLD between $q(V)$ and $N(0, 1)$ by only matching the first moment (i.e. the mean):

$$\mathcal{L}_V \geq \tilde{\mathcal{L}}_V := -\sum_{B} \int_{\epsilon} \log p(\mathcal{D}_{train} \mid B, V)\,q(\epsilon)\,q(B)\,d\epsilon\,dB + \frac{\lambda}{2} V^{\top} V \quad (A.9)$$

where $V$ is a vector containing each $v_{i,j}$ and $\epsilon$ is a vector containing each $\epsilon_{i,j}$.

Approximating using Monte Carlo integration for learning (Eq. A.10) and inference (Eq. A.11):

$$\tilde{\mathcal{L}}_V \approx -\frac{1}{n} \sum_{(B, \epsilon)} \log p(\mathcal{D}_{train} \mid B, V) + \frac{\lambda}{2} V^{\top} V \quad (A.10)$$

$$p(\mathcal{D}_{test}) \approx \frac{1}{n} \sum_{(B, \epsilon)} p(\mathcal{D}_{test} \mid B, V) \quad (A.11)$$

where $b_{i,\cdot} \sim Bern(p_{do})$ and $\epsilon \sim N(0, 1)$.

B CONVOLUTIONAL NEURAL NETWORK ARCHITECTURES

B.1 MNIST

Table B.1: The convolutional neural network (CNN) architecture used for MNIST.

    Layer       Kernel Size   # Features   Stride   Non-linearity
    Conv-1      5x5           32           1        ReLU
    MaxPool-1   2x2           32           2        Max
    Conv-2      5x5           64           1        ReLU
    MaxPool-2   2x2           64           2        Max
    FC          1500          500          -        ReLU
    Linear      500           10           -        -

B.2 CIFAR-10

Table B.2: The convolutional neural network (CNN) architecture used for CIFAR-10.

    Layer       Kernel Size   # Features   Stride   Non-linearity
    Conv-1      3x3           64           1        ReLU
    Conv-2      3x3           64           1        ReLU
    MaxPool-1   2x2           64           2        Max
    Conv-3      3x3           128          1        ReLU
    Conv-4      3x3           128          1        ReLU
    MaxPool-2   2x2           128          2        Max
    Conv-5      3x3           256          1        ReLU
    Conv-6      3x3           256          1        ReLU
    Conv-7      3x3           256          1        ReLU
    MaxPool-3   2x2           256          2        Max
    Conv-8      3x3           512          1        ReLU
    Conv-9      3x3           512          1        ReLU
    Conv-10     3x3           512          1        ReLU
    MaxPool-4   2x2           512          2        Max
    Conv-11     3x3           512          1        ReLU
    Conv-12     3x3           512          1        ReLU
    Conv-13     3x3           512          1        ReLU
    MaxPool-5   2x2           512          2        Max
    FC          512           512          -        ReLU
    Linear      512           10           -        -
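For concreteness, the MNIST architecture of Table B.1 can be transcribed roughly as follows in PyTorch (our transcription; padding and the exact flattened size feeding the FC layer are not specified in the table, so the FC input dimension is inferred lazily):

```python
import torch.nn as nn

mnist_cnn = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=5, stride=1), nn.ReLU(),   # Conv-1
    nn.MaxPool2d(kernel_size=2, stride=2),                  # MaxPool-1
    nn.Conv2d(32, 64, kernel_size=5, stride=1), nn.ReLU(),  # Conv-2
    nn.MaxPool2d(kernel_size=2, stride=2),                  # MaxPool-2
    nn.Flatten(),
    nn.LazyLinear(500), nn.ReLU(),                          # FC -> 500
    nn.Linear(500, 10),                                     # Linear -> 10
)
```

The dropout and dropconnect masks of Sections 2.2 and 3.1.2 would then be applied on top of these layers during training and, for MC sampling, at inference.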
r1GKzP5xx
Under review as a conference paper at ICLR 2017

RECURRENT NORMALIZATION PROPAGATION

César Laurent, Nicolas Ballas & Pascal Vincent*
Montreal Institute for Learning Algorithms (MILA)
Département d'Informatique et de Recherche Opérationnelle
Université de Montréal
Montréal, Québec, Canada
{firstname.lastname}@umontreal.ca

*Associate Fellow, Canadian Institute For Advanced Research (CIFAR)

ABSTRACT

We propose an LSTM parametrization that preserves the means and variances of the hidden states and memory cells across time. While having training benefits similar to Recurrent Batch Normalization and Layer Normalization, it does not need to estimate statistics at each time step, therefore requiring fewer computations overall. We also investigate the impact of the parametrization on the gradient flow and present a way of initializing the weights accordingly. We evaluate our proposal on language modelling and image generative modelling tasks. We empirically show that it performs similarly to or better than other recurrent normalization approaches, while being faster to execute.

1 INTRODUCTION

Recurrent neural networks have shown remarkably good performance on sequential modelling tasks including machine translation (Bahdanau et al., 2015), visual captioning (Xu et al., 2015; Yao et al., 2015) and question answering (Hermann et al., 2015). However, such models remain notoriously hard to train with gradient backpropagation. As the number of time steps in the input sequence increases, the contractive or expanding effects associated with the state-to-state transformation at each time step can shrink or grow exponentially, leading respectively to vanishing or exploding gradients (Hochreiter, 1991; Bengio et al., 1994; Pascanu et al., 2012). In particular, with gradient vanishing, states at a given time are not influenced by changes happening much earlier in the sequence, preventing the model from learning long-term dependencies.

While the long-term dependencies problem is unsolvable in absolute (Hochreiter, 1991; Bengio et al., 1994), different RNN parameterizations, such as LSTM or GRU (Hochreiter & Schmidhuber, 1997; Cho et al., 2014), can help mitigate it. Furthermore, the LSTM parametrization has recently been extended to include layer-wise normalization (Cooijmans et al., 2016; Ba et al., 2016), building upon Batch Normalization (BN) (Ioffe & Szegedy, 2015). By normalizing the hidden state distributions to a fixed scale and shift through the different time steps, normalized LSTMs have been shown to ease training, resulting in a parametrization that converges faster than a standard LSTM.

However, the normalized LSTM introduces extra computations, as it involves standardizing the hidden states, enforcing their means and variances at each time step. By contrast, we propose an LSTM reparametrization that, by construction, cheaply preserves the normalization of the hidden states through time. Our approach can be seen as the recurrent counterpart of the recent normalization propagation applied to feed-forward networks (Arpit et al., 2016). It results in faster training convergence, similar to Layer Normalization (LN) and Recurrent Batch Normalization, while requiring fewer operations per time step and generalizing naturally to variable-length sequences.

In addition, we investigate the impact of our parametrization, and more generally of normalized LSTMs, on the vanishing and exploding gradient problems.
We observe that layer-wise normalization provides a direct way to orient LSTM behaviour toward either gradient explosion or vanishing, and therefore biases the LSTM either towards reliably storing bits of information throughout time or towards being more sensitive to new input changes.

We empirically validate our proposal on character-level language modelling on the Penn Treebank corpus (Marcus et al., 1993) and on image generative modelling, applying our normalization to the DRAW architecture (Gregor et al., 2015).

The paper is structured as follows: section 2 provides a brief overview of the Batch-Normalized LSTM, in section 3 we derive our Normalized LSTM, section 4 investigates the impact of such normalization on the gradient flow, section 5 presents some experimental results, and we conclude in section 6.

2 PRE-REQUISITES

2.1 BN-LSTM

Batch-Normalized Long Short-Term Memory (BN-LSTM) (Cooijmans et al., 2016) is a reparametrization of LSTM that takes advantage of Batch Normalization (BN) to address the Covariate Shift (Shimodaira, 2000) occurring between time steps. Changes in the LSTM output at one time step are likely to cause correlated changes in the summed inputs at the next time steps of the sequence. This Temporal Covariate Shift can slow down the training process, as the parameters of the model must not only be updated to minimize the cost of the task at hand but also adapt to the changing distribution of the inputs. In other words, the later time steps in an LSTM need to account for the shifting distribution of the previous hidden states.

BN-LSTM proposes to reduce this temporal covariate shift by fixing the mean and the variance at each time step, relying on the BN transform

$$\mathrm{BN}(\mathbf{x}; \gamma, \beta) = \beta + \gamma \odot \frac{\mathbf{x} - \widehat{\mathbb{E}}[\mathbf{x}]}{\sqrt{\widehat{\mathrm{Var}}[\mathbf{x}] + \epsilon}} \quad (1)$$

where $\widehat{\mathbb{E}}[\mathbf{x}]$ and $\widehat{\mathrm{Var}}[\mathbf{x}]$ are the activation mean and variance estimated from the mini-batch samples. Given an input sequence $\mathbf{X} = (\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_T)$, the BN-LSTM defines a sequence of hidden states $\mathbf{h}_t$ and memory cell states $\mathbf{c}_t$ according to

$$(\tilde{\mathbf{i}}_t, \tilde{\mathbf{f}}_t, \tilde{\mathbf{o}}_t, \tilde{\mathbf{g}}_t)^{\top} = \mathrm{BN}(\mathbf{W}_x \mathbf{x}_t; \gamma_x, \beta_x) + \mathrm{BN}(\mathbf{W}_h \mathbf{h}_{t-1}; \gamma_h, \beta_h) + \mathbf{b} \quad (2)$$

$$\mathbf{c}_t = \sigma(\tilde{\mathbf{i}}_t) \odot \tanh(\tilde{\mathbf{g}}_t) + \sigma(\tilde{\mathbf{f}}_t) \odot \mathbf{c}_{t-1} \quad (3)$$

$$\mathbf{h}_t = \sigma(\tilde{\mathbf{o}}_t) \odot \tanh(\mathrm{BN}(\mathbf{c}_t; \gamma_c, \beta_c)) \quad (4)$$

where $\mathbf{W}_h \in \mathbb{R}^{d_h \times 4 d_h}$, $\mathbf{W}_x \in \mathbb{R}^{d_x \times 4 d_h}$, $\mathbf{b} \in \mathbb{R}^{4 d_h}$ and the initial states $\mathbf{h}_0 \in \mathbb{R}^{d_h}$, $\mathbf{c}_0 \in \mathbb{R}^{d_h}$ are model parameters. $\sigma$ is the logistic sigmoid function, and $\odot$ denotes the Hadamard product. Ba et al. (2016) later extended this parametrization by estimating the normalizing statistics ($\widehat{\mathbb{E}}[\mathbf{x}]$, $\widehat{\mathrm{Var}}[\mathbf{x}]$) using the different feature channels rather than mini-batch samples, in order to naturally generalize to variable-length sequences.

2.2 NORMALIZATION PROPAGATION

While increasing the training convergence speed relative to a standard LSTM (Cooijmans et al., 2016), BN-LSTM needs to perform more computations per sample, as it requires computing the BN transform three times at each time step. On the other hand, Normalization Propagation (Norm Prop) (Arpit et al., 2016) aims at preserving the normalization of the input throughout the network. Unlike BN, the normalization doesn't rely on the statistics of the mini-batch. Instead, it is the structure of the network itself that maintains the normalization. We therefore propose an LSTM reparametrization that preserves the normalization through the different time steps in order to avoid those extra computations.
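To make the contrast concrete, the BN transform of Eq. 1 recomputes batch statistics at every time step; a minimal NumPy sketch (names are ours):

```python
import numpy as np

def bn_transform(x, gamma, beta, eps=1e-5):
    """Eq. 1, applied to a (batch, features) activation matrix. BN-LSTM
    evaluates this three times per time step; Norm Prop instead uses a
    fixed analytical rescaling, removing the per-step statistics."""
    mean = x.mean(axis=0)   # statistics over the mini-batch axis
    var = x.var(axis=0)
    return beta + gamma * (x - mean) / np.sqrt(var + eps)
```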
3 NORMALIZED LSTM

While the Norm Prop properties are appealing for recurrent models, its application to LSTM is not straightforward due to the memory cell structure. In this section we show how to derive an LSTM reparametrization that preserves the normalization of the state $\mathbf{h}_t$ through time.

3.1 CONSTRUCTION OF THE NORMALIZED LSTM

Following Arpit et al. (2016) and Salimans & Kingma (2016), we will attempt to ensure, through an analytical reparametrization, that several intermediate quantities in the computation remain approximately standardized. We first compensate for the distribution changes induced by the weight matrices in the computation of the gates and of the cell candidate $\mathbf{g}_t$:

$$(\tilde{\mathbf{i}}_t, \tilde{\mathbf{f}}_t, \tilde{\mathbf{o}}_t, \tilde{\mathbf{g}}_t)^{\top} = \gamma_x \frac{\mathbf{W}_x}{\|\mathbf{W}_{x,i}\|_2} \mathbf{x}_t + \gamma_h \frac{\mathbf{W}_h}{\|\mathbf{W}_{h,i}\|_2} \mathbf{h}_{t-1} + \mathbf{b} \quad (5)$$

where $\|\mathbf{W}_{\cdot,i}\|_2$ is the vector of L2 norms of each line (row) of the matrix, and $\gamma_x$ and $\gamma_h$ are trainable rescaling factors that restore the representation power lost in the rescaling of the weight matrices. To preserve the constant error carousel mechanism of the LSTM, we use the usual cell update:

$$\mathbf{c}_t = \sigma(\tilde{\mathbf{i}}_t) \odot \tanh(\tilde{\mathbf{g}}_t) + \sigma(\tilde{\mathbf{f}}_t) \odot \mathbf{c}_{t-1} \quad (6)$$

Let us now construct an approximate analytical estimate of $\mathrm{Var}[\mathbf{c}_t]$. The evolution of $\mathbf{c}_t$ through time can be seen as a geometric series, with $\sigma(\tilde{\mathbf{f}}_t)$ as the ratio. Since $\sigma(\cdot)$ is upper-bounded by (and in practice smaller than) 1, $\mathbf{c}_t$ will converge in expectation to a fixed value. This is the reason why in BN-LSTM the mini-batch statistics converge to a fixed value after a few time steps (Cooijmans et al., 2016). Moreover, if we consider that $\tilde{\mathbf{i}}_t$, $\tilde{\mathbf{f}}_t$, $\tilde{\mathbf{g}}_t$ and $\mathbf{c}_{t-1}$ are (as a rough approximation) independent,¹ we can use the variance product rule for two independent random variables $X$ and $Y$,

$$\mathrm{Var}[XY] = \mathrm{Var}[X]\,\mathrm{Var}[Y] + \mathrm{Var}[X]\,\mathbb{E}[Y]^2 + \mathrm{Var}[Y]\,\mathbb{E}[X]^2 \quad (7)$$

to compute $\mathrm{Var}[\mathbf{c}_t]$. Considering that $\mathbb{E}[\tanh(\tilde{\mathbf{g}}_t)] \approx 0$ and assuming that the cell has converged, i.e. $\mathrm{Var}[\mathbf{c}_t] = \mathrm{Var}[\mathbf{c}_{t-1}]$, we have

$$\mathrm{Var}[\mathbf{c}_t] = \frac{\mathrm{Var}[\tanh(\tilde{\mathbf{g}}_t)]\left(\mathrm{Var}[\sigma(\tilde{\mathbf{i}}_t)] + \mathbb{E}[\sigma(\tilde{\mathbf{i}}_t)]^2\right)}{1 - \mathrm{Var}[\sigma(\tilde{\mathbf{f}}_t)] - \mathbb{E}[\sigma(\tilde{\mathbf{f}}_t)]^2} \quad (8)$$

We can therefore analytically or numerically compute the mean and variance of each of those elements, assuming that both the input $\mathbf{x}_t$ and the hidden state $\mathbf{h}_{t-1}$ are independently drawn from $\mathcal{N}(0, 1)$:

$$\mathbb{E}[\mathbf{i}_t] = \mathbb{E}[\sigma(\gamma_x z_x + \gamma_h z_h)] \quad (9)$$

$$\mathrm{Var}[\mathbf{i}_t] = \mathrm{Var}[\sigma(\gamma_x z_x + \gamma_h z_h)] \quad (10)$$

$$\mathbb{E}[\mathbf{g}_t] = \mathbb{E}[\tanh(\gamma_x z_x + \gamma_h z_h)] \quad (11)$$

$$\mathrm{Var}[\mathbf{g}_t] = \mathrm{Var}[\tanh(\gamma_x z_x + \gamma_h z_h)] \quad (12)$$

where $z_x, z_h \sim \mathcal{N}(0, 1)$. The statistics of the gates $\mathbf{o}_t$ and $\mathbf{f}_t$ can be computed in a similar way. We can then compute the value to which $\mathrm{Var}[\mathbf{c}_t]$ converges. Using this variance estimate, we compensate $\mathbf{c}_t$ in order to compute the next hidden state $\mathbf{h}_t$:

$$\mathbf{h}_t = \sigma(\tilde{\mathbf{o}}_t) \odot \tanh\!\left(\gamma_c \frac{\mathbf{c}_t}{\sqrt{\mathrm{Var}[\mathbf{c}_t]}}\right) \quad (13)$$

Since we assumed that $\mathrm{Var}[\mathbf{h}_{t-1}] = 1$, to ensure that $\mathrm{Var}[\mathbf{h}_t] = 1$ we need to correct for the variance induced by the product of the tanh with the output gate. Using again the variance product rule (equation 7), we obtain

$$\mathrm{Var}[\mathbf{h}_t] = \mathrm{Var}\!\left[\tanh\!\left(\gamma_c \frac{\mathbf{c}_t}{\sqrt{\mathrm{Var}[\mathbf{c}_t]}}\right)\right]\left(\mathrm{Var}[\sigma(\tilde{\mathbf{o}}_t)] + \mathbb{E}[\sigma(\tilde{\mathbf{o}}_t)]^2\right) \quad (14)$$

We can estimate this variance through computations similar to equation 12. Scaling $\mathbf{h}_t$ with $1/\sqrt{\mathrm{Var}[\mathbf{h}_t]}$ ensures that its variance is 1, and so the propagation is maintained throughout the recurrence.

¹This assumption is strong, but we don't have any easy way to model the covariance between those terms without estimating it from the data.

3.2 PROPOSED REPARAMETRIZATION

Using equations 5, 6 and 13, we propose the following reparametrization of the LSTM, simply called the Normalized LSTM:

$$(\tilde{\mathbf{i}}_t, \tilde{\mathbf{f}}_t, \tilde{\mathbf{o}}_t, \tilde{\mathbf{g}}_t)^{\top} = \gamma_x \frac{\mathbf{W}_x}{\|\mathbf{W}_{x,i}\|_2} \mathbf{x}_t + \gamma_h \frac{\mathbf{W}_h}{\|\mathbf{W}_{h,i}\|_2} \mathbf{h}_{t-1} + \mathbf{b} \quad (15)$$

$$\mathbf{c}_t = \sigma(\tilde{\mathbf{i}}_t) \odot \tanh(\tilde{\mathbf{g}}_t) + \sigma(\tilde{\mathbf{f}}_t) \odot \mathbf{c}_{t-1} \quad (16)$$

$$\mathbf{h}_t = \frac{1}{\sqrt{\mathrm{Var}[\mathbf{h}_t]}}\left[\sigma(\tilde{\mathbf{o}}_t) \odot \tanh\!\left(\gamma_c \frac{\mathbf{c}_t}{\sqrt{\mathrm{Var}[\mathbf{c}_t]}}\right)\right] \quad (17)$$

where $\mathrm{Var}[\mathbf{c}_t]$ and $\mathrm{Var}[\mathbf{h}_t]$ are computed using equations 8 and 14, respectively. Those two variances are estimated at the initialization of the network (eq. 10 to eq. 12) and are then kept fixed during training, as in Norm Prop. $\gamma_x$, $\gamma_h$ and $\gamma_c$ are parameters learned via gradient descent.
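A minimal NumPy sketch of one step of Eqs. 15–17 (naming is ours; `var_c` and `var_h` stand for the fixed variance estimates of Eqs. 8 and 14, and the weight matrices are stored with one line per output unit):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def normalized_lstm_step(x_t, h_prev, c_prev, Wx, Wh, b,
                         g_x, g_h, g_c, var_c, var_h):
    """One Normalized LSTM step. Wx: (4*d_h, d_x), Wh: (4*d_h, d_h)."""
    Wx_n = Wx / np.linalg.norm(Wx, axis=1, keepdims=True)  # unit L2 norm per line
    Wh_n = Wh / np.linalg.norm(Wh, axis=1, keepdims=True)
    pre = g_x * (x_t @ Wx_n.T) + g_h * (h_prev @ Wh_n.T) + b   # Eq. 15
    i, f, o, g = np.split(pre, 4, axis=-1)
    c_t = sigmoid(i) * np.tanh(g) + sigmoid(f) * c_prev        # Eq. 16
    h_t = sigmoid(o) * np.tanh(g_c * c_t / np.sqrt(var_c))     # Eq. 17
    return h_t / np.sqrt(var_h), c_t
```

Note that the row rescaling can be precomputed once before unrolling the recurrence, which is what makes the per-step cost close to that of a vanilla LSTM.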
Note that the reparametrization of equation 15 is identical to Weight Normalization (Weight Norm) (Salimans & Kingma, 2016). The main difference comes from equation 17, where we compensate for the variances of $\mathbf{c}_t$, the tanh, and $\sigma(\tilde{\mathbf{o}}_t)$, which ensures a normalized propagation. Overall, this reparametrization is equivalent in spirit to the BN-LSTM, but it benefits from the same advantages that Norm Prop has over BN: there is no dependence on the mini-batch size, and the computation is the same for training and inference. Also, the rescaling of the matrices $\mathbf{W}_x$ and $\mathbf{W}_h$ can be done before the recurrence, leading to a computation time closer to a vanilla LSTM.

3.3 WEIGHTS INITIALIZATION

With such a reparametrization of the weight matrices, one might think that the scale of the initialization of the weights no longer matters in the learning process. This is actually true for the forward and backward computations of the layer:

$$y_i = \frac{a \mathbf{W}_i}{\|a \mathbf{W}_i\|_2} \mathbf{x} = \frac{\mathbf{W}_i}{\|\mathbf{W}_i\|_2} \mathbf{x} \quad (18)$$

$$\frac{\partial y_i}{\partial \mathbf{x}} = \frac{a \mathbf{W}_i}{\|a \mathbf{W}_i\|_2} = \frac{\mathbf{W}_i}{\|\mathbf{W}_i\|_2} \quad (19)$$

and since the variance of both the forward and backward passes is fixed, using an initialization scheme such as Glorot (Glorot & Bengio, 2010) doesn't make sense with Norm Prop. However, the update of the parameters is affected by their scale:

$$\frac{\partial y_i}{\partial W_{ij}} \frac{\partial L}{\partial y_i} = \frac{1}{\|\mathbf{W}_i\|_2}\left[x_j - y_i \frac{W_{ij}}{\|\mathbf{W}_i\|_2}\right] \frac{\partial L}{\partial y_i} \quad (20)$$

The scale of the parameters affects the learning rate of the layer: the bigger the weights, the smaller the update. This induces a regularization effect in Norm Prop that is also present in BN (Ioffe & Szegedy, 2015). However, this could possibly be an issue for such a parametrization: different initializations lead to different learning rates, and this is true even with adaptive step rules, such as Adam (Kingma & Ba, 2014). Moreover, the parameters that are not normalized (such as $\gamma$ and $\mathbf{b}$) aren't affected by this effect, and so they are not regularized. This is the reason why forcing the weight matrices to have a unit L2 norm of the lines, as proposed in Arpit et al. (2016), helps the training procedure.

To still benefit from the reduction of the learning rate, which is known to ease optimization (Vogl et al., 1988), we propose to simply force the unit L2 norm of the lines of the matrices and combine it with a global learning rate decay schedule.
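The proposed initialization can be sketched as orthogonal initialization followed by forcing each line to unit L2 norm (our rendering; the matrix size is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_lines(W):
    """Force each line (row) of W to unit L2 norm, as proposed above."""
    return W / np.linalg.norm(W, axis=1, keepdims=True)

W, _ = np.linalg.qr(rng.normal(size=(1000, 1000)))  # orthogonal init
W = normalize_lines(W)                              # unit-norm lines
# Training then combines this with a global learning rate decay schedule.
```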
4 GRADIENT PROPAGATION IN NORMALIZED LSTM

In this section we study the gradient flow in the Normalized LSTM. Since this reparametrization is similar to the BN-LSTM, the analysis done here can be transposed to the BN-LSTM case.

4.1 THE EXPLODING AND VANISHING GRADIENTS PROBLEM

Given an input sequence $\mathbf{X} = (\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_T)$, we consider a recurrent network, parametrized by $\theta$, that defines a sequence of hidden states $\mathbf{h}_t = f_{\theta}(\mathbf{h}_{t-1}, \mathbf{x}_t)$ and a cost function $L$ which evaluates the model performance on a given task. Such a network is usually trained using backpropagation through time, where backpropagation is applied to the time-unrolled model. The chain rule can be applied in order to compute the derivative of the loss $L$ with respect to the parameters $\theta$:

$$\frac{\partial L}{\partial \theta} = \sum_{1 \leq t \leq T} \frac{\partial L_t}{\partial \theta} = \sum_{1 \leq t \leq T} \sum_{1 \leq k \leq t} \frac{\partial L_t}{\partial \mathbf{h}_t} \frac{\partial \mathbf{h}_t}{\partial \mathbf{h}_k} \frac{\partial \mathbf{h}_k}{\partial \theta} \quad (21)$$

The factors $\frac{\partial \mathbf{h}_t}{\partial \mathbf{h}_k} = \prod_{k < l \leq t} \frac{\partial \mathbf{h}_l}{\partial \mathbf{h}_{l-1}}$ transport the error "in time" from step $t$ back to step $k$ and are also the cause of vanishing or exploding gradients in RNNs (Pascanu et al., 2012). Indeed, if the Jacobian $\frac{\partial \mathbf{h}_l}{\partial \mathbf{h}_{l-1}}$ has singular values different from 1, the factor $\frac{\partial \mathbf{h}_t}{\partial \mathbf{h}_k}$, which is a product of $t - k$ Jacobian matrices, will either explode or vanish.

4.2 GRADIENT OF THE NORMALIZED LSTM

To study the gradient propagation of the Normalized LSTM, we first need to derive it. Using equations 15–17, we can write the gradient of $\mathbf{h}_t$ with respect to $\mathbf{h}_{t-1}$. With

$$\mathbf{a}_t = \frac{1}{\sqrt{\mathrm{Var}[\mathbf{h}_t]}} \tanh\!\left(\gamma_c \frac{\mathbf{c}_t}{\sqrt{\mathrm{Var}[\mathbf{c}_t]}}\right) \quad (22)$$

we have

$$\frac{\partial \mathbf{h}_t}{\partial \mathbf{h}_{t-1}} = \frac{\partial \mathbf{o}_t}{\partial \mathbf{h}_{t-1}} \odot \mathbf{a}_t + \mathbf{o}_t \odot \frac{\partial \mathbf{a}_t}{\partial \mathbf{c}_t} \odot \left(\frac{\partial \mathbf{i}_t}{\partial \mathbf{h}_{t-1}} \odot \mathbf{g}_t + \mathbf{i}_t \odot \frac{\partial \mathbf{g}_t}{\partial \mathbf{h}_{t-1}} + \frac{\partial \mathbf{f}_t}{\partial \mathbf{h}_{t-1}} \odot \mathbf{c}_{t-1}\right) \quad (23)$$

As we can see in equation 23, with the normalization the gradient depends not only on the derivatives of the cell candidate, the gates, and the output tanh, but also on the variances of $\mathbf{h}_t$ and $\mathbf{c}_t$.

If we assume that $\mathbf{h}_{t-1}$ and $\mathbf{x}_t$ are independent, we can compute the variance of $\mathbf{c}_t$. Neglecting the weight matrices and the effect of the gates, we can write from equations 8 and 14

$$\mathrm{Var}[\mathbf{c}_t] \approx \mathrm{Var}[\mathbf{g}_t] = \mathrm{Var}[\tanh(z)], \quad z \sim \mathcal{N}(0, \gamma_x^2 + \gamma_h^2) \quad (24)$$

$$\mathrm{Var}[\mathbf{h}_t] \approx \mathrm{Var}[\tanh(z)], \quad z \sim \mathcal{N}(0, \gamma_c^2(\gamma_x^2 + \gamma_h^2)) \quad (25)$$

In both cases, the variance depends explicitly on the values of the different $\gamma$: the bigger the $\gamma$, the higher the variance. Neglecting again the weight matrices, we can now write the derivatives of the cell candidate $\mathbf{g}_t$ and of the gates $\mathbf{i}_t$, $\mathbf{o}_t$ and $\mathbf{f}_t$ with respect to $\mathbf{h}_{t-1}$:

$$\frac{\partial \mathbf{g}_t}{\partial \mathbf{h}_{t-1}} = \frac{\partial \tanh(\tilde{\mathbf{g}}_t)}{\partial \tilde{\mathbf{g}}_t} \frac{\partial \tilde{\mathbf{g}}_t}{\partial \mathbf{h}_{t-1}} = \left[1 - \tanh^2(\gamma_x \mathbf{x}_t + \gamma_h \mathbf{h}_{t-1})\right] \gamma_h \quad (26)$$

$$\frac{\partial \mathbf{i}_t}{\partial \mathbf{h}_{t-1}} = \frac{\partial \sigma(\tilde{\mathbf{i}}_t)}{\partial \tilde{\mathbf{i}}_t} \frac{\partial \tilde{\mathbf{i}}_t}{\partial \mathbf{h}_{t-1}} = \sigma(\gamma_x \mathbf{x}_t + \gamma_h \mathbf{h}_{t-1})\left[1 - \sigma(\gamma_x \mathbf{x}_t + \gamma_h \mathbf{h}_{t-1})\right] \gamma_h \quad (27)$$

The gradients of $\mathbf{o}_t$ and $\mathbf{f}_t$ can be computed similarly. The effect of the $\gamma$ here is double: they appear both inside the activation function, where they control the saturation regime, and $\gamma_h$ also appears as a multiplicative term in the gradient. They should therefore be small enough to prevent the activations from saturating too much, but at the same time $\gamma_h$ can't be too small, because that can also make the gradients vanish. Putting it all together, we have

$$\frac{\partial \mathbf{h}_t}{\partial \mathbf{h}_{t-1}} = \gamma_h \frac{\partial \mathbf{o}_t}{\partial \tilde{\mathbf{o}}_t} \odot \mathbf{a}_t + \mathbf{o}_t \odot \frac{\partial \mathbf{a}_t}{\partial \tilde{\mathbf{a}}_t} \frac{\gamma_c \gamma_h}{\sqrt{\mathrm{Var}[\mathbf{c}_t]}} \odot \left(\frac{\partial \mathbf{i}_t}{\partial \tilde{\mathbf{i}}_t} \odot \mathbf{g}_t + \mathbf{i}_t \odot \frac{\partial \mathbf{g}_t}{\partial \tilde{\mathbf{g}}_t} + \frac{\partial \mathbf{f}_t}{\partial \tilde{\mathbf{f}}_t} \odot \mathbf{c}_{t-1}\right) \quad (28)$$

In this equation we can see that the different $\gamma$ directly scale the gradient, and they also control the saturation of the activation functions. Bad initialization of the $\gamma$ could thus lead to saturation or explosion regimes. Figure 1 shows the norm of the gradient with respect to $\gamma_x$ and $\gamma_h$ in a simulated LSTM. As we can see, one important parameter is the ratio between $\gamma_h$ and $\gamma_x$: they control most of the propagation of the gradients. If $\gamma_x > \gamma_h$, the network will focus more on the input and so the gradients will tend to vanish more. On the other hand, if $\gamma_h > \gamma_x$, the network will tend to have fewer vanishing gradients, but will focus less on its inputs.

Figure 1: Norm of the gradients for one time step in an LSTM with respect to $\gamma_x$ and $\gamma_h$ (simulation). Left: $\gamma_c = 0.1$. Right: $\gamma_c = 1.0$.

5 EXPERIMENTS

5.1 CHARACTER-LEVEL LANGUAGE MODELLING

The first task we explore is character-level language modelling on the Penn Treebank corpus (Marcus et al., 1993). The goal is to predict the next character of the sequence given the previous ones. We use the same splits as Mikolov et al. (2012) and the same training procedure as Cooijmans et al. (2016), i.e. we train on sequences of length 100, with random starting points. The model is a 1000-unit LSTM followed by a Softmax classifier. We use orthogonal initialization for the weight matrices. Because Norm Prop requires normalized inputs, we multiply the one-hot input vectors by an untrained but fixed orthogonal matrix. This trick not only helps the optimization of Norm Prop, but also that of all the other variants.

To compare the convergence properties of Norm Prop against LN and BN, we first ran experiments using Adam (Kingma & Ba, 2014) with learning rate 2e-3, exponential decay of 1e-3 and gradient clipping at 1.0.
As explained in section 3.3, we rescale the matrices such that they have a unit norm on the lines. For Norm Prop, we use $\gamma_x = \gamma_h = 2$ and $\gamma_c = 1$; for LN, all $\gamma = 1.0$; and for BN, all $\gamma = 0.1$. The results are presented in Table 1 and in Figure 2.

Table 1: Perplexity (bits-per-character) on sequences of length 100 from the Penn Treebank validation set, and training time (seconds) per epoch.

    Model         Validation   Time
    Baseline      1.455        386
    Weight Norm   1.438        402
    Batch Norm    1.433        545
    Layer Norm    1.439        530
    Norm Prop     1.422        413

To show the potential of Norm Prop against other state-of-the-art systems, we followed Ha et al. (2016) and applied dropout on both the input and output layers ($p = 0.1$) and recurrent dropout inside the LSTM ($p = 0.1$). We also used the Batch Data Normalization scheme presented by Arpit et al. (2016), so we standardize each input example using the mini-batch statistics and use population statistics at inference time. Finally, we also reduce the learning rate decay to 1e-4, to compensate for the fact that a network with dropout needs more time to train. The results are presented in Table 2.

Figure 2: Perplexity (bits-per-character) on sequences of length 100 from the Penn Treebank corpus. The dashed lines are the training curves, and the solid ones are the validation curves.

Table 2: Perplexity (bits-per-character) of the full Penn Treebank test sequence.

    Model                                           Test
    Recurrent Dropout LSTM (Semeniuta et al., 2016) 1.301
    Zoneout LSTM (Krueger et al., 2016)             1.27
    Layer Norm LSTM (Ha et al., 2016)               1.267
    HyperLSTM (Ha et al., 2016)                     1.265
    Norm Prop LSTM (ours)                           1.262
    Layer Norm HyperLSTM (Ha et al., 2016)          1.250

As we can see in Figure 2 and in Table 1, Norm Prop compares really well against the other reparametrizations. Norm Prop is also roughly 30% computationally faster² than BN and LN. LN shows better optimization performance, but also overfits more. We also see that both optimization and generalization are better than those of Weight Norm, which shows the importance of compensating for the variances of $\mathbf{c}_t$ and $\mathbf{h}_t$. Moreover, although Norm Prop doesn't combine well with dropout in feed-forward networks (Arpit et al., 2016), it works well with recurrent dropout, as we can see in Table 2. We believe this is because recurrent dropout affects the output distribution less than dropout does in feed-forward networks, since the variable is copied from the previous time step instead of being set to 0. With such regularization, Norm Prop compares well with other state-of-the-art approaches.

²The GPU used is an NVIDIA GTX 750.

5.2 DRAW

The second task we explore is a generative modelling task on binarized MNIST (Larochelle & Murray, 2011) using the Deep Recurrent Attentive Writer (DRAW) (Gregor et al., 2015) architecture. DRAW is a variational auto-encoder, where both encoder and decoder are LSTMs, and has two attention mechanisms to select where to read and where to write.

We use Jörg Bornschein's implementation,³ with the same hyper-parameters as Gregor et al. (2015), i.e. the read and write sizes are 2x2 and 5x5 respectively, the number of glimpses is 64, the LSTMs have 256 units and the dimension of $\mathbf{z}$ is 100. We use Adam with a learning rate of 1e-2, exponential decay of 1e-3 and a mini-batch size of 128. We use orthogonal initialization and force the norm of the lines of the matrices to be 1. For Norm Prop, we use $\gamma_x = \gamma_h = \gamma_c = 0.5$.

³https://github.com/jbornschein/draw
The test variational bound for the first 100 epochs is presented in Figure 3.

As we can see in Figure 3, both Weight Norm and Norm Prop outperform the baseline network by a significant margin. Also, as expected, Norm Prop performs better than Weight Norm, showing once again the importance of the compensation of the variances of $\mathbf{c}_t$ and $\mathbf{h}_t$. Table 3 shows the test variational bound after 200 epochs of training. Norm Prop also compares favorably against LN.

Figure 3: Test negative log-likelihood on binarized MNIST.

Table 3: Test variational log-likelihood (nats) after 200 epochs of training.

    Model                        DRAW
    Baseline (ours)              84.30
    Layer Norm (Ba et al., 2016) 82.09
    Weight Norm (ours)           81.98
    Norm Prop (ours)             81.17

6 CONCLUSION

Based on the BN-LSTM, we have shown how to build a Normalized LSTM that is able to preserve the variance of its output at each time step, by compensating for the variances of the cell and the hidden state. Such an LSTM can be seen as the Norm Prop version of the BN-LSTM, and thus benefits from the same advantages that Norm Prop has over BN, while being much faster to compute. We also propose a scheme to initialize the weight matrices that takes the reparametrization into account. Moreover, we have derived the gradients of this LSTM and pointed out the importance of the initialization of the rescaling parameters. We have validated the performance of the Normalized LSTM on two different tasks, showing performance similar to BN-LSTM and LN-LSTM, while being significantly faster in computation time. Also, unlike in the feed-forward case, this architecture works well with recurrent dropout, leading to close to state-of-the-art performance on the character-level language modelling task.

Future work includes trying this architecture on more challenging tasks and also studying the impact of not keeping the variance estimates of the cell and the hidden states fixed during the learning process.

ACKNOWLEDGMENTS

Part of this work was funded by Samsung. We used Theano (Theano Development Team, 2016), Blocks and Fuel (van Merriënboer et al., 2015) for our experiments. We also want to thank Caglar Gulcehre and Tim Cooijmans for the talks and Jörg Bornschein for his DRAW implementation.

REFERENCES

D. Arpit, Y. Zhou, B. U. Kota, and V. Govindaraju. Normalization propagation: A parametric technique for removing internal covariate shift in deep networks. arXiv preprint, 2016.

J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. arXiv preprint, 2016.

D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015.

Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 1994.

K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint, 2014.

T. Cooijmans, N. Ballas, C. Laurent, and A. Courville. Recurrent batch normalization. arXiv preprint, 2016.

X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In Aistats, volume 9, pp. 249–256, 2010.

K. Gregor, I. Danihelka, A. Graves, D. J. Rezende, and D. Wierstra. Draw: A recurrent neural network for image generation. arXiv preprint, 2015.
D. Ha, A. Dai, and Q. V. Le. Hypernetworks. arXiv preprint, 2016.
K. M. Hermann, T. Kocisky, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. Teaching machines to read and comprehend. In NIPS, 2015.
S. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Master's thesis, 1991.
S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8), 1997.
S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015.
D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint, 2014.
D. Krueger, T. Maharaj, J. Kramár, M. Pezeshki, N. Ballas, N. R. Ke, A. Goyal, Y. Bengio, H. Larochelle, A. Courville, et al. Zoneout: Regularizing RNNs by randomly preserving hidden activations. arXiv preprint, 2016.
H. Larochelle and I. Murray. The neural autoregressive distribution estimator. AISTATS, 2011.
M. P. Marcus, M. Marcinkiewicz, and B. Santorini. Building a large annotated corpus of English: The Penn Treebank. Comput. Linguist., 1993.
T. Mikolov, I. Sutskever, A. Deoras, H. Le, S. Kombrink, and J. Cernocky. Subword language modeling with neural networks. Preprint, 2012.
R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. arXiv preprint, 2012.
T. Salimans and D. P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. arXiv preprint, 2016.
S. Semeniuta, A. Severyn, and E. Barth. Recurrent dropout without memory loss. CoRR, abs/1603.05118, 2016.
H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 2000.
Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv preprint, 2016.
B. van Merriënboer, D. Bahdanau, V. Dumoulin, D. Serdyuk, D. Warde-Farley, J. Chorowski, and Y. Bengio. Blocks and Fuel: Frameworks for deep learning. CoRR, abs/1506.00619, 2015.
T. P. Vogl, J. K. Mangis, A. K. Rigler, W. T. Zink, and D. L. Alkon. Accelerating the convergence of the back-propagation method. Biological Cybernetics, 59(4):257–263, 1988.
K. Xu, J. Ba, R. Kiros, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint, 2015.
L. Yao, A. Torabi, K. Cho, N. Ballas, C. Pal, H. Larochelle, and A. Courville. Describing videos by exploiting temporal structure. In ICCV, 2015.
BJh6Ztuxl
Published as a conference paper at ICLR 2017

FINE-GRAINED ANALYSIS OF SENTENCE EMBEDDINGS USING AUXILIARY PREDICTION TASKS

Yossi Adi [1,2], Einat Kermany [2], Yonatan Belinkov [3], Ofer Lavi [2], Yoav Goldberg [1]
[1] Bar-Ilan University, Ramat-Gan, Israel. {yoav.goldberg, yossiadidrum}@gmail.com
[2] IBM Haifa Research Lab, Haifa, Israel. {einatke, oferl}@il.ibm.com
[3] MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, MA, USA. belinkov@mit.edu

ABSTRACT

There is a lot of research interest in encoding variable-length sentences into fixed-length vectors, in a way that preserves the sentence meanings. Two common methods include representations based on averaging word vectors, and representations based on the hidden states of recurrent neural networks such as LSTMs. The sentence vectors are used as features for subsequent machine learning tasks or for pre-training in the context of deep learning. However, not much is known about the properties that are encoded in these sentence representations and about the language information they capture.
We propose a framework that facilitates better understanding of the encoded representations. We define prediction tasks around isolated aspects of sentence structure (namely sentence length, word content, and word order), and score representations by the ability to train a classifier to solve each prediction task when using the representation as input. We demonstrate the potential contribution of the approach by analyzing different sentence representation mechanisms. The analysis sheds light on the relative strengths of different sentence embedding methods with respect to these low-level prediction tasks, and on the effect of the encoded vector's dimensionality on the resulting representations.

1 INTRODUCTION

While sentence embeddings or sentence representations play a central role in recent deep learning approaches to NLP, little is known about the information that is captured by different sentence embedding learning mechanisms. We propose a methodology facilitating fine-grained measurement of some of the information encoded in sentence embeddings, as well as performing fine-grained comparison of different sentence embedding methods.

In sentence embeddings, sentences, which are variable-length sequences of discrete symbols, are encoded into fixed-length continuous vectors that are then used for further prediction tasks. A simple and common approach is producing word-level vectors using, e.g., word2vec (Mikolov et al., 2013a;b), and summing or averaging the vectors of the words participating in the sentence. This continuous-bag-of-words (CBOW) approach disregards the word order in the sentence.[1]

[1] We use the term CBOW to refer to a sentence representation that is composed of an average of the vectors of the words in the sentence, not to be confused with the training method by the same name which is used in the word2vec algorithm.
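To make the CBOW baseline concrete, here is a minimal sketch (ours, not the authors' code; the toy vocabulary and random vectors are stand-ins for pretrained word2vec embeddings):

```python
import numpy as np

def cbow_embedding(sentence, word_vectors):
    """Average the vectors of the words in the sentence (CBOW).

    word_vectors: dict mapping a word to a d-dimensional numpy array,
    e.g. vectors trained with word2vec. Word order is ignored.
    """
    vecs = [word_vectors[w] for w in sentence if w in word_vectors]
    return np.mean(vecs, axis=0)

# Toy usage with random "pretrained" vectors of dimension d = 4.
rng = np.random.default_rng(0)
word_vectors = {w: rng.standard_normal(4) for w in ["the", "cat", "sat"]}
s = cbow_embedding(["the", "cat", "sat"], word_vectors)
print(s.shape)  # (4,)
```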
Another approach is the encoder-decoder architecture, producing models also known as sequence-to-sequence models (Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2014, inter alia). In this architecture, an encoder network (e.g. an LSTM) is used to produce a vector representation of the sentence, which is then fed as input into a decoder network that uses it to perform some prediction task (e.g. recreate the sentence, or produce a translation of it). The encoder and decoder networks are trained jointly in order to perform the final task.

Some systems (for example in machine translation) train the system end-to-end, and use the trained system for prediction (Bahdanau et al., 2014). Such systems do not generally care about the encoded vectors, which are used merely as intermediate values. However, another common case is to train an encoder-decoder network and then throw away the decoder and use the trained encoder as a general mechanism for obtaining sentence representations. For example, an encoder-decoder network can be trained as an auto-encoder, where the encoder creates a vector representation, and the decoder attempts to recreate the original sentence (Li et al., 2015). Similarly, Kiros et al. (2015) train a network to encode a sentence such that the decoder can recreate its neighboring sentences in the text. Such networks do not require specially labeled data, and can be trained on large amounts of unannotated text. As the decoder needs information about the sentence in order to perform well, it is clear that the encoded vectors capture a non-trivial amount of information about the sentence, making the encoder appealing to use as a general-purpose, stand-alone sentence encoding mechanism. The sentence encodings can then be used as input for other prediction tasks for which less training data is available (Dai & Le, 2015). In this work we focus on these "general purpose" sentence encodings.

The resulting sentence representations are opaque, and there is currently no good way of comparing different representations short of using them as input for different high-level semantic tasks (e.g. sentiment classification, entailment recognition, document retrieval, question answering, sentence similarity, etc.) and measuring how well they perform on these tasks. This is the approach taken by Li et al. (2015), Hill et al. (2016) and Kiros et al. (2015). This method of comparing sentence embeddings leaves a lot to be desired: the comparison is at a very coarse-grained level, does not tell us much about the kind of information that is encoded in the representation, and does not help us form generalizable conclusions.

Our Contribution  We take a first step towards opening the black box of vector embeddings for sentences. We propose a methodology that facilitates comparing sentence embeddings on a much finer-grained level, and demonstrate its use by analyzing and comparing different sentence representations. We analyze sentence representation methods that are based on LSTM auto-encoders and the simple CBOW representation produced by averaging word2vec word embeddings. For each of CBOW and LSTM auto-encoder, we compare different numbers of dimensions, exploring the effect of the dimensionality on the resulting representation. We also provide some comparison to the skip-thought embeddings of Kiros et al. (2015).

In this work, we focus on what are arguably the three most basic characteristics of a sequence: its length, the items within it, and their order. We investigate different sentence representations based on the capacity to which they encode these aspects. Our analysis of these low-level properties leads to interesting, actionable insights, exposing relative strengths and weaknesses of the different representations.
Limitations  Focusing on low-level sentence properties also has limitations: the tasks focus on measuring the preservation of surface aspects of the sentence and do not measure syntactic and semantic generalization abilities; the tasks are not directly related to any specific downstream application (although the properties we test are important factors in many tasks: knowing that a model is good at predicting length and word order is likely advantageous for syntactic parsing, while models that excel at word content are good for text classification tasks). Dealing with these limitations requires a complementary set of auxiliary tasks, which is outside the scope of this study and is left for future work.

The study also suffers from the general limitations of empirical work: we do not prove general theorems but rather measure behaviors on several data points and attempt to draw conclusions from these measurements. There is always the risk that our conclusions only hold for the datasets on which we measured, and will not generalize. However, we do consider our large sample of sentences from Wikipedia to be representative of the English language, at least in terms of the three basic sentence properties that we study.

Summary of Findings  Our analysis reveals the following insights regarding the different sentence embedding methods:
- Sentence representations based on averaged word vectors are surprisingly effective, and encode a non-trivial amount of information regarding sentence length. The information they contain can also be used to reconstruct a non-trivial amount of the original word order in a probabilistic manner (due to regularities in the natural language data).
- LSTM auto-encoders are very effective at encoding word order and word content.
- Increasing the number of dimensions benefits some tasks more than others.
- Adding more hidden units sometimes degrades the encoders' ability to encode word content. This degradation is not correlated with the BLEU scores of the decoder, suggesting that BLEU over the decoder output is sub-optimal for evaluating the encoders' quality.
- LSTM encoders trained as auto-encoders do not rely on ordering patterns in the training sentences when encoding novel sentences, while the skip-thought encoders do rely on such patterns.

2 RELATED WORK

Word-level distributed representations have been analyzed rather extensively, both empirically and theoretically, for example by Baroni et al. (2014), Levy & Goldberg (2014) and Levy et al. (2015). In contrast, the analysis of sentence-level representations has been much more limited. A common approach is to either compare the performance of the sentence embeddings on downstream tasks (Hill et al., 2016), or to analyze models specifically trained for a predefined task (Schmaltz et al., 2016; Sutskever et al., 2011).

While the resulting analysis reveals differences in performance of different models, it does not adequately explain what kind of linguistic properties of the sentence they capture. Other studies analyze the hidden units learned by neural networks when training a sentence representation model (Elman, 1991; Karpathy et al., 2015; Kádár et al., 2016). This approach often associates certain linguistic aspects with certain hidden units.
Kádár et al. (2016) propose a methodology for quantifying the contribution of each input word to a resulting GRU-based encoding. These methods depend on the specific learning model and cannot be applied to arbitrary representations. Moreover, it is still not clear what is captured by the final sentence embeddings.

Our work is orthogonal and complementary to the previous efforts: we analyze the resulting sentence embeddings by devising auxiliary prediction tasks for core sentence properties. The methodology we propose is general and can be applied to any sentence representation model.

3 APPROACH

We aim to inspect and compare encoded sentence vectors in a task-independent manner. The main idea of our method is to focus on isolated aspects of sentence structure, and design experiments to measure to what extent each aspect is captured in a given representation.

In each experiment, we formulate a prediction task. Given a sentence representation method, we create training data and train a classifier to predict a specific sentence property (e.g. length) based on the vector representations. We then measure how well we can train a model to perform the task. The basic premise is that if we cannot train a classifier to predict some property of a sentence based on its vector representation, then this property is not encoded in the representation (or rather, not encoded in a useful way, considering how the representation is likely to be used).

The experiments in this work focus on low-level properties of sentences: the sentence length, the identities of words in a sentence, and the order of the words. We consider these to be the core elements of sentence structure. Generalizing the approach to higher-level semantic and syntactic properties holds great potential, which we hope will be explored in future work, by us or by others.

3.1 THE PREDICTION TASKS

We now turn to describe the specific prediction tasks. We use lower-case italics (s, w) to refer to sentences and words, and boldface to refer to their corresponding vector representations (s, w). When more than one element is considered, they are distinguished by indices (w1, w2, w1, w2).

Our underlying corpus for generating the classification instances consists of 200,000 Wikipedia sentences, where 150,000 sentences are used to generate training examples, and 25,000 sentences are used for each of the test and development examples. These sentences are a subset of the training set that was used to train the original sentence encoders. The idea behind this setup is to test the models on what are presumably their best embeddings.

Length Task  This task measures to what extent the sentence representation encodes its length. Given a sentence representation s ∈ R^k, the goal of the classifier is to predict the length (number of words) of the original sentence s. The task is formulated as multiclass classification, with eight output classes corresponding to binned lengths.[2] The resulting dataset is reasonably balanced, with a majority class (lengths 5-8 words) of 5,182 test instances and a minority class (34-70) of 1,084 test instances. Predicting the majority class results in classification accuracy of 20.1%.

[2] We use the bins (5-8), (9-12), (13-16), (17-20), (21-25), (26-29), (30-33), (34-70).
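To make the task construction concrete, a minimal sketch of the length-to-class mapping implied by these bins (ours; the bin table comes from the footnote above, the helper name is hypothetical):

```python
# Bins from the footnote: (5-8), (9-12), ..., (34-70).
BINS = [(5, 8), (9, 12), (13, 16), (17, 20), (21, 25), (26, 29), (30, 33), (34, 70)]

def length_class(sentence_tokens):
    """Map a sentence to one of the eight binned-length classes."""
    n = len(sentence_tokens)
    for label, (lo, hi) in enumerate(BINS):
        if lo <= n <= hi:
            return label
    raise ValueError("sentence length outside the 5-70 word range")

print(length_class("the quick brown fox jumps over the lazy dog".split()))  # 1, i.e. bin (9-12)
```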
Word-content Task  This task measures to what extent the sentence representation encodes the identities of words within it. Given a sentence representation s ∈ R^k and a word representation w ∈ R^d, the goal of the classifier is to determine whether w appears in s, with access to neither the underlying word nor the underlying sentence. This is formulated as a binary classification task, where the input is the concatenation of s and w.

To create a dataset for this task, we need to provide positive and negative examples. Obtaining positive examples is straightforward: we simply pick a random word from each sentence. For negative examples, we could pick a random word from the entire corpus. However, we found that such a dataset tends to push models to memorize words as either positive or negative words, instead of finding their relation to the sentence representation. Therefore, for each sentence we pick as a negative example a word that appears as a positive example somewhere in our dataset, but does not appear in the given sentence. This forces the models to learn a relationship between word and sentence representations. We generate one positive and one negative example from each sentence. The dataset is balanced, with a baseline accuracy of 50%.

Word-order Task  This task measures to what extent the sentence representation encodes word order. Given a sentence representation s ∈ R^k and the representations of two words that appear in the sentence, w1, w2 ∈ R^d, the goal of the classifier is to predict whether w1 appears before or after w2 in the original sentence s. Again, the model has no access to the original sentence and the two words. This is formulated as a binary classification task, where the input is a concatenation of the three vectors s, w1 and w2.

For each sentence in the corpus, we simply pick two random words from the sentence as a positive example. For negative examples, we flip the order of the words. We generate one positive and one negative example from each sentence. The dataset is balanced, with a baseline accuracy of 50%.
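Similarly, a minimal sketch of example generation for the word-order task (ours; any sampling detail beyond what the text specifies is an assumption):

```python
import random

def word_order_examples(sentence, rng):
    """One positive and one negative example for the word-order task.

    Two random words are drawn from the sentence; the positive example
    keeps their sentence order, the negative example flips it. The
    classifier input would be the concatenation of the sentence vector s
    and the two word vectors (the vectors are omitted here for brevity).
    """
    i, j = sorted(rng.sample(range(len(sentence)), 2))
    w1, w2 = sentence[i], sentence[j]      # w1 precedes w2 in the sentence
    return [((w1, w2), 1), ((w2, w1), 0)]  # label 1: correct order

rng = random.Random(0)
print(word_order_examples("the cat sat on the mat".split(), rng))
```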
4 SENTENCE REPRESENTATION MODELS

Given a sentence s = {w1, w2, ..., wN} we aim to find a sentence representation s (a vector) using an encoder:

    ENC : s = {w1, w2, ..., wN} ↦ s ∈ R^k

The encoding process usually assumes a vector representation wi ∈ R^d for each word in the vocabulary. In general, the word and sentence embedding dimensions, d and k, need not be the same. The word vectors can be learned together with other encoder parameters or pre-trained. Below we describe different instantiations of ENC.

Continuous Bag-of-words (CBOW)  This simple yet effective text representation consists of performing element-wise averaging of word vectors that are obtained using a word-embedding method such as word2vec. Despite its obliviousness to word order, CBOW has proven useful in different tasks (Hill et al., 2016) and is easy to compute, making it an important model class to consider.

Encoder-Decoder (ED)  The encoder-decoder framework has been successfully used in a number of sequence-to-sequence learning tasks (Sutskever et al., 2014; Bahdanau et al., 2014; Dai & Le, 2015; Li et al., 2015). After the encoding phase, a decoder maps the sentence representation back to the sequence of words:

    DEC : s ∈ R^k ↦ s = {w1, w2, ..., wN}

Figure 1: Task accuracy vs. embedding size for different models ((a) length test, (b) content test, (c) order test); ED BLEU scores given for reference.

Here we investigate the specific case of an auto-encoder, where the entire encoding-decoding process can be trained end-to-end from a corpus of raw texts. The sentence representation is the final output vector of the encoder. We use a long short-term memory (LSTM) recurrent neural network (Hochreiter & Schmidhuber, 1997; Graves et al., 2013) for both encoder and decoder. The LSTM decoder is similar to the LSTM encoder but with different weights.

5 EXPERIMENTAL SETUP

The bag-of-words (CBOW) and encoder-decoder models are trained on 1 million sentences from a 2012 Wikipedia dump with a vocabulary size of 50,000 tokens. We use NLTK (Bird, 2006) for tokenization, and constrain sentence lengths to be between 5 and 70 words. For both models we control the embedding size k and train word and sentence vectors of sizes k ∈ {100, 300, 500, 750, 1000}. More details about the experimental setup are available in the Appendix.

6 RESULTS

In this section we provide a detailed description of our experimental results along with their analysis. For each of the three main tests (length, content and order) we investigate the performance of different sentence representation models across embedding sizes.

6.1 LENGTH EXPERIMENTS

We begin by investigating how well the different representations encode sentence length. Figure 1a shows the performance of the different models on the length task, as well as the BLEU obtained by the LSTM encoder-decoder (ED).

With enough dimensions, the LSTM embeddings are very good at capturing sentence length, obtaining accuracies between 82% and 87%. Length prediction ability is not perfectly correlated with BLEU scores: from 300 dimensions onward the length prediction accuracies of the LSTM remain relatively stable, while the BLEU score of the encoder-decoder model increases as more dimensions are added.

Somewhat surprisingly, the CBOW model also encodes a fair amount of length information, with length prediction accuracies of 45% to 65%, way above the 20% baseline. This is remarkable, as the CBOW representation consists of averaged word vectors, and we did not expect it to encode length at all. We return to CBOW's exceptional performance in Section 7.

6.2 WORD CONTENT EXPERIMENTS

To what extent do the different sentence representations encode the identities of the words in the sentence? Figure 1b visualizes the performance of our models on the word content test.

All the representations encode some amount of word information, and clearly outperform the random baseline of 50%. Some trends are worth noting. While the capacity of the LSTM encoder to preserve word identities generally increases when adding dimensions, the performance peaks at 750 dimensions and drops afterwards. This stands in contrast to the BLEU score of the respective encoder-decoder models. We hypothesize that this occurs because a sizable part of the auto-encoder performance comes from the decoder, which also improves as we add more dimensions. At 1000 dimensions, the decoder's language model may be strong enough to allow the representation produced by the encoder to be less informative with regard to word content.

CBOW representations with low-dimensional vectors (100 and 300 dimensions) perform exceptionally well, outperforming the more complex, sequence-aware models by a wide margin. If your task requires access to word identities, it is worth considering this simple representation. Interestingly, CBOW scores drop at higher dimensions.
6.3 WORD ORDER EXPERIMENTS

Figure 1c shows the performance of the different models on the order test. The LSTM encoders are very capable of encoding word order, with LSTM-1000 allowing the recovery of word order in 91% of the cases. Similar to the length test, LSTM order prediction accuracy is only loosely correlated with BLEU scores. It is worth noting that increasing the representation size helps the LSTM encoder to better encode order information.

Surprisingly, the CBOW encodings manage to reach an accuracy of 70% on the word order task, 20% above the baseline. This is remarkable as, by definition, the CBOW encoder does not attempt to preserve word order information. One way to explain this is by considering distribution patterns of words in natural language sentences: some words tend to appear before others. In the next section we analyze the effect of natural language on the different models.

7 IMPORTANCE OF "NATURAL LANGUAGENESS"

Natural language imposes many constraints on sentence structure. To what extent do the different encoders rely on specific properties of word distributions in natural language sentences when encoding sentences? To account for this, we perform additional experiments in which we attempt to control for the effect of natural language.

How can CBOW encode sentence length?  Is the ability of CBOW embeddings to encode length related to specific words being indicative of longer or shorter sentences? To control for this, we created a synthetic dataset where each word in each sentence is replaced by a random word from the dictionary, and re-ran the length test for the CBOW embeddings using this dataset. As Figure 2a shows, this only leads to a slight decrease in accuracy, indicating that the identity of the words is not the main component in CBOW's success at predicting length.

Figure 2: (a) Length accuracy for different CBOW sizes on natural and synthetic (random-word) sentences. (b) Average embedding norm vs. sentence length for CBOW with an embedding size of 300.

An alternative explanation for CBOW's ability to encode sentence length is given by considering the norms of the sentence embeddings. Indeed, Figure 2b shows that the embedding norm decreases as sentences grow longer. We believe this is one of the main reasons for the strong CBOW results. While the correlation between the number of averaged vectors and the resulting norm surprised us, in retrospect it is an expected behavior that has sound mathematical foundations. To understand the behavior, consider the different word vectors to be random variables, with the values in each dimension centered roughly around zero. Both the central limit theorem and Hoeffding's inequality tell us that as we add more samples, the expected average of the values will better approximate the true mean, causing the norm of the average vector to decrease. We expect the correlation between the sentence length and its norm to be more pronounced with shorter sentences (above some number of samples we will already be very close to the true mean, and the norm will not decrease further), a behavior which we indeed observe in practice.
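This argument is easy to verify numerically. The sketch below (ours; synthetic Gaussian vectors stand in for word embeddings) averages n random zero-mean vectors and shows the norm shrinking roughly as 1/sqrt(n), mirroring the trend in Figure 2b:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 300  # embedding dimension
for n in [5, 10, 20, 40]:  # stand-ins for sentence lengths
    # Average n random zero-mean "word vectors", repeated 1000 times.
    norms = [np.linalg.norm(rng.standard_normal((n, d)).mean(axis=0))
             for _ in range(1000)]
    print(n, round(float(np.mean(norms)), 3))
# The printed norms decrease roughly as 1/sqrt(n).
```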
How does CBOW encode word order?  The surprisingly strong performance of the CBOW model on the order task made us hypothesize that much of the word order information is captured in general natural language word order statistics.

To investigate this, we re-ran the word order tests, but this time dropped the sentence embedding at training and testing time, learning from the word pairs alone. In other words, we feed the network as input two word embeddings and ask which word comes first in the sentence. This test isolates general word order statistics of language from information that is contained in the sentence embedding (Fig. 3).

Figure 3: Order accuracy with and without the sentence representation for the ED and CBOW models.

The difference between including and removing the sentence embeddings when using the CBOW model is minor, while the LSTM-ED suffers a significant drop. Clearly, the LSTM-ED model encodes word order, while the prediction ability of CBOW is mostly explained by general language statistics. However, CBOW does benefit from the sentence to some extent: we observe a gain of about 3 accuracy points when the CBOW tests are allowed access to the sentence representation. This may be explained by higher-order correlation statistics between word order patterns and the occurrences of specific words.

How important is English word order for encoding sentences?  To what extent are the models trained to rely on natural language word order when encoding sentences? To control for this, we create a synthetic dataset, PERMUTED, in which the word order in each sentence is randomly permuted. Then, we repeat the length, content and order experiments using the PERMUTED dataset (we still use the original sentence encoders that are trained on non-permuted sentences). While the permuted sentence representation is the same for CBOW, it is completely different when generated by the encoder-decoder.

Results are presented in Fig. 4. When considering CBOW embeddings, word order accuracy drops to chance level, as expected, while results on the other tests remain the same. Moving to the LSTM encoder-decoder, the results on all three tests are comparable to the ones using non-permuted sentences. These results are somewhat surprising since the models were originally trained on "real", non-permuted sentences. This indicates that the LSTM encoder-decoder is a general-purpose sequence encoder that for the most part does not rely on word ordering properties of natural language when encoding sentences. The small and consistent drop in word order accuracy on the permuted sentences can be attributed to the encoder relying on natural language word order to some extent, but can also be explained by the word order prediction task becoming harder due to the inability to use general word order statistics.

Figure 4: Results for the length, content and order tests on natural and permuted sentences.

The results suggest that a trained encoder will transfer well across different natural language domains, as long as the vocabularies remain stable.
When considering the decoder's BLEU score on the permuted dataset (not shown), we do see a dramatic decrease in accuracy. For example, the LSTM encoder-decoder with 1000 dimensions drops from 32.5 to 8.2 BLEU. These results suggest that the decoder, which is thrown away, contains most of the language-specific information.

8 SKIP-THOUGHT VECTORS

In addition to the experiments on CBOW and LSTM encoders, we also experiment with the skip-thought vectors model (Kiros et al., 2015). This model extends the idea of the auto-encoder to neighboring sentences. Given a sentence si, it first encodes it using an RNN, similar to the auto-encoder model. However, instead of predicting the original sentence, skip-thought predicts the preceding and following sentences, si-1 and si+1. The encoder and decoder are implemented with gated recurrent units (Cho et al., 2014).

Here, we deviate from the controlled environment and use the authors' provided model (https://github.com/ryankiros/skip-thoughts) with the recommended embedding size of 4800. This makes a direct comparison of the models "unfair". However, our aim is not to decide which is the "best" model but rather to show how our method can be used to measure the kinds of information captured by different representations.

Table 1 summarizes the performance of the skip-thought embeddings on each of the prediction tasks, on both the PERMUTED and original datasets.

           Length   Word content   Word order
Original   82.1%    79.7%          81.1%
Permuted   68.2%    76.4%          76.5%

Table 1: Classification accuracy for the prediction tasks using skip-thought embeddings.

The performance of the skip-thought embeddings is well above the baselines and roughly similar for all tasks. Their performance is similar to the higher-dimensional encoder-decoder models, except on the order task, where they lag somewhat behind. However, we note that the results are not directly comparable, as skip-thought was trained on a different corpus.

The more interesting finding is its performance on the PERMUTED sentences. In this setting we see a large drop. In contrast to the LSTM encoder-decoder, skip-thought's ability to predict length and word content does degrade significantly on the permuted sentences, suggesting that the encoding process of the skip-thought model is indeed specialized towards natural language texts.

9 CONCLUSION

We presented a methodology for performing fine-grained analysis of sentence embeddings using auxiliary prediction tasks. Our analysis reveals some properties of sentence embedding methods:
- CBOW is surprisingly effective: in addition to being very strong at content, it is also predictive of length, and can be used to reconstruct a non-trivial amount of the original word order. 300 dimensions perform best, with greatly degraded word-content prediction performance at higher dimensions.
- With enough dimensions, LSTM auto-encoders are very effective at encoding word order and word content information. Increasing the dimensionality of the LSTM encoder does not significantly improve its ability to encode length, but does increase its ability to encode content and order information. 500-dimensional embeddings are already quite effective for encoding word order, with little gains beyond that. Word content accuracy peaks at 750 dimensions and drops at 1000, suggesting that larger is not always better.
- The trained LSTM encoder (when trained with an auto-encoder objective) does not rely on ordering patterns in the training sentences when encoding novel sequences. In contrast, the skip-thought encoder does rely on such patterns. Its performance on the other tasks is similar to the higher-dimensional LSTM encoder, which is impressive considering it was trained on a different corpus.
- Finally, the encoder-decoder's ability to recreate sentences (BLEU) is not entirely indicative of the quality of the encoder at representing aspects such as word identity and order. This suggests that BLEU is sub-optimal for model selection.

REFERENCES

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
Marco Baroni, Georgiana Dinu, and Germán Kruszewski. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 238–247, Baltimore, Maryland, June 2014. URL http://www.aclweb.org/anthology/P14-1023.
Steven Bird. NLTK: The natural language toolkit. In Proceedings of the COLING/ACL on Interactive Presentation Sessions, pp. 69–72. Association for Computational Linguistics, 2006.
Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A Matlab-like environment for machine learning. In BigLearn, NIPS Workshop, 2011.
Andrew M. Dai and Quoc V. Le. Semi-supervised sequence learning. In Advances in Neural Information Processing Systems, pp. 3061–3069, 2015.
John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159, 2011.
Jeffrey L. Elman. Distributed representations, simple recurrent networks, and grammatical structure. Machine Learning, 7(2-3):195–225, 1991.
Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In International Conference on Artificial Intelligence and Statistics, pp. 315–323, 2011.
Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In Proceedings of ICASSP, 2013.
Felix Hill, Kyunghyun Cho, and Anna Korhonen. Learning distributed representations of sentences from unlabelled data. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 1367–1377, San Diego, California, June 2016. URL http://www.aclweb.org/anthology/N16-1162.
Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, 2012.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
Ákos Kádár, Grzegorz Chrupała, and Afra Alishahi. Representation of linguistic form and function in recurrent neural networks. arXiv preprint arXiv:1602.08952, 2016.
Andrej Karpathy, Justin Johnson, and Fei-Fei Li. Visualizing and understanding recurrent networks. arXiv preprint arXiv:1506.02078, 2015.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Ryan Kiros, Yukun Zhu, Ruslan R. Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Skip-thought vectors. In Advances in Neural Information Processing Systems, pp. 3276–3284, 2015.
Nicholas Léonard, Sagar Waghmare, and Yang Wang. rnn: Recurrent library for Torch. arXiv preprint arXiv:1511.07889, 2015.
Omer Levy and Yoav Goldberg. Linguistic regularities in sparse and explicit word representations. In Proc. of CoNLL, pp. 171–180, Baltimore, Maryland, 2014.
Omer Levy, Yoav Goldberg, and Ido Dagan. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211–225, 2015. URL https://tacl2013.cs.columbia.edu/ojs/index.php/tacl/article/view/570.
Jiwei Li, Minh-Thang Luong, and Dan Jurafsky. A hierarchical neural autoencoder for paragraphs and documents. arXiv preprint arXiv:1506.01057, 2015.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013a.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pp. 3111–3119, 2013b.
Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807–814, 2010.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 311–318, 2002.
Donald B. Rubin. Matching to remove bias in observational studies. Biometrics, pp. 159–183, 1973.
Allen Schmaltz, Alexander M. Rush, and Stuart M. Shieber. Word ordering without syntax. arXiv preprint arXiv:1604.08633, 2016.
Ilya Sutskever, James Martens, and Geoffrey E. Hinton. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 1017–1024, 2011.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104–3112, 2014.
Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5 - RMSProp. COURSERA: Neural Networks for Machine Learning, 2012.
Matthew D. Zeiler. ADADELTA: An adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.

APPENDIX I: EXPERIMENTAL SETUP
Sentence Encoders  The bag-of-words (CBOW) and encoder-decoder models are trained on 1 million sentences from a 2012 Wikipedia dump with a vocabulary size of 50,000 tokens. We use NLTK (Bird, 2006) for tokenization, and constrain sentence lengths to be between 5 and 70 words.

For the CBOW model, we train skip-gram word vectors (Mikolov et al., 2013a), with hierarchical softmax and a window size of 5 words, using the Gensim implementation (https://radimrehurek.com/gensim). We control for the embedding size k and train word vectors of sizes k ∈ {100, 300, 500, 750, 1000}.

For the encoder-decoder models, we use an in-house implementation using the Torch7 toolkit (Collobert et al., 2011). The decoder is trained as a language model, attempting to predict the correct word at each time step using a negative log-likelihood objective (cross-entropy loss over the softmax layer). We use one layer of LSTM cells for the encoder and decoder using the implementation in Léonard et al. (2015).

We use the same size for word and sentence representations (i.e. d = k), and train models of sizes k ∈ {100, 300, 500, 750, 1000}. We follow previous work on sequence-to-sequence learning (Sutskever et al., 2014; Li et al., 2015) in reversing the input sentences and clipping gradients. Word vectors are initialized to random values.

We evaluate the encoder-decoder models using BLEU scores (Papineni et al., 2002), a popular machine translation evaluation metric that is also used to evaluate auto-encoder models (Li et al., 2015). The BLEU score measures how well the original sentence is recreated, and can be thought of as a proxy for the quality of the encoded representation. We compare it with the performance of the models on the three prediction tasks. The results of the higher-dimensional models are comparable to those found in the literature, which serves as a sanity check for the quality of the learned models.

Auxiliary Task Classifier  For the auxiliary task predictors, we use multi-layer perceptrons with a single hidden layer and ReLU activation, which were carefully tuned for each of the tasks. We experimented with several network architectures prior to arriving at this configuration. Further details regarding the training and architectures of both the sentence encoders and auxiliary task classifiers are given below.

APPENDIX II: TECHNICAL DETAILS

ENCODER-DECODER

Parameters of the encoder-decoder were tuned on a dedicated validation set. We experimented with different learning rates (0.1, 0.01, 0.001), dropout rates (0.1, 0.2, 0.3, 0.5) (Hinton et al., 2012) and optimization techniques (AdaGrad (Duchi et al., 2011), AdaDelta (Zeiler, 2012), Adam (Kingma & Ba, 2014) and RMSProp (Tieleman & Hinton, 2012)). We also experimented with different batch sizes (8, 16, 32), and found improvement in runtime but no significant improvement in performance.

Based on the tuned parameters, we trained the encoder-decoder models on a single GPU (NVIDIA Tesla K40), with mini-batches of 32 sentences, a learning rate of 0.01, a dropout rate of 0.1, and the AdaGrad optimizer; training takes approximately 10 days and is stopped after 5 epochs with no loss improvement on a validation set.

PREDICTION TASKS

Parameters for the prediction tasks as well as the classifier architecture were tuned on a dedicated validation set. We experimented with one-, two- and three-layer feed-forward networks using ReLU (Nair & Hinton, 2010; Glorot et al., 2011), tanh and sigmoid activation functions. We tried different hidden layer sizes: the same as the input size, twice the input size, and one and a half times the input size. We tried different learning rates (0.1, 0.01, 0.001), dropout rates (0.1, 0.3, 0.5, 0.8) and different optimization techniques (AdaGrad, AdaDelta and Adam).

Our best tuned classifier, which we use for all experiments, is a feed-forward network with one hidden layer and a ReLU activation function. We set the size of the hidden layer to be the same as that of the input vector. We place a softmax layer on top, whose size varies according to the specific task, and apply dropout before the softmax layer. We optimize the log-likelihood using AdaGrad. We use a dropout rate of 0.8 and a learning rate of 0.01. Training is stopped after 5 epochs with no loss improvement on the development set. Training was done on a single GPU (NVIDIA Tesla K40).
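For illustration, here is a minimal numpy sketch of the forward pass of such a classifier (ours, not the authors' Torch code; the parameter shapes and the inverted-dropout formulation are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_classifier_forward(x, params, train=True, dropout=0.8):
    """One-hidden-layer ReLU MLP with dropout before the softmax,
    mirroring the tuned classifier described above (a sketch)."""
    W1, b1, W2, b2 = params
    h = np.maximum(0.0, W1 @ x + b1)  # hidden layer, same size as the input
    if train:                          # inverted dropout, rate 0.8
        mask = (rng.random(h.shape) >= dropout) / (1.0 - dropout)
        h = h * mask
    z = W2 @ h + b2
    e = np.exp(z - z.max())
    return e / e.sum()                 # softmax over the task's classes

d, n_classes = 300, 8                  # e.g. the length task with 8 bins
params = (rng.standard_normal((d, d)) * 0.01, np.zeros(d),
          rng.standard_normal((n_classes, d)) * 0.01, np.zeros(n_classes))
probs = mlp_classifier_forward(rng.standard_normal(d), params)
print(round(float(probs.sum()), 6))    # 1.0
```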
10 ADDITIONAL EXPERIMENTS - CONTENT TASK

How well do the models preserve content when we increase the sentence length? In Fig. 5 we plot content prediction accuracy vs. sentence length for different models.

Figure 5: Content accuracy vs. sentence length for selected models.

As expected, all models suffer a drop in content accuracy on longer sentences. The degradation is roughly linear in the sentence length. For the encoder-decoder, models with fewer dimensions seem to degrade more slowly.

APPENDIX III: SIGNIFICANCE TESTS

In this section we report the significance tests we conducted in order to evaluate our findings. For this, we use the paired t-test (Rubin, 1973).

All the results reported in the summary of findings are highly significant (p-value ≤ 0.0001). The ones we found to be not significant (p-value ≥ 0.03) are those whose accuracies differ little, e.g. ED with size 500 vs. ED with size 750 tested on the word-order task (p-value = 0.11), or CBOW with dimensions 750 and 1000 (p-value = 0.3).

Dim.   Length      Word content   Word order
100    1.77e-147   0.0            1.83e-296
300    0.0         0.0            0.0
500    0.0         0.0            0.0
750    0.0         0.0            0.0
1000   0.0         0.0            0.0

Table 2: P-values for ED vs. CBOW over the different dimensions and tasks. For example, in the row where dim equals 100, we compute the p-value of ED compared to CBOW with embedding size 100 on all three tasks.

Dim.           Length      Word content   Word order
100 vs. 300    0.0         8.56e-190      0.0
300 vs. 500    7.3e-71     4.20e-05       5.48e-56
500 vs. 750    3.64e-175   4.46e-65       0.11
750 vs. 1000   1.37e-111   2.35e-243      4.32e-61

Table 3: P-values for ED models over the different dimensions and tasks.

Dim.           Length      Word content   Word order
100 vs. 300    0.0         0.0            1.5e-33
300 vs. 500    1.47e-215   0.0            3.06e-64
500 vs. 750    0.68        0.032          0.05
750 vs. 1000   4.44e-32    0.3            0.08

Table 4: P-values for CBOW models over the different dimensions and tasks.
HkYhZDqxg
Published as a conference paper at ICLR 2017

TREE-STRUCTURED DECODING WITH DOUBLY-RECURRENT NEURAL NETWORKS

David Alvarez-Melis & Tommi S. Jaakkola
Computer Science and Artificial Intelligence Lab, MIT
{davidam, tommi}@csail.mit.edu

ABSTRACT

We propose a neural network architecture for generating tree-structured objects from encoded representations. The core of the method is a doubly recurrent neural network model comprised of separate width and depth recurrences that are combined inside each cell (node) to generate an output. The topology of the tree is modeled explicitly together with the content. That is, in response to an encoded vector representation, co-evolving recurrences are used to realize the associated tree and the labels for the nodes in the tree. We test this architecture in an encoder-decoder framework, where we train a network to encode a sentence as a vector, and then generate a tree structure from it. The experimental results show the effectiveness of this architecture at recovering latent tree structure in sequences and at mapping sentences to simple functional programs.

1 INTRODUCTION

Recurrent neural networks have become extremely popular for modeling structured data. Key to their success is their ability to learn long-range temporal dependencies, their flexibility, and ease of customization. These architectures are naturally suited for modeling sequences since the underlying state evolution resulting from successive operations follows an inherently linear order (Williams & Zipser, 1995; Hochreiter & Schmidhuber, 1997). Indeed, they have been successfully adapted to language modeling (Zaremba et al., 2015), machine translation (Sutskever et al., 2014) and conversational agents (Vinyals & Le, 2015), among other applications.

Although sequences arise frequently in practice, other structures such as trees or graphs do not naturally conform to a linear ordering. For example, natural language sentences or associated parse trees, programs, hierarchical structures in biology, or molecules are not inherently linear structures. While sentences in natural language can be modeled as if they were linear sequences, the underlying process is compositional (Frege, 1892). Models that construct sentences compositionally should derive an advantage from adopting a more appropriate inductive bias.

The flexibility and success of recurrent neural networks in modeling and generating sequential data has prompted efforts to adapt them to non-sequential data too. Recent work has focused on the application of neural architectures to hierarchical structures, albeit in limited ways. Much of this work has assumed that either the full tree structure is given (Socher et al., 2012; Tai et al., 2015) or at least the nodes are (Socher & Lin, 2011; Chen & Manning, 2014; Kiperwasser & Goldberg, 2016). In the former scenario, the network aggregates the node information in a manner that is coherent with a given tree structure while, in the latter, generation is reduced to an attachment problem, i.e., sequentially deciding which pairs of nodes to join with an edge until a tree is formed.

The full problem of decoding with structure, i.e., generating a tree-structured object with node labels from a given vector representation, has remained largely unexplored until recently. Recent efforts to adapt RNNs to this context have so far remained relatively close to their sequential counterparts. For example, in order to capture depth and branching in the tree, one can introduce special tokens (Dong & Lapata, 2016) or use alternating RNNs coupled with external classifiers to predict branching (Zhang et al., 2016).
In this work, we propose a novel architecture tailored specifically to tree-structured decoding. At the heart of our approach is a doubly-recurrent (breadth- and depth-wise recurrent) neural network which separately models the flow of information between parent and children nodes, and between siblings. Each of these relationships is modeled with a recurrent module whose hidden states are updated upon observing node labels. Every node in the tree receives two hidden states, which are then combined and used to predict a label for that node. Besides maintaining separate but simultaneous fraternal and paternal recurrences, the proposed architecture departs from previous methods in that it explicitly models tree topology. Each node in the network has modules that predict, based on the cell state, whether the node is terminal, both in terms of depth and width. Decoupling these decisions from the label prediction allows for a more concise formulation, which does not require artificial tokens to be added to the tree to simulate branching.

We test this novel architecture in various encoder-decoder frameworks, coupling it with sequential encoders to predict tree structure from encoded vector representations of sequences. The experimental results show the effectiveness of this approach at recovering latent structure in flattened string representations of trees (Section 4.1) and at mapping from natural language descriptions of simple programs to abstract syntax trees (Section 4.2). In addition, we show that even for sequence-to-sequence tasks such as machine translation, the proposed architecture exhibits desirable properties, such as invariance to structural changes and coarse-to-fine generation (Section 4.3).

To summarize, the main contributions of this paper are as follows:
- We propose a novel neural network architecture specifically tailored to tree-structured decoding, which maintains separate depth and width recurrent states and combines them to obtain hidden states for every node in the tree.
- We equip this novel architecture with a mechanism to predict tree topology explicitly (as opposed to implicitly, by adding nodes with special tokens).
- We show experimentally that the proposed method is capable of recovering trees from encoded representations and that it outperforms state-of-the-art methods in a task consisting of mapping sentences to simple functional programs.

2 RELATED WORK

Recursive Neural Networks.  Recursive neural networks (Socher & Lin, 2011; Socher et al., 2012) were proposed to model data with hierarchical structures, such as parsed scenes and natural language sentences. Though they have been most successfully applied to encoding objects when their tree-structured representation is given (Socher et al., 2013), the original formulation by Socher & Lin (2011) also considered using them to predict the structure (edges), albeit for the case where nodes are given. Thus, besides their limited applicability due to their assumption of binary trees, recursive neural networks are not useful for fully generating trees from scratch.
Tree-structured encoders.  The Tree-LSTM of Tai et al. (2015) is a generalization of long short-term memory networks (Hochreiter & Schmidhuber, 1997) to tree-structured inputs. Their model constructs a sentence representation bottom-up, obtaining at every step the representation of a node in the tree from those of its children. In this sense, this model can be seen as a generalization of recursive neural networks to trees with degree potentially greater than two, with the additional long-range dependency modeling provided by LSTMs. They propose two methods for aggregating the states of the children, depending on the type of underlying tree: N-ary trees or trees with unknown and potentially unbounded branching factor. Tree-LSTMs have shown promising results for compositional encoding of structured data, though by construction they cannot be used for decoding, since they operate on a given tree structure.

Tree-structured decoders.  Proposed only very recently, most tree-structured decoders rely on stacked or intertwined RNNs, and use heuristic methods for topological decisions during generation. Closest to our method is the Top-down Tree LSTM of Zhang et al. (2016), which generates a tree from an encoded representation. Their method relies on 4 independent LSTMs, which act in alternation (as opposed to simultaneously in our approach), yielding essentially a standard LSTM that changes the weights it uses based on the position of the current node. In addition, their method provides children with asymmetric parent input: "younger" children receive information from the parent state only through the previous sibling's state. Though most of their experiments focus on the case where the nodes are given, they mention how to use their method for full prediction by introducing additional binary classifiers which predict which of the four LSTMs is to be used. These classifiers are trained in isolation after the main architecture has been trained. Contrary to this approach, our method can be trained end-to-end in only one pass, has a simpler formulation and explicitly incorporates topological prediction as part of the functioning of each neuron.

A similar approach is proposed by Dong & Lapata (2016). They propose SEQ2TREE, an encoder-decoder architecture that maps sentences to tree structures. For the decoder, they rely on hierarchical use of an LSTM, similar to Tai et al. (2015), but in the opposite direction: working top-down from the root of the tree. To decide when to change levels in the hierarchy, they augment the training trees with nonterminal nodes labeled with a special token <n>, which when generated during decoding trigger the branching out into a lower level in the tree. Similar to our method, they feed nodes with hidden representations of their parent and sibling, but they do so by concatenating both states and running them through a single recurrent unit, as opposed to our method, where these two sources of information are handled separately. A further difference is that our approach does not require artificial nodes with special tokens to be added to the tree, resulting in smaller trees.

Hierarchical Neural Networks for Parsing.  Neural networks have also been recently introduced to the problem of natural language parsing (Chen & Manning, 2014; Kiperwasser & Goldberg, 2016). In this problem, the task is to predict a parse tree over a given sentence.
For this, Kiperwasser & Goldberg (2016) use recurrent neural networks as a building block, and compose them recursively to obtain a tree-structured encoder. Starting from the leaves (words), they predict a parse tree with a projective bottom-up strategy, which sequentially updates the encoded vector representation of the tree and uses it to guide edge-attaching decisions. Though conceptually similar to our approach, their method relies on having access to the nodes of the tree (words) and only predicts its topology, so, similar to recursive neural networks, it cannot be used for fully generative decoding.

3 DOUBLY RECURRENT NEURAL NETWORKS

Generating a tree-structured object from scratch using only an encoded representation poses several design challenges. First, one must decide in which order to generate the tree. If the nodes on the decoder side were given (such as in parsing), it would be possible to generate a tree bottom-up from these nodes (e.g. as Kiperwasser & Goldberg 2016 do). In the setting we are interested in, however, not even the nodes are known when decoding, so the natural choice is a top-down decoder, which starting from an encoded representation generates the root of the tree and then recursively generates the children (if any) of every node.

The second challenge arises from the asymmetric hierarchical nature of trees. Unlike the sequence-to-sequence setting where encoding and decoding can be achieved with analogous procedures, when dealing with tree-structured data these two involve significantly different operations. For example, an encoder that processes a tree bottom-up using information of a node's children to obtain its representation cannot be simply reversed and used as a decoder, since when generating the tree top-down, nodes have to be generated before their children are.

An additional design constraint comes from deciding what information to feed to each node. For sequences, the choice is obvious: a node should receive information from the node preceding or succeeding it (or both), i.e. there is a one-dimensional flow of information. In trees, there is an evident flow of information from parent to children (or vice-versa), but when generating nodes in a top-down order it seems unnatural to generate children in isolation: the label of one of them will likely influence what the states of the other children might be. For example, in the case of parse trees, generating a verb will reduce the chances of other verbs occurring in that branch.

With these considerations in mind, we propose an architecture tailored to tree decoding from scratch: top-down, recursive and doubly-recurrent, i.e. where both the ancestral (parent-to-children) and fraternal (sibling-to-sibling) flows of information are modeled with recurrent modules. Thus, the building block of a doubly recurrent neural network (DRNN) is a cell with two types of input states, one coming from its parent, updated and passed on to its descendants, and another one received from its previous sibling,[1] updated and passed on to the next one. We model the flow of information in the two directions with separate recurrent modules.
Once the hidden depth and width states have been updated with these observed labels, they are combined to obtain a predictive hidden state:

    h^{(pred)}_i = tanh( U^f h^f_i + U^a h^a_i )    (3)

where U^f ∈ R^{n×D_f} and U^a ∈ R^{n×D_a} are learnable parameters. This state contains combined information of the node's neighborhood in the tree, and is used to predict a label for it. In its simplest form, the network could compute the output of node i by sampling from the distribution

    o_i = softmax( W h^{(pred)}_i )    (4)

In the next section, we propose a slight modification to (4) whereby topological information is included in the computation of cell outputs. After the node's output symbol x_i has been obtained by sampling from o_i, the cell passes h^a_i to all its children and h^f_i to the next sibling (if any), enabling them to apply Eqs. (1) and (2) to realize their states. This procedure continues recursively, until termination conditions (explained in the next section) cause it to halt.

3.1 TOPOLOGICAL PREDICTION

As mentioned before, the central issue with free-form tree construction is to predict the topology of the tree. When constructing the tree top-down, for each node we need to decide: (i) whether it is a leaf node (and thus it should not produce offspring) and (ii) whether there should be additional siblings produced after it. Answering these two questions for every node allows us to construct a tree from scratch and eventually stop growing it.

Sequence decoders typically rely on special tokens to terminate generation (Sutskever et al., 2014). The token is added to the vocabulary and treated as a regular word. During training, the examples are padded with this token at the end of the sequence, and during testing, generation of this token signals termination. These ideas have been adopted by most tree decoders (Dong & Lapata, 2016). There are two important downsides of using a padding strategy for topology prediction in trees. First, the size of the tree can grow considerably. While in the sequence framework only one stopping token is needed, a tree with n nodes might need up to O(n) padding nodes to be added. This can have a significant effect on training speed. The second reason is that a single stopping token selected competitively with other tokens requires one to continually update the associated parameters in response to any changes in the distribution over ordinary tokens so as to maintain topological control.

Based on these observations, we propose an alternative approach to stopping, in which topological decisions are made explicitly (as opposed to implicitly, with stopping tokens). For this, we use the predictive hidden state of the node h^{(pred)} with a projection and sigmoid activation:

    p^a_i = σ( u^a · h^{(pred)}_i )    (5)

The value p^a_i ∈ [0, 1] is interpreted as the probability that node i has children. Analogously, we can obtain a probability of stopping fraternal branch growth after the current node as follows:

    p^f_i = σ( u^f · h^{(pred)}_i )    (6)

¹Unlike the "ancestral" line, the order within sibling nodes is ambiguous. While in abstract trees it is assumed that there is no such ordering, we assume that for the structures we are interested in learning there is always one: either chronological (the temporal order in which the nodes were generated) or latent (e.g.
the grammatical order of the words in a parse tree with respect to their sentence representation).
²We assume throughout that these values are given as class indicators x_i ∈ {1, ..., N}.

Figure 1: Left: A cell of the doubly-recurrent neural network corresponding to node i with parent p and sibling s. Right: Structure-unrolled DRNN network in an encoder-decoder setting. The nodes are labeled in the order in which they are generated. Solid (dashed) lines indicate ancestral (fraternal) connections. Crossed arrows indicate production halted by the topology modules.

Note that these stopping strategies depart from the usual padding methods in a fundamental property: the decision to stop is made before instead of in conjunction with the label prediction. The rationale behind this is that the label of a node will likely be influenced not only by its context, but also by the type of node (terminal or non-terminal) where it is to be assigned. This is the case in language, for example, where syntactic constraints restrict the type of words that can be found in terminal nodes. For this purpose, we include the topological information as inputs to the label prediction layer. Thus, (4) takes the form

    o_i = softmax( W h^{(pred)}_i + α_i v^a + φ_i v^f )    (7)

where α_i, φ_i ∈ {0, 1} are binary variables indicating the topological decisions and v^a, v^f are learnable offset parameters. During training, we use ground-truth values in (7), i.e. α_i = 1 if node i has children and φ_i = 1 if it has a succeeding sibling. During testing, these values are obtained from p^a, p^f by sampling or beam search. A schematic representation of the internal structure of a DRNN cell and the flow of information in a tree are shown in Figure 1.

3.2 TRAINING DRNNS

We train DRNNs with (reverse) back-propagation through structure (BPTS) (Goller & Kuechler, 1996). In the forward pass, node outputs are computed in a top-down fashion on the structure-unrolled version of the network, following the natural³ dependencies of the tree. We obtain the error signal at the node level from the two types of prediction: label and topology. For the former, we compute the cross-entropy loss of o_i with respect to the true label of the node x_i. For the topological values p^a_i and p^f_i we compute the binary cross-entropy loss with respect to the gold topological indicators α_i, φ_i ∈ {0, 1}. In the backward pass, we proceed in the reverse (bottom-up) direction, feeding into every node the gradients received from child and sibling nodes and computing internally gradients with respect to both topology and label prediction. Further details on the backpropagation flow are provided in the Appendix.

Note that the way BPTS is computed implies an underlying decoupled loss function

    L(x̂) = Σ_{i∈V} [ L_label(x_i, x̂_i) + L_topo(p_i, p̂_i) ]    (8)

The decoupled nature of this loss allows us to weigh these two objectives differently, to emphasize either topology or label prediction accuracy. Investigating the effect of this is left for future work.

³The traversal is always breadth-first starting from the root, but the order in which sibling nodes are visited might depend on the specific problem. If the nodes of the tree have an underlying order (such as in dependency parse trees), it is usually desirable to preserve this order.
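To make the cell mechanics of Section 3 concrete, the following minimal NumPy sketch traces one DRNN step through Eqs. (1)-(7). It is our illustration, not the authors' released code: plain (vanilla) RNN modules stand in for g^a and g^f, and all weight names in the parameter dictionary are assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def drnn_cell(p, h_a_parent, x_parent, h_f_sibling, x_sibling, alpha, phi):
    """One doubly-recurrent cell; p is a dict of weight matrices/vectors."""
    # Eqs. (1)-(2): ancestral and fraternal recurrences (vanilla RNN steps).
    h_a = np.tanh(p['Wa'] @ h_a_parent + p['Va'] @ x_parent)
    h_f = np.tanh(p['Wf'] @ h_f_sibling + p['Vf'] @ x_sibling)
    # Eq. (3): predictive state combining both information flows.
    h_pred = np.tanh(p['Uf'] @ h_f + p['Ua'] @ h_a)
    # Eqs. (5)-(6): explicit topological decisions.
    p_a = sigmoid(p['ua'] @ h_pred)  # probability that the node has children
    p_f = sigmoid(p['uf'] @ h_pred)  # probability of a succeeding sibling
    # Eq. (7): label distribution, offset by the topological indicators.
    o = softmax(p['W'] @ h_pred + alpha * p['va'] + phi * p['vf'])
    return h_a, h_f, o, p_a, p_f

During decoding, h_a would be handed to each of the node's children and h_f to its next sibling; at training time alpha and phi are the gold indicators, while at test time they are obtained from p_a and p_f by sampling or beam search.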
Figure 2: Trees generated by the DRNN decoder trained on subsets of size N of the synthetic dataset, for a test example with description "ROOT B W F J V".

As is common with sequence generation, during training we perform teacher forcing: after predicting the label of a node and its corresponding loss, we replace it with its gold value, so that children and siblings receive the correct label for that node. Analogously, we obtain the probabilities p^a and p^f, compute their loss, and replace them with the ground-truth variables α_i, φ_i for all downstream computations. Addressing this exposure bias by mixing ground-truth labels with model predictions during training (Venkatraman et al., 2015) or by incremental hybrid losses (Ranzato et al., 2016) is left as an avenue for future work.

4 EXPERIMENTS

4.1 SYNTHETIC TREE RECOVERY

In our first set of experiments we evaluate the effectiveness of the proposed architecture to recover trees from flattened string representations. For this, we first generate a toy dataset consisting of simple labeled trees. To isolate the effect of label content from topological prediction, we take a small vocabulary consisting of the 26 letters of the English alphabet. We generate trees in a top-down fashion, conditioning the label and topology of every node on the state of its ancestors and siblings. For simplicity, we use a Markovian assumption on these dependencies, modeling the probability of a node's label as depending only on the label of its parent and the last sibling generated before it (if any). Conditioned on these two inputs, we model the label of the node as coming from a multinomial distribution over the alphabet with a Dirichlet prior. To generate the topology of the tree, we model the probability of a node having children and a next sibling as depending only on its label and the depth of the tree. For each tree we generate a string representation by traversing it in breadth-first preorder, starting from the root. The labels of the nodes are concatenated into a string in the order in which they were visited, resulting in a string of |T| symbols. We create a dataset of 5,000 trees with this procedure, and split it randomly into train, validation and test sets (with a 80%, 10%, 10% split). Further details on the construction of this dataset are provided in the Appendix.

The task consists of learning a mapping from strings to trees, and using this learned mapping to recover the tree structure of the test set examples, given only their flattened representation. To do so, we use an encoder-decoder framework, where the strings are mapped to a fixed-size vector representation using a recurrent neural network. For the decoder, we use a DRNN with LSTM modules, which given the encoded representation generates a tree. We choose hyperparameters with cross-validation. Full training details are provided in the Appendix.

Measuring performance only in terms of exact recovery would likely yield near-zero accuracies for most trees. Instead, we opt for a finer-grained metric of tree similarity that gives partial credit for correctly predicted subtrees. Treating tree generation as a retrieval problem, we evaluate the quality of the predicted tree in terms of the precision and recall of recovering nodes and edges present in the gold tree. Thus, we penalize both missing and superfluous components.
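One way to read this metric (our interpretation, not the authors' evaluation script) is as multiset precision/recall over node labels and over parent-child label pairs; a minimal sketch:

from collections import Counter

def tree_items(tree):
    """tree: dict node_id -> (label, parent_id or None).
    Returns multisets of node labels and (parent label, child label) edges."""
    nodes = Counter(label for label, _ in tree.values())
    edges = Counter((tree[pid][0], label)
                    for label, pid in tree.values() if pid is not None)
    return nodes, edges

def precision_recall_f1(pred, gold):
    """pred, gold: Counters; the overlap rewards correctly recovered
    components while penalizing missing and superfluous ones."""
    overlap = sum((pred & gold).values())
    p = overlap / max(sum(pred.values()), 1)
    r = overlap / max(sum(gold.values()), 1)
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

For example, a predicted tree {0: ('ROOT', None), 1: ('B', 0)} compared against a gold tree with an additional child of 'B' would score full precision but reduced recall on both nodes and edges.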
As a baseline, we induce a probabilistic context-free grammar (PCFG) on the full training data and use it to parse the test sentences. Note that unlike the DRNN, this parser has direct access to the sentence representation, and thus its task is only to infer the tree structure on top of it, so this is indeed a strong baseline.

Figure 3 shows the results on the test set. Training on the full data yields node and edge retrieval F1-scores of 75% and 71%, respectively, the latter considerably above the baseline.⁴ This 4% gap can be explained by correct nodes being generated in the wrong part of the tree, as in the example in Figure 2. The second plot in Figure 3 shows that although small trees are recovered more accurately, precision decays slowly with tree size, with depth accounting for the largest effect (Figure 4).

⁴Since the PCFG parser has access to the nodes by construction, node accuracy for the baseline method is irrelevant and thus omitted from the analysis.

Figure 3: Left: F1-score for models trained on randomly sampled subsets of varying size, averaged over 5 repetitions. Right: Node (first column) and edge (second) precision as a function of tree size.

Figure 4: Node and edge precision as a function of tree depth (left figure) and width (right).

4.2 MAPPING SENTENCES TO FUNCTIONAL PROGRAMS

Tree structures arise naturally in the context of programs. A typical compiler takes human-readable source code (expressed as sequences of characters) and transforms it into an executable abstract syntax tree (AST). Source code, however, is already semi-structured. Mapping natural language sentences directly into executable programs is an open problem, which has received considerable interest in the natural language processing community (Kate et al., 2005; Branavan et al., 2009).

The IFTTT dataset (Quirk et al., 2015) is a simple testbed for language-to-program mapping. It consists of if-this-then-that programs (called recipes) crawled from the IFTTT website,⁵ paired with natural language descriptions of their purpose. The recipes consist of a trigger and an action, each defined in terms of a channel (e.g. "Facebook"), a function (e.g. "Post a status update") and potentially arguments and parameters. An example of a recipe and its description are shown in Figure 5. The data is user-generated and extremely noisy, which makes the task significantly challenging.

⁵www.ifttt.com

Figure 5: Example recipe from the IFTTT dataset. The description (above) is a user-generated natural language explanation of the if-this-then-that program (below). The depicted recipe, "Save photos you're tagged in on Facebook to Dropbox", is the tree: IF (TRIGGER) Facebook, "You are tagged in a photo"; THEN (ACTION) Dropbox, "Add file from URL", with arguments Filename, File URL and Dropbox Folder Path. The levels correspond to (a) channels, (b) functions, (c) arguments and (d) parameters.

Table 1: Results on the IFTTT task. Left: non-English and unintelligible examples removed (2,262 recipes).
Right: examples for which at least 3+ humans agree with gold (758 recipes).

Left (non-English and unintelligible removed):

Method       Channel   +Func    F1
retrieval      36.8     25.4   49.0
phrasal        27.8     16.4   39.9
sync           26.7     15.4   37.6
classifier     64.8     47.2   56.5
posclass       67.2     50.4   57.7
SEQ2SEQ        68.8     50.5   60.3
SEQ2TREE       69.6     51.4   60.4
GRU-DRNN       70.1     51.2   62.7
LSTM-DRNN      74.9     54.3   65.2

Right (at least 3 humans agree with gold):

Method       Channel   +Func    F1
retrieval      43.3     32.3   56.2
phrasal        37.2     23.5   45.5
sync           36.5     23.5   45.5
classifier     79.3     66.2   65.0
posclass       81.4     71.0   66.5
SEQ2SEQ        87.8     75.2   73.7
SEQ2TREE       89.7     78.4   74.2
GRU-DRNN       89.9     77.6   74.1
LSTM-DRNN      90.1     78.2   77.4

We approach this task using an encoder-decoder framework. We use a standard RNN encoder, either an LSTM or a GRU (Cho et al., 2014), to map the sentence to a vector representation, and we use a DRNN decoder to generate the AST representation of the recipe. We use the original data split, which consists of 77,495 training, 5,171 development and 4,294 test examples. For evaluation, we use the same metrics as Quirk et al. (2015), who note that computing exact accuracy on such a noisy dataset is problematic, and instead propose to evaluate the generated AST in terms of F1-score on the set of recovered productions. In addition, they compute accuracy at the channel level (i.e. when both channels are predicted correctly) and at the function level (both channels and both functions predicted correctly).

We compare our methods against the various extraction and phrase-based machine translation baselines of Quirk et al. (2015) and the methods of Dong & Lapata (2016): SEQ2SEQ, a sequence-to-sequence model trained on flattened representations of the AST, and SEQ2TREE, a token-driven hierarchical RNN. Following these two works, we report results on two noise-filtered subsets of the data: one with all non-English and unintelligible recipes removed, and the other with recipes for which at least three humans agreed with the gold AST. The results are shown in Table 1. In both subsets, DRNNs perform on par with or above previous approaches, with LSTM-DRNN achieving significantly better results. The improvement is particularly evident in terms of F1-score, which is the only metric used by previous approaches that measures global tree reconstruction accuracy. To better understand the quality of the predicted trees beyond the function level (i.e. (b) in Figure 5), we computed node accuracy on the arguments level. Our best performing model, LSTM-DRNN, achieves a Macro-F1 score of 51% (0.71 precision, 0.40 recall) over argument nodes, which shows that the model is reasonably successful at predicting structure even beyond depth three. The best performing alternative model, SEQ2TREE, achieves a corresponding F1 score of 46%.

4.3 MACHINE TRANSLATION

In our last set of experiments, we offer a qualitative evaluation of DRNNs in the context of machine translation. Obtaining state-of-the-art results in machine translation requires highly optimized architectures and large parallel corpora. This is not our goal. Instead, we investigate whether decoding with structure can bring benefits to a task traditionally approached as a sequence-to-sequence problem. For this reason, we consider a setting with limited data: a subset of the WMT14 dataset consisting of about 50K English-French sentence pairs (see the Appendix for details), along with dependency parses of the target (English) side.

We train a sequence-to-tree model using an LSTM encoder and a DRNN decoder as in the previous experiments.
A slight modification here is that we distinguish left and right children in the tree, using two symmetric width modules g^{f_L}, g^{f_R} that produce children from the parent outwards. With this, children are lexically ordered, and therefore trees can be easily and unambiguously projected back into sentences. We compare our model against a sequence-to-sequence architecture of similar complexity (in terms of number of parameters) trained on the same data using the optimized OpenNMT library (Klein et al., 2017). For decoding, we use a simple best-of-k sampling scheme for our model, and beam search for the SEQ2SEQ models.

Figure 6: Likelihood change under target structural perturbation.

Table 2: Translations at different resolutions (size constraints imposed during decoding) for two example sentences.

Example 1. Source: "produit différentes réponses qui changent avec le temps selon nos expériences et nos relations"
  SEQ2SEQ:  l = 1: "a";  l = 4: "with the different actions";  l = 8: "with the different actions who change with"
  DRNN:     d = 1: "answers";  d = 2: "different answers change";  d = 3: "product the different answers change ."

Example 2. Source: "je ne sais jamais quoi dire dans ces cas là"
  SEQ2SEQ:  l = 1: "I";  l = 4: "I do";  l = 8: "I do not know what to say"
  DRNN:     d = 1: "know";  d = 2: "but i do not know";  d = 3: "but i do not know to say"

First, we analyze the quality of translations as a function of the maximum allowed target sentence "size". The notion of size for a sequence decoder is simply the length, while for DRNNs we use depth instead, so as to tap into the inherent granularity at which sentences can be generated from this architecture. Two such examples are shown in Table 2. Since the DRNN topology has been trained to mimic dependency parses top-down, the decoder tends to first generate the fundamental aspects of the sentence (verb, nouns), leaving less important refinements for deeper structures down in the tree. The sequence decoder, in contrast, is trained for left-to-right sequential generation, and thus produces less informative translations under max-length restrictions.

In our second experiment we investigate the decoders' ability to entertain natural paraphrases of sentences. If we keep the semantic content of a sentence fixed and only change its grammatical structure, it is desirable that the decoder assign nearly the same likelihood to the new sentence. One way to assess this invariance is to compare the relative likelihood that the model assigns to the gold sentence and to its paraphrase. To test this, we take 50 examples from the WMT test split and manually generate paraphrases with various types of structural alterations (see details in the Appendix). For each type of decoder, we measure the relative change (in absolute value) of the log-likelihood resulting from the perturbation. All the models we compare have similar standard deviations (40±20) of log-likelihood scores over these examples, so the relative changes in the log-likelihood remain directly comparable. For each architecture we train two versions of different sizes, where the sizes are balanced in terms of the number of parameters across the architectures.
The results in Figure 6 show that DRNNs exhibit a significantly lower log-likelihood change, suggesting that, as language models, they are more robust to natural structural variation than their SEQ2SEQ counterparts.

5 DISCUSSION AND FUTURE WORK

We have presented doubly recurrent neural networks, a natural extension of (sequential) recurrent architectures to tree-structured objects. This architecture models the information flow in a tree with two separate recurrent modules: one carrying ancestral information (received from the parent and passed on to offspring) and the other carrying fraternal information (passed from sibling to sibling). The topology of the tree is modeled explicitly and separately from the label prediction, with modules that, given the state of a node, predict whether it has children and siblings.

The experimental results show that the proposed method is able to predict reasonable tree structures from encoded vector representations. Despite the simple structure of the IFTTT trees, the results on that task suggest a promising direction of using DRNNs for generating programs or executable queries from natural language. On the other hand, the results on the toy machine translation task show that even when used to generate sequences, DRNNs exhibit desirable properties, such as invariance to structural modifications and the ability to perform coarse-to-fine decoding. In order to truly use this architecture for machine translation, the approach must be scaled by resorting to batch processing on GPUs. This is possible since forward and backward propagation are computed sequentially along tree traversal paths, so that inputs and hidden states of parents and siblings can be grouped into tensors and operated on in batch. We leave this as an avenue for future work.

ACKNOWLEDGEMENTS

DA-M acknowledges support from a CONACYT fellowship. The authors would like to thank the anonymous reviewers for their constructive comments.

REFERENCES

S.R.K. Branavan, Harr Chen, Luke S. Zettlemoyer, and Regina Barzilay. Reinforcement learning for mapping instructions to actions. In Proc. Jt. Conf. 47th Annu. Meet. ACL / 4th Int. Jt. Conf. Nat. Lang. Process. AFNLP, volume 1, pp. 82–90, 2009.

Danqi Chen and Christopher D. Manning. A Fast and Accurate Dependency Parser using Neural Networks. In Proc. 2014 Conf. Empir. Methods Nat. Lang. Process., pp. 740–750, 2014. URL https://cs.stanford.edu/~danqi/papers/emnlp2014.pdf.

Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the Properties of Neural Machine Translation: Encoder-Decoder Approaches. In Proc. SSST-8, Eighth Work. Syntax Semant. Struct. Stat. Transl., pp. 103–111, 2014. URL http://arxiv.org/pdf/1409.1259v2.pdf.

Li Dong and Mirella Lapata. Language to Logical Form with Neural Attention. In ACL, pp. 33–43, 2016. URL http://arxiv.org/abs/1601.01280.

Gottlob Frege. Über Sinn und Bedeutung. Zeitschrift für Philos. und Philos. Krit., (1):25–50, 1892.

Christoph Goller and Andreas Kuechler. Learning task-dependent distributed representations by backpropagation through structure. In Int. Conf. Neural Networks, pp. 347–352, 1996.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Comput., 9(8):1–32, 1997.

R.J. Kate, Y.W. Wong, and R.J. Mooney. Learning to transform natural to formal languages. In Proc. Natl. Conf. Artif.
Intell., volume 20, pp. 1062–1068, 2005. URL http://www.aaai.org/Library/AAAI/2005/aaai05-168.php.

Diederik Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. Int. Conf. Learn. Represent., pp. 1–13, 2014. URL http://arxiv.org/abs/1412.6980.

Eliyahu Kiperwasser and Yoav Goldberg. Easy-First Dependency Parsing with Hierarchical Tree LSTMs. TACL, 2016. URL https://www.transacl.org/ojs/index.php/tacl/article/viewFile/798/208.

G. Klein, Y. Kim, Y. Deng, J. Senellart, and A. M. Rush. OpenNMT: Open-Source Toolkit for Neural Machine Translation. ArXiv e-prints, 2017.

Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pp. 55–60, 2014. URL http://www.aclweb.org/anthology/P/P14/P14-5010.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global Vectors for Word Representation. In Proc. 2014 Conf. Empir. Methods Nat. Lang. Process., 2014.

Chris Quirk, Raymond Mooney, and Michel Galley. Language to Code: Learning Semantic Parsers for If-This-Then-That Recipes. ACL-IJCNLP, pp. 878–888, 2015. URL http://www.aclweb.org/anthology/P15-1085.

Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence Level Training with Recurrent Neural Networks. In ICLR, pp. 1–15, 2016. URL http://arxiv.org/abs/1511.06732.

R. Socher and C.C. Lin. Parsing natural scenes and natural language with recursive neural networks. In EMNLP, pp. 129–136, 2011.

Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. Semantic Compositionality through Recursive Matrix-Vector Spaces. In EMNLP, pp. 1201–1211, 2012.

Richard Socher, Alex Perelygin, and J.Y. Wu. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. EMNLP, pp. 1631–1642, 2013.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014. URL http://arxiv.org/abs/1409.3215.

Kai Sheng Tai, Richard Socher, and Christopher D. Manning. Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks. In Proc. 53rd Annu. Meet. Assoc. Comput. Linguist. / 7th Int. Jt. Conf. Nat. Lang. Process., pp. 1556–1566, 2015. URL http://arxiv.org/abs/1503.0075.

Arun Venkatraman, Martial Hebert, and J. Andrew Bagnell. Improving Multi-step Prediction of Learned Time Series Models. In Twenty-Ninth AAAI Conf. Artif. Intell., pp. 3024–3030, 2015.

Oriol Vinyals and Quoc V. Le. A Neural Conversational Model. arXiv, 37, 2015.

Ronald J. Williams and David Zipser. Gradient-based learning algorithms for recurrent networks and their computational complexity. In Back-propagation: Theory, Archit. Appl., pp. 433–486, 1995.

Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent Neural Network Regularization. ICLR, pp. 1–8, 2015. URL http://arxiv.org/abs/1409.2329.

Xingxing Zhang, Liang Lu, and Mirella Lapata. Top-down Tree Long Short-Term Memory Networks. In NAACL-HLT 2016, pp.
310–320, 2016.

A VARIATIONS ON TOPOLOGY PREDICTION

Besides the topology prediction approach presented in Section 3.1, we experimented with two additional variations of the proposed doubly-recurrent neuron: (i) using tokens to trigger both depth and width termination (i.e. implicit topology prediction), and (ii) using tokens for the width-stopping decision, but predicting depth termination explicitly (single topology prediction). Recall that in the model proposed in Section 3.1 both decisions are explicit (double topology prediction). The neurons in each of these alternative formulations are depicted in Figure 7. In order to train these two alternative models, we add special stopping tokens to the vocabulary, and we pad the training trees with additional nodes labeled with this token. Besides requiring larger trees and resulting in slower training, we empirically observed alternatives (i) and (ii) to result in worse performance. We hypothesize that this has to do with the fact that when using token-based stopping, topological and label prediction decisions are confounded, which results in less efficient learning.

Figure 7: A single unit in each of the three alternative versions of the doubly-recurrent neural network, for node i with parent p and sibling s. Left: no explicit topology prediction, Middle: single (ancestral) topology prediction, Right: double (ancestral and fraternal) topology prediction. The top (left) incoming arrows represent the input and state received from the parent node (previous node, respectively).

B TRAINING DETAILS

B.1 BACKPROPAGATION WITH DRNNS

During training, we do the forward pass over the trees in breadth-first preorder, feeding into every node an ancestral and a fraternal state. For computational efficiency, before passing on the ancestral state to the offspring, we update it through the RNN using the current node's label, so as to avoid repeating this step for every child node. After the forward pass is complete, we compute the label (cross-entropy) and topological (binary cross-entropy) losses for every node. In the backward pass, we compute, in this order:

1. The gradient of the current node's label prediction loss with respect to the softmax layer parameters W, v^a, v^f: ∇L(x_i, x̂_i).
2. The gradients of the topological prediction variable losses with respect to the sigmoid layer parameters: ∇L(p^a_i, t^a_i) and ∇L(p^f_i, t^f_i).
3. The gradient of the predictive state layer parameters with respect to h^{(pred)}.
4. The gradient of the predicted ancestral and fraternal hidden states with respect to g^f's and g^a's parameters.

The gradients of the input ancestral and fraternal hidden states are then passed on to the previous sibling and parent. When nodes have more than one child, we combine gradients from multiple children by averaging them. This procedure is repeated until the root node is reached, after which a single (ancestral state) gradient is passed to the encoder. A schematic of this traversal schedule is sketched below.
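The sketch below illustrates only the scheduling just described (forward in breadth-first preorder, backward in the reverse order, averaging over children); it is our simplification with placeholder gradients, not the actual training code.

import numpy as np
from collections import deque

class Node:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

def bfs_order(root):
    """Breadth-first preorder used for the forward pass."""
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        order.append(node)
        queue.extend(node.children)
    return order

def backward_pass(root, dim=50):
    """Reverse traversal: each node receives the averaged ancestral-state
    gradients of its children before computing its own contributions."""
    grad_up = {}
    for node in reversed(bfs_order(root)):
        g = np.zeros(dim)  # placeholder for this node's local loss gradients
        child_grads = [grad_up[c] for c in node.children]
        if child_grads:
            g += np.mean(child_grads, axis=0)  # average over multiple children
        grad_up[node] = g  # handed to this node's parent
    return grad_up[root]   # single ancestral gradient passed to the encoder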
B.2 MODEL SPECIFICATION AND TRAINING PARAMETERS

The best parameters for all tasks are chosen by performance on the validation sets. We perform early stopping based on the validation loss. For the IFTTT task, we initialize word embeddings with pretrained GloVe vectors (Pennington et al., 2014). For both tasks we clip gradients when the absolute value of any element exceeds 5. We regularize with a small penalty ρ on the l2-norm of the parameters. We train all methods with ADAM (Kingma & Ba, 2014), with the initial learning rate chosen by cross-validation. The parameter configurations that yielded the best results and were used for the final models are shown in Table 3. Details about the four models used for the machine translation task are shown in Table 4.

Table 3: Hyperparameter choice for DRNNs in the synthetic and IFTTT tasks.

Task        Encoder   Dim   Batch   Learning Rate   Regularization ρ
synthetic   LSTM       50     20        0.05            1×10⁻⁵
IFTTT       GRU       150     35        0.06            1×10⁻⁴
IFTTT       LSTM      150     35        0.05            5×10⁻⁴

Table 4: Models used in the machine translation task.

Model             Encoder   Decoder                   Dim   RNN Layers   Batch
SEQ2SEQ (Small)   LSTM      LSTM                      150        1          64
SEQ2SEQ (Large)   LSTM      LSTM                      300        3          64
DRNN (Small)      LSTM      DRNN-GRU (Left-Right)     150        1          32
DRNN (Large)      LSTM      DRNN-GRU (Left-Right)     300        1          32

C DATASET DETAILS

C.1 SYNTHETIC TREE DATASET GENERATION

We generate trees in a top-down fashion, conditioning the label and topology of every node on the state of its ancestors and siblings. For simplicity, we use a Markovian assumption on these dependencies, modeling the probability of a node's label as depending only on the label of its parent p(i) and the last sibling s(i) generated before it (if any). Conditioned on these two inputs, we model the label of the node as coming from a multinomial distribution over the alphabet:

    P(w_i | T) = P(w | w_{p(i)}, w_{s(i)}) ~ Multi(θ_{w_{p(i)}, w_{s(i)}})    (9)

where the class probabilities θ_{w_{p(i)}, w_{s(i)}} are drawn from a Dirichlet prior with parameter α_v. On the other hand, we denote by b^a_i the binary variable indicating whether node i has descendants, and by b^f_i that indicating whether it has an ensuing sibling. We model these variables as depending only on the label of the current node and its position in the tree:

    P(b^a_i | T) = P(b^a_i | w_i, D_i) ~ Bernoulli( p^a_{w_i} · g_a(D_i) )
    P(b^f_i | T) = P(b^f_i | w_i, W_i) ~ Bernoulli( p^f_{w_i} · g_f(W_i) )

where D_i is the depth of node i and W_i its width, defined as its position among the children of its parent p(i). Intuitively, we want P(b^a_i = 1 | T) to decrease as we go deeper and further along the branches of the tree, so as to control its growth. Thus, we model g_a and g_f as decreasing functions with geometric decay, namely g_a(D) = (γ_a)^D and g_f(W) = (γ_f)^W, with γ_a, γ_f ∈ (0, 1). For the label-conditioned branching probabilities P(b^a_i | w_i) and P(b^f_i | w_i), we use Bernoulli distributions with probabilities drawn from beta priors with parameters (α_a, β_a) and (α_f, β_f), respectively.

In summary, we use the following generative procedure to grow the trees (a minimal sampler implementing it is sketched after this list):

1. For each w_i ∈ V, draw p^a_{w_i} ~ Beta(α_a, β_a) and p^f_{w_i} ~ Beta(α_f, β_f).
2. For each pair (w_i, w_j), draw θ_{w_i, w_j} ~ Dir(α_v).
3. While there is an unlabeled non-terminal node i, do:
   - Sample a label for i from w ~ P(w | w_{p(i)}, w_{s(i)}) = Multi(θ_{w_{p(i)}, w_{s(i)}}).
   - Draw b^a ~ P(b^a | w, D) = Bernoulli(γ_a^D · p^a_w), where D is the current depth. If b^a = 1, generate a node k, set p(k) = i, and add it to the queue.
   - Draw b^f ~ P(b^f | w, W) = Bernoulli(γ_f^W · p^f_w), where W is the current width. If b^f = 1, generate a node k, set s(k) = i, and add it to the queue.

Note that this generative process does create a dependence between the topology and content of the trees (since the variables b^a and b^f depend on the content of the tree via their dependence on the label of their corresponding node). However, the actual process by which labels and topological decisions are generated relies on separate mechanisms. This is a natural assumption which is reasonable to expect in practice.
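The following NumPy sampler is consistent with the procedure above; it is an illustrative reconstruction, not the authors' generation script (the lazily filled Dirichlet table and the '*' placeholder for a missing parent or sibling are our assumptions):

import numpy as np
import string

rng = np.random.default_rng(0)
V = list(string.ascii_uppercase)             # the 26-letter vocabulary
NONE = '*'                                   # stands in for "no parent/sibling"
alpha_a, beta_a = 0.25, 1.0                  # ancestral Beta prior
alpha_f, beta_f = 7.0, 2.0                   # fraternal Beta prior
gamma_a, gamma_f, alpha_v = 0.6, 0.9, 0.1    # decays and Dirichlet parameter

p_a = {w: rng.beta(alpha_a, beta_a) for w in V}   # step 1
p_f = {w: rng.beta(alpha_f, beta_f) for w in V}
theta = {}                                        # step 2, filled on demand

def label_dist(wp, ws):
    key = (wp, ws)
    if key not in theta:
        theta[key] = rng.dirichlet(alpha_v * np.ones(len(V)))
    return theta[key]

def sample_tree():
    """Returns a list of (label, parent_index) pairs in generation order."""
    nodes = []                                 # (label, parent_idx)
    queue = [(NONE, NONE, None, 0, 0)]         # (wp, ws, parent_idx, D, W)
    while queue:                               # step 3: breadth-first growth
        wp, ws, parent, D, W = queue.pop(0)
        w = str(rng.choice(V, p=label_dist(wp, ws)))
        i = len(nodes)
        nodes.append((w, parent))
        if rng.random() < gamma_a ** D * p_a[w]:   # b^a: spawn a first child
            queue.append((w, NONE, i, D + 1, 0))
        if rng.random() < gamma_f ** W * p_f[w]:   # b^f: spawn a next sibling
            queue.append((wp, w, parent, D, W + 1))
    return nodes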
The choice of prior parameters is made drawing inspiration from natural language parse trees. We want nodes to have low but diverse probabilities of generating children, so we seek a slow-decaying distribution with most mass allocated on values close to 0. For this, we use (α_a, β_a) = (0.25, 1). For sibling generation, we use (α_f, β_f) = (7, 2), which yields a distribution concentrated on values close to 1, so that nodes have on average a high and similar probability of producing siblings. Since we seek trees that are wider than they are deep, we use decay parameters γ_a = 0.6, γ_f = 0.9. Finally, we use α_v = 10⁻¹ for the parent-sibling probability prior, favoring non-uniform interactions.

Using this configuration, we generate 5,000 sentence-tree pairs, which we split into training (4,000 examples), validation (500) and test (500) sets. The characteristics of the trees in the dataset are summarized in Table 5.

Table 5: Synthetic tree dataset statistics. Tree size is measured in number of nodes, depth is the largest path from the root node to a leaf, and width is the maximum number of children for any node in the tree. The values reported correspond to means, with one standard deviation in parentheses.

Fold    Examples   Size          Depth         Width
train     4000     3.94 (3.38)   1.42 (0.66)   2.89 (1.71)
dev        500     4.13 (3.21)   1.46 (0.67)   2.91 (1.76)
test       500     3.64 (3.21)   1.32 (0.61)   2.80 (1.71)

C.2 IFTTT

The IFTTT dataset comes with a script to generate the data by crawling and parsing the recipes. Unfortunately, by the time we ran the script many recipes had been removed or changed. We therefore resorted to the original dataset used by Quirk et al. (2015). We converted these recipes into our tree format, assigning a node to each element in the first three levels (channels, functions and arguments; see Figure 5). For the parameters level, many recipes have sentences instead of single tokens, so we broke these up, creating one node per word. The last two layers are therefore the most topologically diverse, whereas the structure of the first two layers is constant (all trees have channels and functions). A very small fraction (<1%) of trees that could not be parsed into our format was excluded from the dataset.

Table 6 shows various statistics about the topological characteristics of the recipes in the IFTTT dataset. The middle columns show the percentage of trees that contain nonempty arguments and parameters in the trigger (IF) and action (THEN) branches. Almost all recipes have nonempty arguments and parameters (and thus depth 4, excluding the root), and a lower percentage, but still a majority, has arguments and parameters on the trigger side too. The last two columns show tree statistics pertaining to the complexity of trees after conversion to our format. The distribution of tree sizes is mostly concentrated between 4 and 30 nodes, with a slow-decaying tail of examples above this range (see Figure 8).
Table 6: IFTTT dataset statistics. The middle columns show the percentage of trees that contain nonempty arguments and parameters in the trigger (IF) and action (THEN) branches. The last columns show average (with standard deviation) tree size and depth.

                   Has args. (%)        Has params. (%)      Tree Size
Fold    Examples   Trigger   Action    Trigger   Action     # Nodes         Depth
train    67,444     69.10     98.46     65.47     96.77     16.93 (31.71)   3.99 (.13)
dev       4,038     69.44     98.46     66.42     96.31     16.55 (8.75)    3.99 (.11)
test      3,725     68.38     98.66     65.64     97.50     16.43 (8.18)    3.99 (.12)

Figure 8: Tree size distribution in the IFTTT dataset.

Regarding the content of the trees, the labels of the nodes in the first two levels (channels and functions) come from somewhat reduced vocabularies: 111 and 434 unique symbols for the trigger branch, respectively, and 157 and 85 for the action branch. The lower layers of the tree have a much more diverse vocabulary, with about 60K unique tokens in total. On the source side, the vocabulary over the sentence descriptions is large too, with about 30K unique tokens. The average sentence size is 6.07 tokens, with 80% of the sentences having at most 12 tokens.

C.3 MACHINE TRANSLATION

Starting from a preprocessed⁶ 2% sub-selection of the English-French section of the WMT14 dataset, we further prune down the data by keeping only sentences of length between 5 and 20 words, and for which every word is within the 20K most frequent. The reason for this is to simplify the task by keeping only common words and avoiding out-of-vocabulary tokens. After this filtering, we are left with 53,607, 918 and 371 sentences for the train, validation and test sets. After tokenizing, we obtain dependency parses for the target (English) sentences using the Stanford CoreNLP toolkit (Manning et al., 2014).

For the perturbation experiments, we randomly selected 50 sentences from among those in the test set that could be easily restructured without significantly altering their meaning. The types of alterations we perform are: subordinate clause swapping, alternative construction substitution, and passive/active voice change. In doing this, we try to keep the number of added/deleted words to a minimum, to minimize vocabulary-induced likelihood variations. When inserting new words, we verify that they are contained in the original vocabulary of 20K words.
In Table 7 we show a few examples of the source, original target and perturbed target sentences.

⁶http://www-lium.univ-lemans.fr/~schwenk/cslm_joint_paper/

Table 7: Example structural perturbations for likelihood robustness experiments.

source:       "après un accord de paix signé en 1992 elle est devenue un parti d'opposition."
target:       "after a 1992 peace deal it became an opposition party."
perturbation: "it became an opposition party after a 1992 peace deal."

source:       "cela représente environ 9 milliards de grains de maïs."
target:       "that's about 9 billion individual kernels of corn."
perturbation: "this amounts to about 9 billion kernels of corn."

source:       "l'exercice de fonctions publiques est une question de service public."
target:       "public office is about public service."
perturbation: "the exercise of public functions is a matter of public service."

source:       "nous avons ainsi effectué depuis la fin de l'hiver dernier 64 interventions."
target:       "hence we have carried out 64 operations since last winter."
perturbation: "we have therefore carried out 64 operations since last winter."

source:       "on estime qu'un enfant sur 2000 nés chaque année n'est ni un garçon ni une fille."
target:       "an estimated one in 2000 children born each year is neither boy nor girl."
perturbation: "it is estimated that one in every 2000 children born every year is neither a boy nor a girl."

D ADDITIONAL EXAMPLE GENERATED TREES

Figure 9: Selected trees generated by the DRNN decoder from vector-encoded descriptions for test examples of the synthetic tree dataset. Trees in the same row correspond to predictions by models trained on randomly sampled subsets of size N of the training split. We present cases for which the prediction is accurate (a, c) and cases for which it is not (b, d). Note how in (d) the model predicts many of the labels correctly, but confuses some of the dependencies (edges) in the tree. Encoder sentence inputs: (a) "ROOT P R C"; (b) "ROOT Z T Y Q"; (c) "ROOT K T V"; (d) "ROOT Q F V R G D A".
rJ8Je4clg
Published as a conference paper at ICLR 2017

LEARNING TO PLAY IN A DAY: FASTER DEEP REINFORCEMENT LEARNING BY OPTIMALITY TIGHTENING

Frank S. He
Department of Computer Science, University of Illinois at Urbana-Champaign
Zhejiang University
frankheshibi@gmail.com

Yang Liu
Department of Computer Science, University of Illinois at Urbana-Champaign
liu301@illinois.edu

Alexander G. Schwing
Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
aschwing@illinois.edu

Jian Peng
Department of Computer Science, University of Illinois at Urbana-Champaign
jianpeng@illinois.edu

ABSTRACT

We propose a novel training algorithm for reinforcement learning which combines the strength of deep Q-learning with a constrained optimization approach to tighten optimality and encourage faster reward propagation. Our novel technique makes deep reinforcement learning more practical by drastically reducing the training time. We evaluate the performance of our approach on the 49 games of the challenging Arcade Learning Environment, and report significant improvements in both training time and accuracy.

1 INTRODUCTION

The recent advances of supervised deep learning techniques (LeCun et al., 2015) in computer vision, speech recognition and natural language processing have tremendously improved the performance on challenging tasks, including image processing (Krizhevsky et al., 2012), speech-based translation (Sutskever et al., 2014) and language modeling (Hinton et al., 2012). The core idea of deep learning is to use artificial neural networks to model complex hierarchical or compositional data abstractions and representations from raw input data (Bengio et al., 2013). However, we are still far from building intelligent solutions for many real-world challenges, such as autonomous driving, human-computer interaction and automated decision making, in which software agents need to consider interactions with a dynamic environment and take actions towards goals. Reinforcement learning (Bertsekas & Tsitsiklis, 1996; Powell, 2011; Sutton & Barto, 1998; Kaelbling et al., 1996) studies these problems and algorithms which learn policies to make decisions so as to maximize a reward signal from the environment. One of the promising algorithms is Q-learning (Watkins, 1989; Watkins & Dayan, 1992). Deep reinforcement learning with neural function approximation (Tsitsiklis & Roy, 1997; Riedmiller, 2005; Mnih et al., 2013; 2015), possibly a first attempt to combine deep learning and reinforcement learning, has been proved to be effective on a few problems which classical AI approaches were unable to solve. Notable examples of deep reinforcement learning include human-level game playing (Mnih et al., 2015) and AlphaGo (Silver et al., 2016).

Despite these successes, the high demand of computational resources makes deep reinforcement learning not yet applicable to many real-world problems. For example, even for an Atari game, the deep Q-learning algorithm (also called deep Q-networks, abbreviated as DQN) needs to play up to hundreds of millions of game frames to achieve a reasonable performance (van Hasselt et al., 2015). AlphaGo trained its model using a database of game records of advanced players and, in addition, about 30 million self-played game moves (Silver et al., 2016). The sheer amount of required computational resources of current deep reinforcement learning algorithms is a major bottleneck for its applicability to real-world tasks.
Moreover, in many tasks the reward signal is sparse and delayed, which makes the convergence of learning even slower.

Here we propose optimality tightening, a new technique to accelerate deep Q-learning by fast reward propagation. While current deep Q-learning algorithms rely on a set of experience replays, they only consider a single forward step for the Bellman optimality error minimization, which becomes highly inefficient when the reward signal is sparse and delayed. To better exploit long-term high-reward strategies from past experience, we design a new algorithm to capture rewards from both forward and backward steps of the replays via a constrained optimization approach. This encourages faster reward propagation, which reduces the training time of deep Q-learning.

We evaluate our proposed approach using the Arcade Learning Environment (Bellemare et al., 2013) and show that our new strategy outperforms competing techniques in both accuracy and training time on 30 out of 49 games despite being trained with significantly fewer data frames.

2 RELATED WORK

There have been a number of approaches improving the stability, convergence and runtime of deep reinforcement learning since deep Q-learning, also known as deep Q-network (DQN), was first proposed (Mnih et al., 2013; 2015). DQN combined techniques such as deep learning, reinforcement learning and experience replays (Lin, 1992; Wawrzynski, 2009).

Nonetheless, the original DQN algorithm required millions of training steps to achieve human-level performance on Atari games. To improve the stability, recently, double Q-learning was combined with deep neural networks, with the goal to alleviate the overestimation issue observed in Q-learning (Thrun & Schwartz, 1993; van Hasselt, 2010; van Hasselt et al., 2015). The key idea is to use two Q-networks for the action selection and Q-function value calculation, respectively. The greedy action of the target is first chosen using the current Q-network parameters, then the target value is computed using a set of parameters from a previous iteration. Another notable advance is "prioritized experience replay" (Schaul et al., 2016) or "prioritized sweeping" for deep Q-learning. The idea is to increase the replay probability of experience tuples that have a high expected learning progress, measured by temporal difference errors.

In addition to the aforementioned variants of Q-learning, other network architectures have been proposed. The dueling network architecture applies an extra network structure to learn the importance of states and uses advantage functions (Wang et al., 2015). A distributed version of the deep actor-critic algorithm without experience replay was introduced very recently (Mnih et al., 2016). It deploys multiple threads learning directly from current transitions. The approach is applicable to both value-based and policy-based methods, off-policy as well as on-policy methods, and in discrete as well as in continuous domains. The model-free episodic control approach evaluates state-action pairs based on episodic memory using k-nearest neighbors with hashing functions (Blundell et al., 2016). Bootstrapped deep Q-learning carries out temporally-extended (or deep) exploration, thus leading to much faster learning (Osband et al., 2016).

Our fast reward propagation differs from all of the aforementioned approaches. The key idea of our method is to propagate delayed and sparse rewards during Q-network training, and thus greatly improve the efficiency and performance.
We formulate this propagation step via a constrained program. Note that our program is also different from earlier work on off-policy Q(λ) algorithms with eligibility traces and n-step Q-learning (Munos et al., 2016; Watkins, 1989; Mnih et al., 2016), which have recently been shown to perform poorly when used for training deep Q-networks on Atari games.

3 BACKGROUND

Reinforcement learning considers agents which are able to take a sequence of actions in an environment. By taking actions and experiencing at most one scalar reward per action, their task is to learn a policy which allows them to act such that a high cumulative reward is obtained over time.

More precisely, consider an agent operating over time t ∈ {1, ..., T}. At time t the agent is in an environment state s_t and reacts upon it by choosing action a_t ∈ A. The agent will then observe a new state s_{t+1} and receive a numerical reward r_t ∈ R. Throughout, we assume the set of possible actions, i.e., the set A, to be discrete.

A well-established technique to address the aforementioned reinforcement learning task is Q-learning (Watkins, 1989; Watkins & Dayan, 1992). Generally, Q-learning algorithms maintain an action-value function, often also referred to as the Q-function, Q(s, a). Given a state s, the action-value function provides a 'value' for each action a ∈ A which estimates the expected future reward if action a ∈ A is taken. The estimated future reward is computed based on the current state s or a series of past states s_t if available.

The core idea of Q-learning is the use of the Bellman equation as a characterization of the optimal future reward function Q* via a state-action-value function

    Q*(s_t, a) = E[ r_t + γ max_{a'} Q*(s_{t+1}, a') ].    (1)

Hereby the expectation is taken w.r.t. the distribution of state s_{t+1} and reward r_t obtained after taking action a, and γ is a discount factor. Intuitively, the reward for taking action a plus the best future reward should equal the best total return from the current state.

The choice of Q-function is crucial for the success of Q-learning algorithms. While classical methods use linear Q-functions based on a set of hand-crafted features of the state, more recent approaches use nonlinear deep neural networks to automatically mine intermediate features from the state (Riedmiller, 2005; Lange & Riedmiller, 2010; Mnih et al., 2013; 2015). This change has been shown to be very effective for many applications of reinforcement learning. However, automatic mining of intermediate representations comes at a price: larger quantities of data and more computational resources are required. Even though it is sometimes straightforward to extract large amounts of data, e.g., when training on video games, for successful optimization it is crucial that the algorithms operate on uncorrelated samples from a dataset D for stability. A technique called "experience replay" (Lin, 1992; Wawrzynski, 2009) encourages this property and quickly emerged as a standard step in the well-known deep Q-learning framework (Mnih et al., 2013; 2015). Experience replays are stored as a dataset D = {(s_j, a_j, r_j, s_{j+1})} which contains state-action-reward-future-state tuples (s_j, a_j, r_j, s_{j+1}), including past observations from previous plays.
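A replay memory of this kind can be sketched in a few lines (an illustrative stand-in, not the authors' implementation):

import random
from collections import deque

class ReplayMemory:
    """Fixed-capacity store of (s, a, r, s_next, terminal) tuples."""
    def __init__(self, capacity=1_000_000):
        self.buffer = deque(maxlen=capacity)  # oldest tuples evicted first

    def add(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniform sampling decorrelates consecutive observations.
        return random.sample(self.buffer, batch_size)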
The characterization of optimality given in Eq. (1) combined with an "experience replay" dataset D results in the following iterative algorithmic procedure (Mnih et al., 2013; 2015): start an episode in the initial state s_0; sample a mini-batch of tuples B = {(s_j, a_j, r_j, s_{j+1})} ⊆ D; compute and fix the targets y_j = r_j + γ max_a Q_{θ⁻}(s_{j+1}, a) for each tuple using a recent estimate Q_{θ⁻} (the maximization is only considered if s_j is not a terminal state); update the Q-function by optimizing the following program w.r.t. the parameters θ, typically via stochastic gradient descent:

    min_θ Σ_{(s_j, a_j, r_j, s_{j+1}) ∈ B} ( Q_θ(s_j, a_j) − y_j )².    (2)

After having updated the parameters of the Q-function we perform an action simulation, either choosing an action at random with a small probability ε, or by following the strategy arg max_a Q_θ(s_t, a) which is currently estimated. This strategy is also called the ε-greedy policy. We then obtain the actual reward r_t. Subsequently we augment the replay memory with the new tuple (s_t, a_t, r_t, s_{t+1}) and continue the simulation until this episode terminates or reaches an upper limit of steps, and we restart a new episode. When optimizing w.r.t. the parameter θ, a recent Q-network is used to compute the target y_j = r_j + γ max_a Q_{θ⁻}(s_{j+1}, a). This technique is referred to as 'semi-gradient descent,' i.e., the dependence of the target on the parameter θ is ignored.
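For illustration, the target and loss computations of Eq. (2) can be written as follows (a minimal NumPy sketch under our own simplifications; q and q_target stand for the current and recent Q-networks, each mapping a state to a vector of action values):

import numpy as np

def dqn_targets(batch, q_target, gamma=0.99):
    """Fixed targets y_j = r_j + gamma * max_a Q_{theta^-}(s_{j+1}, a)."""
    ys = []
    for s, a, r, s_next, terminal in batch:
        ys.append(r if terminal else r + gamma * np.max(q_target(s_next)))
    return np.array(ys)

def bellman_loss(batch, q, q_target, gamma=0.99):
    """Squared Bellman error of Eq. (2); under semi-gradient descent only
    q (not q_target) would be differentiated."""
    ys = dqn_targets(batch, q_target, gamma)
    preds = np.array([q(s)[a] for s, a, _, _, _ in batch])
    return np.mean((preds - ys) ** 2)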
Fasterpropagation of future and past rewards to a particular state is therefore desirable.Subsequently we discuss our technique for fast reward propagation, a new deep Q-learning algo-rithm that exploits longer state-transitions in experience replays by tightening the optimization viaconstraints. For notational simplicity, we assume that the environmental dynamics is deterministic,i.e., the new state and the reward are solely determined by the current state and action. It is possibleto show that mathematically our proposed approach also approximately works in stochastic environ-ments. Please see details in the appendix. From the Bellman optimality equation we know that thefollowing series of equalities hold for the optimal Q-function Q:Q(sj;aj) =rj+maxaQ(sj+1;a) =rj+maxarj+1+maxa0hrj+2+max~aQ(sj+3;~a)i:Evaluating such a sequence exactly is not possible in a reinforcement learning setting since theenumeration of intermediate states sj+irequires exponential time complexity O(jAji). It is howeverpossible to take advantage of the episodes available in the replay memory Dby noting that thefollowing sequence of inequalities holds for the optimal action-value function Q(with the greedypolicy), irrespective of whether a policy generating the sequence of actions aj,aj+1,etc., whichresults in rewards rj,rj+1,etc. is optimal or not:Q(sj;aj) =rj+maxaQ(sj+1;a)]:::kXi=0irj+i+k+1maxaQ(sj+k+1;a) =Lj;k:Note the definition of the lower bounds Lj;kfor samplejand time horizon kin the aforementionedseries of inequalities.We can also use this series of inequalities to define upper bounds. To see this note thatQ(sjk1;ajk1)kXi=0irjk1+ik+1Q(sj;aj)0;which follows from the definition of the lower bound by dropping the maximization over the actions,and a change of indices from j!jk1. Reformulating the inequality yields an upper boundUj;kfor samplejand time horizon kby fixing state sjand actionajas follows:Uj;k=k1Q(sjk1;ajk1)kXi=0ik1rjk1+iQ(sj;aj):In contrast to classical techniques which optimize the Bellman criterion given in Eq. (2), we proposeto optimize the Bellman equation subject to constraints Q(sj;aj)Lmaxj= maxk2f1;:::;KgLj;k,which defines the largest lower bound, and Q(sj;aj)Uminj= mink2f1;:::;KgUj;k, which speci-fies the smallest upper bound. Hereby, Lj;kandUj;kare computed using the Q-function Qwitha recent estimated parameter rather than the unknown optimal Q-function Q, and the integer Kspecifies the number of future and past time steps which are considered. Also note that the targetused in the Bellman equation is obtained from yj=Lj;0=rj+maxaQ(sj+1;a). In thisway, we ignore the dependence of the bounds and the target on the parameter to stabilize the train-ing. Taking all the aforementioned definitions into account, we propose the following program for4Published as a conference paper at ICLR 2017Output : Parametersof a Q-functionInitialize:randomly, set =forepisode 1toMdoinitializes1;fort 1toTdoChoose action ataccording to -greedy strategy;Observe reward rtand next state st+1;Store the tuple (st;at;rt;;st+1)in replay memoryD;Sample a minibatch of tuples B=f(sj;aj;rj;Rj;sj+1g)from replay memory D;Updatewith one gradient step of cost function given in Eq. 
Taking all the aforementioned definitions into account, we propose the following program for reinforcement learning tasks:
$$\min_\theta \sum_{(s_j, a_j, s_{j+1}, r_j) \in B} \big(Q_\theta(s_j, a_j) - y_j\big)^2 \quad \text{s.t.} \quad Q_\theta(s_j, a_j) \geq L_j^{\max} \;\; \forall (s_j, a_j) \in B, \qquad Q_\theta(s_j, a_j) \leq U_j^{\min} \;\; \forall (s_j, a_j) \in B. \quad (3)$$
This program differs from the classical approach given in Eq. (2) via the constraints, which is crucial. Intuitively, the constraints encourage faster reward propagation, as we show next, and result in tremendously better results, as we demonstrate empirically in Sec. 5.

Before doing so, we describe our optimization procedure for the constrained program in Eq. (3) more carefully. The cost function is generally non-convex in the parameters $\theta$, and so are the constraints. We therefore make use of a quadratic penalty method to reformulate the program into
$$\min_\theta \sum_{(s_j, a_j, r_j, s_{j+1}) \in B} \Big[ \big(Q_\theta(s_j, a_j) - y_j\big)^2 + \lambda \big(L_j^{\max} - Q_\theta(s_j, a_j)\big)_+^2 + \lambda \big(Q_\theta(s_j, a_j) - U_j^{\min}\big)_+^2 \Big], \quad (4)$$
where $\lambda$ is a penalty coefficient and $(x)_+ = \max(0, x)$ is the rectifier function. Augmenting the cost function with $(L_j^{\max} - Q_\theta(s_j, a_j))_+^2$ and/or $(Q_\theta(s_j, a_j) - U_j^{\min})_+^2$ results in a penalty whenever any optimality bounding constraint gets violated. The quadratic penalty function is chosen for simplicity. The penalty coefficient $\lambda$ can be set as a large positive value or adjusted in an annealing scheme during training. In this work, we fix its value, due to time constraints. We optimize this cost function with stochastic (sub-)gradient descent using an experience replay memory from which we randomly draw samples, as well as their successors and predecessors. We emphasize that the derivatives correcting the prediction of $Q(s_j, a_j)$ not only depend on the Q-function from the immediately successive time step $Q(s_{j+1}, a)$ stored in the experience replay memory, but also on more distant time instances if constraints are violated. Our proposed formulation and the resulting optimization technique hence encourage faster reward propagation, and the number of time steps depends on the constant $K$ and the quality of the current Q-function. We summarize the proposed method in Algorithm 1.

Algorithm 1: Our algorithm for fast reward propagation in reinforcement learning tasks.
    Output: Parameters $\theta$ of a Q-function
    Initialize: $\theta$ randomly, set $\theta^- = \theta$
    for episode $\leftarrow 1$ to $M$ do
        initialize $s_1$;
        for $t \leftarrow 1$ to $T$ do
            Choose action $a_t$ according to the $\epsilon$-greedy strategy;
            Observe reward $r_t$ and next state $s_{t+1}$;
            Store the tuple $(s_t, a_t, r_t, \cdot, s_{t+1})$ in replay memory $D$;
            Sample a mini-batch of tuples $B = \{(s_j, a_j, r_j, R_j, s_{j+1})\}$ from replay memory $D$;
            Update $\theta$ with one gradient step of the cost function given in Eq. (4);
            Reset $\theta^- = \theta$ every $C$ steps;
        end
        for $t \leftarrow T$ to $1$ do
            Compute $R_t = r_t + \gamma R_{t+1}$;
            Insert $R_t$ into the corresponding tuple in replay memory $D$;
        end
    end

The computational complexity of the proposed approach increases with the number of considered time steps $K$, since additional forward passes are required to compute the bounds $L_j^{\max}$ and $U_j^{\min}$. However, we can increase the memory size on the GPU to compute both the bounds and targets in a single forward pass if $K$ is not too large. If this becomes a problem, we can further alleviate the increase by randomly sampling a subset of the constraints rather than exhaustively using all of them. More informed strategies regarding the choice of constraints are possible as well, since we may expect lower bounds in the more distant future to have a larger impact early in the training. In contrast, once the algorithm is almost converged, we may expect lower bounds close to the considered time step to have a bigger impact.

To efficiently compute the discounted reward over multiple time steps we add a new element to the experience replay structure. Specifically, in addition to state, action, reward and next state for time step $j$, we also store the real discounted return $R_j$, which is the discounted cumulative return achieved by the agent in its game episode. $R_j$ is computed via $R_j = \sum_{\tau=j}^{T} \gamma^{\tau-j} r_\tau$, where $T$ is the end of the episode and $\gamma$ is the discount factor.

Figure 1: Improvements of our method trained on 10M frames compared to results of 200M-frame DQN training presented by Mnih et al. (2015), using the metric given in Eq. (5).
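The penalized objective of Eq. (4) is straightforward to express for one replay sample; a minimal sketch, with all argument names being our own illustrative choices:

    def penalized_loss(q_sa, y, L_max, U_min, lam=4.0):
        """Quadratic-penalty objective of Eq. (4) for one replay sample.

        `q_sa` is Q_theta(s_j, a_j), `y` the fixed target y_j, and
        `L_max`/`U_min` the bounds computed from the replay memory.
        """
        relu = lambda x: max(0.0, x)
        return ((q_sa - y) ** 2
                + lam * relu(L_max - q_sa) ** 2   # lower bound violated
                + lam * relu(q_sa - U_min) ** 2)  # upper bound violated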
$R_j$ is then inserted into the replay memory after the termination of the current episode or after reaching the limit of steps. All in all, the structure of our experience replay memory consists of tuples of the form $(s_j, a_j, r_j, R_j, s_{j+1})$. In practice, we also found that incorporating $R_j$ in the lower-bound calculation can further improve the stability of the training.

We leave the questions regarding a good choice of penalty function and a good choice of the penalty coefficients to future work. At the moment we use a quadratic penalty function and a constant penalty coefficient $\lambda$ identical for both bounds. More complex penalty functions and sophisticated optimization approaches may yield even better results than the ones we report in the following.

5 EXPERIMENTS

We evaluate the proposed algorithm on a set of 49 games from the Arcade Learning Environment (Bellemare et al., 2013), as suggested by Mnih et al. (2015). This environment is considered to be one of the most challenging reinforcement learning tasks because of its high-dimensional output. Moreover, the intrinsic mechanics vary tremendously from game to game, making it extremely demanding to find a single, general and robust algorithm and a corresponding single hyperparameter setting which works well across all 49 games.

Following existing work (Mnih et al., 2015), our agent predicts an action based on only raw image pixels and reward information received from the environment. A deep neural network is used as the function approximator for the Q-function. The game image is resized to an 84x84 grayscale image $s_t$. The first layer is a convolutional layer with 32 filters of size 8x8 and a stride of 4; the second layer is a convolutional layer with 64 filters of size 4x4 and a stride of 2; the third layer is a convolutional layer with 64 filters of size 3x3 and a stride of 1; the next fully connected layer transforms the input to 512 units, which are then transformed by another fully connected layer to an output size equal to the number of actions in each game. The rectified linear unit (ReLU) is used as the activation function for each layer. We used the hyperparameters provided by Mnih et al. (2015) for annealing $\epsilon$-greedy exploration and also applied RMSProp for gradient descent. As in previous work, we combine four frames into a single step for processing. We chose the hyperparameter $K = 4$ for GPU memory efficiency when dealing with mini-batches. In addition, we also include the discounted return $R_j = L_{j,\infty}$ in the lower-bound calculation to further stabilize the training. We use the penalty coefficient $\lambda = 4$, which was obtained by coarsely tuning performance on the games ‘Alien,’ ‘Amidar,’ ‘Assault,’ and ‘Asterix.’ Gradients are also rescaled so that their magnitudes are comparable with or without penalty. All experiments are performed on an NVIDIA GTX Titan-X 12GB graphics card.

Figure 2: Improvements of our method trained on 10M frames compared to results of 10M-frame DQN training, using the metric given in Eq. (5).

5.1 EVALUATION

In previous work (Mnih et al., 2015; van Hasselt et al., 2015; Schaul et al., 2016; Wang et al., 2015), the Q-function is trained on each game using 200 million (200M) frames, i.e., 50M training steps. We compare to those baseline results obtained after 200M frames using our proposed algorithm, which ran for only 10M frames, i.e., 2.5M steps (20 times less data), due to time constraints. Instead of training for more than 10 days we manage to finish training in less than one day.
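For concreteness, here is a sketch of the convolutional Q-network described above, written with PyTorch; the framework choice is our assumption, as the text does not specify the implementation used:

    import torch.nn as nn

    class AtariQNetwork(nn.Module):
        """Conv stack from the text: 32 8x8/4, 64 4x4/2, 64 3x3/1, fc 512."""

        def __init__(self, n_actions, in_frames=4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_frames, 32, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
                nn.Flatten(),
                nn.Linear(64 * 7 * 7, 512), nn.ReLU(),  # 84x84 input -> 7x7 maps
                nn.Linear(512, n_actions),
            )

        def forward(self, x):  # x: (batch, 4, 84, 84) stacked grayscale frames
            return self.net(x)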
Furthermore, for a fair comparison, we replicate the DQN results and compare the performance of the proposed algorithm after 10M frames to those obtained when training DQN on only 10M frames.

We strictly follow the evaluation procedure of Mnih et al. (2015), which is often referred to as ‘30 no-op evaluation.’ During both training and testing, at the start of the episode, the agent always performs a random number of at most 30 no-op actions. During evaluation, our agent plays each game 30 times for up to 5 minutes, and the obtained score is averaged over these 30 runs. An $\epsilon$-greedy policy with $\epsilon = 0.05$ is used. Specifically, for each run, the game episode starts with at most 30 no-op steps, and ends with ‘death’ or after a maximum of 5 minutes of game-play, which corresponds to 18000 frames.

Our training consists of $M = 40$ epochs, each containing 250000 frames, thus 10M frames in total. For each game, we evaluate our agent at the end of every epoch and, following common practice (van Hasselt et al., 2015; Mnih et al., 2015), we select the best agent evaluation as the result of the game. Almost all hyperparameters are thus selected identically to Mnih et al. (2015) and Nair et al. (2015).

To compare the performance of our algorithm to the DQN baseline, we follow the approach of Wang et al. (2015) and measure the improvement in percent using
$$\frac{\text{Score}_{\text{Agent}} - \text{Score}_{\text{Baseline}}}{\max\{\text{Score}_{\text{Human}}, \text{Score}_{\text{Baseline}}\} - \text{Score}_{\text{Random}}}. \quad (5)$$
We select this approach because the denominator choice of either human or baseline score prevents insignificant changes or negative scores from being interpreted as large improvements.

Fig. 1 shows the improvement of our algorithm over the DQN baseline proposed by Mnih et al. (2015) and trained for 200M frames, i.e., 50M steps. Even though our agent is only trained for 10M frames, we observe that our technique outperforms the baseline significantly. In 30 out of 49 games, our algorithm exceeds the baseline using only 5% of the baseline's training frames, sometimes drastically, e.g., in games such as ‘Atlantis,’ ‘Double Dunk,’ and ‘Krull.’ The remaining 19 games often require a long training time. Nonetheless, our algorithm still reaches a satisfactory level of performance.

Table 1: Mean and median human-normalized scores. DQN baseline and D-DQN results are from Mnih et al. (2015); van Hasselt et al. (2015) and trained with 200M frames, while our method is trained with 10M frames. Note that our approach can be combined with the D-DQN method.

    Model        | Training Time             | Mean    | Median
    Ours (10M)   | less than 1 day (1 GPU)   | 345.70% | 105.74%
    DQN (200M)   | more than 10 days (1 GPU) | 241.06% | 93.52%
    D-DQN (200M) | more than 10 days (1 GPU) | 330.3%  | 114.7%

Figure 3: Game scores for our algorithm (blue), DQN (black), DQN+return (red) and DQN($\lambda$) (yellow) using 10M training frames. 30 no-op evaluation is used and a moving average over 4 points is applied.

In order to further illustrate the effectiveness of our method, we compare our results with our own implementation of DQN trained on 10M frames. The results are illustrated in Fig. 2. We observe a better performance on 46 out of 49 games, demonstrating in a fair way the potential of our technique.

As suggested by van Hasselt et al. (2015), we use the following score
$$\text{Score}_{\text{Normalized}} = \frac{\text{Score}_{\text{Agent}} - \text{Score}_{\text{Random}}}{|\text{Score}_{\text{Human}} - \text{Score}_{\text{Random}}|} \quad (6)$$
to summarize the performance of our algorithm in a single number.
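Both evaluation scores are one-liners; a sketch in Python, for reference:

    def improvement(score_agent, score_baseline, score_human, score_random):
        """Relative improvement metric of Eq. (5), following Wang et al. (2015)."""
        return ((score_agent - score_baseline)
                / (max(score_human, score_baseline) - score_random))

    def normalized_score(score_agent, score_human, score_random):
        """Human-normalized score of Eq. (6), following van Hasselt et al. (2015)."""
        return (score_agent - score_random) / abs(score_human - score_random)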
We normalize the scores of our algorithm, the baseline reported by Mnih et al. (2015), and double DQN (D-DQN) (van Hasselt et al., 2015), and report the training time, mean and median in Table 1. We observe our technique with 10M frames to achieve scores comparable to the D-DQN method trained on 200M frames (van Hasselt et al., 2015), while it outperforms the DQN method (Mnih et al., 2015) by a large margin. We believe that our method can be readily combined with other techniques developed for DQN, such as D-DQN (van Hasselt et al., 2015), prioritized experience replay (Schaul et al., 2016), dueling networks (Wang et al., 2015), and asynchronous methods (Mnih et al., 2016), to further improve accuracy and training speed.

In Fig. 3 we illustrate the evolution of the score for our algorithm and the DQN approach. In addition we demonstrate two further techniques: ‘DQN+return’ and ‘DQN($\lambda$).’ ‘DQN+return’ uses only the discounted future return as a bound, but does not take advantage of the additional constraints we propose. ‘DQN($\lambda$)’ combines TD($\lambda$) with the DQN algorithm. We illustrate the performance of these four algorithms on the six games ‘Frostbite,’ ‘Atlantis,’ ‘Zaxxon,’ ‘H.E.R.O,’ ‘Q*Bert,’ and ‘Chopper Command.’ We observe our method to achieve higher scores than the three baselines on the majority of the games. We refer the reader to the supplementary material for additional results.

6 CONCLUSION

In this paper we proposed a novel program for deep Q-learning which propagates promising rewards to achieve significantly faster convergence than the classical DQN. Our method significantly outperforms competing approaches even when trained on a small fraction of the data on the Atari 2600 domain. In the future, we plan to investigate the impact of penalty functions and advanced constrained optimization techniques, and to explore potential synergies with other techniques.

REFERENCES

M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. J. of Artificial Intelligence Research, 2013.
Y. Bengio, A. Courville, and P. Vincent. Representation Learning: A Review and New Perspectives. PAMI, 2013.
D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
C. Blundell, B. Uria, A. Pritzel, Y. Li, A. Ruderman, J. Z. Leibo, J. Rae, D. Wierstra, and D. Hassabis. Model-Free Episodic Control. In http://arxiv.org/pdf/1606.04460v1.pdf, 2016.
G. E. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-R. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 2012.
L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. JMLR, 1996.
A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Proc. NIPS, 2012.
S. Lange and M. Riedmiller. Deep auto-encoder neural networks in reinforcement learning. In Proc. Int. Jt. Conf. Neural Netw., 2010.
Y. LeCun, Y. Bengio, and G. E. Hinton. Deep learning. Nature, 2015.
L.-J. Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine Learning, 1992.
V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing Atari with Deep Reinforcement Learning. In NIPS Deep Learning Workshop, 2013.
V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G.
Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 2015.
V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous Methods for Deep Reinforcement Learning. In https://arxiv.org/abs/1602.01783, 2016.
R. Munos, T. Stepleton, A. Harutyunyan, and M. G. Bellemare. Safe and efficient off-policy reinforcement learning. In Proc. NIPS, 2016.
A. Nair, P. Srinivasan, S. Blackwell, C. Alcicek, R. Fearon, V. Panneershelvam, A. De Maria, M. Suleyman, C. Beattie, S. Petersen, S. Legg, V. Mnih, K. Kavukcuoglu, and D. Silver. Massively Parallel Methods for Deep Reinforcement Learning. In https://arxiv.org/abs/1507.04296, 2015.
I. Osband, C. Blundell, A. Pritzel, and B. Van Roy. Deep Exploration via Bootstrapped DQN. In http://arxiv.org/abs/1602.04621, 2016.
W. P. Powell. Approximate Dynamic Programming. Wiley, 2011.
M. Riedmiller. Neural fitted Q iteration - first experiences with a data efficient neural reinforcement learning method. In Proc. ECML, 2005.
T. Schaul, J. Quan, I. Antonoglou, and D. Silver. Prioritized Experience Replay. In Proc. ICLR, 2016.
D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 2016.
I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Proc. NIPS, 2014.
R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
S. Thrun and A. Schwartz. Issues in using function approximation for reinforcement learning. In Proc. Connectionist Models Summer School, 1993.
J. N. Tsitsiklis and B. Van Roy. An analysis of temporal-difference learning with function approximation. 1997.
H. van Hasselt. Double Q-learning. In Proc. NIPS, 2010.
H. van Hasselt, A. Guez, and D. Silver. Deep Reinforcement Learning with Double Q-learning. In https://arxiv.org/abs/1509.06461, 2015.
Z. Wang, T. Schaul, M. Hessel, H. van Hasselt, M. Lanctot, and N. de Freitas. Dueling Network Architectures for Deep Reinforcement Learning. In https://arxiv.org/abs/1511.06581, 2015.
C. J. C. H. Watkins. Learning from delayed rewards. PhD thesis, University of Cambridge, England, 1989.
C. J. C. H. Watkins and P. Dayan. Q-learning. Machine Learning, 1992.
P. Wawrzynski. Real-time reinforcement learning by sequential actor-critics and experience replay.
Neural Networks, 2009.

A SUPPLEMENTARY MATERIAL

OPTIMALITY TIGHTENING FOR STOCHASTIC ENVIRONMENTS

Similar to the inequalities we obtained for deterministic environments, we can derive the following sequence of inequalities, which holds for the optimal action-value function $Q^*$ (with the greedy policy) under the expectation of the environmental dynamics:
$$Q^*(s_j, a_j) = \mathbb{E}\big[r_j + \gamma \max_a Q^*(s_{j+1}, a)\big] \geq \dots \geq \mathbb{E}\Big[\sum_{i=0}^{k} \gamma^i r_{j+i} + \gamma^{k+1} \max_a Q^*(s_{j+k+1}, a)\Big].$$
So we have the following expectation constraint, on trajectories starting from state $s_j$ and action $a_j$:
$$\mathbb{E}\Big[Q^*(s_j, a_j) - \Big(\sum_{i=0}^{k} \gamma^i r_{j+i} + \gamma^{k+1} \max_a Q^*(s_{j+k+1}, a)\Big)\Big] \geq 0, \qquad \text{i.e.,} \quad \mathbb{E}\big[Q^*(s_j, a_j) - L_{j,k}\big] \geq 0.$$
We can also use this series of inequalities to define upper bounds, on trajectories leading to state $s_j$ and action $a_j$:
$$\mathbb{E}\Big[Q^*(s_j, a_j) - \Big(\gamma^{-k-1} Q^*(s_{j-k-1}, a_{j-k-1}) - \sum_{i=0}^{k} \gamma^{i-k-1} r_{j-k-1+i}\Big)\Big] \leq 0, \qquad \text{i.e.,} \quad \mathbb{E}\big[Q^*(s_j, a_j) - U_{j,k}\big] \leq 0.$$
With these expectation constraints, we can formulate a constrained optimization problem as follows:
$$\min_\theta \sum_{(s_j, a_j, s_{j+1}, r_j) \in B} \big(Q_\theta(s_j, a_j) - y_j\big)^2 \quad \text{s.t.} \quad \min_k \mathbb{E}\big[Q_\theta(s_j, a_j) - L_{j,k}\big] \geq 0 \;\; \forall (s_j, a_j) \in B, \qquad \max_k \mathbb{E}\big[Q_\theta(s_j, a_j) - U_{j,k}\big] \leq 0 \;\; \forall (s_j, a_j) \in B.$$
Applying the quadratic penalty function method, we obtain the objective:
$$\sum_{(s_j, a_j, r_j, s_{j+1}) \in B} \Big[\big(Q_\theta(s_j, a_j) - y_j\big)^2 + \lambda \big(\max_k \mathbb{E}[L_{j,k} - Q_\theta(s_j, a_j)]\big)_+^2 + \lambda \big(\max_k \mathbb{E}[Q_\theta(s_j, a_j) - U_{j,k}]\big)_+^2\Big]$$
By applying Jensen's inequality, we are able to obtain an upper bound by first exchanging the expectation with the max, and then exchanging the expectation with the rectifier function, because both the max function and the rectifier function are convex:
$$\sum_{(s_j, a_j, r_j, s_{j+1}) \in B} \Big[\big(Q_\theta(s_j, a_j) - y_j\big)^2 + \lambda\, \mathbb{E}\big[\big(\max_k L_{j,k} - Q_\theta(s_j, a_j)\big)_+^2\big] + \lambda\, \mathbb{E}\big[\big(Q_\theta(s_j, a_j) - \min_k U_{j,k}\big)_+^2\big]\Big]$$
It is easy to see that, since we have trajectory samples in the replay memory which were drawn under the environmental dynamics, we can perform stochastic optimization using these trajectories. In this way, a sample of this upper bound is identical to the objective in the deterministic setting in Eq. (4). As a result, our proposed algorithm can be used to optimize an upper bound of the above constrained optimization in stochastic environments.

Please note that here we provide a mathematical derivation of our approach for stochastic environments. We expect that it would work in practice but, due to time constraints and the lack of good stochastic simulators, we cannot provide any empirical results here.

B ADDITIONAL RESULTS

We present our quantitative results in Table S1 and Table S2. We also illustrate the normalized score provided in Eq. (6) over the number of episodes in Fig. S1.

    Game                 | Random | Human | DQN 200M | Ours 10M
    Alien                | 227.80 | 6875  | 3069     | 1864
    Amidar               | 5.8    | 1676  | 739.5    | 565.67
    Assault              | 222.4  | 1496  | 3359     | 5142.37
    Asterix              | 210    | 8503  | 6012     | 5408.33
    Asteroids            | 719.1  | 13157 | 1629     | 1481.67
    Atlantis             | 12850  | 29028 | 85641    | 316766.67
    Bank Heist           | 14.2   | 734.4 | 429.7    | 596
    Battle Zone          | 2360   | 37800 | 26300    | 30800
    Beam Rider           | 363.9  | 5775  | 6846     | 8069
    Bowling              | 23.1   | 154.8 | 42.4     | 49.3
    Boxing               | 0.1    | 4.3   | 71.8     | 81.17
    Breakout             | 1.7    | 31.8  | 401.2    | 229.79
    Centipede            | 2091   | 11963 | 8309     | 4470.06
    Chopper Command      | 811    | 9882  | 6687     | 6360
    Crazy Climber        | 10781  | 35411 | 114103   | 114146
    Demon Attack         | 152.1  | 3401  | 9711     | 5738.67
    Double Dunk          | -18.6  | -15.5 | -18.1    | -10.07
    Enduro               | 0      | 309.6 | 301.8    | 672.83
    Fishing Derby        | -91.7  | 5.5   | -0.8     | 5.27
    Freeway              | 0      | 29.6  | 30.3     | 31.3
    Frostbite            | 65.2   | 4335  | 328.3    | 3974.11
    Gopher               | 257.6  | 2321  | 8520     | 4660
    Gravitar             | 173    | 2672  | 306.7    | 346.67
    H.E.R.O              | 1027   | 25763 | 19950    | 19975
    Ice Hockey           | -11.2  | 0.9   | -1.6     | -3.43
    Jamesbond            | 29     | 406.7 | 576.7    | 1088.33
    Kangaroo             | 52     | 3035  | 6740     | 11716.67
    Krull                | 1598   | 2395  | 3805     | 9461.1
    Kung-Fu Master       | 258.5  | 22736 | 23270    | 27820
    Montezuma’s Revenge  | 0      | 4376  | 0        | 23.33
    Ms. Pacman           | 307.3  | 15693 | 2311     | 1805
    Name This Game       | 2292   | 4076  | 7257     | 7314.67
    Pong                 | -20.7  | 9.3   | 18.9     | 19.4
    Private Eye          | 24.9   | 69571 | 1788     | 342.37
    Q*Bert               | 163.9  | 13455 | 10596    | 12355
    River Raid           | 1339   | 13513 | 8316     | 8028.33
    Road Runner          | 11.5   | 7845  | 18257    | 29346.67
    Robotank             | 2.2    | 11.9  | 51.6     | 34.5
    Seaquest             | 68.4   | 20182 | 5286     | 4070
    Space Invaders       | 148    | 1652  | 1976     | 995
    Star Gunner          | 664    | 10250 | 57997    | 16653.95
    Tennis               | -23.8  | -8.9  | -2.5     | -1
    Time Pilot           | 3568   | 5925  | 5947     | 5423.33
    Tutankham            | 11.4   | 167.6 | 186.7    | 232
    Up and Down          | 533.4  | 9082  | 8456     | 14406
    Venture              | 0      | 1188  | 380      | 286.67
    Video Pinball        | 16257  | 17298 | 42684    | 74873.2
    Wizard of Wor        | 563.5  | 4757  | 3393     | 4716.67
    Zaxxon               | 32.5   | 9173  | 4977     | 10598

Table S1: Raw scores across 49 games, using 30 no-op start evaluation (5 minutes emulator time, 18000 frames, $\epsilon = 0.05$). Results for DQN are taken from Mnih et al. (2015).

    Game                 | DQN 200M | Ours 10M
    Alien                | 42.74%   | 24.62%
    Amidar               | 43.93%   | 33.52%
    Assault              | 246.27%  | 386.31%
    Asterix              | 69.96%   | 62.68%
    Asteroids            | 7.32%    | 6.13%
    Atlantis             | 449.94%  | 1878.60%
    Bank Heist           | 57.69%   | 80.78%
    Battle Zone          | 67.55%   | 80.25%
    Beam Rider           | 119.79%  | 142.39%
    Bowling              | 14.65%   | 19.89%
    Boxing               | 1707.14% | 1930.24%
    Breakout             | 1327.24% | 757.77%
    Centipede            | 62.99%   | 24.10%
    Chopper Command      | 64.78%   | 61.17%
    Crazy Climber        | 419.50%  | 419.67%
    Demon Attack         | 294.22%  | 171.95%
    Double Dunk          | 16.13%   | 275.16%
    Enduro               | 97.48%   | 217.32%
    Fishing Derby        | 93.52%   | 99.76%
    Freeway              | 102.36%  | 105.74%
    Frostbite            | 6.16%    | 91.55%
    Gopher               | 400.43%  | 213.36%
    Gravitar             | 5.35%    | 6.95%
    H.E.R.O              | 76.50%   | 76.60%
    Ice Hockey           | 79.34%   | 64.22%
    Jamesbond            | 145.00%  | 280.47%
    Kangaroo             | 224.20%  | 391.04%
    Krull                | 276.91%  | 986.59%
    Kung-Fu Master       | 102.38%  | 122.62%
    Montezuma’s Revenge  | 0%       | 0.53%
    Ms. Pacman           | 13.02%   | 9.73%
    Name This Game       | 278.31%  | 281.54%
    Pong                 | 132%     | 133.67%
    Private Eye          | 2.54%    | 0.46%
    Q*Bert               | 78.49%   | 91.73%
    River Raid           | 57.31%   | 54.95%
    Road Runner          | 232.92%  | 374.48%
    Robotank             | 509.28%  | 332.99%
    Seaquest             | 25.94%   | 19.90%
    Space Invaders       | 121.54%  | 56.31%
    Star Gunner          | 598.10%  | 166.81%
    Tennis               | 142.95%  | 153.02%
    Time Pilot           | 100.93%  | 78.72%
    Tutankham            | 112.23%  | 141.23%
    Up and Down          | 92.68%   | 162.38%
    Venture              | 31.99%   | 24.13%
    Video Pinball        | 2538.62% | 5630.76%
    Wizard of Wor        | 67.47%   | 99.04%
    Zaxxon               | 54.09%   | 115.59%

Table S2: Normalized results across 49 games, using the evaluation score given in Eq. (6).

Figure S1: Convergence of mean and median of normalized percentages on 49 games.
HJ7O61Yxe
Under review as a conference paper at ICLR 2017

MODELING RELATIONAL TIME SERIES USING GAUSSIAN EMBEDDINGS

Ludovic Dos Santos*, Ludovic Denoyer, Benjamin Piwowarski & Patrick Gallinari
Sorbonne Universities, UPMC Univ Paris 06, CNRS, LIP6 UMR 7606
4 place Jussieu 75005 Paris, France
firstname.lastname@lip6.fr

Ali Ziat*
Sorbonne Universities, UPMC Univ Paris 06, CNRS, LIP6 UMR 7606
Institut VEDECOM, 77 rue des chantiers, 78000, Versailles
ali.ziat@vedecom.fr

*Both authors contributed equally to this work

ABSTRACT

We address the problem of modeling multiple simultaneous time series where the observations are correlated not only inside each series, but among the different series. This problem happens in many domains such as ecology, meteorology, etc. We propose a new dynamical state space model, based on representation learning, for modeling the evolution of such series. The joint relational and temporal dynamics of the series are modeled as Gaussian distributions in a latent space. A decoder maps the latent representations to the observations. The two components (dynamic model and decoder) are jointly trained. Using stochastic representations allows us to model the uncertainty inherent to observations and to predict unobserved values together with a confidence in the prediction.

1 INTRODUCTION

Relational time series, i.e. multiple time series where the observations are correlated both inside each series and between series, occur in many domains such as ecology, medicine, biology, earth observation by satellite imagery or local measurements, multimedia, or even social data analysis. The correlations between the different observed series can come from a proximity (e.g. earth observation or epidemic diffusion) or from a similarity of behavior (e.g. user traces in social data). In the statistical literature, the modeling of relational time series has been the topic of a dedicated field: spatio-temporal statistics (Cressie & Wikle (2011); Wikle & Hooten (2010)). Different methodologies have been developed for handling a large variety of spatio-temporal phenomena, with an emphasis on the analysis of natural observations like weather prediction, ecology or remote sensing. In the machine learning domain, there exists a vast literature dedicated to sequence or time series prediction. Recently, deep recurrent neural networks have witnessed notable successes in different sequence and time series modeling tasks, leading to an increasing number of publications, e.g. (Barbounis et al. (2006); Hsieh et al. (2011); Cao et al. (2012); Hermans & Schrauwen (2013)). Despite a large number of recent developments, the modeling and analysis of relational time series has attracted only little attention in the field of representation learning. In addition, most of the models are deterministic, in the sense that they are trained to learn a fixed mapping for modeling the dynamics of the series.

We propose a new state space model for relational time series able to model the uncertainty at the observation and at the modeling levels. The principle of this approach is to associate each point of a time series to a Gaussian distribution in a latent space, the distribution over the observed values being directly computed from these latent distributions. The model has two main components. One is responsible for the dynamics in the latent space. This component is thus modeling the evolution of the Gaussian distribution considering both the temporal intra-series and the relational inter-series dependencies.
A second component acts as a decoder and maps the latent representations associated with each series to the corresponding observations in the output space.

The contributions of the paper are thus: (i) a new dynamical model for relational time series inspired by representation learning; (ii) a stochastic component for modeling the uncertainties at the observation and dynamic levels.

The paper is organized as follows. In Section 2 we introduce related work on forecasting in time series, representation learning for time series, and recent deep learning work focusing on modeling uncertainty. The model is presented in Section 3 together with four different variants. Section 4 presents experimental results on four datasets, and Section 5 concludes this work and gives some perspectives.

2 RELATED WORK

The classical topic of time series modeling and forecasting has given rise to an extensive literature. In statistics, classical linear models include many variations around auto-regressive and moving-average models (De Gooijer & Hyndman (2006)). In machine learning, non-linear extensions of these models based on neural networks have been proposed as early as the 90s, opening the way to many other non-linear models including kernel methods (Muller et al. (99)).

Relational time series have mainly been studied in the field of spatio-temporal statistics (Cressie & Wikle (2011); Wikle & Hooten (2010)). Traditional methods first relied on a descriptive approach using the first and second-order moments of the process for modeling the spatio-temporal dependencies. More recently, dynamical state models, where the current state is conditioned on the past, have been explored (Wikle (2015)). These models have been considered both for continuous/discrete space and time components. However, the most common way is to consider discrete time, leading to the modeling of time series of spatial processes as we do here. When space is discrete, the model comes down to a general vectorial autoregressive formulation. These models face a curse of dimensionality in the case of a large number of sources. Different strategies have been adopted to solve this problem, such as embedding the spatio-temporal process in a low-dimensional manifold or parameter reduction (Wikle (2015)), leading to model families quite similar to the ones used in machine learning for modeling dynamical phenomena. Also, for complex underlying processes, observations only provide an incomplete description of the process dynamics, so that modeling uncertainty at the data and model levels is an important topic.

In the last 10 years, there has been a growing interest in learning latent representations, for example through neural networks and deep learning. Dynamical state space models such as recurrent neural networks (RNNs), which have been used for time series forecasting in different contexts since the early nineties (Connor et al. (1994)), have recently witnessed important successes in different areas for general sequence modeling problems, leading to breakthroughs in domains like speech (Graves et al. (2013)), language generation (Sutskever et al. (2011)), translation (Cho et al. (2014)), and many others. Among this family, the model closest to ours is the dynamic factor graph model of (Mirowski & LeCun (2009)), designed for multiple series modeling for the tasks of forecasting and imputation.
However, this model does not consider relational dependencies, which are the focus of our approach.

Most of the above models make use of pointwise representations and do not model explicitly the uncertainties present in the process and/or in the observations. Recently, in the representation learning community, there has been a growing interest in using distributions as latent representations instead of points. (Vilnis & McCallum (2015); He et al. (2015); Dos Santos et al. (2016)) all make use of Gaussian distributions for representing different items like words (Vilnis & McCallum (2015)), nodes in knowledge graphs (He et al. (2015)), or nodes in graphs for transductive classification (Dos Santos et al. (2016)). Note that Gaussian processes have also been used for time series prediction, but they have mainly been considered for univariate time series prediction (Hachino & Kadirkamanathan (2011); Brahim-Belhouari & Bermak (2004)) and they do not use a state space formulation.

Recent techniques in variational inference (Kingma & Welling (2014); Rezende et al. (2014)) deal with uncertainty by modeling distributions in the observation space, mapping random variables within a latent space to observations with a deep neural network. Extensions of the variational inference method to time series have been proposed (Fraccaro et al. (2016); Krishnan et al. (2015)) but, contrarily to those works, we take into account relationships (both temporal and relational). Furthermore, in our model, we work directly with random variables to predict observations from time series. This gives us direct access to the output distribution, with no need to sample or work with intractable distributions.

Our model is built on top of the model in (Ziat et al. (2016)), which proposes a deterministic dynamical process model but does not consider any explicit modeling of uncertainty. In this paper, we propose a model that uses Gaussian embeddings, and extend the dynamics and loss functions of the model in (Ziat et al. (2016)).

3 FORECASTING OF RELATIONAL TIME SERIES

3.1 NOTATIONS AND TASKS

Let us consider a set of $n$ temporal sequences¹ $x_1, \dots, x_n$ such that $x_i^{(t)} \in \mathbb{R}$ is the value of the $i$-th sequence at time $t$, defined by $x_i = (x_i^{(1)}, \dots, x_i^{(T)})$, $T$ being the number of observed time steps. For simplification, we consider that all the series have the same length, but this is not restrictive.

We model the dependencies between the different series through a graph, the different series sources being the graph vertices and the links modeling explicit dependencies between the sources. These links can reflect a spatial proximity between the sources of the series, a similarity of behavior between users, or any other predefined relation. These explicit relations will be modeled in the latent space. Our hypothesis is that they will constrain the representation of linked sources to be similar one to another in the latent space, this similarity being controlled by the strength of the link between the two time series, denoted $e_{i,j}$. We assume that the graph structure is static in time and is provided as prior information. The model can be extended to learn these static dependencies but this is not considered here.

¹For simplicity, we consider univariate time series, but the model can be trivially extended to multivariate time series.

Let us denote $\tau$ the size of the prediction horizon. The forecasting problem considered here is to compute, for all series $i$, the values $x_i^{(T+k)}$ for all $k$ in $[1; \tau]$.
Note that the model can be straightforwardly extended to the imputation problem, which aims at predicting missing values.

3.2 INFORMAL DESCRIPTION

The proposed model is a dynamic state space model: the dynamics are modeled in a continuous latent state space and the observations are generated from states in this latent space. State space models have already been considered for multiple time series (e.g. Mirowski & LeCun (2009)) and for spatio-temporal processes (e.g. Wikle & Hooten (2010)).

Both the observations and the dynamics are subject to uncertainties. Usually, the observations correspond to a partial view of the underlying generating process, and the dynamics, being hidden, are not directly accessible and should be modeled as a stochastic process.

To handle this uncertainty, we propose a model, namely the Relational Dynamic model with Gaussian representations (RDG), that represents latent factors as distributions in a latent space and learns the series dynamics in this latent space. The distributions themselves are estimated using observations, like for any other representation learning model. Besides being more adapted to handling the noise inherent to the process and to the observations, the model can be used to predict the posterior distribution of the variables associated to the series and, in particular, the confidence or variance associated to the predictions.

The model is an extension of the deterministic model of (Ziat et al. (2016)) and has two main components:

(i) Decoding component: we consider that each series corresponds to a particular trajectory in an unknown latent space. Each series $x_i^{(1)}, \dots, x_i^{(T)}$ is thus associated to a series of random variables in $\mathbb{R}^d$ denoted $Z_i^{(1)}, \dots, Z_i^{(T)}$, $Z_i^{(t)}$ being the latent factor explaining the observed value of the series $i$ at time $t$, and $d$ the size of the latent space. We model each $Z_i^{(t)}$ as a multivariate normal variable $\mathcal{N}(\mu_i^{(t)}, \Sigma_i^{(t)})$. The observation can be computed from this latent distribution by using a decoding function mapping $Z_i^{(t)}$ to $X_i^{(t)} = f(Z_i^{(t)})$.

(ii) Dynamic component: the second component models the series dynamics in the latent space. We suppose that the dynamics can be captured for all series through a function $h$ that maps the latent random variable $Z_i^{(t)}$ to the next latent variable $Z_i^{(t+1)} = h(Z_i^{(t)})$. The function $h$ is thus modeling the time dynamics. In addition, constraints are introduced to reflect prior knowledge about the relational dependency structure of the series. For any couple of series $i$ and $j$ with a known dependency, i.e. such that $e_{i,j} > 0$, we add a corresponding constraint on $Z_i^{(t)}$ and $Z_j^{(t)}$, as explained in Section 3.3.3.

In the following, we explain how the distributions corresponding to the random variables $Z_i^{(t)}$ are learned, jointly with the functions $f$ (decoder component) and $h$ (dynamic component).
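Before the formal definition, a small numpy sketch of how such a parameterization can be laid out; the array shapes and names are our own illustrative choices, under the paper's assumptions of diagonal covariances and a linear decoder:

    import numpy as np

    rng = np.random.default_rng(0)
    n, T, d = 5, 100, 3        # series, time steps, latent dimension

    # One Gaussian embedding N(mu, diag(sigma2)) per series and per time
    # step; these are free parameters estimated from the observations.
    mu     = rng.normal(size=(n, T, d))
    sigma2 = np.ones((n, T, d))          # diagonal covariances, kept positive

    theta = rng.normal(size=d)           # linear decoder f(z) = <theta, z>

    # The decoded prediction for series i at time t is itself Gaussian:
    i, t = 0, 10
    pred_mean = theta @ mu[i, t]
    pred_var  = np.sum(theta**2 * sigma2[i, t])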
3.3 MODEL DEFINITION

We suppose that the random variables $Z_i^{(t)}$ follow a Gaussian distribution. Let us denote $Z_i^{(t)} \sim \mathcal{N}(\mu_i^{(t)}, \Sigma_i^{(t)})$, where $\mu_i^{(t)}$ and $\Sigma_i^{(t)}$ have to be estimated from known observations. For simplicity, we consider in the following that $\Sigma_i^{(t)}$ is a diagonal matrix, with $\Sigma_{i,j}^{(t)}$ denoting the $j$-th value of the diagonal of $\Sigma_i^{(t)}$.

We define a global loss function $\mathcal{L}(\mu, \Sigma, f, h)$, where $\mu$ and $\Sigma$ are the means and covariance matrices for all the series and for all the time steps between $1$ and $T$. The loss is a sum of three terms: (i) a decoding loss $\Delta_e$, (ii) a dynamical loss $\Delta_{Dy}$ and (iii) a structural loss $\Delta_R$:
$$\mathcal{L}(\mu, \Sigma, f, h) = \sum_{i=1}^{n} \sum_{t=1}^{T} \Delta_e\big(f(Z_i^{(t)}), x_i^{(t)}\big) + \lambda_{Dy} \sum_{i=1}^{n} \sum_{t=1}^{T-1} \Delta_{Dy}\big(Z_i^{(t+1)}, h(Z_i^{(t)})\big) + \lambda_R \sum_{i,j=1}^{n} \sum_{t=1}^{T} e_{i,j}\, \Delta_R\big(Z_i^{(t)}, Z_j^{(t)}\big) \quad (1)$$
where $\lambda_{Dy}$ and $\lambda_R$ are hyperparameters weighting the importance of the different elements in the loss function. The first term corresponds to the decoding component and forces both $f$ and the learned distributions of the variables $Z$ to “explain” the observations; the second term, the dynamic component, encourages $h$ to model the time dynamics in the latent space; the third term captures the relations between pairs of series. In the following, we use for $f$ a linear function, and $h$ will be either a linear or a non-linear function (see Section 3.3.2).

Learning: Learning the model is performed through the minimization of the loss function $\mathcal{L}(\mu, \Sigma, f, h)$ with respect to $\mu$, $\Sigma$, $f$ and $h$. To simplify the notation, the parameters of $f$ and $h$ are not made explicit; $f$ and $h$ are supposed to be differentiable. At the end of the learning process, all the latent distributions for each of the time steps are known for the training data, as well as the decoding function $f$ and the dynamical one $h$. We used ADAM (Kingma & Ba (2015)) as a stochastic gradient descent technique. This optimization can easily be performed on a large-scale dataset, and/or by using GPUs.

3.3.1 FROM LATENT SPACE TO OBSERVATIONS

The mapping onto the latent space is learned so that the values $x_i^{(t)}$ of each series can be predicted from their respective Gaussian embedding $Z_i^{(t)}$ through the function $f$. We define below two alternative decoding loss functions $\Delta_e$, used in the experiments for measuring the error between the predicted distribution $f(Z_i^{(t)})$ and the observation $x_i^{(t)}$. Other losses could be used with the same model.

The first loss measures the difference between the expected value of $f$ and the observation using a mean squared error:
$$\Delta_{e1}\big(f(Z_i^{(t)}), x_i^{(t)}\big) \stackrel{\text{def}}{=} \big(\mathbb{E}\big[f(Z_i^{(t)})\big] - x_i^{(t)}\big)^2 \quad (2)$$
When considering a linear decoding function such as $f(\cdot) = \langle\theta, \cdot\rangle$, $\theta$ being the set of parameters of $f$, $\Delta_{e1}$ can be rewritten as:
$$\Delta_{e1}\big(f(Z_i^{(t)}), x_i^{(t)}\big) = \big(\langle\theta, \mu_i^{(t)}\rangle - x_i^{(t)}\big)^2 \quad (3)$$
The second loss aims at measuring the distance between the random variable modeling the predicted observations and the observations. This is the expectation of the mean squared error between the predictions and the observations:
$$\Delta_{e2}\big(f(Z_i^{(t)}), x_i^{(t)}\big) \stackrel{\text{def}}{=} \mathbb{E}\big[\big(f(Z_i^{(t)}) - x_i^{(t)}\big)^2\big] \quad (4)$$
When $f$ is a linear function, this loss can be written as:
$$\Delta_{e2}\big(f(Z_i^{(t)}), x_i^{(t)}\big) = \sum_{k=1}^{d} \theta_k^2\, \Sigma_{i,k}^{(t)} + \big(\langle\theta, \mu_i^{(t)}\rangle - x_i^{(t)}\big)^2 \quad (5)$$
Minimizing $\Delta_{e1}$ only updates the mean of the distributions, whereas minimizing $\Delta_{e2}$ updates both the mean and the variance. More specifically, an observed value with $\Delta_{e2}$ will pull the variances $\Sigma_i^{(t)}$ down. This is an interesting property, since observing values should reduce the variance of the representation. Moreover, this effect will be higher for the dimensions of the latent space where the value of $\theta$ is higher. This is sensible, since variance is reduced for the dimensions that are important for the prediction.
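The two closed forms above are easy to check in code; a minimal numpy sketch, assuming a linear decoder and diagonal covariances as in the text (where var_t is the diagonal of $\Sigma_i^{(t)}$):

    import numpy as np

    def delta_e1(theta, mu_t, x_t):
        """Mean-prediction loss of Eq. (3): (<theta, mu> - x)^2."""
        return (theta @ mu_t - x_t) ** 2

    def delta_e2(theta, mu_t, var_t, x_t):
        """Expected squared error of Eq. (5): the extra variance term
        sum_k theta_k^2 Sigma_{i,k} means gradients also shrink the
        variances, most strongly where theta_k is large."""
        return np.sum(theta**2 * var_t) + (theta @ mu_t - x_t) ** 2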
3.3.2 MODELING DYNAMICS

The loss function $\Delta_{Dy}$ aims at finding values $Z_i^{(\cdot)}$ and a dynamic model $h$ that will be used to predict the representation of the next state of time series $i$, $Z_i^{(t+1)}$. The function $h$ maps a distribution $\mathcal{N}(\mu_i^{(t)}, \Sigma_i^{(t)})$ to $\mathcal{N}(\mu_i^{(t+1)}, \Sigma_i^{(t+1)})$. Based on (Vilnis & McCallum (2015); Dos Santos et al. (2016)), we use the Kullback-Leibler divergence (noted $D_{KL}(\cdot \| \cdot)$) to compare the distribution at $(t+1)$ to the distribution predicted by $h$.

We propose in the following two alternative functions for $h$. For the first one, we consider that the latent representation at time $(t+1)$ is a linear transformation of the latent distribution at time $t$. The transformed variable is also a Gaussian and its parameters can be easily computed. In this case, $h$ is a linear function from $\mathbb{R}^d$ to $\mathbb{R}^d$ which is represented by a matrix $\gamma \in \mathcal{M}_{d,d}(\mathbb{R})$:
$$\Delta_{Dy1}\big(Z_i^{(t+1)}, h(Z_i^{(t)})\big) \stackrel{\text{def}}{=} D_{KL}\big(Z_i^{(t+1)} \,\|\, h(Z_i^{(t)})\big) = D_{KL}\big(Z_i^{(t+1)} \,\|\, \mathcal{N}(\gamma \mu_i^{(t)}, \gamma \Sigma_i^{(t)} \gamma^T)\big) \quad (6)$$
Linear transformations of random vectors might be too restrictive to model complex processes. As an alternative transformation, we used two non-linear multilayer perceptrons (MLPs), one $h_m$ for predicting the means and one $h_c$ for predicting the variance: the next mean is given by $\mu_i^{(t+1)} = h_m(\mu_i^{(t)}, \Sigma_i^{(t)})$, and the next variance by $\Sigma_i^{(t+1)} = h_c(\mu_i^{(t)}, \Sigma_i^{(t)})$. This gives:
$$\Delta_{Dy2}\big(Z_i^{(t+1)}, h(Z_i^{(t)})\big) \stackrel{\text{def}}{=} D_{KL}\big(Z_i^{(t+1)} \,\|\, \mathcal{N}(h_m(\mu_i^{(t)}, \Sigma_i^{(t)}), h_c(\mu_i^{(t)}, \Sigma_i^{(t)}))\big) \quad (7)$$
Note that in the second case, we also make the hypothesis that the resulting distribution (for $Z_i^{(t+1)}$) is Gaussian. In the two cases, the KL divergence between the two Gaussian distributions has a simple analytic form from which the gradient can be easily computed.²

²$D_{KL}(Z_i^{(t)} \| Z_j^{(t)}) = \frac{1}{2}\big(\mathrm{tr}\big(\Sigma_j^{(t)-1} \Sigma_i^{(t)}\big) + (\mu_j^{(t)} - \mu_i^{(t)})^T \Sigma_j^{(t)-1} (\mu_j^{(t)} - \mu_i^{(t)}) - d + \log\frac{|\Sigma_j^{(t)}|}{|\Sigma_i^{(t)}|}\big)$

3.3.3 STRUCTURAL REGULARIZATION TERM

At last, $\Delta_R$ corresponds to a structural regularization over the graph structure that encourages the model to learn similar representations for time series that are interdependent. This forces the model to learn representations that reflect the structural dependencies between the series. Recall that these dependencies are supposed to be provided as priors for this model. We define this regularization loss as:
$$\Delta_R\big(Z_i^{(t)}, Z_j^{(t)}\big) = D_{KL}\big(Z_i^{(t)} \,\|\, Z_j^{(t)}\big) \quad (8)$$
which again has, for Gaussian random variables, a simple analytical form that can be used for learning.

Minimizing the regularization term $\Delta_R$ has a direct impact on the distributions of the predicted observations for connected time series. More precisely, we have the following inequality:
$$d_{TV}\big(f(Z_i^{(t)}), f(Z_j^{(t)})\big) \leq \sqrt{\frac{d\, D_{KL}\big(Z_i^{(t)} \| Z_j^{(t)}\big)}{2}} \quad (9)$$
with $d_{TV}$ being the total variation distance of probability measures, i.e.:
$$d_{TV}(X, Y) = \sup_{A \in \mathrm{Borel}} \big|\mathcal{D}_X(A) - \mathcal{D}_Y(A)\big| \quad (10)$$
with $X$ and $Y$ being two random variables with distributions $\mathcal{D}_X$ and $\mathcal{D}_Y$ respectively, and Borel being the Borel set of $\mathbb{R}^n$ (roughly, cuboids in $\mathbb{R}^n$). This means that having relatively similar representations (regarding the KL-divergence) constrains the predicted values to be similar. For more details see Appendix A.
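The closed-form KL divergence of footnote 2 is shared by the dynamic losses (Eqs. 6-7) and the regularizer (Eq. 8); a numpy sketch restricted to the diagonal-covariance case used in this paper:

    import numpy as np

    def kl_diag_gaussians(mu_i, var_i, mu_j, var_j):
        """D_KL( N(mu_i, diag(var_i)) || N(mu_j, diag(var_j)) )."""
        d = mu_i.shape[0]
        trace_term = np.sum(var_i / var_j)
        quad_term = np.sum((mu_j - mu_i) ** 2 / var_j)
        log_det_term = np.sum(np.log(var_j)) - np.sum(np.log(var_i))
        return 0.5 * (trace_term + quad_term - d + log_det_term)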
3.4 INFERENCE

During inference, when forecasting values, the latent distribution at $(T+1)$ is deduced from the one at time $T$ and follows $\mathcal{N}(h(\mu_i^{(T)}, \Sigma_i^{(T)}))$, the distribution at $(T+2)$ follows $\mathcal{N}(h(h(\mu_i^{(T)}, \Sigma_i^{(T)})))$, and so on.

4 EXPERIMENTS

4.1 DATASETS AND BASELINES

Experiments have been performed on four datasets respectively extracted from Google Flu Trends³, WHO⁴ and from two datasets from Grand Lyon⁵ (GL) (respectively data about traffic conditions and car park occupancy). All the series are normalized. For all datasets, we used binary dependency relations indicating whether two series are related or not. The Google Flu Trends (GFT) dataset is composed of an aggregation of weekly Google search queries related to the flu in 29 countries. This dataset spans about ten years of time. The binary relations between series are defined a priori so that the series of two countries $i$ and $j$ are linked, i.e. $e_{i,j} = 1$ in Equation (1), only if the countries have a common frontier. There are 96 relations in all. The GL Traffic (GL-T) dataset corresponds to the traffic conditions of the 50 busiest roads of the city of Lyon (France). Data is aggregated over 20-minute windows spanning 15 days. The binary relations between series are based on the geographical proximity of roads. There are 130 relations in total. The GL Park (GL-P) dataset represents the occupancy of public car parks in Lyon. The series correspond to the occupancy of the 30 busiest car parks. It has the same window and period of time as the previous dataset, and the binary relations between series are based on the geographical proximity of car parks. There are 74 relations in total. The WHO dataset provides the number of deaths caused by diphtheria over 91 different countries, giving rise to 91 time series. The binary relations between series are defined so that two series are linked if the corresponding countries share a common frontier. There are 228 links in total.

³http://www.google.org/flutrends
⁴http://www.who.int
⁵http://data.grandlyon.com

We compare our approach with five baselines. Auto-Regressive (AR), a monovariate linear auto-regressive model: it computes its predictions based on a learned linear function of a fixed number $p$ of past values of the series, the order $p$ of the model being a hyperparameter selected by grid search. Feed-Forward Neural Network (FFNN), representative of non-linear auto-regressive models of order $p$, where the non-linear function is modeled as a feed-forward neural network with one hidden layer of size $s$; in this case, $p$ and $s$ are hyperparameters selected by grid search. RNN, a recurrent neural network with one hidden layer of size $s$ of recurrent units and tanh non-linearities; the RNN model is a state space non-linear auto-regressive model with exogenous inputs (the past values of the series). Note that this model should in principle be able to learn the inter-series dependencies, but the dependencies are not modeled explicitly as they are in our model; the RNN also does not introduce explicit modeling of uncertainties. KF (Kalman (1960)), a classic Kalman filter with linear transformations from one state to another. DFG (Mirowski & LeCun (2009)), a state-of-the-art model that learns continuous deterministic latent variables by modeling the dynamics and the joint probabilities between series. All the hyperparameters of the baselines have been set using a validation set by grid search, including the best architecture for the dynamic model $h$ (a multi-layer perceptron with one hidden layer, or a linear model).

For the evaluation we have considered a root-mean-square error (RMSE) criterion. Regarding the experimental protocol, models are evaluated using cross-validation with rolling origin.

Figure 1: Quantitative comparison between baselines and our proposed model (RDG) on the prediction task. RDG$_{k,l}$ corresponds to the variant with losses ($\Delta_{ek}$, $\Delta_{Dyl}$). (a) RMSE from T+1 to T+5 on GL-T. (b) RMSE at T+1 on the four datasets:

    Model     | GL-T   | GL-P   | GFT    | WHO
    AR        | 0.0752 | 0.0892 | 0.0626 | 0.0832
    FFNN      | 0.0751 | 0.0894 | 0.045  | 0.0838
    RNN       | 0.0709 | 0.0890 | 0.0431 | 0.0795
    KF        | 0.0711 | 0.0833 | 0.0388 | 0.0799
    DFG       | 0.0712 | 0.0911 | 0.0592 | 0.0795
    RDG(1,1)  | 0.0742 | 0.0902 | 0.0607 | 0.0848
    RDG(1,2)  | 0.0707 | 0.0834 | 0.0434 | 0.0796
    RDG(2,1)  | 0.0765 | 0.0896 | 0.0589 | 0.0831
    RDG(2,2)  | 0.0718 | 0.0828 | 0.0429 | 0.0795

4.2 RESULTS

Let us first present the performance of our model w.r.t. the baselines for prediction at horizon 1 in Figure 1b. We have tested the four variants of our approach, i.e. combinations of $\Delta_{e1}$ or $\Delta_{e2}$ with $\Delta_{Dy1}$ or $\Delta_{Dy2}$.
The proposed model obtains the best results on all the datasets except GFT, where KF performs better. It outperforms the baselines on two datasets (GL-P, Grand Lyon Parks, and GFT, Google Flu Trends, in the table) and obtains results similar to the RNN on the two others (GL-T, Grand Lyon Traffic, and WHO). The non-linear dynamical model $\Delta_{Dy2}$ usually gets better results than the other variants, the best combination being the use of the MSE expectation error for the decoder and the non-linear model for the dynamics (denoted RDG$_{2,2}$ in the figure).

Figure 1a shows the prediction quality (RMSE) at $(T+1)$, $(T+2)$, $(T+3)$, $(T+4)$ and $(T+5)$, and illustrates the ability of RDG to predict correctly at different horizons. Here again, the performance of RDG is very close to the performance of the recurrent neural network. One can remark that at $(T+5)$ KF loses ground: it performs well at $(T+1)$ but quite badly at $(T+5)$ in comparison to the other baselines.

RDG has the additional property of modeling the uncertainty associated to its predictions, which is not the case for an RNN. Let us consider the curves presented in Figure 2. They illustrate the predictions made by our model together with their associated variance, computed through the Gaussian embeddings. First, one can see that the ground-truth values are always within the confidence interval provided by our model, which means that RDG computes relevant minimum and maximum possible values. Another aspect is that the size of the interval increases with the prediction horizon, which is what is expected from such a model. The latter is then able to predict relevant confidence values for its predictions.

Figure 2: Forecasts on GFT (two different time series of the dataset) with the RDG$_{2,2}$ model showing its range of confidence: $\mathbb{E}[f(Z^{(t)})] \pm \mathrm{var}(f(Z^{(t)}))$. The prediction at $25+n$ corresponds to $f(h^n(Z^{(25)}))$.

Comparison between RDG with/without structural regularization or uncertainty. We compare in Table 1 the results of our model when taking into account the neighborhood graph ($\lambda_R \neq 0$) or not ($\lambda_R = 0$): forecasts are uniformly worse for all datasets when we do not take into account the neighborhood graph, which suggests that the regularizer improves the model when the input graph is relevant. Furthermore, we give the results obtained without uncertainty, which corresponds to the model described in (Ziat et al. (2016)) (denoted Rainstorm): here again, our model outperforms the previous one for all the datasets.

Table 1: RMSE at T+1 on the four datasets.

    Model              | GL-T   | GL-P   | GFT    | WHO
    Rainstorm          | 0.0710 | 0.0886 | 0.0440 | 0.0804
    RDG (lambda_R = 0) | 0.0719 | 0.0900 | 0.0441 | 0.0807
    RDG                | 0.0707 | 0.0828 | 0.0388 | 0.0795

5 CONCLUSION AND FUTURE WORK

We have proposed a model for relational time series forecasting. Our model (RDG) is based on latent Gaussian embeddings, and has shown competitive performance on four different datasets compared to state-of-the-art models. Moreover, RDG allows us to model the uncertainty of predictions, providing for example confidence intervals for each prediction. Future work will investigate more complex dynamic and prediction functions, as well as observing the behavior of the model for imputation tasks.

REFERENCES

TG Barbounis, JB Theocharis, MC Alexiadis, and PS Dokopoulos.
Long-term wind speed and power forecasting using local recurrent neural network models. IEEE TEC, 2006.
Sofiane Brahim-Belhouari and Amine Bermak. Gaussian process for non-stationary time series prediction. Computational Statistics & Data Analysis, 47(4):705–712, November 2004.
Qing Cao, Bradley T Ewing, and Mark A Thompson. Forecasting wind speed with recurrent neural networks. European Journal of Operational Research, 221(1):148–154, 2012.
K Cho, B Van Merriënboer, C Gulcehre, D Bahdanau, F Bougares, H Schwenk, and Y Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. EMNLP, 2014.
Jerome T Connor, R Douglas Martin, and Les E Atlas. Recurrent neural networks and robust time series prediction. Neural Networks, IEEE Transactions on, 1994.
Noel A. C. Cressie and Christopher K. Wikle. Statistics for spatio-temporal data. Wiley series in probability and statistics. Hoboken, N.J. Wiley, 2011. ISBN 978-0-471-69274-4.
Jan G De Gooijer and Rob J Hyndman. 25 years of time series forecasting. International Journal of Forecasting, 2006.
Ludovic Dos Santos, Benjamin Piwowarski, and Patrick Gallinari. Multilabel classification on heterogeneous graphs with Gaussian embeddings. In ECML-KDD, 2016.
Marco Fraccaro, Søren Kaae Sønderby, Ulrich Paquet, and Ole Winther. Sequential neural models with stochastic layers. Advances in Neural Information Processing Systems, 2016.
Alan Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In IEEE ICASSP, 2013.
Tomohiro Hachino and Visakan Kadirkamanathan. Multiple Gaussian process models for direct time series forecasting. IEEJ Transactions on Electrical and Electronic Engineering, 6(3):245–252, May 2011.
S. He, K. Liu, G. Ji, and J. Zhao. Learning to represent knowledge graphs with Gaussian embedding. In Proceedings of the 24th ACM CIKM, pp. 623–632. ACM, 2015.
Michiel Hermans and Benjamin Schrauwen. Training and analysing deep recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 190–198, 2013.
TJ Hsieh, HF Hsiao, and WC Yeh. Forecasting stock markets using wavelet transforms and recurrent neural networks: An integrated system based on artificial bee colony algorithm. Applied Soft Computing, 2011.
Rudolph Emil Kalman. A new approach to linear filtering and prediction problems. Transactions of the ASME–Journal of Basic Engineering, 82(Series D):35–45, 1960.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015.
DP Kingma and M Welling. Auto-encoding variational Bayes. In ICLR, 2014.
Rahul G Krishnan, Uri Shalit, and David Sontag. Deep Kalman filters. NIPS 2015 Workshop, 2015.
Piotr Mirowski and Yann LeCun. Dynamic factor graphs for time series modeling. In Machine Learning and Knowledge Discovery in Databases. Springer, 2009.
KR Muller, A J Smola, G Ratsch, B Scholkopf, J Kohlmorgen, and V Vapnik. Using support vector machines for time series prediction. Kernel Methods: Support Vector Learning, 99.
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and variational inference in deep latent Gaussian models. In International Conference on Machine Learning, 2014.
Ilya Sutskever, James Martens, and Geoffrey E. Hinton. Generating text with recurrent neural networks.
In Proceedings of ICML, 2011.
Luke Vilnis and Andrew McCallum. Word representations via Gaussian embedding. ICLR, 2015.
Christopher K. Wikle. Modern perspectives on statistics for spatio-temporal data. Wiley Interdisciplinary Reviews: Computational Statistics, 7(1):86–98, 2015.
Christopher K Wikle and Mevin B Hooten. A general science-based framework for dynamical spatio-temporal models. Test, 19(3):417–451, 2010.
Ali Ziat, Gabriella Contardo, Nicolas Baskiotis, and Ludovic Denoyer. Learning embeddings for completion and prediction of relational multivariate time-series. In ESANN, 2016.

A IMPACT OF MINIMIZING THE KL-DIVERGENCE ON PREDICTED VALUES

In this section, we show that the structural regularization term between two time series bounds the difference between the predicted observations.

Since we use diagonal covariance matrices and since the KL-divergence is invariant under multiplication of both random variables by the same scalar, we can show that:
$$D_{KL}\big(Z_i^{(t)} \| Z_j^{(t)}\big) = \sum_{k=1}^{d} D_{KL}\big(Z_{i,k}^{(t)} \| Z_{j,k}^{(t)}\big) = \sum_{k=1}^{d} D_{KL}\big(\theta_k Z_{i,k}^{(t)} \| \theta_k Z_{j,k}^{(t)}\big) \quad (11)$$
with $Z_{i,k}^{(t)}$ being the $k$-th component of the Gaussian vector $Z_i^{(t)}$.

Then, using Pinsker's inequality, one can see that minimizing the KL-divergence also minimizes the total variation norm (which can be more intuitive in some cases), leading to:
$$2 \sum_{k=1}^{d} d_{TV}\big(\theta_k Z_{i,k}^{(t)}, \theta_k Z_{j,k}^{(t)}\big)^2 \leq \sum_{k=1}^{d} D_{KL}\big(\theta_k Z_{i,k}^{(t)} \| \theta_k Z_{j,k}^{(t)}\big) \quad (12)$$
with $d_{TV}$ being the total variation distance of probability measures.

Using the Cauchy–Schwarz inequality:
$$\frac{1}{d}\Big(\sum_{k=1}^{d} d_{TV}\big(\theta_k Z_{i,k}^{(t)}, \theta_k Z_{j,k}^{(t)}\big)\Big)^2 \leq \sum_{k=1}^{d} d_{TV}\big(\theta_k Z_{i,k}^{(t)}, \theta_k Z_{j,k}^{(t)}\big)^2 \quad (13)$$
Finally, the components of the random vectors $Z^{(t)}$ being pairwise independent, we have:
$$d_{TV}\Big(\sum_{k=1}^{d} \theta_k Z_{i,k}^{(t)},\; \sum_{k=1}^{d} \theta_k Z_{j,k}^{(t)}\Big) \leq \sum_{k=1}^{d} d_{TV}\big(\theta_k Z_{i,k}^{(t)}, \theta_k Z_{j,k}^{(t)}\big) \quad (14)$$
Combining the inequalities above, we can straightforwardly show the following inequality:
$$d_{TV}\big(f(Z_i^{(t)}), f(Z_j^{(t)})\big) \leq \sqrt{\frac{d\, D_{KL}\big(Z_i^{(t)} \| Z_j^{(t)}\big)}{2}} \quad (15)$$
HJrDIpiee
Under review as a conference paper at ICLR 2017

INVESTIGATING RECURRENCE AND ELIGIBILITY TRACES IN DEEP Q-NETWORKS

Jean Harb, Doina Precup
School of Computer Science
McGill University
Montreal, QC, Canada
{jharb,dprecup}@cs.mcgill.ca

ABSTRACT

Eligibility traces in reinforcement learning are used as a bias-variance trade-off and can often speed up training time by propagating knowledge back over time steps in a single update. We investigate the use of eligibility traces in combination with recurrent networks in the Atari domain. We illustrate the benefits of both recurrent nets and eligibility traces in some Atari games, and also highlight the importance of the optimization used in the training.

1 INTRODUCTION

Deep reinforcement learning has had many practical successes in game playing (Mnih et al. (2015), Silver et al. (2016)) and robotics (Levine & Abbeel (2014)). Our interest is in further exploring these algorithms in the context of environments with sparse rewards and partial observability. To this end, we investigate the use of two methods that are known to mitigate these problems: recurrent networks, which provide a form of memory summarizing past experiences, and eligibility traces, which allow information to propagate over multiple time steps. Eligibility traces have been shown empirically to provide faster learning (Sutton & Barto (2017), in preparation) but their use with deep RL has been limited so far (van Seijen & Sutton (2014), Hausknecht & Stone (2015)). We provide experiments in the Atari domain showing that eligibility traces boost the performance of deep RL. We also demonstrate a surprisingly strong effect of the optimization method on the performance of the recurrent networks.

The paper is structured as follows. In Sec. 2 we provide the background and notation needed for the paper. Sec. 3 describes the algorithms we use. In Sec. 4 we present and discuss our experimental results. In Sec. 5 we conclude and present avenues for future work.

2 BACKGROUND

A Markov Decision Process (MDP) consists of a tuple $\langle S, A, r, P, \gamma \rangle$, where $S$ is the set of states, $A$ is the set of actions, $r: S \times A \mapsto \mathbb{R}$ is the reward function, $P(s'|s, a)$ is the transition function (giving the next-state distribution, conditioned on the current state and action), and $\gamma \in [0, 1)$ is the discount factor. Reinforcement learning (RL) (Sutton & Barto, 1998) is a framework for solving unknown MDPs, which means finding a good (or optimal) way of behaving, also called a policy. RL works by obtaining transitions from the environment and using them in order to compute a policy that maximizes the expected return, given by $\mathbb{E}_\pi\big[\sum_{t=0}^{\infty} \gamma^t r_t\big]$.

The state-value function for a policy $\pi: S \times A \to [0, 1]$, $V^\pi(s)$, is defined as the expected return obtained by starting at state $s$ and picking actions according to $\pi$. State-action values $Q^\pi(s, a)$ are similar to state values, but conditioned also on the initial action $a$. A policy can be derived from the $Q$ values by always picking the action with the best estimated value at any state.

Monte Carlo (MC) and Temporal Difference (TD) are two standard methods for updating the value function from data.
In MC, an entire trajectory's return is used as the target value of the current state:

$$\text{MC error} = \sum_{i=0}^{\infty} \gamma^i r_{t+i} - V(s_t) \quad (1)$$

In TD, the estimate of the next state's value is used to correct the current state's estimate:

$$\text{TD error} = r_t + \gamma V(s_{t+1}) - V(s_t) \quad (2)$$

Q-learning is an RL algorithm that allows an agent to learn by imagining that it will take the best possible action in the following step:

$$\text{TD error} = r_t + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \quad (3)$$

This is an instance of off-policy learning, in which the agent gathers data with an exploratory policy, which randomizes the choice of action, but updates its estimates by constructing targets according to a different policy (in this case, the policy that is greedy with respect to the current value estimates).

2.1 ELIGIBILITY TRACES

Eligibility traces are a fundamental reinforcement learning mechanism which allows a trade-off between TD and MC. MC methods suffer from high variance, as many trajectories can be taken from any given state and stochasticity is often present in the MDP. TD suffers from high bias, as it updates values based on its own estimates.

Using eligibility traces allows one to design algorithms that cover the middle ground between MC and TD. The central notion for these is the n-step return, which provides a way of calculating the target by using the value estimate for the state that occurs n steps in the future (compared to the current state):

$$R^{(n)}_t = \sum_{i=0}^{n-1} \gamma^i r_{t+i} + \gamma^n V(s_{t+n}) \quad (4)$$

When n is 1, the result is the TD target, and taking n → ∞ yields the MC target.

Eligibility traces use a geometric weighting of these n-step returns, where the weight of the k-step return is λ times the weight of the (k−1)-step return. Using λ = 0 reduces to using TD, as all n-step returns for n > 1 have a weight of 0. One of the appealing effects of using eligibility traces is that a single update allows states many steps behind a reward signal to receive credit. This propagates knowledge back at a faster rate, allowing for accelerated learning. Especially in environments where rewards are sparse and/or delayed, eligibility traces can help assign credit to past states and actions. Without traces, seeing a sparse reward will only propagate the value back by one step, which in turn needs to be sampled to send the value back a second step, and so on.

$$R^{\lambda}_t = (1-\lambda)\sum_{i=1}^{\infty} \lambda^{i-1} R^{(i)}_t = (1-\lambda)\sum_{i=1}^{\infty} \lambda^{i-1}\Big(\sum_{j=0}^{i-1} \gamma^j r_{t+j} + \gamma^i V(s_{t+i})\Big) \quad (5)$$

This way of viewing eligibility traces is called the forward view, as states are looking ahead at the rewards received in the future. The forward view is rarely used, as it requires a state to wait for the future to unfold before calculating an update, and requires memory to store the experience. There is an equivalent view called the backward view, which allows us to calculate updates for every previous state as we take a single action. This requires no memory and lets us perform updates without having to wait for the future. However, this view has had limited success in the neural network setting, as it requires keeping a trace on each neuron of the network; these tend to be dense and heavily used at each step, resulting in noisy signals. For this reason, eligibility traces are not heavily used in deep learning, despite their potential benefits.
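To illustrate Eqs. (4)-(5), the following NumPy sketch (ours, not from the paper) computes the n-step returns of a finite trajectory and mixes them into the λ-return; for a finite trajectory, the tail of the geometric weighting is folded onto the final (MC) return:

import numpy as np

def n_step_return(rewards, values, t, n, gamma):
    # Eq. (4): R_t^(n) = sum_{i=0}^{n-1} gamma^i r_{t+i} + gamma^n V(s_{t+n}).
    # rewards[k] = r_k; values[k] = V(s_k), with values[-1] = 0 for a terminal state.
    n = min(n, len(rewards) - t)          # truncate at the end of the trajectory
    ret = sum(gamma ** i * rewards[t + i] for i in range(n))
    return ret + gamma ** n * values[t + n]

def lambda_return(rewards, values, t, gamma, lam):
    # Eq. (5): R_t^lambda = (1 - lam) * sum_{n >= 1} lam^(n-1) * R_t^(n).
    T = len(rewards) - t
    weights = np.array([(1 - lam) * lam ** (n - 1) for n in range(1, T + 1)])
    weights[-1] += 1 - weights.sum()      # fold the geometric tail onto the MC return
    returns = np.array([n_step_return(rewards, values, t, n, gamma)
                        for n in range(1, T + 1)])
    return float(weights @ returns)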
2.1.1 Q(λ)

Q(λ) is a variant of Q-learning where eligibility traces are used to calculate the TD error. As mentioned previously, the backward view of traces is traditionally used. A few versions of Q(λ) exist, but the most used one is Watkins's Q(λ). As Q-learning is off-policy, the sequence of actions in the past trajectory used to calculate the trace might differ from the actions that the current policy would take. In that case, one should not use the trajectory past the point where the actions differ. To handle such a case, Watkins's Q(λ) sets the trace to 0 if the action that the current policy would select is different from the one used in the past.

2.2 DEEP Q-NETWORKS

Mnih et al. (2015) introduced deep Q-networks (DQN), one of the first successful reinforcement learning algorithms to use deep learning for function approximation in a way general enough to be applicable to a variety of environments. Applying it to a set of Atari games, they used a convolutional neural network (CNN) which took as input the last four frames of the game, and output Q-values for each possible action.

Equation 6 shows the DQN cost function, where we optimize the parameters θ. The parameters θ⁻ represent frozen Q-value weights which are updated at a chosen frequency.

$$L(s_t, a_t \mid \theta) = \big(r_t + \gamma \max_{a'} Q(s_{t+1}, a' \mid \theta^-) - Q(s_t, a_t \mid \theta)\big)^2 \quad (6)$$

2.2.1 DEEP RECURRENT Q-NETWORKS

As introduced in Hausknecht & Stone (2015), deep recurrent Q-networks (DRQN) are a modification of DQN, where single frames are passed through a CNN, which generates a feature vector that is then fed to an RNN which finally outputs Q-values. This architecture gives the agent a memory, allowing it to learn long-term temporal effects and handle partial observability, which is the case in many environments. The authors showed that randomly blanking out frames was difficult for DQN to overcome, but that DRQN learned to handle it without issue.

To train DRQN, they proposed two variants of experience replay. The first was to sample entire trajectories and run the RNN from end to end. However, this is very computationally demanding, as some trajectories can be over 10000 steps long. The second alternative was to sample sub-trajectories instead of single transitions. This is required as the RNN needs to fill its hidden state, and it allows the network to capture the temporal aspect of the data.

2.3 OPTIMIZERS

Stochastic gradient descent (SGD) is generally the algorithm used to optimize neural networks. However, some information is lost during the process, as past gradients might signal that a weight drastically needs to change, or that it is oscillating, requiring a decrease in learning rate. Adaptive SGD algorithms have been built to use this information.

RMSprop (Tieleman & Hinton (2012)) keeps a geometric (exponential moving) average of squared gradients, and divides the current gradient by its square root. To perform RMSprop, we first calculate the average g = βg + (1−β)∇θ² and then update the parameters θ ← θ + α∇θ/√(g + ε).

DQN (Graves (2013)) introduced a variant of RMSprop where the gradient is instead divided by the standard deviation of the running average. We first calculate the running averages m = βm + (1−β)∇θ and g = βg + (1−β)∇θ², and then update the parameters using θ ← θ + α∇θ/√(g − m² + ε). In the rest of the paper, when mentioning RMSprop, we will be referring to this version.
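A minimal sketch of the two RMSprop variants above (ours; following the text's convention, grad denotes the update direction, and the default hyperparameter values are assumptions):

import numpy as np

def rmsprop_step(theta, grad, g, alpha=0.00025, beta=0.95, eps=0.01):
    # Standard RMSprop: divide the step by the root of the running average of squared gradients.
    g = beta * g + (1 - beta) * grad ** 2
    theta = theta + alpha * grad / np.sqrt(g + eps)
    return theta, g

def rmsprop_graves_step(theta, grad, m, g, alpha=0.00025, beta=0.95, eps=0.01):
    # Graves (2013) variant used by DQN: divide by the running standard deviation of the gradient.
    m = beta * m + (1 - beta) * grad
    g = beta * g + (1 - beta) * grad ** 2
    theta = theta + alpha * grad / np.sqrt(g - m ** 2 + eps)
    return theta, m, g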
Finally, Kingma & Ba (2014) introduced Adam, which is essentially RMSprop coupled with momentum, with the running averages corrected for bias. There is a decay term β_i for each of the running averages. To calculate the update with Adam, we start by updating the averages m = β₁m + (1−β₁)∇θ and v = β₂v + (1−β₂)∇θ², then correct their biases m̂ = m/(1−β₁ᵗ) and v̂ = v/(1−β₂ᵗ), and finally calculate the gradient update θ ← θ + αm̂/(√v̂ + ε).

Figure 1: This graph illustrates how a sample from experience replay is used in training. We use a number of frames to fill the hidden state of the RNN. Then, for the states used for training, we have the RNN output the Q-values. Finally, we calculate each n-step return and weight them according to λ, where the arrows represent the forward view of each trace. All states are passed through the CNN before entering the RNN.

3 EXPERIMENTAL SETUP

As explained, the forward view of eligibility traces can be useful, but is computationally demanding in terms of memory and time. One must store all transitions and apply the neural network to each state in the trajectory. By using DRQN, experience replay is already part of the algorithm, which removes the memory requirement of the traces. Then, by training on sub-trajectories of data, the states must be run through the RNN with all state values as the output, which removes the extra computational cost. Finally, all that is left to use eligibility traces is simply to calculate the weighted sum of the targets, which is very cheap to do.

In this section we analyze the use of eligibility traces when training DRQN and try both RMSprop and Adam as optimizers. We only tested the algorithms on fully observable games, so as to compare the learning capacities without the unfair advantage of having a memory, as would be the case in partially observable environments.

3.1 ARCHITECTURE

We tested the algorithms on two Atari 2600 games, part of the Arcade Learning Environment (Bellemare et al. (2012)), Pong and Tennis. The architecture used is similar to the one used in Hausknecht & Stone (2015). The frames are converted to gray-scale and re-sized to 84x84. These are then fed to a CNN with the first layer being 32 8x8 filters with a stride of 4, followed by 64 4x4 filters with a stride of 2, then by 64 3x3 filters with a stride of 1. The output of the CNN is then flattened before being fed to a single dense layer of 512 output neurons, which is finally fed to an LSTM (Hochreiter & Schmidhuber (1997)) with 512 cells. We then have a last linear layer that takes the output of the recurrent layer to output the Q-values. All layers before the LSTM are activated using rectified linear units (ReLU).

As mentioned in subsection 2.2.1, we also altered experience replay to sample sub-trajectories. We use backpropagation through time (BPTT) to train the RNN, but only train on a sub-trajectory of experience. At run time, the RNN will have had a long sequence of inputs in its hidden state, which can be problematic if it is always trained with an empty hidden state. As in Lample & Chaplot (2016), we therefore sample a slightly longer trajectory and use the first m states to fill the hidden state. In our experiments, we selected trajectory lengths of 32, where the first 10 states are used as filler and the remaining 22 are used for the traces and TD costs. We used a batch size of 4.

All experiments using eligibility traces use λ = 0.8. Furthermore, we use Watkins's Q(λ). To limit the computational cost of using traces, we cut the trace off once it becomes too small. In our experiments, we chose a limit of 0.01, which allows the traces to affect 21 states ahead (when λ = 0.8). We calculate the trace for every state in the trajectory, except for a few at the beginning, used to fill in the hidden state of the RNN.

When using RMSprop, we used a momentum of 0.95, an epsilon of 0.01 and a learning rate of 0.00025. When using Adam, we used a momentum of gradients of 0.9, a momentum of squared gradients of 0.999, an epsilon of 0.001 and a learning rate of 0.00025.

Testing phases are consistent across all models, with the score being the average over the games played during 125000 frames. We also use an ε of 0.05 for action selection.

    Choose k as the number of trace steps and m as the number of RNN-filler steps
    Initialize weights θ, experience replay D
    s ← s₀
    repeat
        Initialize RNN hidden state to 0
        repeat
            Choose a according to an ε-greedy policy on Q(s, a|θ)
            Take action a in s, observe s′, r
            Store (s, a, r, s′) in experience replay D
            Sample 4 sub-trajectories of m + k sequential transitions (s, a, r, s′) from D
            ŷ_t = r_t                                  if s′ is terminal
                  r_t + γ max_{a′} Q(s′, a′|θ⁻)         otherwise
            for each sampled transition t do
                λ_t = λ    if a_t = argmax_a Q(s_t, a|θ⁻)
                      0    otherwise
            end
            for l from 0 to k − 1 do
                R̂_{t+l} = [ Σ_{s=l}^{k} ( Π_{i=l}^{s} λ_{t+i} ) R^{(s−l+1)}_{t+l} ] / [ Σ_{s=l}^{k} Π_{i=l}^{s} λ_{t+i} ]
            end
            Perform gradient descent on ∂(R̂ − Q(s, a|θ))²/∂θ
            Every 10000 steps: θ⁻ ← θ
            s ← s′
        until s′ is terminal
    until training complete

Algorithm 1: Deep recurrent Q-networks with forward-view eligibility traces on Atari. The eligibility traces are calculated using the n-step return function R^{(n)}_t for time-step t, as described in Section 2.1.
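To make the target computation in Algorithm 1 concrete, the following NumPy sketch reflects our reading of the algorithm (not the authors' code); the handling of the Watkins cut on the very first step is an assumption, and all function and variable names are ours:

import numpy as np

def watkins_lambda_targets(rewards, q_next, greedy, gamma, lam, k):
    # rewards[t]: r_t;  q_next[t]: max_a Q(s_{t+1}, a | theta^-), 0 if s_{t+1} is terminal
    # greedy[t]: True if a_t matches the greedy action under theta^- (Watkins's cut)
    T = len(rewards)

    def n_step(t, n):
        # n-step return bootstrapped with the frozen network (Eq. 4 with Q targets)
        return sum(gamma ** i * rewards[t + i] for i in range(n)) + gamma ** n * q_next[t + n - 1]

    lam_t = np.where(greedy, lam, 0.0)
    targets = np.empty(T)
    for t in range(T):
        n_max = min(k, T - t)
        # Weight of the n-step return: product of lam_t over the intermediate actions,
        # so the mixture is cut as soon as a non-greedy action occurs.
        ws = np.cumprod(np.concatenate(([1.0], lam_t[t + 1:t + n_max])))
        rs = np.array([n_step(t, n) for n in range(1, n_max + 1)])
        targets[t] = ws @ rs / ws.sum()   # normalized lambda-weighted mixture (cf. Algorithm 1)
    return targets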
4 EXPERIMENTAL RESULTS

We describe experiments in two Atari games: Pong and Tennis. We chose Pong because it permits quick experimentation, and Tennis because it is one of the games that has proven difficult in all published results on Atari.

4.1 PONG

First, we tested an RNN model with both λ = 0 and λ = 0.8, trained with RMSprop. Figure 2 shows that the model without a trace (λ = 0) learned at the same rate as DQN, while the model with traces (λ = 0.8) learned substantially faster and with more stability, without exhibiting any epochs with depressed performance. This is probably due to the eligibility traces propagating rewards back by many steps in a single update. In Pong, when the agent hits the ball, it must wait several time-steps before the ball gets either to or past the opponent. Once this happens, the agent must assign the credit of the event back to the time when it hit the ball, and not to the actions performed after the ball had already left. The traces clearly help send this signal back faster.

Figure 2: Test scores on Pong by training models with RMSprop vs Adam (curves: RNN trace=0.0, RNN trace=0.8, DQN).

We then tested the same models but using Adam as the optimizer instead of RMSprop. All models learn much faster with this setting. However, the model with no trace gains significantly more than the model with the trace. Our current intuition is that some hyper-parameters, such as the frozen network's update frequency, are limiting the rate at which the model can learn. Note also that the DQN model also learns faster with Adam as the optimizer, but remains quite unstable in comparison with the recurrent net models.

Finally, the results in Table 1 show that both using eligibility traces and Adam provide performance improvements. While training with RMSprop, the model with traces gets to near-optimal performance more than twice as quickly as the other models. With Adam, the model learns to be optimal in just 6 epochs.
                 RMSprop    Adam
DQN                   23      12
RNN λ = 0             28       8
RNN λ = 0.8           10       6

Table 1: Number of epochs before getting to 18 points in Pong. We chose 18 points as the threshold because it represents a near-optimal strategy. Testing is performed with a 5% ε-greedy policy, stopping the agent from having a perfect score.

4.2 TENNIS

The second Atari 2600 game we tested was Tennis. A match consists of only one set, which is won by the player who is the first to win 6 "games" (as in regular tennis). The score ranges from 24 to −24, given as the difference between the number of balls won by the two players.

As in Pong, we first tried an RNN trained with RMSprop and the standard learning rate of 0.00025, both with and without eligibility traces (using again λ = 0.8 and λ = 0). Figure 3 shows that both RNN models learned to get optimal scores after about 50 epochs. This is in contrast with DQN, which never seems to be able to pass the 0 threshold, with large fluctuations ranging from −24 to 0. After visually inspecting the games played in the testing phase, we noticed that the DQN agent gets stuck in a loop, where it exchanges the ball with the opponent until the timer runs out. In such a case, the agent minimizes the number of points scored against, but never learns to beat the opponent. The score fluctuations depend on how few points the agent allows before entering the loop. We suspect that the agent gets stuck in this policy because the reward for trying to defeat the opponent is delayed, waiting for the ball to reach the opponent and get past it. Furthermore, the experiences of getting a point are relatively sparse. Together, this makes it difficult to propagate the reward back to the action of hitting the ball correctly.

We also notice that both the RNN with and without eligibility traces manage to learn a near-optimal policy without getting stuck in the bad policy. The RNN has the capacity to send the signal back to past states with BPTT, allowing it to do credit assignment implicitly, which might explain its ability to escape the bad policy. Remarkably, this is the only algorithm that succeeds in getting near-optimal scores on Tennis, out of all variants of DQN (Mnih et al. (2015), Munos et al. (2016), Wang et al. (2015), Mnih et al. (2016), Schaul et al. (2015)), which tend to get stuck in the policy of delaying. The model without traces learned at a faster pace than the one with traces, arriving at a score of 20 in 45 epochs as opposed to 62 for its counterpart. It is possible that the updates for the model with traces were smaller, due to the weighting of target values, indirectly leading to a lower learning rate. We also trained the models with RMSprop and a higher learning rate of 0.001. This led to the model with traces getting to 20 points in just 27 epochs, while the model without traces lost its ability to get optimal scores and never passed the 0 threshold.

Figure 3: Test scores on Tennis comparing RMSprop and Adam (panels: RMSprop lr=0.00025, RMSprop lr=0.001, Adam; curves: RNN trace=0.0, RNN trace=0.8, DQN).

                 RMSprop lr=0.00025    RMSprop lr=0.001    Adam lr=0.00025
DQN                    N/A                  N/A                 N/A
RNN λ = 0               45                  N/A                  19
RNN λ = 0.8             62                   27                  13

Table 2: Number of epochs before getting to 20 points in Tennis. N/A represents the inability to reach such a level.

We then tried using Adam as the optimizer, with the original learning rate of 0.00025.
Both RNN models learned substantially faster than with RMSprop, with the RNN with traces getting to near-optimal performance in just 13 epochs. With Adam, the gradient for the positive TD is stored in the momentum part of the equation for quite some time. Once in momentum, the gradient is part of many updates, which makes it enough to overtake the safe strategy. We also notice that the model with traces was much more stable than its counterpart. The model without traces fell back to the policy of delaying the game on two occasions, after having learned to beat the opponent. Finally, we trained DQN with Adam, but the model acted the same way as DQN trained with RMSprop.

5 DISCUSSION AND CONCLUSION

In this paper, we analyzed the effects of using eligibility traces and different optimization functions. We showed that eligibility traces can improve and stabilize learning, and that using Adam can strongly accelerate learning.

As shown in the Pong results, the model using eligibility traces did not gain much performance from using Adam. One possible cause is the frozen network. While it has a stabilizing effect in DQN, by stopping policies from drastically changing from a single update, it also stops newly learned values from being propagated back. Double DQN seems to partially get around this issue, allowing the policy of the next state to change, while keeping the values frozen. In future experiments, we should consider eliminating the frozen network or increasing its update frequency. It would also be interesting to reduce the size of experience replay, as with increased learning speed, old observations can become too off-policy and barely be used in eligibility traces.

REFERENCES

Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 2012.

Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.

Matthew Hausknecht and Peter Stone. Deep recurrent q-learning for partially observable mdps. arXiv preprint arXiv:1507.06527, 2015.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Guillaume Lample and Devendra Singh Chaplot. Playing fps games with deep reinforcement learning. arXiv preprint arXiv:1609.05521, 2016.

Sergey Levine and Pieter Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems, pp. 1071–1079, 2014.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.

Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.

Rémi Munos, Tom Stepleton, Anna Harutyunyan, and Marc G Bellemare. Safe and efficient off-policy reinforcement learning. arXiv preprint arXiv:1606.02647, 2016.

Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay.
arXiv preprint arXiv:1511.05952, 2015.

David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.

Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction, volume 1. MIT Press, Cambridge, 1998.

Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction, in preparation. MIT Press, Cambridge, 2017.

Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012.

Harm van Seijen and Rich Sutton. True online TD(λ). In Proceedings of The 31st International Conference on Machine Learning, pp. 692–700, 2014.

Ziyu Wang, Nando de Freitas, and Marc Lanctot. Dueling network architectures for deep reinforcement learning. arXiv preprint arXiv:1511.06581, 2015.
HkuVu3ige
Under review as a conference paper at ICLR 2017

ON ORTHOGONALITY AND LEARNING RECURRENT NETWORKS WITH LONG TERM DEPENDENCIES

Eugene Vorontsov¹,², Chiheb Trabelsi¹,², Samuel Kadoury¹,³, Chris Pal¹,²
¹ École Polytechnique de Montréal, Montréal, Canada
² Montreal Institute for Learning Algorithms, Montréal, Canada
³ CHUM Research Center, Montréal, Canada
{eugene.vorontsov, chiheb.trabelsi, samuel.kadoury, christopher.pal}@polymtl.ca

ABSTRACT

It is well known that it is challenging to train deep neural networks and recurrent neural networks for tasks that exhibit long term dependencies. The vanishing or exploding gradient problem is a well known issue associated with these challenges. One approach to addressing vanishing and exploding gradients is to use either soft or hard constraints on weight matrices so as to encourage or enforce orthogonality. Orthogonal matrices preserve gradient norm during backpropagation and can therefore be a desirable property; however, we find that hard constraints on orthogonality can negatively affect the speed of convergence and model performance. This paper explores the issues of optimization convergence, speed and gradient stability using a variety of different methods for encouraging or enforcing orthogonality. In particular we propose a weight matrix factorization and parameterization strategy through which we can bound matrix norms and therein control the degree of expansivity induced during backpropagation.

1 INTRODUCTION

The depth of deep neural networks confers representational power, but also makes model optimization more challenging. Training deep networks with gradient descent based methods is known to be difficult as a consequence of the vanishing and exploding gradient problem (Hochreiter & Schmidhuber, 1997). Typically, exploding gradients are avoided by clipping large gradients (Pascanu et al., 2013) or introducing an L2 or L1 weight norm penalty. The latter has the effect of bounding the spectral radius of the linear transformations, thus limiting the maximal gain across the transformation. Krueger & Memisevic (2015) attempt to stabilize the norm of propagating signals directly by penalizing differences in successive norm pairs in the forward pass, and Pascanu et al. (2013) propose to penalize successive gradient norm pairs in the backward pass. These regularizers affect the network parameterization with respect to the data instead of penalizing weights directly.

Both expansivity and contractivity of linear transformations can also be limited by more tightly bounding their spectra. By limiting the transformations to be orthogonal, their singular spectra are limited to unitary gain, causing the transformations to be norm-preserving. Le et al. (2015) and Henaff et al. (2016) have respectively shown that identity initialization and orthogonal initialization can be beneficial. Arjovsky et al. (2015) have gone beyond initialization, building unitary recurrent neural network (RNN) models with transformations that are unitary by construction, which they achieved by composing multiple basic unitary transformations. The resulting transformations, for some n-dimensional input, cover only some subset of possible n × n unitary matrices but appear to perform well on simple tasks and have the benefit of having low complexity in memory and computation.

The entire set of possible unitary or orthogonal parameterizations forms the Stiefel manifold.
At a much higher computational cost, gradient descent optimization directly along this manifold can be done via geodesic steps (Nishimori, 2005; Tagare, 2011). Recent work (Wisdom et al., 2016) has proposed the optimization of unitary matrices along the Stiefel manifold using geodesic gradient descent. To produce a full-capacity parameterization for unitary matrices they use some insights from Tagare (2011), combining the use of canonical inner products and Cayley transformations. Their experimental work indicates that full-capacity unitary RNN models can solve the copy memory problem, whereas both LSTM networks and restricted-capacity unitary RNN models of similar complexity appear unable to solve the task for a longer sequence length (T = 2000).

In contrast, here we explore the optimization of real valued matrices within a configurable margin about the Stiefel manifold. We suspect that a strong constraint of orthogonality limits the model's representational power, hindering its performance, and may make optimization more difficult. We explore this hypothesis empirically by employing a factorization technique that allows us to limit the degree of deviation from the Stiefel manifold. While we use geodesic gradient descent, we simultaneously update the singular spectra of our matrices along Euclidean steps, allowing optimization to step away from the manifold while still curving about it.

1.1 VANISHING AND EXPLODING GRADIENTS

The issue of vanishing and exploding gradients as it pertains to the parameterization of neural networks can be illuminated by looking at the gradient back-propagation chain through a network.

A neural network with n hidden layers has pre-activations

$$a_i(h_{i-1}) = W_i h_{i-1} + b_i, \quad i \in \{2, \dots, n\} \quad (1)$$

For notational convenience, we combine the parameters $W_i$ and $b_i$ to form an affine matrix $\theta_i$. We can see that for some loss function L at layer n, the derivative with respect to the parameters $\theta_i$ is:

$$\frac{\partial L}{\partial \theta_i} = \frac{\partial a_{i+1}}{\partial \theta_i} \frac{\partial L}{\partial a_{i+1}} \quad (2)$$

The partial derivatives for the pre-activations can be decomposed as follows:

$$\frac{\partial a_{i+1}}{\partial \theta_i} = \frac{\partial a_i}{\partial \theta_i} \frac{\partial h_i}{\partial a_i} \frac{\partial a_{i+1}}{\partial h_i} = \frac{\partial a_i}{\partial \theta_i} D_i W_{i+1} \;\Rightarrow\; \frac{\partial a_{i+1}}{\partial a_i} = D_i W_{i+1} \quad (3)$$

where $D_i$ is the Jacobian corresponding to the activation function, containing the partial derivatives of the hidden units at layer i+1 with respect to the pre-activation inputs. Typically, D is diagonal.

Following the above, the gradient in equation 2 can be fully decomposed into a recursive chain of matrix products:

$$\frac{\partial L}{\partial \theta_i} = \frac{\partial a_i}{\partial \theta_i} \prod_{j=i}^{n} (D_j W_{j+1}) \frac{\partial L}{\partial a_{n+1}} \quad (4)$$

In (Pascanu et al., 2013), it is shown that the 2-norm of $\frac{\partial a_{t+1}}{\partial a_t}$ is bounded by the product of the norms of the non-linearity's Jacobian and transition matrix at time t (layer i), as follows:

$$\Big\|\frac{\partial a_{t+1}}{\partial a_t}\Big\| \le \|D_t\| \|W_t\| \le \lambda_{D_t} \lambda_{W_t} = \eta_t, \qquad \lambda_{D_t}, \lambda_{W_t} \in \mathbb{R} \quad (5)$$

where $\lambda_{D_t}$ and $\lambda_{W_t}$ are the largest singular values of the non-linearity's Jacobian $D_t$ and the transition matrix $W_t$. In RNNs, $W_t$ is shared across time and can be simply denoted as W.

Equation 5 shows that the gradient can grow or shrink at each layer depending on the gain of each layer's linear transformation W and the gain of the Jacobian D. The gain caused by each layer is magnified across all time steps or layers. It is easy to have extreme amplification in a recurrent neural network where W is shared across time steps and a non-unitary gain in W is amplified exponentially. The phenomena of extreme growth or contraction of the gradient across time steps or layers are known as the exploding and the vanishing gradient problems, respectively.
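The bound in Eq. 5 is easy to observe numerically. The following sketch (ours, not from the paper) back-propagates a vector through T steps of a tanh RNN and compares its norm to the product of the per-step spectral-norm bounds:

import numpy as np

rng = np.random.default_rng(0)
n, T = 64, 100
W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))   # spectral radius roughly near 1
sigma_w = np.linalg.norm(W, 2)                         # largest singular value of W

delta = rng.normal(size=n)        # gradient arriving at the last time step
bound = np.linalg.norm(delta)
for _ in range(T):
    h = rng.normal(size=n)                             # a (random) pre-activation
    D = np.diag(1.0 - np.tanh(h) ** 2)                 # tanh Jacobian, contractive
    delta = D @ (W.T @ delta)                          # back-propagate one step
    bound *= sigma_w * np.linalg.norm(D, 2)            # per-step bound of Eq. 5
print(f"gradient norm after {T} steps: {np.linalg.norm(delta):.3e} (upper bound: {bound:.3e})")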
It is sufficient for RNNs to have η_t ≤ 1 at each time t to enable the possibility of vanishing gradients, typically for some large number of time steps T. The rate at which a gradient (or forward signal) vanishes depends on both the parameterization of the model and on the input data. The parameterization may be conditioned by placing appropriate constraints on W. It is worth keeping in mind that the Jacobian D is typically contractive (thus tending to be norm-reducing) and is also data-dependent, whereas W can vary from being contractive to norm-preserving to expansive, and applies the same gain on the forward signal as on the back-propagated gradient signal.

2 OUR APPROACH

Vanishing and exploding gradients can be controlled to a large extent by controlling the maximum and minimum gain of W. The maximum gain of a matrix W is given by the spectral norm, which is given by

$$\|W\|_2 = \max_x \Big[\frac{\|Wx\|}{\|x\|}\Big] \quad (6)$$

By keeping our weight matrix W close to orthogonal, one can ensure that it is close to a norm-preserving transformation (where the spectral norm is equal to one, but the minimum gain is also one). One way to achieve this is via a simple soft constraint or regularization term of the form:

$$\lambda \sum_i \|W_i^T W_i - I\|^2 \quad (7)$$

However, it is possible to formulate a more direct parameterization or factorization for W which permits hard bounds on the amount of expansion and contraction induced by W. This can be achieved by simply parameterizing W according to its singular value decomposition, which consists of the composition of orthogonal basis matrices U and V with a diagonal spectral matrix S containing the singular values, which are real and positive by definition. We have

$$W = U S V^T \quad (8)$$

Since the spectral norm or maximum gain of a matrix is equal to its largest singular value, this decomposition allows us to control the maximum gain or expansivity of the weight matrix by controlling the magnitude of the largest singular value. Similarly, the minimum gain or contractivity of a matrix can be obtained from the minimum singular value.

We can keep the bases U and V orthogonal via geodesic gradient descent along the set of weights that satisfy $U^T U = I$ and $V^T V = I$ respectively. The submanifolds that satisfy these constraints are called Stiefel manifolds. We discuss how this is achieved in more detail below, then discuss our construction for bounding the singular values.

During optimization, in order to maintain the orthogonality of an orthogonally-initialized matrix M, i.e. where M = U, M = V or M = W if so desired, we employ a Cayley transformation of the update step onto the Stiefel manifold of (semi-)orthogonal matrices, as in Nishimori (2005) and Tagare (2011). Given an orthogonally-initialized parameter matrix M and its Jacobian G with respect to the objective function, an update is performed as follows:

$$A = G M^T - M G^T, \qquad M_{new} = \Big(I + \frac{\eta}{2} A\Big)^{-1} \Big(I - \frac{\eta}{2} A\Big) M \quad (9)$$

where A is a skew-symmetric matrix (that depends on the Jacobian and on the parameter matrix) which is mapped to an orthogonal matrix via a Cayley transform, and η is the learning rate.

While the update rule in (9) allows us to maintain an orthogonal hidden-to-hidden transition matrix W if desired, we are interested in exploring the effect of stepping away from the Stiefel manifold.
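For concreteness, Eq. 9 can be transcribed directly into NumPy (a sketch under our reading; G is the Jacobian with respect to M and eta the learning rate):

import numpy as np

def cayley_update(M, G, eta):
    # Eq. 9: map the update through a Cayley transform so that M stays (semi-)orthogonal.
    A = G @ M.T - M @ G.T                          # skew-symmetric by construction
    I = np.eye(M.shape[0])
    cayley = np.linalg.solve(I + (eta / 2) * A, I - (eta / 2) * A)
    return cayley @ M

# Orthogonality is preserved up to numerical error:
rng = np.random.default_rng(0)
M, _ = np.linalg.qr(rng.normal(size=(8, 8)))       # orthogonal initialization
M = cayley_update(M, rng.normal(size=(8, 8)), eta=0.01)
print(np.max(np.abs(M.T @ M - np.eye(8))))         # close to machine precision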
Assuch, we parameterize the transition matrix Win factorized form, as a singular value decompositionwith orthogonal bases UandVupdated by geodesic gradient descent using the Cayley transformapproach above.IfWis an orthogonal matrix, the singular values in the diagonal matrix Sare all equal to one.However, in our formulation we allow these singular values to deviate from one and employ asigmoidal parameterization to apply a hard constraint on the maximum and minimum amount of3Under review as a conference paper at ICLR 2017deviation. Specifically, we define a margin maround 1 within which the singular values must lie.This is achieved with the parameterizationsi= 2m((pi)0:5) + 1; s i2fdiag(S)g; m2[0;1]: (10)The singular values are thus restricted to the range [1m;1+m]and the underlying parameterspiare updated freely via stochastic gradient descent. Note that this parameterization strategy alsohas implications on the step sizes that gradient descent based optimization will take when updatingthe singular values – they tend to be smaller compared to models with no margin constraining theirvalues. Specifically, a singular value’s progression toward a margin is slowed the closer it is to themargin. The sigmoidal parameterization can also impart another effect on the step size along thespectrum which needs to be accounted for. Considering 10, the gradient backpropagation of somelossLtoward parameters piis found asdLdpi=dsidpidLdsi= 2md(pi)dpidLdsi: (11)From (11), it can be seen that the magnitude of the update step for piis scaled by the marginhyperparameter m. This means for example that for margins less than one, the effective learningrate for the spectrum is reduced in proportion to the margin. Consequently, we adjust the learningrate along the spectrum to be independent of the margin by renormalizing it by 2m.This margin formulation both guarantees singular values lie within a well defined range and slowsdeviation from orthogonality. Alternatively, one could enforce the orthogonality of UandVandimpose a regularization term corresponding to a mean one Gaussian prior on these singular values.This encourages the weight matrix Wto be norm preserving with a controllable strength equivalentto the variance of the Gaussian. We also explore this approach further below.3 E XPERIMENTSIn this section, we explore hard and soft orthogonality constraints on factorized weight matricesfor recurrent neural network hidden to hidden transitions. With hard orthogonality constraints onUandV, we investigate the effect of widening the spectral margin or bounds on convergenceand performance. Loosening these bounds allows increasingly larger margins within which thetransition matrix Wcan deviate from orthogonality. We confirm that orthogonal initialization isuseful as noted in Henaff et al. (2016), and we show that although strict orthogonality guaranteesstable gradient norm, loosening orthogonality constraints can increase the rate of gradient descentconvergence. We begin our analyses on tasks that are designed to stress memory: a sequence copyingtask and a basic addition task (Hochreiter & Schmidhuber, 1997). We then move on to tasks on realdata that require models to capture long-range dependencies: digit classification based on sequentialand permuted MNIST vectors (Le et al., 2015; LeCun et al., 1998). 
3 EXPERIMENTS

In this section, we explore hard and soft orthogonality constraints on factorized weight matrices for recurrent neural network hidden-to-hidden transitions. With hard orthogonality constraints on U and V, we investigate the effect of widening the spectral margin or bounds on convergence and performance. Loosening these bounds allows increasingly larger margins within which the transition matrix W can deviate from orthogonality. We confirm that orthogonal initialization is useful, as noted in Henaff et al. (2016), and we show that although strict orthogonality guarantees stable gradient norm, loosening orthogonality constraints can increase the rate of gradient descent convergence. We begin our analyses on tasks that are designed to stress memory: a sequence copying task and a basic addition task (Hochreiter & Schmidhuber, 1997). We then move on to tasks on real data that require models to capture long-range dependencies: digit classification based on sequential and permuted MNIST vectors (Le et al., 2015; LeCun et al., 1998). Finally, we look at a basic language modeling task using the Penn Treebank dataset (Marcus et al., 1993).

The copy and adding tasks, introduced by Hochreiter & Schmidhuber (1997), are synthetic benchmarks with pathologically hard long distance dependencies that require long-term memory in models. The copy task consists of an input sequence that must be remembered by the network, followed by a series of blank inputs terminated by a delimiter that denotes the point at which the network must begin to output a copy of the initial sequence. We use an input sequence of T + 20 elements that begins with a sub-sequence of 10 elements to copy, each containing a symbol $a_i \in \{a_1, \dots, a_p\}$ out of p = 8 possible symbols. This sub-sequence is followed by T − 1 elements of the blank category $a_0$, which is terminated at step T by a delimiter symbol $a_{p+1}$ and 10 more elements of the blank category. The network must learn to remember the initial 10 element sequence for T time steps and output it after receiving the delimiter symbol.

The goal of the adding task is to add two numbers together after a long delay. Each number is randomly picked at a unique position in a sequence of length T. The sequence is composed of T values sampled from a uniform distribution in the range [0, 1), with each value paired with an indicator value that identifies the value as one of the two numbers to remember (marked 1) or as a value to ignore (marked 0). The two numbers are positioned randomly in the sequence, the first in the range [0, T/2 − 1] and the second in the range [T/2, T − 1], where 0 marks the first element. The network must learn to identify and remember the two numbers and output their sum.
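For reference, a minimal generator for the copy task as specified above (our sketch; the symbol encoding with indices 1..p for symbols, 0 for the blank and p+1 for the delimiter is an assumption):

import numpy as np

def copy_task_batch(batch_size, T, seq_len=10, p=8, rng=np.random.default_rng()):
    # Input : [ 10 symbols | T-1 blanks | delimiter | 10 blanks ]   (length T + 20)
    # Target: [ 10 + T blanks (category 0) | the 10 symbols to copy ]
    blank, delim = 0, p + 1
    seq = rng.integers(1, p + 1, size=(batch_size, seq_len))
    x = np.full((batch_size, T + 2 * seq_len), blank, dtype=np.int64)
    y = np.full((batch_size, T + 2 * seq_len), blank, dtype=np.int64)
    x[:, :seq_len] = seq
    x[:, seq_len + T - 1] = delim        # delimiter at step T, after the T-1 blanks
    y[:, -seq_len:] = seq                # the network must reproduce the sequence at the end
    return x, y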
We cautiously introduced gradient clippingat magnitude 100 (unless stated otherwise) in all of our RNN experiments although it may not berequired and we consistently applied a small weight decay of 0.0001. Unless otherwise specified,we trained all simple recurrent neural networks with the hidden to hidden matrix factorization asin (8) using geodesic gradient descent on the bases (learning rate 106) and RMSprop on the otherparameters (learning rate 0.0001), using a tanh transition nonlinearity, and clipping gradients of 100magnitude. The neural network code was built on the Theano framework (Theano DevelopmentTeam, 2016). When parameterizing a matrix in factorized form, we apply the weight decay on thecomposite matrix rather than on the factors in order to be consistent across experiments. For MNISTand PTB, test set metrics were computed based on the parameterization that gave the best validationset accuracy.3.1.1 C ONVERGENCE ON SYNTHETIC MEMORY TASKSFor different sequence lengths Tof the copy and adding tasks, we trained a factorized RNN with 128hidden units and various spectral margins m. For the copy task, we used Elman networks withouta transition non-linearity as in Henaff et al. (2016). We discuss our investigations into the use of anon-linearity on the copy task in the Appendix.As shown in Figure 1 we see an increase in the rate of convergence as we increase the spectralmargin. This observation generally holds across the tested sequence lengths ( T= 200 ,T= 500 ,T= 1000 ,T= 10000 ); however, large spectral margins hinder convergence on extremely longsequence lengths. At sequence length T= 10000 , parameterizations with spectral margins largerthan 0.001 converge slower than when using a margin of 0.001. In addition, the experiment withouta margin failed to converge on the longest sequence length. This follows the expected pattern wherestepping away from the Stiefel manifold may help with gradient descent optimization but looseningorthogonality constraints can reduce the stability of signal propagation through the network.For the adding task, we trained a factorized RNN on T= 1000 length sequences, using a ReLUactivation function on the hidden to hidden transition matrix. The mean squared error (MSE) isshown for different spectral margins in Figure 5 in the Appendix. Testing spectral margins m= 0,m= 1,m= 10 ,m= 100 , and no margin, we find that the models with the purely orthogonal(m= 0) and the unconstrained (no margin) transition matrices failed to begin converging beyondbaseline MSE within 2000 epochs.5Under review as a conference paper at ICLR 20170 20 40 60 80 100number of epochs0.00.20.40.60.81.0accuracy020406080100120140160number of epochs0.00.20.40.60.81.0accuracy0 50 100 150 200number of epochs0.00.20.40.60.81.0accuracy0 50 100 150 200 250 300number of epochs0.00.20.40.60.81.0accuracym=0m=0.001m=0.01m=0.1m=1no marginFigure 1: Accuracy curves on the copy task for sequence lengths of (from left to right) T=200,T=500, T=1000, T=10000 given different spectral margins. 
margin   initialization   accuracy
0        orthogonal       77.18
0.001    orthogonal       79.26
0.01     orthogonal       85.47
0.1      orthogonal       94.10
1        orthogonal       93.84
none     orthogonal       93.24
none     Glorot normal    66.71
none     identity         53.53
LSTM                      97.30

Table 1: Ordered sequential MNIST classification with different margin sizes and an LSTM.

margin   initialization   accuracy
0        orthogonal       83.56
0.001    orthogonal       84.59
0.01     orthogonal       89.63
0.1      orthogonal       91.44
1        orthogonal       90.83
none     orthogonal       90.51
none     Glorot normal    79.33
none     identity         42.72
LSTM                      92.62

Table 2: Permuted sequential MNIST classification with different margin sizes and an LSTM.

3.1.2 PERFORMANCE ON REAL DATA

Having confirmed that an orthogonality constraint can negatively impact convergence rate, we seek to investigate the effect on model performance for tasks on real data. We show the results of experiments on permuted sequential MNIST in Table 2 and on ordered sequential MNIST in Table 1. The loss curves are shown in Figure 6 in the Appendix and reveal an increased convergence rate for larger spectral margins. We trained the factorized RNN models with 128 hidden units for 120 epochs. We also trained an LSTM with 128 hidden units on both tasks for 150 epochs, configured with peephole connections, orthogonally initialized (and forget gate bias initialized to one), and trained with RMSprop (learning rate 0.0001, clipping gradients of magnitude 1).

We show the results of experiments on PTB character prediction, in terms of bits per character (bpc) and prediction accuracy, for a subset of short sequences (up to 75 characters; 23% of data) in Table 3 and for a subset of long sequences (up to 300 characters; 99% of data) in Table 4. We trained factorized RNN models with 512 hidden units for 200 epochs with geodesic gradient descent on the bases (learning rate 10⁻⁶) and RMSprop on the other parameters (learning rate 0.001), using a tanh transition nonlinearity and clipping gradients of 30 magnitude.

Interestingly, for both the ordered and permuted sequential MNIST tasks, models with a non-zero margin significantly outperform those that are constrained to have purely orthogonal transition matrices (margin of zero).

margin   initialization   bpc    accuracy
0        orthogonal       2.16   55.31
0.01     orthogonal       2.16   55.33
0.1      orthogonal       2.12   55.37
1        orthogonal       2.06   57.07
100      orthogonal       2.04   57.51
none     orthogonal       2.06   57.38
none     Glorot normal    2.08   57.37
none     identity         2.25   53.83

Table 3: Character prediction on PTB sentences of up to 75 characters, using different margins.

margin   initialization   bpc    accuracy
0        orthogonal       2.20   54.88
0.01     orthogonal       2.20   54.83
0.1      orthogonal       2.24   54.10
1        orthogonal       2.36   51.12
100      orthogonal       2.36   51.20
none     orthogonal       2.34   51.30
none     Glorot normal    2.34   51.04
none     identity         2.68   45.35

Table 4: Character prediction on PTB sentences of up to 300 characters, using different margins.
Indeed, orthogonally initialized RNNs per-formed almost on par with the LSTM in the permuted sequential MNIST task which presents longerdistance dependencies than the ordered task. Although the optimal margin appears to be 0.1, RNNswith large margins perform almost identically to an RNN without a margin, as long as the transitionmatrix is initialized as orthogonal. On these tasks, orthogonal initialization appears to significantlyoutperform Glorot normal initialization (Glorot & Bengio, 2010) or initializing the matrix as iden-tity. It is interesting to note that for the MNIST tasks, orthogonal initialization appears useful whileorthogonality constraints appear mainly detrimental. This suggests that while orthogonality helpsearly training by stabilizing gradient flow across many time steps, orthogonality constraints mayneed to be loosened on some tasks so as not to over-constrain the model’s representational ability.Curiously, larger margins and even models without sigmoidal constraints on the spectrum (no mar-gin) performed well as long as they were initialized to be orthogonal, suggesting that evolution awayfrom orthogonality is not a serious problem on MNIST. It is not surprising that orthogonality is use-ful for the MNIST tasks since they depend on long distance signal propagation with a single output atthe end of the input sequence. On the other hand, character prediction with PTB produces an outputat every time step. Constraining deviation from orthogonality proved detrimental for short sentences(Table 3) and beneficial when long sentences were included (Table 4). Furthermore, Glorot normalinitialization did not perform worse than orthogonal initialization for PTB. Since an output is gen-erated for every character in a sentence, short distance signal propagation is possible. Thus it ispossible that the RNN is first learning very local dependencies between neighbouring characters andthat given enough context, constraining deviation from orthogonality can help force the network tolearn longer distance dependencies.3.1.3 S PECTRAL AND GRADIENT EVOLUTIONIt is interesting to note that even long sequence lengths (T=1000) in the copy task can be solvedefficiently with rather large margins on the spectrum. In Figure 2 we look at the gradient propaga-tion of the loss from the last time step in the network with respect to the hidden activations. We cansee that for a purely orthogonal parameterization of the transition matrix (when the margin is zero),the gradient norm is preserved across time steps, as expected. We further observe that with increas-ing margin size, the number of update steps over which this norm preservation survives decreases,though surprisingly not as quickly as expected.Figure 2: The norm of the gradient of the loss from the last time step with respect to the hiddenunits at a given time step for a length 220 RNN over 1000 update iterations for different margins.Iterations are along the abscissa and time steps are denoted along the ordinate. The first columnmargins are: 0, 0.001, 0.01. The second column margins are: 0.1, 1, no margin. Gradient norms arenormalized across the time dimension.Although the deviation of singular values from one should be slowed by the sigmoidal parameteriza-tions, even parameterizations without a sigmoid (no margin) can be effectively trained for all but thelongest sequence lengths. 
This suggests that the spectrum is not deviating far from orthogonality and that inputs to the hidden-to-hidden transitions are mostly not aligned along the dimensions of greatest expansion or contraction. We evaluated the spread of the spectrum in all of our experiments and found that indeed, singular values tend to stay well within their prescribed bounds and only reach the margin when using a very large learning rate that does not permit convergence. Furthermore, when transition matrices are initialized as orthogonal, singular values remain near one throughout training even without a sigmoidal margin for tasks that require long term memory (copy, adding, sequential MNIST). On the other hand, singular value distributions tend to drift away from one for PTB character prediction, which may help explain why enforcing an orthogonality constraint can be helpful for this task when modeling long sequences. Interestingly, singular values spread out less for longer sequence lengths (nevertheless, the T=10000 copy task could not be solved with no sigmoid on the spectrum).

We visualize the spread of singular values for different model parameterizations on the permuted sequential MNIST task in Figure 3. Curiously, we find that the distribution of singular values tends to shift upward to a mean of approximately 1.05 on both the ordered and permuted sequential MNIST tasks. We note that in those experiments, a tanh transition nonlinearity was used, which is contractive in both the forward signal pass and the gradient backward pass. An upward shift in the distribution of singular values of the transition matrix would help compensate for that contraction. Indeed, Saxe et al. (2013) describe this as a possibly good regime for learning in deep neural networks. That the model appears to evolve toward this regime suggests that deviating from it may incur a cost. This is interesting because the cost function cannot take into account numerical issues such as vanishing or exploding gradients (or forward signals); we do not know what could make this deviation costly. That the transition matrix may be compensating for the contraction of the tanh is supported by further experiments: applying a 1.05 pre-activation gain appears to allow a model with a margin of 0 to nearly match the top performance reached on both of the MNIST tasks. Furthermore, when using the OPLU norm-preserving activation function (Chernodub & Nowicki, 2016), we found that orthogonally initialized models performed equally well with all margins, achieving over 90% accuracy on the permuted sequential MNIST task. Unlike orthogonally initialized models, the RNN on the bottom right of Figure 3, with Glorot normal initialized transition matrices, begins and ends with a wide singular spectrum. While there is no clear positive shift in the distribution of singular values, the mean value appears to very gradually increase for both the ordered and permuted sequential MNIST tasks.
If the model is to be expected to positively shift singular values to compensate for the contractivity of the tanh nonlinearity, it is not doing so well in the Glorot-initialized case; however, this may be due to the inefficiency of training as a result of vanishing gradients, given that initialization.

Figure 3: Singular value evolution on the permuted sequential MNIST task for factorized RNNs with different margin sizes. Margins are, from left to right: top row: 0.001, 0.01, 0.1; bottom row: 1, no margin, no margin. The singular value distributions are summarized with the mean (green line, center) and standard deviation (green shading about mean), minimum (red, bottom) and maximum (blue, top) values. All models are initialized with orthogonal hidden-to-hidden transition matrices, except for the model on the bottom right, where Glorot normal initialization is used.

3.2 EXPLORING SOFT ORTHOGONALITY CONSTRAINTS

Having established that it may indeed be useful to step away from orthogonality, here we explore two forms of soft constraints (rather than hard bounds as above) on hidden-to-hidden transition matrix orthogonality. The first is a simple penalty that directly encourages a transition matrix W to be orthogonal, of the form $\lambda \|W^T W - I\|_2^2$. This is similar to the orthogonality penalty introduced by Henaff et al. (2016). In the first two subfigures on the left of Figure 4, we explore the effect of weakening this form of regularization. We trained both a regular non-factorized RNN on the T = 200 copy task and a factorized RNN with orthogonal bases on the T = 500 copy task. For the regular RNN, we had to reduce the learning rate to 10⁻⁵. Here again we see that weakening the strength of the orthogonality-encouraging penalty can increase convergence speed.

Figure 4: Accuracy curves on the copy task for different strengths of soft orthogonality constraints (strengths ranging from 0.0001 to 100). A soft orthogonality constraint is applied to the transition matrix W for a regular RNN on T = 200 (left) and the same is applied on a factorized RNN on T = 500 (left center). Another constraint in the form of a mean-one Gaussian prior on the singular values is applied to a factorized RNN on T = 200 (right center); the same is applied to a factorized RNN with a sigmoidal parameterization of the spectrum, using a large margin of 1 (right). Loosening orthogonality speeds convergence.

The second approach we explore replaces the sigmoidal margin parameterization with a mean-one Gaussian prior on the singular values. In the two right subfigures of Figure 4, we visualize the accuracy on the length 200 copy task, using geoSGD (learning rate 10⁻⁶) to keep U and V orthogonal, and different strengths of a Gaussian prior with mean one on the singular values. We trained these experiments with regular SGD on the spectrum and the other non-orthogonal parameter matrices, using a 10⁻⁵ learning rate. We see that priors which are too strong lead to slow convergence. Loosening the strength of the prior makes the optimization more efficient. Furthermore, we compare a direct parameterization of the spectrum (no sigmoid) in Figure 4 with a sigmoidal parameterization, using a large margin of 1. Without the sigmoidal parameterization, optimization quickly becomes unstable; on the other hand, the optimization also becomes unstable if the prior is removed completely in the sigmoidal formulation (margin 1). These results further motivate the idea that parameterizations that deviate from orthogonality may perform better than purely orthogonal ones, as long as they are sufficiently constrained to avoid instability during training.
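Both soft constraints have simple closed forms (a sketch; strength plays the role of the penalty weight, and the mean-one Gaussian prior on the singular values reduces, up to a constant, to a squared penalty on their deviation from one):

import numpy as np

def orthogonality_penalty(W, strength):
    # ||W^T W - I||^2, encouraging W to be orthogonal.
    I = np.eye(W.shape[1])
    return strength * np.sum((W.T @ W - I) ** 2)

def spectrum_prior_penalty(s, strength):
    # Negative log of a mean-one Gaussian prior on the singular values s
    # (up to an additive constant); strength corresponds to the inverse variance.
    return strength * np.sum((s - 1.0) ** 2)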
4 CONCLUSIONS

We have explored a number of methods for controlling the expansivity of gradients during backpropagation based learning in RNNs through manipulating orthogonality constraints and regularization on matrices. Our experiments indicate that while orthogonal initialization may be beneficial, maintaining constraints on orthogonality can be detrimental. Indeed, moving away from hard constraints on matrix orthogonality can help improve optimization convergence rate and model performance. However, we also observe with synthetic tasks that relaxing regularization which encourages the spectral norms of weight matrices to be close to one, or allowing bounds on the spectral norms of weight matrices to be too wide, can reverse these gains and may lead to unstable optimization.

ACKNOWLEDGMENTS

We thank the Natural Sciences and Engineering Research Council (NSERC) of Canada and Samsung for supporting this research.

REFERENCES

Martin Arjovsky, Amar Shah, and Yoshua Bengio. Unitary evolution recurrent neural networks. arXiv preprint arXiv:1511.06464, 2015.

Artem Chernodub and Dimitri Nowicki. Norm-preserving orthogonal permutation linear unit activation functions (OPLU). arXiv preprint arXiv:1604.02313, 2016.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, volume 9, pp. 249–256, 2010.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034, 2015.

Mikael Henaff, Arthur Szlam, and Yann LeCun. Orthogonal rnns and long-memory tasks. arXiv preprint arXiv:1602.06662, 2016.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.

David Krueger and Roland Memisevic. Regularizing rnns by stabilizing activations. arXiv preprint arXiv:1511.08400, 2015.

Quoc V Le, Navdeep Jaitly, and Geoffrey E Hinton. A simple way to initialize recurrent networks of rectified linear units. arXiv preprint arXiv:1504.00941, 2015.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of english: The penn treebank. Computational Linguistics, 19(2):313–330, 1993.

Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp.
807–814, 2010.

Yasunori Nishimori. A note on Riemannian optimization methods on the Stiefel and the Grassmann manifolds. dim, 1:2, 2005.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. ICML (3), 28:1310–1318, 2013.

Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.

Hemant D. Tagare. Notes on optimization on Stiefel manifolds. Technical report, Yale University, 2011.

Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/1605.02688.

T. Tieleman and G. Hinton. Lecture 6.5—RmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.

Scott Wisdom, Thomas Powers, John R. Hershey, Jonathan Le Roux, and Les Atlas. Full-capacity unitary recurrent neural networks. To appear in NIPS, 2016.

5 APPENDIX

5.1 ADDITIONAL FIGURES

[Figure 5 panel: MSE vs. number of epochs for m = 0, 1, 10, 100, and no margin.]

Figure 5: Mean squared error (MSE) curves on the adding task for different spectral margins m. For a trivial baseline solution of always outputting the same number, the expected baseline MSE is 0.167.

[Figure 6 panels: cost vs. number of epochs; legend: m = 0, 0.001, 0.01, 0.1, 1, no margin, glorot, identity.]

Figure 6: Loss curves for different factorized RNN parameterizations on the sequential MNIST task (left) and the permuted sequential MNIST task (right). The spectral margin is denoted by m; models with no margin have singular values that are directly optimized with no constraints; Glorot refers to a factorized RNN with no margin that is initialized with Glorot normal initialization.

5.2 COPY TASK NONLINEARITY

We found that nonlinearities such as the rectified linear unit (ReLU) (Nair & Hinton, 2010) or the hyperbolic tangent (tanh) made the copy task far more difficult to solve. Using tanh, even a short sequence length (T = 100) copy task required both a soft constraint that encourages orthogonality and thousands of epochs of training. It is worth noting that in the unitary evolution recurrent neural network of Arjovsky et al. (2015), the nonlinearity (referred to as the "modReLU") is actually initialized as an identity operation that is free to deviate from identity during training. Furthermore, Henaff et al. (2016) derive a solution mechanism for the copy task that drops the nonlinearity from an RNN. To explore this further, we experimented with a parametric leaky ReLU activation function (PReLU), which introduces a trainable slope α for negative-valued inputs x, producing f(x) = max(x, 0) + α min(x, 0) (He et al., 2015). Setting the slope α to one would make the PReLU equivalent to an identity function. We experimented with clamping α to 0.5, 0.7, or 1 in a factorized RNN with a spectral margin of 0.3, and found that only the model with α = 1 solved the T = 1000 length copy task. We also experimented with a trainable slope α, initialized to 0.7, and found that it converges to 0.96, further suggesting that the optimal solution for the copy task has no transition nonlinearity.
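As a minimal illustration (our own sketch, not the authors' code), the PReLU described above is simply:

```python
import numpy as np

def prelu(x, alpha):
    """Parametric leaky ReLU: f(x) = max(x, 0) + alpha * min(x, 0).
    alpha = 0 recovers the ReLU; alpha = 1 recovers the identity,
    the setting that solved the T = 1000 copy task above."""
    return np.maximum(x, 0.0) + alpha * np.minimum(x, 0.0)
```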
Since the copy task is purely a memory task, one may imagine that a transition nonlinearity such as a tanh or ReLU may be detrimental to the task, since it can lose information. Thus, we also tried a recent activation function that preserves information, called the orthogonal permutation linear unit (OPLU) (Chernodub & Nowicki, 2016). The OPLU preserves norm, making a fully norm-preserving RNN possible. Interestingly, this activation function allowed us to recover results on the copy task identical to those without a nonlinearity, for different spectral margins.

5.3 METHOD RUNNING TIME

Although the method proposed in Section 2 relies on a matrix inversion, an operation with O(n^3) complexity for an n x n matrix, the running time of an RNN factorized in this way actually remains reasonable. Running times are summarized in Table 5 and include all computations in the graph, together with the matrix inversion. As this method is meant to be used only for the analysis in this work, we find the running times acceptable for that purpose. Models were run on an Nvidia GTX-770 GPU against the T = 100 length copy task.

hidden units | SGD         | geoSGD
128          | 21.9 ± 0.2  | 40.4 ± 0.1
500          | 46.7 ± 0.2  | 161.4 ± 0.2
1000         | 95.4 ± 0.3  | 711.2 ± 0.8

Table 5: Run time in seconds for 1000 iterations on a T = 100 copy task of a regular RNN trained with stochastic gradient descent (SGD), compared against a factorized RNN trained with geodesic SGD on the bases (geoSGD) and regular SGD for the other parameters.
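For intuition about what an orthogonality-preserving (geodesic) update step looks like, here is a minimal NumPy sketch of ours. The paper's geoSGD follows Nishimori (2005); the Cayley-transform retraction shown below is the closely related multiplicative update used by Wisdom et al. (2016), given only as an illustration, not as the exact procedure timed in Table 5:

```python
import numpy as np

def cayley_step(W, G, lr):
    """One orthogonality-preserving update of an orthogonal matrix W,
    given the loss gradient G = dL/dW, via the Cayley transform:
    W <- (I + lr/2 * A)^(-1) (I - lr/2 * A) W, with A skew-symmetric."""
    A = G @ W.T - W @ G.T              # skew-symmetric update direction
    I = np.eye(W.shape[0])
    return np.linalg.solve(I + (lr / 2.0) * A, (I - (lr / 2.0) * A) @ W)
```

The matrix solve here is the O(n^3) operation referred to above, which explains the growing gap between the SGD and geoSGD columns of Table 5 as the hidden dimension increases.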
HyxQzBceg
Published as a conference paper at ICLR 2017

DEEP VARIATIONAL INFORMATION BOTTLENECK

Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, Kevin Murphy
Google Research
{alemi,iansf,jvdillon,kpmurphy}@google.com

ABSTRACT

We present a variational approximation to the information bottleneck of Tishby et al. (1999). This variational approach allows us to parameterize the information bottleneck model using a neural network and leverage the reparameterization trick for efficient training. We call this method "Deep Variational Information Bottleneck", or Deep VIB. We show that models trained with the VIB objective outperform those that are trained with other forms of regularization, in terms of generalization performance and robustness to adversarial attack.

1 INTRODUCTION

We adopt an information theoretic view of deep networks. We regard the internal representation of some intermediate layer as a stochastic encoding Z of the input source X, defined by a parametric encoder p(z|x; θ).^1 Our goal is to learn an encoding that is maximally informative about our target Y, measured by the mutual information between our encoding and the target, I(Z, Y; θ), where

I(Z, Y; θ) = ∫ dz dy p(z, y|θ) log [ p(z, y|θ) / (p(z|θ) p(y|θ)) ].^2   (1)

Given the data processing inequality, and the invariance of the mutual information to reparameterizations, if this were our only objective we could always ensure a maximally informative representation by taking the identity encoding of our data (Z = X), but this is not a useful representation of our data. Instead we would like to find the best representation we can obtain subject to a constraint on its complexity. A natural and useful constraint to apply is on the mutual information between our encoding and the original data, I(X, Z) ≤ I_c, where I_c is the information constraint. This suggests the objective:

max_θ I(Z, Y; θ)  s.t.  I(X, Z; θ) ≤ I_c.   (2)

Equivalently, with the introduction of a Lagrange multiplier β, we can maximize the objective function

R_IB(θ) = I(Z, Y; θ) − β I(Z, X; θ).   (3)

Here our goal is to learn an encoding Z that is maximally expressive about Y while being maximally compressive about X, where β ≥ 0 controls the tradeoff.^3 This approach is known as the information bottleneck (IB), and was first proposed in Tishby et al. (1999). Intuitively, the first term in R_IB encourages Z to be predictive of Y; the second term encourages Z to "forget" X. Essentially it forces Z to act like a minimal sufficient statistic of X for predicting Y.

^1 In this work, X, Y, Z are random variables; x, y, z and bold x, y, z are instances of random variables; and F(·; θ) and f(·; θ) are functionals or functions parameterized by θ.
^2 Note that in the present discussion, Y is the ground truth label, which is independent of our parameters, so p(y|θ) = p(y).
^3 Note that, in our notation, large β results in a highly compressed representation. In some works, the IB principle is formulated instead as the minimization of I(Z, X) − β I(Z, Y), in which case large β corresponds to high mutual information between Z and Y, and hence low compression.

The IB principle is appealing, since it defines what we mean by a good representation, in terms of the fundamental tradeoff between having a concise representation and one with good predictive power (Tishby & Zaslavsky, 2015a). The main drawback of the IB principle is that computing mutual information is, in general, computationally challenging. There are two notable exceptions: the first is when X, Y and Z are all discrete, as in Tishby et al. (1999); this can be used to cluster discrete data, such as words.
The second case is when X, Y and Z are all jointly Gaussian (Chechik et al., 2005). However, these assumptions both severely constrain the class of learnable models.

In this paper, we propose to use variational inference to construct a lower bound on the IB objective in Equation 3. We call the resulting method VIB (variational information bottleneck). By using the reparameterization trick (Kingma & Welling, 2014), we can use Monte Carlo sampling to get an unbiased estimate of the gradient, and hence we can optimize the objective using stochastic gradient descent. This allows us to use deep neural networks to parameterize our distributions, and thus to handle high-dimensional, continuous data, such as images, avoiding the previous restrictions to the discrete or Gaussian cases.

We also show, through a series of experiments, that stochastic neural networks fit using our VIB method are robust to overfitting, since VIB finds a representation Z which ignores as many details of the input X as possible. In addition, they are more robust to adversarial inputs than deterministic models fit using (penalized) maximum likelihood estimation. Intuitively this is because each input image gets mapped to a distribution rather than a unique Z, so it is more difficult to pass small, idiosyncratic perturbations through the latent bottleneck.

2 RELATED WORK

The idea of using information theoretic objectives for deep neural networks was pointed out in Tishby & Zaslavsky (2015b). However, they did not include any experimental results, since their approach for optimizing the IB objective relied on the iterative Blahut-Arimoto algorithm, which is infeasible to apply to deep neural networks.

Variational inference is a natural way to approximate the problem. Variational bounds on mutual information have previously been explored in Agakov (2004), though not in conjunction with the information bottleneck objective. Mohamed & Rezende (2015) also explore variational bounds on mutual information, and apply them to deep neural networks, but in the context of reinforcement learning. We recently discovered Chalk et al. (2016), who independently developed the same variational lower bound on the IB objective as us. However, they apply it to sparse coding problems, and use the kernel trick to achieve nonlinear mappings, whereas we apply it to deep neural networks, which are computationally more efficient. In addition, we are able to handle large datasets by using stochastic gradient descent, whereas they use batch variational EM.

In the supervised learning literature, our work is related to the recently proposed confidence penalty (entropy regularization) method of Pereyra et al. (2016). In that work, they fit a deterministic network by optimizing an objective that combines the usual cross entropy loss with an extra term which penalizes models for having low-entropy predictive distributions. In more detail, their cost function has the form

J_CP = (1/N) ∑_{n=1}^{N} [ H(p(y|y_n), p(y|x_n)) − β H(p(y|x_n)) ]   (4)

where H(p, q) = −∑_y p(y) log q(y) is the cross entropy, H(p) = H(p, p) is the entropy, p(y|y_n) = δ_{y_n}(y) is a one-hot encoding of the label y_n, and N is the number of training examples. (Note that setting β = 0 corresponds to the usual maximum likelihood estimate.) In Pereyra et al. (2016) they show that CP performs better than the simpler technique of label smoothing, in which we replace the zeros in the one-hot encoding of the labels by ε > 0, and then renormalize so that the distribution still sums to one.
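As a minimal sketch (ours, for illustration only), label smoothing as just described is:

```python
import numpy as np

def smooth_labels(y_onehot, eps):
    """Replace the zeros in one-hot label vectors with eps > 0,
    then renormalize each row so it still sums to one."""
    smoothed = np.where(y_onehot == 1.0, 1.0, eps)
    return smoothed / smoothed.sum(axis=-1, keepdims=True)

# e.g. smooth_labels(np.eye(10)[3], 0.01) keeps mass ~0.917 on class 3
```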
We will compare our VIB method to both the confidence penalty method and label smoothing in Section 4.1.

In the unsupervised learning literature, our work is closely related to the work in Kingma & Welling (2014) on variational autoencoders. In fact, their method is a special case of an unsupervised version of the VIB, but with the β parameter fixed at 1.0, as we explain in Appendix B. The VAE objective, but with different values of β, was also explored in Higgins et al. (2016), but from a different perspective.

The method of Wang et al. (2016b) proposes a latent variable generative model of both x and y; their variational lower bound is closely related to ours, with the following differences. First, we do not have a likelihood term for x, since we are in the discriminative setting. Second, they fix β = 1, since they do not consider compression.

Finally, the variational fair autoencoder of Louizos et al. (2016) shares with our paper the idea of ignoring parts of the input. However, in their approach, the user must specify which aspects of the input (the so-called "sensitive" parts) to ignore, whereas in our method, we can discover irrelevant parts of the input automatically.

3 METHOD

Following standard practice in the IB literature, we assume that the joint distribution p(X, Y, Z) factors as follows:

p(X, Y, Z) = p(Z|X, Y) p(Y|X) p(X) = p(Z|X) p(Y|X) p(X)   (5)

i.e., we assume p(Z|X, Y) = p(Z|X), corresponding to the Markov chain Y ↔ X ↔ Z. This restriction means that our representation Z cannot depend directly on the labels Y. (This opens the door to unsupervised representation learning, which we will discuss in Appendix B.) Besides the structure in the joint data distribution p(X, Y), the only content at this point is our model for the stochastic encoder p(Z|X); all other distributions are fully determined by these and the Markov chain constraint.

Recall that the IB objective has the form I(Z, Y) − β I(Z, X). We will examine each of these expressions in turn. Let us start with I(Z, Y). Writing it out in full, this becomes

I(Z, Y) = ∫ dy dz p(y, z) log [ p(y, z) / (p(y) p(z)) ] = ∫ dy dz p(y, z) log [ p(y|z) / p(y) ],   (6)

where p(y|z) is fully defined by our encoder and the Markov chain as follows:

p(y|z) = ∫ dx p(x, y|z) = ∫ dx p(y|x) p(x|z) = ∫ dx p(y|x) p(z|x) p(x) / p(z).   (7)

Since this is intractable in our case, let q(y|z) be a variational approximation to p(y|z). This is our decoder, which we will take to be another neural network with its own set of parameters. Using the fact that the Kullback-Leibler divergence is always positive, we have

KL[p(Y|Z) || q(Y|Z)] ≥ 0  ⟹  ∫ dy p(y|z) log p(y|z) ≥ ∫ dy p(y|z) log q(y|z),   (8)

and hence

I(Z, Y) ≥ ∫ dy dz p(y, z) log [ q(y|z) / p(y) ]   (9)
        = ∫ dy dz p(y, z) log q(y|z) − ∫ dy p(y) log p(y)   (10)
        = ∫ dy dz p(y, z) log q(y|z) + H(Y).   (11)

Notice that the entropy of our labels, H(Y), is independent of our optimization procedure and so can be ignored.

Focusing on the first term in Equation 11, we can rewrite p(y, z) as p(y, z) = ∫ dx p(x, y, z) = ∫ dx p(x) p(y|x) p(z|x) (leveraging our Markov assumption), which gives us a new lower bound on the first term of our objective:

I(Z, Y) ≥ ∫ dx dy dz p(x) p(y|x) p(z|x) log q(y|z).   (12)

This only requires samples from our joint data distribution as well as samples from our stochastic encoder, while it requires we have access to a tractable variational approximation in q(y|z).

We now consider the term β I(Z, X):

I(Z, X) = ∫ dz dx p(x, z) log [ p(z|x) / p(z) ] = ∫ dz dx p(x, z) log p(z|x) − ∫ dz p(z) log p(z).   (13)

In general, while it is fully defined, computing the marginal distribution of Z, p(z) = ∫ dx p(z|x) p(x), might be difficult.
So let r(z) be a variational approximation to this marginal. Since KL[p(Z) || r(Z)] ≥ 0 ⟹ ∫ dz p(z) log p(z) ≥ ∫ dz p(z) log r(z), we have the following upper bound:

I(Z, X) ≤ ∫ dx dz p(x) p(z|x) log [ p(z|x) / r(z) ].   (14)

Combining both of these bounds, we have that

I(Z, Y) − β I(Z, X) ≥ ∫ dx dy dz p(x) p(y|x) p(z|x) log q(y|z) − β ∫ dx dz p(x) p(z|x) log [ p(z|x) / r(z) ] = L.   (15)

We now discuss how to compute the lower bound L in practice. We can approximate p(x, y) = p(x) p(y|x) using the empirical data distribution p(x, y) = (1/N) ∑_{n=1}^{N} δ_{x_n}(x) δ_{y_n}(y), and hence we can write

L ≈ (1/N) ∑_{n=1}^{N} ∫ dz [ p(z|x_n) log q(y_n|z) − β p(z|x_n) log ( p(z|x_n) / r(z) ) ].   (16)

Suppose we use an encoder of the form p(z|x) = N(z | f_e^μ(x), f_e^Σ(x)), where f_e is an MLP which outputs both the K-dimensional mean μ of z as well as the K x K covariance matrix Σ. Then we can use the reparameterization trick (Kingma & Welling, 2014) to write p(z|x) dz = p(ε) dε, where z = f(x, ε) is a deterministic function of x and the Gaussian random variable ε. This formulation has the important advantage that the noise term is independent of the parameters of the model, so it is easy to take gradients.

Assuming our choice of p(z|x) and r(z) allows computation of an analytic Kullback-Leibler divergence, we can put everything together to get the following objective function, which we try to minimize:

J_IB = (1/N) ∑_{n=1}^{N} E_{ε∼p(ε)} [ −log q(y_n | f(x_n, ε)) ] + β KL[p(Z|x_n) || r(Z)].   (17)

As in Kingma & Welling (2014), this formulation allows us to directly backpropagate through a single sample of our stochastic code and ensure that our gradient is an unbiased estimate of the true expected gradient.^4

^4 Even if our choice of encoding distribution and variational prior do not admit an analytic KL, we could similarly reparameterize through a sample of the divergence (Kingma & Welling, 2014; Blundell et al., 2015).

4 EXPERIMENTAL RESULTS

In this section, we present various experimental results, comparing the behavior of standard deterministic networks to stochastic neural networks trained by optimizing the VIB objective.

4.1 BEHAVIOR ON MNIST

We start with experiments on unmodified MNIST (i.e., no data augmentation). In order to pick a model with some "headroom" to improve, we decided to use the same architecture as in the Pereyra et al. (2016) paper, namely an MLP with fully connected layers of the form 784 - 1024 - 1024 - 10, and ReLU activations. (Since we are not exploiting spatial information, this corresponds to the "permutation invariant" version of MNIST.) The performance of this baseline is 1.38% error. Pereyra et al. (2016) were able to improve this to 1.17% using their regularization technique. We were able to improve this to 1.13% using our technique, as we explain below.

In our method, the stochastic encoder has the form p(z|x) = N(z | f_e^μ(x), f_e^Σ(x)), where f_e is an MLP of the form 784 - 1024 - 1024 - 2K, where K is the size of the bottleneck. The first K outputs from f_e encode μ, and the remaining K outputs encode σ (after a softplus transform).

Model                                      | error
Baseline                                   | 1.38%
Dropout                                    | 1.34%
Dropout (Pereyra et al., 2016)             | 1.40%
Confidence Penalty                         | 1.36%
Confidence Penalty (Pereyra et al., 2016)  | 1.17%
Label Smoothing                            | 1.40%
Label Smoothing (Pereyra et al., 2016)     | 1.23%
VIB (β = 10^-3)                            | 1.13%

Table 1: Test set misclassification rate on permutation-invariant MNIST using K = 256. We compare our method (VIB) to an equivalent deterministic model using various forms of regularization. The discrepancy between our results for confidence penalty and label smoothing and the numbers reported in Pereyra et al. (2016) is due to slightly different hyperparameters.

The decoder is a simple logistic regression model of the form q(y|z) = S(y | f_d(z)), where S(a)_c = exp(a_c) / ∑_{c'=1}^{C} exp(a_{c'}) is the softmax function, and f_d(z) = Wz + b maps the K-dimensional latent code to the logits of the C = 10 classes. (In later sections, we consider more complex decoders, but here we wanted to show the benefits of VIB in a simple setting.)

Finally, we treat r(z) as a fixed K-dimensional spherical Gaussian, r(z) = N(z | 0, I).

We compare our method to the baseline MLP. We also consider the following deterministic limit of our model, when β = 0. In this case, we obtain the following objective function:

J_IB0 = −(1/N) ∑_{n=1}^{N} E_{z∼N(f_e^μ(x_n), f_e^Σ(x_n))} [ log S(y_n | f_d(z)) ]   (18)

When β → 0, we observe that the VIB optimization process tends to make f_e^Σ(x) → 0, so the network becomes nearly deterministic. In our experiments we also train an explicitly deterministic model that has the same form as the stochastic model, except that we just use z = f_e^μ(x) as the hidden encoding, and drop the Gaussian layer.

4.1.1 HIGHER DIMENSIONAL EMBEDDING

To demonstrate that our VIB method can achieve competitive classification results, we compared against a deterministic MLP trained with various forms of regularization. We use a K = 256 dimensional bottleneck and a diagonal Gaussian for p(z|x). The networks were trained using TensorFlow for 200 epochs using the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 0.0001. Full hyperparameter details can be found in Appendix A.

The results are shown in Table 1. We see that we can slightly outperform other forms of regularization that have been proposed in the literature, while using the same network for each. Of course, the performance varies depending on β. These results are not state of the art, nor is it the main focus of our work to suggest that VIB is the best regularization method by itself, which would require much more experimentation. However, using the same architecture for each experiment and comparing to VIB as the only source of regularization suggests that VIB works as a decent regularizer in and of itself. Figure 1(a) plots the train and test error vs. β, averaged over 5 trials (with error bars), for the case where we use a single Monte Carlo sample of z when predicting, and also for the case where we average over 12 posterior samples (i.e., we use p(y|x) = (1/S) ∑_{s=1}^{S} q(y|z_s) for z_s ∼ p(z|x), where S = 12). In our own investigations, a dozen samples seemed to be sufficient to capture any additional benefit the stochastic evaluations had to offer in this experiment.^5

^5 A dozen samples wasn't chosen for any particular reason, beyond the old adage that a dozen samples are sufficient, as mirrored in David MacKay's book (MacKay, 2003). They proved sufficient in this case.

We see several interesting properties in Figure 1(a). First, we notice that the error rate shoots up once β rises above the critical value of β ≈ 10^-2. This corresponds to a setting where the mutual information between X and Z is less than log_2(10) bits, so the model can no longer represent the fact that there are 10 different classes. Second, we notice that, for small values of β, the test error
This is because the networklearns to be more deterministic, forcing 0, thus reducing the benefits of regularization. Third,we notice that for intermediate values of , Monte Carlo averaging helps. Interestingly, the regionwith the best performance roughly corresponds to where the added benefit from stochastic averaginggoes away, suggesting an avenue by which one could try to optimize using purely statistics on thetraining set without a validation set. We have not extensively studied this possibility yet.In Figure 1(c), we plot the IB curve, i.e., we plot I(Z;Y)vsI(Z;X)as we vary. As we allowmore information from the input through to the bottleneck (by lowering ), we increase the mutualinformation between our embedding and the label on the training set, but not necessarily on the testset, as is evident from the plot.In Figure 1(d) we plot the second term in our objective, the upper bound on the mutual informationbetween the images Xand our stochastic encoding Z, which in our case is simply the relativeentropy between our encoding and the fixed isotropic unit Gaussian prior. Notice that the y-axis is alogarithmic one. This demonstrates that our best results (when is between 103and102) occurwhere the mutual information between the stochastic encoding and the images is on the order of 10to 100 bits.10−910−810−710−610−510−410−310−210−1100101b0.0000.0050.0100.0150.020errortest 1 shot evaltest avg evaltrain 1 shot evaltrain avg eval10−910−810−710−610−510−410−310−210−1100101b0.000.010.020.030.040.05errortest 1 shot evaltest avg evaltrain 1 shot evaltrain avg eval(a) (b)101102103104I(Z,X)2.82.93.03.13.23.3I(Z,Y)traintest10−910−810−710−610−510−410−310−210−1100101b10−310−210−1100101102103I(Z,X)traintest(c) (d)Figure 1: Results of VIB model on MNIST. (a) Error rate vs forK= 256 on train and test set.“1 shot eval” means a single posterior sample of z, “avg eval” means 12 Monte Carlo samples. Thespike in the error rate at 102corresponds to a model that is too highly regularized. Plottedvalues are the average over 5 independent training runs at each . Error bars show the standarddeviation in the results. (b) Same as (a), but for K= 2. Performance is much worse, since we passthrough a very narrow bottleneck. (c) I(Z;Y)vsI(Z;X)as we varyforK= 256 . We see thatincreasingI(Z;X)helps training set performance, but can result in overfitting. (d) I(Z;X)vsforK= 256 . We see that for a good value of , such as 102, we only need to store about 10 bitsof information about the input.4.1.2 T WO DIMENSIONAL EMBEDDINGTo better understand the behavior of our method, we refit our model to MNIST using a K= 2dimensional bottleneck, but using a full covariance Gaussian. (The neural net predicts the mean andthe Cholesky decomposition of the covariance matrix.) Figure 1(b) shows that, not surprisingly, theclassification performance is worse (note the different scaled axes), but the overall trends are the6Published as a conference paper at ICLR 2017same as in the K= 256 dimensional case. The IB curve (not shown) also has a similar shape tobefore, except now the gap between training and testing is even larger.Figure 2 provides a visualization of what the network is doing. We plot the posteriors p(zjx)as a 2dGaussian ellipse (representing the 95% confidence region) for 1000 images from the test set. Colorscorrespond to the true class labels. 
In the background of each plot is the entropy of the variationalclassifierq(yjz)evaluated at that point.−15−10−5 0 5 10 15−15−10−5051015(a)= 103, errmc= 3:18% ,err1= 3:24%−4−2 0 2 4−4−2024(b)= 101, errmc= 3:44% ,err1= 4:32%−3−2−1 0 1 2 3−3−2−10123(c)= 100, errmc= 33:82% ,err1= 62:81% .Figure 2: Visualizing embeddings of 1000 test images in two dimensions. We plot the 95% confi-dence interval of the Gaussian embedding p(zjx) =N(;)as an ellipse. The images are coloredaccording to their true class label. The background greyscale image denotes the entropy of the vari-ational classifier evaluated at each two dimensional location. As becomes larger, we forget moreabout the input and the embeddings start to overlap to such a degree that the classes become indis-tinguishable. We also report the test error using a single sample, err 1, and using 12 Monte Carlosamples, err mc. For “good” values of , a single sample suffices.We see several interesting properties. First, as increases (so we pass less information through),the embedding covariances increase in relation to the distance between samples, and the classesstart to overlap. Second, once passes a critical value, the encoding “collapses”, and essentiallyall the class information is lost. Third, there is a fair amount of uncertainty in the class preditions(q(yjz)) in the areas between the class embeddings. Fourth, for intermediate values of (say101in Figure 2(b)), predictive performance is still good, even though there is a lot of uncertainty aboutwhere any individual image will map to in comparison to other images in the same class. This meansit would be difficult for an outside agent to infer which particular instance the model is representing,a property which we will explore more in the following sections.4.2 B EHAVIOR ON ADVERSARIAL EXAMPLESSzegedy et al. (2013) was the first work to show that deep neural networks (and other kinds ofclassifiers) can be easily “fooled” into making mistakes by changing their inputs by imperceptiblysmall amounts. In this section, we will show how training with the VIB objective makes modelssignificantly more robust to such adversarial examples.4.2.1 T YPES OF ADVERSARIESSince the initial work by Szegedy et al. (2013) and Goodfellow et al. (2014), many different adver-saries have been proposed. Most attacks fall into three broad categories: optimization-based attacks(Szegedy et al., 2013; Carlini & Wagner, 2016; Moosavi-Dezfooli et al., 2016; Papernot et al., 2015;Robinson & Graham, 2015; Sabour et al., 2016), which directly run an optimizer such as L-BFGSor ADAM (Kingma & Ba, 2015) on image pixels to find a minimal perturbation that changes themodel’s classification; single-step gradient-based attacks (Goodfellow et al., 2014; Kurakin et al.,2016; Huang et al., 2015), which choose a gradient direction of the image pixels at some loss andthen take a single step in that direction; and iterative gradient-based attacks (Kurakin et al., 2016),7Published as a conference paper at ICLR 2017which take multiple small steps along the gradient direction of the image pixels at some loss, recom-puting the gradient direction after each step.6Many adversaries can be formalized as either untargeted or targeted variants. An untargeted ad-versary can be defined as A(X;M )!X0, whereA(:)is the adversarial function, Xis the inputimage,X0is the adversarial example, and Mis the target model. Ais considered successful ifM(X)6=M(X0). Recently, Moosavi-Dezfooli et al. 
(2016) showed how to create a "universal" adversarial perturbation δ that can be added to any image X in order to make M(X + δ) ≠ M(X) for a particular target model.

A targeted adversary can be defined as A(X, M, l) → X′, where l is an additional target label, and A is only considered successful if M(X′) = l.^7 Targeted attacks usually require larger magnitude perturbations, since the adversary cannot just "nudge" the input across the nearest decision boundary, but instead must force it into a desired decision region.

In this work, we focus on the Fast Gradient Sign (FGS) method proposed in Goodfellow et al. (2014) and the L2 optimization method proposed in Carlini & Wagner (2016). FGS is a standard baseline attack that takes a single step in the gradient direction to generate the adversarial example. As originally described, FGS generates untargeted adversarial examples. On MNIST, Goodfellow et al. (2014) reported that FGS could generate adversarial examples that fooled a maxout network approximately 90% of the time with ε = 0.25, where ε is the magnitude of the perturbation at each pixel. The L2 optimization method has been shown to generate adversarial examples with smaller perturbations than any other method published to date, and these were capable of fooling the target network 100% of the time. We consider both targeted and untargeted attacks for the L2 optimization method.^8

4.2.2 ADVERSARIAL ROBUSTNESS

There are multiple definitions of adversarial robustness in the literature. The most basic, which we shall use, is accuracy on adversarially perturbed versions of the test set, called adversarial examples.

It is also important to have a measure of the magnitude of the adversarial perturbation. Since adversaries are defined relative to human perception, the ideal measure would explicitly correspond to how easily a human observer would notice the perturbation. In lieu of such a measure, it is common to compute the size of the perturbation using the L0, L1, L2, and L∞ norms (Szegedy et al., 2013; Goodfellow et al., 2014; Carlini & Wagner, 2016; Sabour et al., 2016). In particular, the L0 norm measures the number of perturbed pixels, the L2 norm measures the Euclidean distance between X and X′, and the L∞ norm measures the largest single change to any pixel.

4.2.3 EXPERIMENTAL SETUP

We used the same model architectures as in Section 4.1, with a K = 256 bottleneck. The architectures included a deterministic (base) model trained by MLE; a deterministic model trained with dropout (the dropout rate was chosen on the validation set); and a stochastic model trained with VIB for various values of β.

For the VIB models, we use 12 posterior samples of Z to compute the class label distribution p(y|x). This helps ensure that the adversaries can get a consistent gradient when constructing the perturbation, and that they can get a consistent evaluation when checking whether the perturbation was successful (i.e., it reduces the chance that the adversary "gets lucky" in its perturbation due to an atypical sample). We also ran the VIB models in "mean mode", where the σ's are forced to be 0. This had no noticeable impact on the results, so all reported results are for stochastic evaluation with 12 samples.

^6 There are also other adversaries that don't fall as cleanly into those categories, such as "fooling images" from Nguyen et al. (2014), which remove the human perceptual constraint, generating regular geometric patterns or noise patterns that networks confidently classify as natural images; and the idea of generating adversaries by stochastic search for images near the decision boundary of multiple networks, from Baluja et al. (2015).

^7 Sabour et al. (2016) propose a variant of the targeted attack, A(X_S, M, X_T, k) → X′_S, where X_S is the source image, X_T is a target image, and k is a target layer in the model M. A produces X′_S by minimizing the difference in activations of M at layer k between X_T and X′_S. The end result of this attack for a classification network is still that M(X′_S) yields a target label implicitly specified by X_T in a successful attack.

^8 Carlini & Wagner (2016) shared their code with us, which allowed us to perform the attack with exactly the same parameters they used for their paper, including the maximum number of iterations and the maximum C value (see their paper for details).

4.2.4 MNIST RESULTS AND DISCUSSION

We selected the first 10 zeros in the MNIST test set, and use the L2 optimization adversary of Carlini & Wagner (2016) to try to perturb those zeros into ones.^9 Some sample results are shown in Figure 3. We see that the deterministic models are easily fooled by making small perturbations, but for the VIB models with reasonably large β, the adversary often fails to find an attack (indicated by the green borders) within the permitted number of iterations. Furthermore, when an attack is successful, it needs to be much larger for the VIB models. To quantify this, Figure 4 plots the magnitude of the perturbation (relative to that of the deterministic and dropout models) needed for a successful attack as a function of β. As β increases, the L0 norm of the perturbation decreases, but both the L2 and L∞ norms increase, indicating that the adversary is being forced to put larger modifications into fewer pixels while searching for an adversarial perturbation.

Figure 5 plots the accuracy on FGS adversarial examples of the first 1000 images from the MNIST test set as a function of β. Each point in the plot corresponds to 3 separate executions of three different models trained with the same value of β. All models tested achieve over 98.4% accuracy on the unperturbed MNIST test set, so there is no appreciable measurement distortion due to underlying model accuracy.

Figure 6 plots the accuracy on L2 optimization adversarial examples of the first 1000 images from the MNIST test set as a function of β. The same sets of three models per β were tested three times, as with the FGS adversarial examples.

We generated both untargeted and targeted adversarial examples for Figure 6. For targeting, we generate a random target label different from the source label, in order to avoid biasing the results with unevenly explored source/target pairs. We see that for a reasonably broad range of β values, the VIB models have significantly better accuracy on the adversarial examples than the deterministic models, which have an accuracy of 0% (the L2 optimization attack is very effective on traditional model architectures).

Figure 6 also reveals a surprising level of adversarial robustness even when β → 0. This can be explained by the theoretical framework of Fawzi et al. (2016). Their work proves that quadratic classifiers (e.g., x^T A x with symmetric A) have a greater capacity for adversarial robustness than linear classifiers.
As we show in Appendix C, our Gaussian/softmax encoder/decoder is approximately quadratic for all β < 1.

4.2.5 IMAGENET RESULTS AND DISCUSSION

VIB improved classification accuracy and adversarial robustness for toy datasets like MNIST. We now investigate whether VIB offers similar advantages for ImageNet, a more challenging natural image classification task. Recall that ImageNet has approximately 1M images spanning 1K classes. We preprocess images such that they are 299x299 pixels.

Architecture

We make use of publicly available, pretrained checkpoints^10 of Inception ResNet V2 (Szegedy et al., 2016) on ImageNet (Deng et al., 2009). The checkpoint obtains 80.4% classification accuracy on the ImageNet validation set. Using the checkpoint, we transformed the original training set by applying the pretrained network to each image and extracting the representation at the penultimate layer. This new image representation has 1536 dimensions. The higher layers of the network continue to classify this representation with 80.4% accuracy; conditioned on this extraction, the classification model is simply logistic regression. To further speed training, we whitened the 1536-dimensional representation.

^9 We chose this pair of labels since intuitively zeros and ones are the digits that are least similar in terms of human perception, so if the adversary can change a zero into a one without much human-noticeable perturbation, it is unlikely that the model has learned a representation similar to what humans learn.

^10 Available at the TensorFlow Models repository in the Slim directory: https://github.com/tensorflow/models/tree/master/slim

[Figure 3: a grid of digit images. Columns: original image (Orig), then adversarial examples targeting the deterministic model (Det), the dropout model, and VIB models with β = 0, 10^-10, 10^-8, 10^-6, 10^-4, 10^-3, 10^-2.]

Figure 3: The adversary is trying to force each 0 to be classified as a 1. Successful attacks have a red background. Unsuccessful attacks have a green background. In the case that the label is changed to an incorrect label different from the target label (i.e., the classifier outputs something other than 0 or 1), the background is purple. The first column is the original image. The second column shows adversarial examples targeting our deterministic baseline model. The third column shows adversarial examples targeting our dropout model. The remaining columns show adversarial examples targeting our VIB models for different β.

[Figure 4 panels: relative perturbation norms (L0, L2, L∞) for the targeted L2 optimization (0→1) attack vs. β, normalized by the deterministic model (a) and the dropout model (b).]

Figure 4: (a) Relative magnitude of the adversarial perturbation, measured using the L0, L2, and L∞ norms, for the images in Figure 3 as a function of β. (We normalize all values by the corresponding norm of the perturbation against the base model.) As β increases, L0 decreases, but both L2 and L∞ increase, indicating that the adversary is being forced to put larger modifications into fewer pixels while searching for an adversarial perturbation. (b) Same as (a), but with the dropout model as the baseline. Dropout is more robust to the adversarial perturbations than the base deterministic model, but still performs much worse than the VIB model as β increases.

[Figure 5 panels: relative accuracy on FGS adversarial examples vs. β, for ε = 0.35, 0.40, 0.45, 0.50, normalized by the deterministic model (a) and the dropout model (b).]

Figure 5: Classification accuracy of VIB classifiers, divided by the accuracy of baseline classifiers, on FGS-generated adversarial examples, as a function of β. Higher is better, and the baseline is always at 1.0. For the FGS adversarial examples, when β = 0 (not shown), the VIB model's performance is almost identical to when β = 10^-8. (a) FGS accuracy normalized by the base deterministic model's performance. The base deterministic model's accuracy on the adversarial examples ranges from about 1% when ε = 0.5 to about 5% when ε = 0.35. (b) Same as (a), but with the dropout model as the baseline. The dropout model is more robust than the base model, but less robust than VIB, particularly for stronger adversaries (i.e., larger values of ε). The dropout model's accuracy on the adversarial examples ranges from about 5% when ε = 0.5 to about 16% when ε = 0.35. As in the other results, relative performance is more dramatic as β increases, which seems to indicate that the VIB models are learning to ignore more of the perturbations caused by the FGS method, even though they were not trained on any adversarial examples.

[Figure 6 panel: accuracy on adversarial examples vs. β for targeted and untargeted L2 optimization.]

Figure 6: Classification accuracy (from 0 to 1) on L2 adversarial examples (of all classes) as a function of β. The blue line is for targeted attacks, and the green line is for untargeted attacks (which are easier to resist). In this case, β = 10^-11 has performance indistinguishable from β = 0. The deterministic model and the dropout model both have a classification accuracy of 0% in both the targeted and untargeted attack scenarios, indicated by the horizontal red dashed line at the bottom of the plot. This is the same accuracy on adversarial examples from this adversary as reported in Carlini & Wagner (2016) on a convolutional network trained on MNIST.

[Figure 7: four panels (a)-(d) of ImageNet example images.]

Figure 7: The results of our ImageNet targeted L2 optimization attack. In all cases we target a new label of 222 ("soccer ball"). Figure (a) shows the 30 images, from the first 40 images in the ImageNet validation set, that the VIB network classifies correctly. The class label is shown in green on each image. The predicted label and targeted label are shown in red. Figure (b) shows adversarial examples of the same images, generated by attacking our VIB network with β = 0.01. While all of the attacks change the classification of the image, in 13 out of 30 examples the attack fails to hit the intended target class ("soccer ball"). Pink crosses denote cases where the attack failed to force the model to misclassify the image as a soccer ball. Figure (c) shows the same result but for our deterministic baseline operating on the whitened precomputed features. The attack always succeeds. Figure (d) is the same but for the original full Inception ResNet V2 network without modification. The attack always succeeds.
There are slight variations in the set of adversarial examples shown for each network because we limited the adversarial search to correctly classified images. In the case of the deterministic baseline and the original Inception ResNet V2 network, the perturbations are hardly noticeable in the perturbed images, but in many instances the perturbations for the VIB network can be perceived.

Figure 8: Shown are the absolute differences between the original and final perturbed images for all three networks. The left block shows the perturbations created while targeting the VIB network. The middle block shows the perturbations needed for the deterministic baseline using precomputed whitened features. The right block shows the perturbations created for the unmodified Inception ResNet V2 network. The contrast has been increased by the same amount in all three columns to emphasize the difference in the magnitude of the perturbations. The VIB network required much larger perturbations to confuse the classifier, and even then did not achieve the targeted class in 13 of those cases.

Under this transformation, the experiment regime is identical to the permutation-invariant MNIST task. We therefore used a similar model architecture. Inputs are passed through two fully connected layers, each with 1024 units. Next, data is fed to a stochastic encoding layer; this layer is characterized by a spherical Gaussian with 1024 learned means and standard deviations. The output of the stochastic layer is fed to the variational classifier, itself a logistic regression, for simplicity. All other hyperparameters and training choices are identical to those used for MNIST; more details are in Appendix A.

Classification

We see the same favorable VIB classification performance on ImageNet as on MNIST. By varying β, the estimated mutual information between encoding and image (I(Z, X)) varies as well. At large values of β accuracy suffers, but at intermediate values we obtain improved performance over both a deterministic baseline and a β = 0 regime. In all cases our accuracy is somewhat lower than the original 80.4% accuracy. This may be a consequence of inadequate training time or suboptimal hyperparameters.

Overall, the best accuracy we achieved was using β = 0.01. Under this setting we saw an accuracy of 80.12%, nearly the same as the state-of-the-art unmodified network, but with a substantially smaller information footprint, only I(X, Z) ≈ 45 bits. This is a surprisingly small amount of information; β = 0 implies over 10,000 bits yet only reaches an accuracy of 78.87%. The deterministic baseline, which was the same network but without the VIB loss and with a 1024-unit fully connected linear layer instead of the stochastic embedding, similarly achieved only 78.75% accuracy. We stress that regressions from the achievable 80.4% are likely due to suboptimal hyperparameter settings or inadequate training.

Considering a continuum of β and a deterministic baseline, the best classification accuracy was achieved with β = 0.01 ∈ (0, 1). In other words, VIB offered an accuracy benefit while using a mere ~45 bits of information from each image.

Adversarial Robustness

We next show that the VIB-trained network improves resistance to adversarial attack.

We focus on the Carlini targeted L2 attack (see Section 4.2.1).
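For intuition about what an optimization-based targeted L2 attack does, here is a toy sketch of ours on a plain linear softmax classifier. The actual Carlini & Wagner attack additionally uses a change of variables to handle box constraints and a binary search over the trade-off constant c, neither of which is shown here:

```python
import numpy as np

def targeted_l2_attack(W, x, target, c=1.0, lr=0.1, steps=200):
    """Toy targeted L2-style attack on a linear softmax classifier:
    minimize ||delta||^2 + c * cross_entropy(softmax(W @ (x + delta)), target)
    by plain gradient descent on the perturbation delta."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        logits = W @ (x + delta)
        p = np.exp(logits - logits.max())
        p /= p.sum()
        grad_logits = p.copy()
        grad_logits[target] -= 1.0          # gradient of CE w.r.t. the logits
        grad = 2.0 * delta + c * (W.T @ grad_logits)
        delta -= lr * grad
    return x + delta
```

The ||delta||^2 term keeps the perturbation small while the cross-entropy term pushes the prediction toward the target class; trading the two off is what produces the minimal-norm perturbations measured in Table 2 below.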
We show results for the VIB-trained network and a deterministic baseline (both on top of precomputed features), as well as for the original pretrained Inception ResNet V2 network itself. The VIB network is more robust to the targeted L2 optimization attack in both magnitude of perturbation and frequency of successful attack.

Figure 7 shows some example images which were all misclassified as "soccer balls" by the deterministic models; by contrast, with the VIB model, only 17 out of 30 of the attacks succeeded in being mislabeled as the target label.^11 We find that the VIB model can resist about 43.3% of the attacks, but the deterministic models always fail (i.e., they always misclassify into the targeted label). Figure 8 shows the absolute pixel differences between the perturbed and unperturbed images for the examples in Figure 7. We see that the VIB network requires much larger perturbations in order to fool the classifier, as quantified in Table 2.

Metric            | Determ | IRv2  | VIB(0.01)
Successful target | 1.0    | 1.0   | 0.567
L2                | 6.45   | 14.43 | 43.27
L∞                | 0.18   | 0.44  | 0.92

Table 2: Quantitative results showing how the different Inception ResNet V2-based architectures (described in Section 4.2.5) respond to targeted L2 adversarial examples. Determ is the deterministic architecture, IRv2 is the unmodified Inception ResNet V2 architecture, and VIB(0.01) is the VIB architecture with β = 0.01. "Successful target" is the fraction of adversarial examples that caused the architecture to classify as the target class (soccer ball); lower is better. L2 and L∞ are the average L distances between the original images and the adversarial examples; larger values mean the adversary had to make a larger perturbation to change the class.

^11 The attacks still often cause the VIB model to misclassify the image, but not to the targeted label. This is a form of "partial" robustness, in that an attacker will have a harder time hitting the target class, but can still disrupt correct function of the network.

5 FUTURE DIRECTIONS

There are many possible directions for future work, including: putting the VIB objective at multiple or every layer of a network; testing on real images; using richer parametric marginal approximations, rather than assuming r(z) = N(0, I); exploring the connections to differential privacy (see e.g., Wang et al. (2016a); Cuff & Yu (2016)); and investigating open universe classification problems (see e.g., Bendale & Boult (2015)). In addition, we would like to explore applications to sequence prediction, where X denotes the past of the sequence and Y the future, while Z is the current representation of the network. This form of the information bottleneck is known as predictive information (Bialek et al., 2001; Palmer et al., 2015).

REFERENCES

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

David Barber and Felix Agakov. The IM algorithm: a variational approach to information maximization. In NIPS, volume 16, 2004.

Shumeet Baluja, Michele Covell, and Rahul Sukthankar. The virtues of peer pressure: A simple method for discovering high-value mistakes. In Intl. Conf. on Computer Analysis of Images and Patterns, 2015.

Abhijit Bendale and Terrance Boult. Towards open world recognition. In CVPR, 2015.

William Bialek, Ilya Nemenman, and Naftali Tishby. Predictability, complexity, and learning.
Neural Computation, 13(11):2409–2463, 2001.

Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural networks. In ICML, 2015.

Ryan P. Browne and Paul D. McNicholas. Multivariate sharp quadratic bounds via Σ-strong convexity and the Fenchel connection. Electronic Journal of Statistics, 9, 2015.

Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. arXiv, 2016.

Matthew Chalk, Olivier Marre, and Gasper Tkacik. Relevant sparse codes with variational information bottleneck. In NIPS, 2016.

G. Chechik, A. Globerson, N. Tishby, and Y. Weiss. Information bottleneck for Gaussian variables. Journal of Machine Learning Research, 6:165–188, 2005.

Paul Cuff and Lanqing Yu. Differential privacy as a mutual information constraint. In ACM Conference on Computer and Communications Security (CCS), 2016.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248–255. IEEE, 2009.

Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, and Pascal Frossard. Robustness of classifiers: from adversarial to random noise. In NIPS, 2016.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, volume 9, pp. 249–256, 2010.

Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In ICLR, 2015.

Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-VAE: Learning basic visual concepts with a constrained variational framework. In ICLR, 2017. URL https://openreview.net/pdf?id=Sy2fzU9gl.

Ruitong Huang, Bing Xu, Dale Schuurmans, and Csaba Szepesvári. Learning with a strong adversary. CoRR, abs/1511.03034, 2015.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.

Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In ICLR, 2014.

Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. In ICLR Workshop, 2017. URL https://openreview.net/pdf?id=S1OufnIlx.

Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard Zemel. The variational fair autoencoder. In ICLR, 2016. URL http://arxiv.org/abs/1511.00830.

David J.C. MacKay. Information Theory, Inference and Learning Algorithms. Cambridge University Press, 2003.

Shakir Mohamed and Danilo Jimenez Rezende. Variational information maximisation for intrinsically motivated reinforcement learning. In NIPS, pp. 2125–2133, 2015.

Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. arXiv, 2016.

Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. DeepFool: a simple and accurate method to fool deep neural networks. In CVPR, 2016.

Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In CVPR, 2015. URL http://arxiv.org/abs/1412.1897.

Stephanie E. Palmer, Olivier Marre, Michael J. Berry, and William Bialek. Predictive information in a sensory population. PNAS, 112(22):6908–6913, 2015.

Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. The limitations of deep learning in adversarial settings. In Proceedings of the 1st IEEE European Symposium on Security and Privacy, 2015.

Gabriel Pereyra, George Tucker, Jan Chorowski, and Lukasz Kaiser. Regularizing neural networks by penalizing confident output predictions. In ICLR Workshop, 2017. URL https://openreview.net/pdf?id=HyhbYrGYe.

Boris T. Polyak and Anatoli B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838–855, 1992.

Leigh Robinson and Benjamin Graham. Confusing deep convolution networks by relabelling. arXiv preprint 1510.06925, 2015.

Sara Sabour, Yanshuai Cao, Fartash Faghri, and David J. Fleet. Adversarial manipulation of deep representations. In ICLR, 2016.

Noam Slonim, Gurinder Singh Atwal, Gašper Tkačik, and William Bialek. Information-based clustering. PNAS, 102(51):18297–18302, 2005.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In ICLR, 2014. URL http://arxiv.org/abs/1312.6199.

Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alex Alemi. Inception-v4, Inception-ResNet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016.

N. Tishby and N. Zaslavsky. Deep learning and the information bottleneck principle. In IEEE Information Theory Workshop, pp. 1–5, April 2015a.

N. Tishby, F.C. Pereira, and W. Bialek. The information bottleneck method. In The 37th Annual Allerton Conference on Communication, Control, and Computing, pp. 368–377, 1999.

Naftali Tishby and Noga Zaslavsky. Deep learning and the information bottleneck principle. In Information Theory Workshop (ITW), 2015 IEEE, pp. 1–5. IEEE, 2015b.

Weina Wang, Lei Ying, and Junshan Zhang. On the relation between identifiability, differential privacy and mutual-information privacy. IEEE Trans. Inf. Theory, 62:5018–5029, 2016a.

Weiran Wang, Honglak Lee, and Karen Livescu. Deep variational canonical correlation analysis. arXiv [cs.LG], 11 October 2016b. URL https://arxiv.org/abs/1610.03454.

A HYPERPARAMETERS AND ARCHITECTURE DETAILS FOR EXPERIMENTS

All of the networks for this paper were trained using TensorFlow (Abadi et al., 2016). All weights were initialized using the default TensorFlow Xavier initialization scheme (Glorot & Bengio, 2010), using the averaging fan scaling factor on uniform noise. All biases were initialized to zero. The Adam optimizer (Kingma & Ba, 2015) was used with an initial learning rate of 10^-4, (β_1 = 0.5, β_2 = 0.999), and exponential decay, decaying the learning rate by a factor of 0.97 every 2 epochs. The networks were all trained for 200 epochs total. For the MNIST experiments, a batch size of 100 was used; the full 60,000-image training and validation set was used for training, and the 10,000 test images for test results. The input images were scaled to have values between -1 and 1 before being fed to the network.

All runs maintained an exponential weighted average of the parameters during the training run; these averaged parameters were used at test time. This is in the style of Polyak averaging (Polyak & Juditsky, 1992), with a decay constant of 0.999. Our estimates of mutual information were measured in bits. For the VIB experiments in all sections, no other form of regularization was used.

For the 256-dimensional Gaussian embeddings of Section 4.1.1, a linear layer of size 512 was used to create the 256 mean values and standard deviations for the embedding.
The standard deviations were made to be positive by a softplus transformation with a bias of -5.0, so that they are initially small:

σ = log(1 + exp(x − 5.0)).   (19)

For the 1024-dimensional ImageNet embeddings of Section 4.2.5, a sigma bias of 0.57 was used to keep the initial standard deviations near 1, and a batch size of 200 was used.

For the 2-dimensional Gaussian embeddings of Section 4.1.2, a linear layer was used with 2 + 4 = 6 outputs, the first two of which were used for the means; the other 4 were reshaped to a 2x2 matrix, the center was transformed according to a softplus with a bias of -5.0, and the off-diagonal components were multiplied by 10^-2, while the upper triangular element was dropped, to form the Cholesky decomposition of the covariance matrix.

B CONNECTION TO VARIATIONAL AUTOENCODERS

We can also consider unsupervised versions of the information bottleneck objective. Consider the objective:

max I(Z, X) − β I(Z, i),   (20)

similar to the information theoretic objective for clustering introduced in Slonim et al. (2005).

Here the aim is to take our data X and maximize the mutual information contained in some encoding Z, while restricting how much information we allow our representation to contain about the identity of each data element in our sample (i). We will form a bound much as we did in the main text.

For the first term, we form a variational decoder q(x|z) and take a bound:

I(Z, X) = ∫ dx dz p(x, z) log [ p(x|z) / p(x) ]   (21)
        = H(X) + ∫ dz p(z) ∫ dx p(x|z) log p(x|z)   (22)
        ≥ ∫ dz p(z) ∫ dx p(x|z) log q(x|z)   (23)
        = ∫ dx p(x) ∫ dz p(z|x) log q(x|z).   (24)

Here we have dropped the entropy of our data, H(X), because it is out of our control, and we have used the nonnegativity of the Kullback-Leibler divergence to replace our intractable p(x|z) with a variational decoder q(x|z).

Turning our attention to the second term, note that

p(z|i) = ∫ dx p(z|x) p(x|i) = ∫ dx p(z|x) δ(x − x_i) = p(z|x_i),   (25)

and that we will take p(i) = 1/N, so that we can bound our second term from above:

I(Z, i) = ∑_i ∫ dz p(z|i) p(i) log [ p(z|i) / p(z) ]   (26)
        = (1/N) ∑_i ∫ dz p(z|x_i) log [ p(z|x_i) / p(z) ]   (27)
        ≤ (1/N) ∑_i ∫ dz p(z|x_i) log [ p(z|x_i) / r(z) ],   (28)

where we have replaced the intractable marginal p(z) with a variational marginal r(z).

Putting these two bounds together, we have that our unsupervised information bottleneck objective takes the form

I(Z, X) − β I(Z, i) ≥ ∫ dx p(x) ∫ dz p(z|x) log q(x|z) − β (1/N) ∑_i KL[p(Z|x_i) || r(Z)].   (29)

And this takes the form of a variational autoencoder (Kingma & Welling, 2014), except with the second KL divergence term having an arbitrary weight β.

It is interesting that while this objective takes the same mathematical form as that of a variational autoencoder, the interpretation of the objective is very different. In the VAE, the model starts life as a generative model with a defined prior p(z) and stochastic decoder p(x|z) as part of the model, and the encoder q(z|x) is created to serve as a variational approximation to the true posterior p(z|x) = p(x|z) p(z) / p(x). In the VIB approach, the model is originally just the stochastic encoder p(z|x), and the decoder q(x|z) is the variational approximation to the true p(x|z) = p(z|x) p(x) / p(z), while r(z) is the variational approximation to the marginal p(z) = ∫ dx p(x) p(z|x). This difference in interpretation makes natural suggestions for novel directions for improvement.

This precise setup, albeit with a different motivation, was recently explored in Higgins et al.
This precise setup, albeit with a different motivation, was recently explored in Higgins et al. (2016), where they demonstrated that by changing the weight β on the variational autoencoder's regularization term, they were able to achieve latent representations that were more capable when it came to zero-shot learning and understanding "objectness". In that work, they motivated their choice to change the relative weightings of the terms in the objective by appealing to notions in neuroscience. Here we demonstrate that appealing to the information bottleneck objective gives a principled motivation, and could open the door to better understanding the optimal choice of β and more tools for assessing the importance and tradeoff of both terms.

Beyond the connection to existing variational autoencoder techniques, we note that the unsupervised information bottleneck objective suggests new directions to explore, including targeting the exact marginal p(z) in the regularization term, as well as the opportunity to explore tighter bounds on the first I(Z;X) term that may not require explicit variational reconstruction.

C QUADRATIC BOUNDS FOR STOCHASTIC LOGISTIC REGRESSION DECODER

Consider the special case when the bottleneck Z is a multivariate Normal, i.e., z|x ~ N(μ_x, Σ_x), where Σ_x is a K×K positive definite matrix. The parameters μ_x, Σ_x can be constructed from a deep neural network, e.g.,

μ_x = σ_{1:K}(x),
chol(Σ_x) = diag(log(1 + exp(σ_{K+1:2K}))) + subtril(σ_{2K+1:K(K+3)/2}),

where σ(x) ∈ R^{K(K+3)/2} is the network output for input x.

Suppose that the prediction is a categorical distribution computed as S(Wz), where W is a C×K weight matrix and log S(x) = x − lse(x) is the log-soft-max function, with lse(x) = log Σ_{k=1}^K exp(x_k) being the log-sum-exp function.

This setup (which is identical to our experiments) induces a classifier which is bounded by a quadratic function, which is interesting because the theoretical framework of Fawzi et al. (2016) proves that quadratic classifiers have greater capacity for adversarial robustness than linear functions.

We now derive an approximate bound using a second-order Taylor series expansion (TSE). The bound can be made proper via Browne & McNicholas (2015); however, the TSE is sufficient to sketch the derivation.

Jensen's inequality implies that the negative log-likelihood of the soft-max is upper bounded by:

−log E[S(WZ) | μ_x, Σ_x] ≤ −E[log S(WZ) | μ_x, Σ_x]
                         = −Wμ_x + E[lse(WZ) | μ_x, Σ_x]
                         = −Wμ_x + E[lse(Z) | Wμ_x, WΣ_xW^T].

The second-order Taylor series expansion of lse is given by

lse(x + δ) ≈ lse(x) + δ^T S(x) + (1/2) δ^T [diag(S(x)) − S(x)S(x)^T] δ.

Taking the expectation of the TSE at the mean yields:

E_{δ~N(0, WΣ_xW^T)}[lse(Wμ_x + δ)]
  ≈ lse(Wμ_x) + E[δ]^T S(Wμ_x) + (1/2) E[δ^T (diag(S(Wμ_x)) − S(Wμ_x)S(Wμ_x)^T) δ]
  = lse(Wμ_x) + (1/2) tr(WΣ_xW^T [diag(S(Wμ_x)) − S(Wμ_x)S(Wμ_x)^T])
  = lse(Wμ_x) + (1/2) tr(WΣ_xW^T diag(S(Wμ_x))) − (1/2) S(Wμ_x)^T WΣ_xW^T S(Wμ_x)
  = lse(Wμ_x) + (1/2) √S(Wμ_x)^T WΣ_xW^T √S(Wμ_x) − (1/2) S(Wμ_x)^T WΣ_xW^T S(Wμ_x),

with √· applied elementwise. The second moment was calculated by noting

E[δ^T B δ] = E[tr(δδ^T B)] = tr(E[δδ^T] B) = tr(ΣB).

Putting this all together, we conclude:

E[S(WZ) | μ_x, Σ_x] ⪆ S(Wμ_x) exp( −(1/2) √S(Wμ_x)^T WΣ_xW^T √S(Wμ_x) + (1/2) S(Wμ_x)^T WΣ_xW^T S(Wμ_x) ).

As indicated, rather than approximating the lse via TSE, we can make a sharp quadratic upper bound via Browne & McNicholas (2015). However, this merely changes the S(Wμ_x) scaling in the exponential; the result is still log-quadratic.
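Since the final expression is a closed form in (μ_x, Σ_x), it is easy to sanity-check numerically. The NumPy snippet below is an illustrative check with made-up sizes and random parameters (K = 4 and C = 3 are our assumptions, not values from the paper); for small Σ_x the Monte Carlo estimate of E[S(WZ)] and the log-quadratic approximation should be close.

```python
import numpy as np

rng = np.random.default_rng(0)
K, C = 4, 3                                      # illustrative sizes only
W = rng.standard_normal((C, K))
mu = rng.standard_normal(K)
L = 0.3 * np.tril(rng.standard_normal((K, K)))   # Cholesky factor of Sigma_x
Sigma = L @ L.T

def softmax(v, axis=-1):
    e = np.exp(v - v.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Monte Carlo estimate of E[S(WZ)] with Z ~ N(mu, Sigma).
z = mu + rng.standard_normal((200_000, K)) @ L.T
mc = softmax(z @ W.T).mean(axis=0)

# Log-quadratic approximation derived above.
s = softmax(W @ mu)
A = W @ Sigma @ W.T
approx = s * np.exp(-0.5 * np.sqrt(s) @ A @ np.sqrt(s) + 0.5 * s @ A @ s)
print(mc, approx)
```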
SyVVJ85lg
Published as a conference paper at ICLR 2017

PALEO: A PERFORMANCE MODEL FOR DEEP NEURAL NETWORKS

Hang Qi (UCLA, hangqi@cs.ucla.edu), Evan R. Sparks (UC Berkeley, sparks@cs.berkeley.edu), Ameet Talwalkar (UCLA, ameet@cs.ucla.edu)

ABSTRACT
Although various scalable deep learning software packages have been proposed, it remains unclear how to best leverage parallel and distributed computing infrastructure to accelerate their training and deployment. Moreover, the effectiveness of existing parallel and distributed systems varies widely based on the neural network architecture and dataset under consideration. In order to efficiently explore the space of scalable deep learning systems and quickly diagnose their effectiveness for a given problem instance, we introduce an analytical performance model called PALEO. Our key observation is that a neural network architecture carries with it a declarative specification of the computational requirements associated with its training and evaluation. By extracting these requirements from a given architecture and mapping them to a specific point within the design space of software, hardware and communication strategies, PALEO can efficiently and accurately model the expected scalability and performance of a putative deep learning system. We show that PALEO is robust to the choice of network architecture, hardware, software, communication schemes, and parallelization strategies. We further demonstrate its ability to accurately model various recently published scalability results for CNNs such as NiN, Inception and AlexNet.

1 INTRODUCTION
Deep learning has been successfully applied in many areas, including natural language processing and computer vision. The scale of modern datasets and the millions to billions of parameters in these deep networks pose new challenges when designing computational systems that leverage parallel and distributed computing. Indeed, several important open questions remain:

- How fast can we train or evaluate a model on a user's given hardware?
- For a given architecture, how can a user best leverage parallel and distributed computation?
- How can we design a new neural network architecture that can be trained and evaluated efficiently under common hardware setups?

In response to these fundamental questions, various software packages and systems have been painstakingly developed, e.g. DistBelief (Dean et al., 2012), TensorFlow (Abadi et al., 2015), MXNet (Chen et al., 2015), SparkNet (Moritz et al., 2015), FireCaffe (Iandola et al., 2016). Moreover, expensive benchmarking efforts, e.g. Chintala et al. (2016), have performed brute-force profiling of some of these deep learning systems on a handful of network architectures.

In this work we aim to tackle these questions by taking an analytical approach to modeling the performance of arbitrary learning systems. Our work hinges on the observation that a neural network architecture is a declarative specification of the forward and backward propagation steps required for training and deploying the network. However, given this specification, there is a rich design space of algorithms, hardware choices, and communication strategies to most efficiently execute these specifications.
We build a novel performance model called PALEO [1] that maps this declarative specification to arbitrary points in this design space to estimate the execution time of training and deploying deep neural networks. [2] PALEO applies broadly to a wide variety of neural network architectures and to arbitrary learning systems within this design space, and thus can serve as a valuable tool for practitioners and developers to answer the questions mentioned above.

[1] Open-sourced at https://github.com/TalwalkarLab/paleo.
[2] Training a neural network involves both forward and backward propagation, whereas deploying a trained network on a new data point involves only forward propagation. Thus, estimating the execution time of model training encompasses both model training and deployment, and is the focus of this work.

2 BACKGROUND AND RELATED WORK
Training deep neural networks can be very time and resource consuming, and it is not uncommon for the training of a model to take days across tens or hundreds of machines. Several high-level strategies have been proposed to accelerate this process, and these strategies collectively define the design space considered by PALEO.

Hardware acceleration approaches are designed to accelerate the computation of the forward and backward passes and often make use of specialized hardware, such as GPUs (Coates et al., 2013) or, more recently, custom hardware designed specifically for deep learning (Jouppi, 2016). PALEO accepts constants associated with hardware as input (e.g., peak FLOPS, network bandwidth) and automatically adapts to changes in this input.

Software acceleration via specialized libraries, e.g. cuda-convnet (Krizhevsky, 2014a) and cuDNN (Chetlur et al., 2014), and highly optimized algorithms for commonly used primitives, e.g. Chetlur et al. (2014) and Lavin (2016), can also be used to accelerate deep model training. PALEO dynamically picks among the best available implementations for each layer at execution time.

Parallelization is a natural approach to consider, and can involve training a neural network with many computational devices (e.g. CPUs, GPUs) on a single machine, or across a network. There are two major parallelization strategies when it comes to training deep neural network models at scale: data parallelism and model parallelism. In classical data parallel systems, each worker stores an identical copy of the model and computes gradients only on a shard of the training examples, and these gradients are aggregated to update the model. In contrast, model parallel systems shard the model itself across the workers, while the training data may be stored on each worker or sharded across the workers. PALEO models both data and model parallel settings.

Communication schemes have also been explored to accelerate incremental model updates across distributed workers. Three of the most common schemes are (Iandola et al., 2016; Zhao & Canny, 2013): (i) the OneToAll scheme, which has a 2KT communication time, as a master node must communicate with all K workers individually, where T is the time for communicating the data through one link in the network; (ii) the Tree AllReduce scheme, which takes 2 log2(K) T for weights to be aggregated and broadcast to all workers following a tree topology; and (iii) the Butterfly AllReduce scheme, in which all workers receive the aggregated weights in log2(K) T using a butterfly network. (These costs are transcribed as code in the sketch below.)
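The three scheme costs are simple enough to state directly in code. The following sketch is a literal transcription of the formulas above; it assumes the worker count K is a power of two so that log2(K) is integral, and T is the time to push one copy of the model over a single link.

```python
from math import log2

def comm_time(scheme: str, k: int, t: float) -> float:
    """Time to synchronize weights across k workers; t is the time to
    communicate the data through one network link."""
    if scheme == "OneToAll":
        return 2 * k * t            # master exchanges with each worker in turn
    if scheme == "TreeAllReduce":
        return 2 * log2(k) * t      # aggregate up and broadcast down a tree
    if scheme == "ButterflyAllReduce":
        return log2(k) * t          # all workers aggregate in one butterfly pass
    raise ValueError(scheme)

# e.g. comm_time("TreeAllReduce", k=64, t=0.05) -> 0.6 seconds
```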
We restrict the focus of PALEO to distributed communication schemes that return results equivalent to serial executions, and we thus do not consider the recently introduced butterfly mixing scheme of Zhao & Canny (2013), or non-deterministic asynchronous parameter servers.

3 PALEO
We now present PALEO, a model for the lean consumption of resources during the training of DNNs. PALEO decomposes the total execution time into computation time and communication time; both are estimated for each pass of a neural network's evaluation given user-specified choices within the design space of algorithms, hardware, and communication strategies. Figure 1 illustrates the overall idea. The computation time is calculated from factors including the size of the computation inputs imposed by the network architecture, the complexity of the algorithms and operations involved in the network layers, and the performance of the hardware to be used. The communication time is estimated based on the computational dependencies imposed by the network, the communication bandwidth of the hardware, and the assumed parallelization schemes. Once the network architecture and design space choices are fixed, all of the key factors in PALEO can be derived, and we can estimate execution time without actually implementing the entire network and/or an underlying software package.

[Figure 1: Overview of the PALEO modeling approach. PALEO decomposes execution time into computation time and communication time, which can be derived from various factors (FLOP counts, computation speed, communication bandwidth, memory, dependencies, parallelization strategy, operation selection, and communication scheme) implicitly specified by network architectures and hardware configurations.]

3.1 COMPUTATION MODELING
We first describe the computation model on a single machine. The computation in a neural network can be expressed as a directed graph N = ⟨{u^(i)}_{i=1}^n, {(u^(i), u^(j))}⟩, where each node u^(i) is associated with an operation f^(i) on a device d^(i); each directed edge (u^(i), u^(j)) represents the dependency that operation f^(j) cannot be executed until f^(i) is finished. We use Pa(u^(j)) to represent the set of immediate parent nodes of u^(j). We model each layer in the neural network as a node, and the connections between layers as edges. In the following text, we omit the superscript index when there is no ambiguity.

3.1.1 COMPUTATION TIME FOR A SINGLE LAYER
To model the runtime of a layer u, we consider its operation f and decompose the execution time of this operation into three terms (as shown in Figure 2a): the time to fetch the input produced by its parent layers, R(Pa(u)); the time to perform the computation of f on the designated device d, C(f, d); and the time to write the outputs to local memory, W(f, d). Assuming a sequential execution, the runtime for a node u can be written as a simple summation:

T(u) = R(Pa(u)) + C(f, d) + W(f, d).   (1)

Among the three terms, the computation time C(f, d) is calculated as the FLOP (floating-point operation) count of the operation divided by the computation speed (FLOPS; floating-point operations per second) of the device: C(f, d) = FLOPs(f) / speed(d). The IO times R(Pa(u)) and W(u) are calculated as the size of the memory footprints involved in the computation divided by the IO bandwidth of the device. When inputs must be fetched from other devices, e.g. in the case of model parallelism, this IO bandwidth refers to the communication bandwidth between the two devices. PALEO treats the speed and bandwidth of devices as parameters given to the model, so that users can configure them to reflect their specific configurations.
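A direct transcription of Eq. (1) under the stated conventions (FLOP counts divided by device speed, memory footprints divided by IO bandwidth) might look as follows. The argument names are ours, and in PALEO proper the device speed would additionally be scaled by the platform percent of peak of Section 3.3.

```python
def layer_time(input_bytes, output_bytes, flop_count,
               device_flops, io_bandwidth):
    """Eq. (1): T(u) = R(Pa(u)) + C(f, d) + W(f, d), in seconds."""
    read = input_bytes / io_bandwidth      # R(Pa(u)): fetch parent outputs
    compute = flop_count / device_flops    # C(f, d) = FLOPs(f) / speed(d)
    write = output_bytes / io_bandwidth    # W(f, d): store the results
    return read + compute + write
```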
Using this per-layer model, we next describe how to model the computation time of an entire network; we subsequently present FLOP counts for layer operations commonly used in modern DNNs in Section 3.1.3.

3.1.2 COMPUTATION TIME FOR NETWORKS
We first consider simple sequential structures where layers are constructed one after another, as in Figure 2b. The total execution time can be calculated as the sum of the execution times of all layers, T(N) = Σ_{i=1}^n T(u^(i)). While this calculation may seem trivial at first glance, it forms the foundation for modeling execution time for more complex architectures.

[Figure 2: (a) The execution time of a node in the computation graph consists of the time for fetching input, computing results, and writing results to memory. (b) An example of a sequential computation graph segment. (c) An example of a parallel computation graph segment across two devices.]

Parallel structures are not uncommon in DNNs; for example, the Inception model (Szegedy et al., 2015a) contains layers that can be evaluated simultaneously, and layers on different workers can run in parallel in model parallel setups (Dean et al., 2012). Figure 2c illustrates a parallel structure, where two convolutional layers (each followed by a pooling layer) are scheduled to be executed on two devices.

To model the computation time of parallel structures, we identify synchronization barriers before and after every parallel structure and introduce the notion of a supernode U = {G^(i)}_{i=1}^k, a set of disjoint subgraphs sandwiched by the synchronization barriers in the computation graph. When substituting the subgraphs with the supernode, the network is reduced to the sequential structure described above. For the supernode, the execution time T(U) is within the range [max_i T(G^(i)), Σ_i T(G^(i))], where the lower bound corresponds to perfect parallelization and the upper bound corresponds to sequential execution. Note that the execution time of a subgraph T(G^(i)) can be calculated recursively.

3.1.3 COMPUTATION MODELING FOR LAYER OPERATIONS
In modern DNNs, the convolutional layer is one of the most commonly used and computationally intensive types of layer. For this reason, there have been many heavily optimized implementations (Chetlur et al., 2014; Vasilache et al., 2015; Lavin, 2016).
Deriving plausible FLOP counts for other types of layers is a straightforward process, so in this section we consider the two leading implementations of convolutional operations: matrix multiplication and the Fast Fourier Transform.

Following the notation used by Chetlur et al. (2014), a 2D convolutional layer during forward propagation [3] takes an input feature map D ∈ R^{N×C×H×W} (a batch of N input feature maps with shape H×W and C channels) and a set of convolutional filters F ∈ R^{K×C×R×S} (K filters with shape R×S and C channels). It produces N×K feature maps, each of shape P×Q, which can be calculated from the shapes of the inputs and filters together with additional striding and padding parameters. The FLOP count for the convolution operation can be expressed as 2·K·C·R·S·N·P·Q. A commonly used implementation is to reduce convolution operations to matrix multiplications, which can be efficiently computed with well-optimized SGEMM routines on various platforms. Although these FLOP counts ignore auxiliary operations (e.g. indexing arithmetic in efficient implementations), they nonetheless provide a good estimate of FLOP counts for matrix multiplication implementations.

[3] Our arguments generalize to N-dimensional settings, and similar arguments apply for the backward pass.

Another implementation is based on the Fast Fourier Transform (Vasilache et al., 2015): both input feature maps and filters are transformed into the frequency domain, then element-wise multiplications are performed, followed by an inverse Fourier transform. This implementation introduces computation and memory overhead in the discrete Fourier transforms, but reduces the computational complexity to O(N·C·K·H·W + (NC + CK + NK)·H·W·log(HW)). Convolutional layers with large filters or a large problem size can benefit from FFT implementations. When counting FLOPs, it is not possible to get exact counts without knowing the underlying implementation details. In PALEO, we adopt the commonly used FFT complexity 5n log2(n) as the FLOP count for complex-valued transformations of size n (Cooley & Tukey, 1965). To account for the IO overhead caused by auxiliary memory, PALEO estimates the memory size required for complex-valued matrices in the frequency domain and incorporates it into the data reading and writing terms. For FFT-based implementations with tilings, PALEO estimates the number of tiles from the convolution specifications.

The choice of algorithm – matrix multiplication or FFT – is problem specific, as it depends on the filter size, strides, input size of the convolutional layers, and memory workspace. In order to derive estimates for user-specific DNNs comparable to real executions, it is important for PALEO to make decisions comparable to those of real-world systems. Two common approaches are employed in existing DNN software frameworks and libraries to choose between these algorithms: (i) using predefined heuristics based on offline benchmarks; (ii) autotuning, which empirically evaluates the available algorithms on the given specification. Since autotuning is tied to platform and software implementations, for maximum generality PALEO by default takes the first approach. In particular, PALEO uses heuristics from cuDNN to make algorithm choices while also accounting for user preferences.
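The two FLOP-count expressions above translate directly into code. In the sketch below, the GEMM count is exact per the formula in the text, while the FFT count only mirrors the asymptotic expression with the 5n log2(n) transform cost the text adopts; the constant factor on the element-wise term is our simplifying assumption, since it is implementation dependent.

```python
import math

def conv2d_flops_gemm(n, c, k, r, s, p, q):
    # Matrix-multiplication implementation: 2*K*C*R*S*N*P*Q.
    return 2 * k * c * r * s * n * p * q

def conv2d_flops_fft(n, c, h, w, k):
    # FFT implementation: transforms of the N*C inputs, C*K filters, and
    # N*K outputs at 5*m*log2(m) FLOPs each (m = H*W), plus element-wise
    # products in the frequency domain.
    m = h * w
    transforms = (n * c + c * k + n * k) * 5 * m * math.log2(m)
    pointwise = n * c * k * m
    return transforms + pointwise
```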
3.2 COMMUNICATION MODELING
We now describe our modeling of communication among multiple workers. Let |D| be the size of the data to be communicated between two workers, and define B as the bandwidth of the communication channel. Then the communication time can simply be written as T_comm = |D| / B. By using different bandwidth configurations, PALEO works for both scale-up setups (multiple GPUs on one machine) and scale-out setups (multiple machines in a cluster). Moreover, in data-parallel settings, an AllReduce operation is performed to synchronize model parameters across all workers after every backward pass. PALEO considers three communication schemes: OneToAll, Tree AllReduce, and Butterfly AllReduce. The communication time under these three schemes is described in Section 2.

3.3 PLATFORM PERCENT OF PEAK
Thus far, we have assumed that deep learning software platforms make perfect use of their underlying hardware: that the CPUs and GPUs are operating at "peak FLOPS", and that network and IO links are fully saturated. This has allowed our model to be platform independent.

However, this assumption is unreasonable in practice. For instance, achieving peak FLOPS is a difficult proposition, usually requiring customized libraries developed by organizations with intimate knowledge of the underlying hardware, e.g., Intel's MKL (int, 2009), ATLAS (Whaley & Petitet, 2005), and cuDNN. Even these specially tuned libraries may fall short of peak execution by as much as 40% (atl). Further, any computation done outside the scope of PALEO (e.g. job scheduling, data copying) will exacerbate the observed inefficiency in practice. Sometimes such inefficiencies are warranted from the perspective of ease of programmability or maintenance of the learning platforms.

Rather than trying to measure and capture every source of inefficiency in every learning framework, we take a small number of representative deep learning workloads which contain convolutions, pooling, dropout, and fully connected layers and run them for a short time on a single GPU. Given the observed total throughput and the estimated total throughput on this benchmark, we fit a scaling constant to estimate a platform percent of peak (PPP) parameter, which captures the average relative inefficiency of the platform compared to peak FLOPS. Highly specialized frameworks (e.g. cuDNN) will in general have a computational PPP that is close to 100%, while frameworks with higher overheads may have PPP constants closer to 50% or less.

We follow a similar benchmarking procedure to estimate PPP for the communication link for TensorFlow. For the FireCaffe experiments, we estimate the communication PPP based on the empirical results for communication reported in Table 4 of their paper.

4 EXPERIMENTS
We now present empirical results which illustrate that PALEO is robust to the choice of network architecture, hardware, communication schemes, and parallelization strategies.

4.1 LAYER-WISE EVALUATION
We first compare PALEO-estimated runtimes with actual runtimes measured from TensorFlow [4] (Abadi et al., 2015) executions of two popular CNN architectures: the one-tower variant of AlexNet (Krizhevsky, 2014b) and the 16-layer VGG network (Simonyan & Zisserman, 2014). PALEO uses cuDNN heuristics to choose algorithms, and the auto-tuning mechanism in TensorFlow is disabled. Experiments are run on an NVIDIA TITAN X GPU with a 4 GB workspace limit.

[4] TensorFlow 0.9 with cuDNN 4 backend.

For convolutional and fully connected layers, we evaluate forward computation, backward computation with respect to layer inputs, and backward computation with respect to filters separately (see Figure 4 in the appendix for the layer-by-layer comparisons).
Table 1 shows a comparison of the full forward pass and backward pass with all layers included. PALEO's per-layer estimates are quite close to the actual TensorFlow execution, with only one layer – 'fc6' – consistently being underestimated by PALEO. [5] In spite of this issue with 'fc6', our full-pass estimates are remarkably accurate.

[5] Examining the TensorFlow execution with the NVIDIA profiler revealed that TensorFlow spent two-thirds of its reported 'fc6' time transforming data layout between NHWC and NCHW when calling the underlying cuBLAS primitives.

Table 1: Full pass time of TensorFlow and PALEO estimation on AlexNet and VGG-16.

                               Forward pass (ms)   Backward pass (ms)
AlexNet   TensorFlow           44.00               155.10
          PALEO Estimation     45.96               118.44
VGG-16    TensorFlow           400.46              1117.48
          PALEO Estimation     435.46              1077.27

4.2 CASE STUDIES
We now revisit the questions posed at the beginning of the paper and demonstrate how PALEO can help in answering them. In this subsection we present three case studies. We extract experimental setups – including network architectures, hardware specifications, communication schemes, and parallelization strategies – from selected publications focusing on the scalability of CNNs. We then plug those configurations into PALEO and compare the simulated scalability results with the results reported in the original publications. Table 2 summarizes the configurations of PALEO in these experiments.

Table 2: PALEO configurations used in the case studies.

                     Case 1            Case 2             Case 3
Net                  NiN               Inception v3       AlexNet
Device               NVIDIA K20X       NVIDIA K20         NVIDIA K20
Workers              Up to 128         Up to 100          Up to 8
Bandwidth            70 Gbps           10 Gbps            6 GB/s
Communication        Tree AllReduce    Parameter Server   Various
Parallelization      Data Parallelism  Data Parallelism   Hybrid
Platform             Caffe             TensorFlow         cuda-convnet2
One Step Time [6]:
  PALEO Estimation   1918 ms           4269 ms            402 ms
  Reported Time [7]  2275 ms           –                  418 ms

[6] Total time of forward pass, backward pass, and parameter update for one mini-batch on one worker.
[7] Reported times for Cases 1 and 3 are derived approximately from information in the publications. For Case 2 no run time information is provided.

4.2.1 CASE 1: NIN WITH FIRECAFFE
FireCaffe (Iandola et al., 2016) adopts the Tree AllReduce communication scheme when training a NiN model (Lin et al., 2013) in data-parallel settings with up to 128 servers on the Titan supercomputer. They report a 38× speedup for NiN with batch size 1024 relative to single-GPU performance. Table 3 shows the results from PALEO compared with the results reported by FireCaffe.

Table 3: Comparison between PALEO estimation and FireCaffe for training NiN.

                        FireCaffe               PALEO Estimation
Workers   Batch size    Train Time   Speedup    Train Time   Speedup
1         256           5.8 days     1×         4.9 days     1×
32        256           11 hours     13×        7.6 hours    15.5×
32        1024          6 hours      23×        4.6 hours    25.3×
128       1024          3.6 hours    39×        2.3 hours    51.6×

4.2.2 CASE 2: INCEPTION WITH TENSORFLOW
Murray et al. (2016) reported their results on synchronously training the Inception model (Szegedy et al., 2015b) with TensorFlow, achieving a 56× speedup with 100 workers. They apply a weak scaling strategy with batch size 256 to keep GPUs saturated. Although Murray et al. (2016) leveraged a distributed parameter server rather than one of the three communication schemes considered in PALEO, the communication cost of Butterfly AllReduce can be viewed as a lower bound (Zhao & Canny, 2013). To account for the fact that they train with worker nodes each of which has 8 GPUs, we assume a linear speedup for GPUs on the same host.
Figure 3a shows a comparison between the reported speedups and the PALEO-estimated speedups. For absolute runtime, in one of their experiments the model completes 20 epochs of training after 100 hours when using 8 Tesla K40s and a batch size of 256; PALEO projects a 111-hour runtime under the same setting.

4.2.3 CASE 3: ALEXNET WITH HYBRID PARALLELISM
Krizhevsky (2014b) describes a hybrid model and data parallelism approach for training AlexNet using up to 8 GPUs with a weak scaling strategy. In his setup, each of two CPUs connects to 4 GPUs, and the communication bandwidth is penalized by 50% across the two groups, as mentioned in the paper. Table 4 shows the comparison between PALEO's projections and the original results, which are quite similar. Moreover, whereas Krizhevsky (2014b) does not quantify the speedup of hybrid parallelism relative to strict data parallelism, PALEO simulates training the entire network with only data parallelism (see the last two columns of Table 4) in order to estimate this speedup.

Table 4: Comparison between PALEO estimation and Krizhevsky (2014b) for training AlexNet.

          One Weird Trick          PALEO Estimation
          Hybrid parallelism       Hybrid parallelism       Data parallelism
Workers   Train Time (h)  Speedup  Train Time (h)  Speedup  Train Time (h)  Speedup
1         98.95           1×       96.31           1×       96.31           1×
2         50.24           1.95×    49.57           1.94×    55.90           1.72×
4         26.20           3.74×    25.42           3.79×    32.82           3.03×
8         16.68           6.25×    14.37           6.70×    23.65           5.40×

4.3 HYPOTHETICAL SETUPS
In this subsection, we use PALEO in two hypothetical setups to analyze the scalability of AlexNet and a GAN model under different communication schemes.

4.3.1 ALEXNET IN A CLOUD-BASED SETUP
In this study, we present an analysis of data-parallel training of AlexNet. We assume a modern cloud setup with a cluster of servers, each equipped with an NVIDIA K80 GPU connected to a 20 Gbps network. In contrast to the Inception model with its 23 million parameters, the one-tower variant of AlexNet has 50 million parameters and therefore roughly doubles the communication workload when training with data parallelism.

We show strong scaling for all three communication schemes in Figure 3c. Even when assuming a fairly large batch size of 2048, which is beneficial in distributed settings, we see very modest speedups. The OneToAll scheme achieves a max speedup of less than 2× using 4 workers, while the communication-efficient Butterfly AllReduce scheme achieves a max speedup of roughly 5× when using 32 workers. The weak scaling results, shown in Figure 3b, show drastically improved scaling, as we observe nearly linear speedups as we increase the number of workers. However, it is important to note that we are increasing the effective batch size as we increase the number of workers, and it is well known that training with large effective batch sizes can yield models with substandard accuracy (Breuel, 2015).

[Figure 3: Comparison of PALEO projected speedups for various networks under different scaling strategies and communication schemes. (a) Inception, weak scaling, compared against Murray et al. (2016). (b) AlexNet, weak scaling. (c) AlexNet, strong scaling. (d) GAN, strong scaling.]
4.3.2 GAN ARCHITECTURE
PALEO can be applied to architectures other than CNNs. We profile a generative adversarial network (GAN) inspired by Radford et al. (2015) for the LSUN dataset, with the same hardware assumptions as in the previous case study. Table 5 shows that the PALEO estimates are close to the empirical TensorFlow runtimes for both the discriminator and generator networks. Figure 3d plots the estimated speedups for training the model with a batch size of 256 on up to 128 workers under strong scaling. Without communication-intensive fully-connected layers, training this GAN architecture is more scalable than AlexNet, though PALEO still only predicts an 8× sub-linear speedup with 64 workers.

Table 5: Full pass time of the discriminator and generator in a GAN architecture.

                                   Forward pass (ms)   Backward pass (ms)
Discriminator   TensorFlow         30.19               77.39
                PALEO Estimation   27.55               79.25
Generator       TensorFlow         110.11              374.18
                PALEO Estimation   117.02              324.49

5 CONCLUSION
We introduced PALEO – an analytical performance model for exploring the space of scalable deep learning systems. By extracting the computational requirements carried by neural network architectures and mapping them to the design space of software, hardware, and communication strategies, PALEO can effectively and accurately model the expected scalability and performance of a putative deep learning system.
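To illustrate how the compute and communication terms trade off in these scaling analyses, the toy estimate below combines a per-worker compute time with one AllReduce using the Section 2 scheme costs. This is a deliberately simplified sketch of our own: it ignores per-layer scheduling, overlap, and PPP, so it will not reproduce the numbers in Figure 3, but it shows the shape of the strong-scaling curves.

```python
from math import log2

def estimated_speedup(single_gpu_step, grad_bytes, link_bandwidth,
                      workers, scheme="ButterflyAllReduce"):
    # Per-step time = compute (split across workers under strong scaling)
    # + one AllReduce over the gradients.
    t = grad_bytes / link_bandwidth
    comm = {"OneToAll": 2 * workers * t,
            "TreeAllReduce": 2 * log2(workers) * t,
            "ButterflyAllReduce": log2(workers) * t}[scheme]
    if workers == 1:
        comm = 0.0
    return single_gpu_step / (single_gpu_step / workers + comm)
```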
REFERENCES

Atlas timings. URL http://math-atlas.sourceforge.net/timing/.

Intel Math Kernel Library. Reference Manual. Intel Corporation, Santa Clara, USA, 2009. ISBN 630813-054US.

Martín Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org, 2015.

Thomas Breuel. The effects of hyperparameters on SGD training of neural networks. arXiv:1508.02788, 2015.

Tianqi Chen et al. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv:1512.01274, 2015.

Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN: Efficient primitives for deep learning. arXiv:1410.0759, 2014.

Soumith Chintala et al. convnet-benchmarks, 2016. URL https://github.com/soumith/convnet-benchmarks.

Adam Coates, Brody Huval, Tao Wang, David Wu, Bryan Catanzaro, and Andrew Ng. Deep learning with COTS HPC systems. In Proceedings of the 30th International Conference on Machine Learning, pp. 1337–1345, 2013.

James W Cooley and John W Tukey. An algorithm for the machine calculation of complex Fourier series. Mathematics of Computation, 19(90):297–301, 1965.

Jeffrey Dean et al. Large scale distributed deep networks. In NIPS, pp. 1223–1231, 2012.

Forrest N Iandola, Khalid Ashraf, Matthew W Moskewicz, and Kurt Keutzer. FireCaffe: near-linear acceleration of deep neural network training on compute clusters. In CVPR, 2016.

Norm Jouppi. Google supercharges machine learning tasks with TPU custom chip, 2016. URL https://cloudplatform.googleblog.com/2016/05/Google-supercharges-machine-learning-tasks-with-custom-chip.html.

Alex Krizhevsky. cuda-convnet2, 2014a. URL https://github.com/akrizhevsky/cuda-convnet2.

Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. arXiv:1404.5997, 2014b.

Andrew Lavin. Fast algorithms for convolutional neural networks. In CVPR, 2016.

Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.

Philipp Moritz, Robert Nishihara, Ion Stoica, and Michael I Jordan. SparkNet: Training deep networks in Spark. arXiv:1511.06051, 2015.

Derek Murray et al. Announcing TensorFlow 0.8 – now with distributed computing support!, 2016. URL https://research.googleblog.com/2016/04/announcing-tensorflow-08-now-with.html.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR, pp. 1–9, 2015a.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the Inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015b.

Nicolas Vasilache, Jeff Johnson, Michael Mathieu, Soumith Chintala, Serkan Piantino, and Yann LeCun. Fast convolutional nets with fbfft: A GPU performance evaluation. In ICLR, 2015.

R. Clint Whaley and Antoine Petitet. Minimizing development and maintenance costs in supporting persistently optimized BLAS. Software: Practice and Experience, 2005.

Huasha Zhao and John Canny. Butterfly mixing: Accelerating incremental-update algorithms on clusters. In SIAM Conf. on Data Mining. SIAM, 2013.

A
We include supplementary figures in the appendix due to space constraints.

[Figure 4: Layer-wise comparison between PALEO estimation and TensorFlow on (a) AlexNet (Krizhevsky, 2014b) and (b) VGG-16 (Simonyan & Zisserman, 2014): per-layer times for the forward pass, the backward pass w.r.t. inputs, and the backward pass w.r.t. filters.]
ryelgY5eg
Published as a conference paper at ICLR 2017

OPTIMAL BINARY AUTOENCODING WITH PAIRWISE CORRELATIONS

Akshay Balsubramani (Stanford University, abalsubr@stanford.edu)
* Most of the work was done as a PhD student at UC San Diego.

ABSTRACT
We formulate learning of a binary autoencoder as a biconvex optimization problem which learns from the pairwise correlations between encoded and decoded bits. Among all possible algorithms that use this information, ours finds the autoencoder that reconstructs its inputs with worst-case optimal loss. The optimal decoder is a single layer of artificial neurons, emerging entirely from the minimax loss minimization, and with weights learned by convex optimization. All this is reflected in competitive experimental results, demonstrating that binary autoencoding can be done efficiently by conveying information in pairwise correlations in an optimal fashion.

1 INTRODUCTION
Consider a general autoencoding scenario, in which an algorithm learns a compression scheme for independently, identically distributed (i.i.d.) V-dimensional bit vector data x̂^(1), ..., x̂^(n). For some encoding dimension H, the algorithm encodes each data example x̂^(i) = (x̂_1^(i), ..., x̂_V^(i))^⊤ into an H-dimensional representation e^(i), with H < V. It then decodes each e^(i) back into a reconstructed example x̃^(i) using some small amount of additional memory, and is evaluated on the quality of the reconstruction by the cross-entropy loss commonly used to compare bit vectors. A good autoencoder learns to compress the data into H bits so as to reconstruct it with low loss.

When the loss is squared reconstruction error and the goal is to compress data in R^V to R^H, this is often accomplished with principal component analysis (PCA), which projects the input data onto the top H eigenvectors of their covariance matrix (Bourlard & Kamp (1988); Baldi & Hornik (1989)). These eigenvectors in R^V constitute V·H real values of additional memory needed to decode the compressed data in R^H back into the reconstructions in R^V, which are linear combinations of the eigenvectors. Crucially, this total additional memory does not depend on the amount of data n, making it applicable when data are abundant.

This paper considers a similar problem, except using bit-vector data and the cross-entropy reconstruction loss. Since we are compressing samples of i.i.d. V-bit data into H-bit encodings, a natural approach is to remember the pairwise statistics: the V·H average correlations between pairs of bits in the encoding and decoding, constituting as much additional memory as the eigenvectors used in PCA. The decoder uses these, along with the H-bit encoded data, to produce V-bit reconstructions.

We show how to efficiently learn the autoencoder with the worst-case optimal loss in this scenario, without any further assumptions, parametric or otherwise. It has some striking properties.

The decoding function is identical in form to the one used in a standard binary autoencoder with one hidden layer (Bengio et al. (2013a)) and cross-entropy reconstruction loss. Specifically, each bit v of the decoding is the output of a logistic sigmoid artificial neuron of the encoded bits, with some learned weights w_v ∈ R^H. This form emerges as the uniquely optimal decoding function, and is not assumed as part of any explicit model.

We show that the worst-case optimal reconstruction loss suffered by the autoencoder is convex in these decoding weights W = {w_v}_{v∈[V]}, and in the encoded representations E.
Though it is not jointly convex in both, the situation still admits a natural and efficient optimization algorithm in which the loss is alternately minimized in E and W while the other is held fixed. The algorithm is practical and performs well empirically, learning incrementally from minibatches of data in a stochastic optimization setting.

1.1 NOTATION
The observed data and encodings can be written in matrix form, representing bits as ±1:

X̂ = ( x̂_1^(1) ⋯ x̂_1^(n) ; ⋮ ⋱ ⋮ ; x̂_V^(1) ⋯ x̂_V^(n) ) ∈ [−1, 1]^{V×n},   E = ( e_1^(1) ⋯ e_1^(n) ; ⋮ ⋱ ⋮ ; e_H^(1) ⋯ e_H^(n) ) ∈ [−1, 1]^{H×n}.   (1)

Here the encodings are allowed to be randomized, represented by values in [−1, 1] instead of just the two values {−1, 1}; e.g. e_i^(1) = 1/2 is +1 w.p. 3/4 and −1 w.p. 1/4. The data in X are also allowed to be randomized, which we will see essentially loses no generality (Appendix B). We write the columns of X̂, E as x̂^(i), e^(i) for i ∈ [n] (where [s] := {1, ..., s}), representing the data. The rows are written as x̂_v = (x_v^(1), ..., x_v^(n))^⊤ for v ∈ [V] and e_h = (e_h^(1), ..., e_h^(n))^⊤ for h ∈ [H].

We also consider the correlation of each bit h of the encoding with each decoded bit v over the data, i.e. b_{v,h} := (1/n) Σ_{i=1}^n x_v^(i) e_h^(i). This too can be written in matrix form as B := (1/n) X̂ E^⊤ ∈ R^{V×H}, whose rows and columns we respectively write as b_v = (b_{v,1}, ..., b_{v,H})^⊤ over v ∈ [V] and b_h = (b_{1,h}, ..., b_{V,h})^⊤ over h ∈ [H]; the indexing will be clear from context.

As alluded to earlier, the loss incurred on any example x^(i) is the cross-entropy between the example and its reconstruction x̃^(i), in expectation over the randomness in x^(i). Defining ℓ_±(x̃_v^(i)) = ln [2 / (1 ± x̃_v^(i))] (the partial losses to true labels ±1), the loss is written as:

ℓ(x^(i), x̃^(i)) := Σ_{v=1}^V [ ((1 + x_v^(i))/2) ℓ_+(x̃_v^(i)) + ((1 − x_v^(i))/2) ℓ_−(x̃_v^(i)) ].   (2)

In addition, define a potential well Ψ(m) := ln(1 + e^m) + ln(1 + e^{−m}) with derivative Ψ′(m) = (1 − e^{−m}) / (1 + e^{−m}). Univariate functions like these are applied componentwise to matrices in this paper. (A code transcription of these quantities appears at the end of this section.)

1.2 PROBLEM SETUP
With these definitions, the autoencoding problem we address can be precisely stated as two tasks, encoding and decoding, which share only the side information B. Our goal is to perform these steps so as to achieve the best possible guarantee on reconstruction loss, with no further assumptions. This can be written as a zero-sum game of an autoencoding algorithm seeking to minimize loss against an adversary, by playing encodings and reconstructions:

- Using X̂, the algorithm plays (randomized) encodings E, resulting in pairwise correlations B.
- Using E and B, the algorithm plays reconstructions X̃ = (x̃^(1), ..., x̃^(n)) ∈ [−1, 1]^{V×n}.
- Given X̃, E, B, the adversary plays X ∈ [−1, 1]^{V×n} to maximize the reconstruction loss (1/n) Σ_{i=1}^n ℓ(x^(i), x̃^(i)).

To incur low loss, the algorithm must use an E and B such that no adversary playing X can inflict higher loss. The algorithm never sees X, which represents the worst the data could be given the algorithm's incomplete memory of it (E, B) and the reconstructions (X̃).

We find the autoencoding algorithm's best strategy in two parts. First, we find the optimal decoding function of any encodings E given B, in Section 2. Then, we use the resulting optimal reconstruction function to outline the best encoding procedure, i.e. one that finds the E, B that lead to the best reconstruction, in Section 3.1. Combining these ideas yields an autoencoding algorithm in Section 3.2 (Algorithm 1), where its implementation and interpretation are specified. Further discussion and related work in Section 4 are followed by more extensions of the framework in Section 5. Experiments in Section 6 show extremely competitive results against equivalent fully-connected autoencoders trained with backpropagation.
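The basic quantities above translate directly into NumPy. The following is a minimal transcription of ours (assuming reconstructions strictly inside (−1, 1) so the logarithms stay finite), not the released implementation.

```python
import numpy as np

def pairwise_correlations(X, E):
    # B = (1/n) X E^T, with X in [-1,1]^{V x n} and E in [-1,1]^{H x n}.
    return X @ E.T / X.shape[1]

def psi(m):
    # Potential well: Psi(m) = ln(1 + e^m) + ln(1 + e^{-m}).
    return np.logaddexp(0.0, m) + np.logaddexp(0.0, -m)

def reconstruction_loss(x, x_tilde):
    # Eq. (2): cross-entropy with partial losses l_pm(y) = ln(2 / (1 pm y)).
    lp = np.log(2.0 / (1.0 + x_tilde))
    lm = np.log(2.0 / (1.0 - x_tilde))
    return np.sum((1.0 + x) / 2.0 * lp + (1.0 - x) / 2.0 * lm)
```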
Experimentsin Section 6 show extremely competitive results with equivalent fully-connected autoencoders trainedwith backpropagation.2 O PTIMALLY DECODING AN ENCODED REPRESENTATIONTo address the game of Section 1.2, we first assume EandBare fixed, and derive the optimaldecoding rule given this information. We show in this section that the form of this optimal decoder isprecisely the same as in a classical autoencoder: having learned a weight vector wv2RHfor eachv2[V], thevthbit of each reconstruction ~xiis expressed as a logistic function of a wv-weightedcombination of the Hencoded bits ei– a logistic artificial neuron with weights wv. The weightvectors are learned by convex optimization, despite the nonconvexity of the transfer functions.To develop this, we minimize the worst-case reconstruction error, where Xis constrained by our priorknowledge that B=1nXE>, i.e.1nExv=bv8v2[V]. This can be written as a function of E:LB(E) := min~x(1);:::;~x(n)2[1;1]Vmaxx(1);:::;x(n)2[1;1]V;8v2[V]:1nExv=bv1nnXi=1`(x(i);~x(i)) (3)We solve this minimax problem for the optimal reconstructions played by the minimizing player in(3), written as ~x(1);:::; ~x(n).Theorem 1. Define the bitwise slack function E(w;b) :=b>w+1nPni=1(w>e(i)), which isconvex in w. W.r.t. any bv, this has minimizing weights wv:=wv(E;B) := arg minw2RHE(w;bv).Then the minimax value of the game (3)isLB(E) =12VXv=1E(wv;bv). For any example i2[n],the minimax optimal reconstruction can be written for any bit vas~x(i)v:=1ew>ve(i)1+ew>ve(i).This tells us that the optimization problem of finding the minimax optimal reconstructions ~x(i)isextremely convenient in several respects. The learning problem decomposes over the Vbits in thedecoding, reducing to solving for a weight vector wv2RHfor each bitv, by optimizing each bitwiseslack function. Given the weights, the optimal reconstruction of any example ican be specified by alayer of logistic sigmoid artificial neurons of its encoded bits, with w>ve(i)as the bitwise logits.Hereafter, we write W2RVHas the matrix of decoding weights, with rows fwvgVv=1. In particular,the optimal decoding weights W(E;B)are the matrix with rows fwv(E;B)gVv=1.3 L EARNING AN AUTOENCODER3.1 F INDING AN ENCODED REPRESENTATIONHaving computed the optimal decoding function in the previous section given any EandB, wenow switch perspectives to the encoder, which seeks to compress the input data ^Xinto encodedrepresentations E(from which Bis easily calculated to pass to the decoder). We seek to find (E;B)to ensure the lowest worst-case reconstruction loss after decoding; recall that this is LB(E)from (3).Observe that1n^XE>=Bby definition, and that the encoder is given ^X. Therefore, by using Thm. 1and substituting bv=1nE^xv8v2[V],LB(E) =12nnXi=1VXv=1h^x(i)v(w>ve(i)) + ( w>ve(i))i:=L(W;E) (4)3Published as a conference paper at ICLR 2017So it is convenient to define the feature distortion1for anyv2[V]with respect to W, between anyexample xand its encoding e:Wv(e;x) :=xvw>ve+ (w>ve) (5)From the above discussion, the best Egiven any decoding W, written as E(W), solves theminimizationminE2[1;1]HnL(W;E) =12nnXi=1mine(i)2[1;1]HVXv=1Wv(e(i);^x(i))which immediately yields the following result.Proposition 2. Define the optimal encodings for decoding weights WasE(W) :=arg minE2[1;1]HnL(W;E). Then e(i)(W)can be computed separately for each example ^x(i)2[1;1]V, minimizing its total feature distortion over the decoded bits w.r.t. 
3 LEARNING AN AUTOENCODER

3.1 FINDING AN ENCODED REPRESENTATION
Having computed the optimal decoding function in the previous section given any E and B, we now switch perspectives to the encoder, which seeks to compress the input data X̂ into encoded representations E (from which B is easily calculated to pass to the decoder). We seek to find (E, B) ensuring the lowest worst-case reconstruction loss after decoding; recall that this is L_B(E) from (3). Observe that (1/n) X̂ E^⊤ = B by definition, and that the encoder is given X̂. Therefore, by using Thm. 1 and substituting b_v = (1/n) E x̂_v for all v ∈ [V]:

L_B(E) = (1/2n) Σ_{i=1}^n Σ_{v=1}^V [ −x̂_v^(i) (w_v^⊤ e^(i)) + Ψ(w_v^⊤ e^(i)) ] := L(W, E).   (4)

So it is convenient to define the feature distortion [1] for any v ∈ [V] with respect to W, between any example x and its encoding e:

Φ_v^W(e, x) := −x_v w_v^⊤ e + Ψ(w_v^⊤ e).   (5)

[1] Noting that Ψ(w_v^⊤ e) ≥ |w_v^⊤ e|, we see that Φ_v^W(e, x̂) ≥ |w_v^⊤ e| (1 − sgn(w_v^⊤ e) x̂_v). So the optimizer tends to change e so that w_v^⊤ e matches signs with x̂_v, motivating the name.

From the above discussion, the best E given any decoding W, written as E^*(W), solves the minimization

min_{E ∈ [−1,1]^{H×n}} L(W, E) = (1/2n) Σ_{i=1}^n min_{e^(i) ∈ [−1,1]^H} Σ_{v=1}^V Φ_v^W(e^(i), x̂^(i)),

which immediately yields the following result.

Proposition 2. Define the optimal encodings for decoding weights W as E^*(W) := argmin_{E ∈ [−1,1]^{H×n}} L(W, E). Then e_*^(i)(W) can be computed separately for each example x̂^(i) ∈ [−1,1]^V, minimizing its total feature distortion over the decoded bits w.r.t. W:

ENC(x̂^(i), W) := e_*^(i)(W) := argmin_{e ∈ [−1,1]^H} Σ_{v=1}^V Φ_v^W(e, x̂^(i)).   (6)

Observe that the encoding function ENC(x̂^(i), W) can be efficiently computed to any desired precision, since the feature distortion Φ_v^W(e, x̂^(i)) of each bit v is convex and Lipschitz in e; an L∞ error of ε can be reached in O(ε^{−2}) linear-time first-order optimization iterations. Note that the encodings need not be bits, and can be e.g. unconstrained ∈ R^H instead; the proof of Thm. 1 assumes no structure on them, and the optimization proceeds as above but without projecting into the hypercube.

3.2 AN AUTOENCODER LEARNING ALGORITHM
Our ultimate goal is to minimize the worst-case reconstruction loss. As we have seen in (3) and (6), it is convex in the encoding E and in the decoding parameters W, each of which can be fixed while minimizing with respect to the other. This suggests a learning algorithm that alternately performs two steps: finding encodings E that minimize L(W, E) as in (6) with a fixed W, and finding the decoding parameters W^*(E, B), as given in Algorithm 1.

Algorithm 1 Pairwise Correlation Autoencoder (PC-AE)
  Input: Size-n dataset X̂, number of epochs T
  Initialize W_0 (e.g. with each element i.i.d. N(0, 1))
  for t = 1 to T do
    Encode each example to ensure accurate reconstruction using weights W_{t−1}, and compute the associated pairwise bit correlations B_t:
      ∀i ∈ [n]: [e^(i)]_t = ENC(x̂^(i), W_{t−1});   B_t = (1/n) X̂ E_t^⊤
    Update the weight vectors [w_v]_t for each v ∈ [V] to minimize the slack function, using the encodings E_t:
      ∀v ∈ [V]: [w_v]_t = argmin_{w ∈ R^H} [ −[b_v]_t^⊤ w + (1/n) Σ_{i=1}^n Ψ(w^⊤ e_t^(i)) ]
  end for
  Output: Weights W_T

3.3 EFFICIENT IMPLEMENTATION
Our derivation of the encoding and decoding functions involves no model assumptions at all, only the minimax structure and the pairwise statistics that the algorithm is allowed to remember. Nevertheless, the (en/de)coders can be learned and implemented efficiently.

Decoding is a convex optimization in H dimensions, which can be done in parallel for each bit v ∈ [V]. This is relatively easy to solve in the parameter regime of primary interest when data are abundant, in which H < V ≪ n. Similarly, encoding is also a convex optimization problem in only H dimensions. If the data examples are instead sampled in minibatches of size n, they can be encoded in parallel, with a new minibatch being sampled to start each epoch t. The number of examples n (per batch) is essentially only limited by nH, the number of compressed representations that must fit in memory.

So far in this paper, we have stated our results in the transductive setting, in which all data are given together a priori, with no assumptions whatsoever made about the interdependences between the V features. However, PC-AE operates much more efficiently than this might suggest. Crucially, the encoding and decoding tasks both depend on n only through averages of functions of x^(i) or e^(i) over i ∈ [n], so they can both be solved by stochastic optimization methods that use first-order gradient information, like variants of stochastic gradient descent (SGD). We find it remarkable that the minimax optimal encoding and decoding can be efficiently learned by such methods, which do not scale computationally in n. Note that the result of each of these steps involves Ω(n) outputs (E and X̃), which are all coupled together in complex ways.

Furthermore, efficient first-order convex optimization methods for both the encoding and decoding steps manipulate intermediate gradient-related quantities with facile interpretations. For details, see Appendix A.2.
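Putting Proposition 2 and Algorithm 1 together, a compact NumPy rendering of the alternating loop might look as follows. This reuses learn_decoder_weights and psi_prime from the decoding sketch above and is again our illustration: the step sizes are arbitrary, and the naive per-example encoding loop stands in for the minibatched, parallel version the text describes.

```python
import numpy as np

def encode(x, W, lr=0.1, steps=500):
    """Eq. (6): minimize the total feature distortion over e in [-1,1]^H
    by projected gradient descent; x is a length-V example in [-1,1]."""
    e = np.zeros(W.shape[1])
    for _ in range(steps):
        grad = W.T @ (np.tanh(W @ e / 2.0) - x)  # sum_v (Psi'(w_v.e) - x_v) w_v
        e = np.clip(e - lr * grad, -1.0, 1.0)    # project onto the hypercube
    return e

def pc_ae(X, H, epochs=5, seed=0):
    """Algorithm 1 (PC-AE): alternate encoding and decoding phases."""
    rng = np.random.default_rng(seed)
    V, n = X.shape
    W = rng.standard_normal((V, H))              # W_0 with i.i.d. N(0,1) entries
    for _ in range(epochs):
        E = np.stack([encode(X[:, i], W) for i in range(n)], axis=1)
        B = X @ E.T / n                          # B_t = (1/n) X E_t^T
        W = learn_decoder_weights(E, B)          # slack-function minimization
    return W, E
```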
3.4 CONVERGENCE AND WEIGHT REGULARIZATION
As we noted previously, the objective function of the optimization is biconvex. This means that the alternating minimization algorithm we specify is an instance of alternating convex search, shown in that literature to converge under broad conditions (Gorski et al. (2007)). It is not guaranteed to converge to the global optimum, but each iteration monotonically decreases the objective function. In light of our introductory discussion, the properties and rate of such convergence would be interesting to compare to stochastic optimization algorithms for PCA, which converge efficiently under broad conditions (Balsubramani et al. (2013); Shamir (2016)).

The basic game used so far has assumed perfect knowledge of the pairwise correlations, leading to the equality constraints ∀v ∈ [V]: (1/n) E x_v = b_v. This makes sense in PC-AE, where the encoding phase of each epoch gives the exact B_t for the decoding phase. However, in other stochastic settings, as for denoising autoencoders (see Sec. 5.2), it may be necessary to relax this constraint. A relaxed constraint of ‖(1/n) E x_v − b_v‖_∞ ≤ ε exactly corresponds to an extra additive regularization term of ε‖w_v‖_1 on the corresponding weights in the convex optimization used to find W (Appendix D.1). Such regularization leads to provably better generalization (Bartlett (1998)) and is often practical to use, e.g. to encourage sparsity. But we do not use it for our PC-AE experiments in this paper.

4 DISCUSSION AND RELATED WORK
Our approach, PC-AE, is quite different from existing autoencoding work in several ways.

First and foremost, we posit no explicit decision rule, and avoid optimizing the highly non-convex decision surface traversed by traditional autoencoding algorithms that learn with backpropagation (Rumelhart et al. (1986)). The decoding function, given the encodings, is a single layer of artificial neurons only because of the minimax structure of the problem when minimizing worst-case loss. This differs from the reasoning typically used in neural network work (see Jordan (1995)), in which the loss is the negative log-likelihood (NLL) of the joint probability, which is assumed to follow a form specified by logistic artificial neurons and their weights. We instead interpret the loss in the usual direct way as the NLL of the predicted probability of the data given the visible bits, and avoid any assumptions on the decision rule (e.g. not even monotonicity in the score w_v^⊤ e^(i), or dependence on such a score).
This confers a clear theoretical advantage, allowing us to attain the strongestrobust loss guarantee among all possible autoencoders that use the correlations B.More importantly in practice, avoiding an explicit model class means that we do not have to optimizethe typically non-convex model, which has long been a central issue for backpropagation-basedlearning methods (e.g. Dauphin et al. (2014)). Prior work related in spirit has attempted to avoidthis through convex relaxations, including for multi-layer optimization under various structuralassumptions (Aslan et al. (2014); Zhang et al. (2016)), and when the number of hidden units is variedby the algorithm (Bengio et al. (2005); Bach (2014)).Our approach also isolates the benefit of higher nin dealing with overfitting, as the pairwisecorrelations Bcan be measured progressively more accurately as nincreases. In this respect, wefollow a line of research using such pairwise correlations to model arbitary higher-order structureamong visible units, rooted in early work on (restricted) Boltzmann Machines (Ackley et al. (1985);Smolensky (1986); Rumelhart & McClelland (1987); Freund & Haussler (1992)). More recently,theoretical algorithms have been developed with the perspective of learning from the correlationsbetween units in a network, under various assumptions on the activation function, architecture, andweights, for both deep (Arora et al. (2014)) and shallow networks (using tensor decompositions,e.g. Livni et al. (2014); Janzamin et al. (2015)). Our use of ensemble aggregation techniques (fromBalsubramani & Freund (2015a; 2016)) to study these problems is anticipated in spirit by prior workas well, as discussed at length by Bengio (2009) in the context of distributed representations.4.1 O PTIMALITY , OTHER ARCHITECTURES ,AND DEPTHWe have established that a single layer of logistic artificial neurons is an optimal decoder, givenonly indirect information about the data through pairwise correlations. This is not a claim thatautoencoders need only a single-layer architecture in the worst case. Sec. 3.1 establishes that the bestrepresentations Eare the solution to a convex optimization, with no artificial neurons involved incomputing them from the data. Unlike the decoding function, the optimal encoding function ENCcannot be written explicitly in terms of artificial neurons, and is incomparable to existing architectures(though it is analogous to PCA in prescribing an efficient operation that yields the encodings fromunlabeled data). Also, the encodings are only optimal given the pairwise correlations; trainingalgorithms like backpropagation, which communicate other knowledge of the data through derivativecomposition, can learn final decoding layers that outperform ours, as we see in experiments.In our framework so far, we explore using all the pairwise correlations between hidden and visiblebits to inform learning by constraining the adversary, resulting in a Lagrange parameter – a weight –for each constraint. These VH weights Wconstitute the parameters of the optimal decoding layer,describing a fully connected architecture. If just a select few of these correlations were used, onlythey would constrain the adversary in the minimax problem of Sec. 
Our central choices – to store only pairwise correlations and to minimize worst-case reconstruction loss – play a similar regularizing role to explicit model assumptions, and other autoencoding methods may achieve better performance on data for which these choices are too conservative, e.g. by making distributional assumptions on the data. From our perspective, other architectures with more layers – particularly highly successful ones like convolutional, recurrent, residual, and ladder networks (LeCun et al. (2015); He et al. (2015); Rasmus et al. (2015)) – lend the autoencoding algorithm more power by allowing it to measure more nuanced correlations using more parameters, which decreases the worst-case loss. Applying our approach with these would be interesting future work.

Extending this paper's convenient minimax characterization to deep representations with empirical success is a very interesting open problem. Prior work on stacking autoencoders/RBMs (Vincent et al. (2010)) and our learning algorithm PC-AE suggest that we could train a deep network in alternating forward and backward passes.
Formal statements of these general results are in Appendix E.

5.2 DENOISING AUTOENCODING

Our framework can be easily applied to learn a denoising autoencoder (DAE; Vincent et al. (2008; 2010)), which uses noise-corrupted data (call it $\dot{X}$) for training, and uncorrupted data for evaluation. From our perspective, this corresponds to leaving the learning of $W$ unchanged, but using corrupted data when learning $E$. Consequently, the minimization problem over encodings must be changed to account for the bias on $B$ introduced by the noise; so the algorithm plays given the noisy data, but to minimize loss against $X$. This is easiest to see for zero-mean noise, for which our algorithms are completely unchanged because $B$ does not change (in expectation) after the noise is added.

Another common scenario illustrating this technique is to mask a fraction $\zeta$ of the input bits uniformly at random (in our notation, changing $+1$s to $-1$s). This masking noise changes each pairwise correlation $b_{v,h}$ by an amount $\Delta_{v,h} := \frac{1}{n} \sum_{i=1}^{n} (\dot{x}_v^{(i)} - x_v^{(i)}) e_h^{(i)}$. Therefore, the optimand Eq. (4) must be modified by subtracting this factor $\Delta_{v,h}$. This $\Delta_{v,h}$ can be estimated (w.h.p.) given $\dot{x}_v, e_h, \zeta, x_v$. But even with just the noisy data and not $x_v$, we can estimate $\Delta_{v,h}$ w.h.p. by extrapolating the correlation of the bits of $\dot{x}_v$ that are left as $+1$ (a $1-\zeta$ fraction) with the corresponding values in $e_h$ (see Appendix C).

Table 1: Cross-entropy reconstruction losses for PC-AE and a vanilla single-layer autoencoder, with binary and unconstrained real-valued encodings, and significant results in bold. The PC-AE results are significantly better (see Appendix A) than the AE results.

                        PC-AE (bin.)   PC-AE (real)   AE (bin.)   AE (real)     PCA
MNIST, H = 32               51.9           53.8          65.2        64.3      86.6
MNIST, H = 100               9.2            9.9          26.8        25.0      52.7
Omniglot, H = 32            76.1           77.2          93.1        90.6     102.8
Omniglot, H = 100           12.1           13.2          46.6        45.4      63.6
Caltech-101, H = 32         54.5           54.9          97.5        87.6     118.7
Caltech-101, H = 100         7.1            7.1          64.3        45.4      75.2
notMNIST, H = 32           121.9          122.4         149.6       141.8     174.0
notMNIST, H = 100           62.2           63.0          99.6        92.1     115.5
Adult, H = 10                7.7            7.8           9.3         8.1      13.5
Adult, H = 20                0.65           0.64          2.5         1.5       7.9

6 EXPERIMENTS

In this section we compare our approach empirically to a standard autoencoder with one hidden layer (termed AE here) trained with backpropagation, and a thresholded PCA baseline. Our goal is simply to verify that our approach, though very different, is competitive in reconstruction performance.

The datasets we use are first normalized to $[0,1]$, and then binarized by sampling each pixel stochastically in proportion to its intensity, following prior work (Salakhutdinov & Murray (2008)). Changing between binary and real-valued encodings in PC-AE requires just a line of code, to project the encodings into $[-1,1]^H$ after convex optimization updates to compute ENC($\cdot$). We use Adagrad (Duchi et al. (2011)) for the convex minimizations of our algorithms; we observed that their performance is not very sensitive to the choice of optimization method, explained by our approach's convexity.

We compare to a basic AE with a single hidden layer, trained using the Adam method with default parameters (Kingma & Ba (2014)). Other models like variational autoencoders (Kingma & Welling (2013)) are not shown here because they do not aim to optimize reconstruction loss or are not comparably general autoencoding architectures. We also use a sign-thresholded PCA baseline (essentially a completely linear autoencoder, but with the output layer thresholded to be in $[-1,1]$); see Appendix A for more details.
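To make the training procedure concrete before the results, here is a toy numpy sketch of the alternating scheme as described above and in Appendix A.2 (our own illustration; the released TensorFlow implementation differs). It uses plain projected gradient steps with ad hoc step sizes in place of Adagrad, and $\Psi'(m) = \tanh(m/2)$, the logistic decoding associated with cross-entropy loss:

```python
import numpy as np

rng = np.random.default_rng(0)
V, H, n = 20, 8, 500                          # visible bits, hidden bits, examples

def binarize(p):
    """Stochastically binarize intensities in [0, 1] to {-1, +1} bits."""
    return np.where(rng.random(p.shape) < p, 1.0, -1.0)

X = binarize(rng.random((V, n)))              # toy stand-in for a real dataset

def psi_prime(M):
    # Derivative of the potential well for cross-entropy partial losses:
    # "hallucinated data" is the logistic decoding of the margins.
    return np.tanh(M / 2.0)

def encode(X, W, steps=200, lr=0.05):
    """Convex encoding step: projected gradient on E in [-1, 1]^(H x n) (Eq. (7))."""
    E = np.zeros((H, X.shape[1]))
    for _ in range(steps):
        R = X - psi_prime(W @ E)              # residuals: data minus hallucinated data
        E = np.clip(E + lr * (W.T @ R), -1.0, 1.0)
    return E

def decode_weights(B, E, steps=500, lr=0.1):
    """Convex decoding step: minimize the slack function over W (Eq. (8))."""
    W = np.zeros((V, H))
    for _ in range(steps):
        B_hal = psi_prime(W @ E) @ E.T / E.shape[1]   # hallucinated correlations
        W -= lr * (B_hal - B)                 # gradient (Eq. (9)): residual correlations
    return W

E = np.clip(rng.standard_normal((H, n)), -1.0, 1.0)
for it in range(5):                           # alternate the two convex problems
    B = X @ E.T / n                           # pairwise correlations b_{v,h}
    W = decode_weights(B, E)
    E = encode(X, W)
    gap = np.abs(X - psi_prime(W @ E)).mean()
    print(f"pass {it}: mean reconstruction gap {gap:.3f}")
```

Each pass recomputes $B$ from the current encodings and then re-solves the two convex problems; on real data, only the data matrix and the optimizer change.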
We vary the number of hidden units $H$ for all algorithms, and try both binary and unconstrained real-valued encodings where appropriate; the respective AEs use logistic sigmoid and ReLU transfer functions for the encoding neurons. The results are in Table 1.

The reconstruction performance of PC-AE indicates that it can encode information very well using pairwise correlations, compared to the directly learned AE and PCA approaches. Loss can become extremely low when $H$ is raised, giving $B$ the capacity to robustly encode almost all the information in the input bits $X$. The performance is roughly equal between binary hidden units and unconstrained ones, which is expected by our derivations.

We also try learning just the decoding layer of Sec. 2, on the encoded representation of the AE. This is motivated by the fact that Sec. 2 establishes our decoding method to be worst-case optimal given any $E$ and $B$. We find the results to be significantly worse than the AE alone in all datasets used (e.g. reconstruction loss of 171/133 on MNIST, and 211/134 on Omniglot, with 32/100 hidden units respectively). This reflects the AE's training backpropagating information about the data beyond pairwise correlations, through non-convex function compositions – however, this comes at the cost of being more difficult to optimize. The representations learned by the ENC function of PC-AE are quite different and capture much more of the pairwise correlation information, which is used by the decoding layer in a worst-case optimal fashion. We attempt to visually depict the differences between the representations in Fig. 3.

As discussed in Sec. 4, we do not claim that this paper's method will always achieve the best empirical reconstruction loss, even among single-layer autoencoders. We would like to make the encoding function quicker to compute, as well. (TensorFlow code is available at https://github.com/aikanor/pc-autoencoder.) But we believe this paper's results, especially when $H$ is high, illustrate the potential of using pairwise correlations for autoencoding as in our approach, learning to encode with alternating convex minimization and extremely strong worst-case robustness guarantees.

Figure 1: Top row: randomly chosen test images from Caltech-101 silhouettes. Middle and bottom rows: corresponding reconstructions of PC-AE and AE with $H = 32$ binary hidden units.

Figure 2: As Fig. 1, with $H = 100$ on Omniglot. Difference in quality is particularly noticeable in the 1st, 5th, 8th, and 11th columns.

ACKNOWLEDGMENTS

I am grateful to Jack Berkowitz, Sanjoy Dasgupta, and Yoav Freund for helpful discussions; Daniel Hsu and Akshay Krishnamurthy for instructive examples; and Gary Cottrell for an enjoyable chat. I acknowledge funding from the NIH (grant R01ES02500902).

REFERENCES

David H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science, 9(1):147–169, 1985.

Sanjeev Arora, Aditya Bhaskara, Rong Ge, and Tengyu Ma. Provable bounds for learning some deep representations. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pp. 584–592, 2014.

Özlem Aslan, Xinhua Zhang, and Dale Schuurmans. Convex deep learning via normalized kernels. In Advances in Neural Information Processing Systems, pp. 3275–3283, 2014.

Francis Bach. Breaking the curse of dimensionality with convex neural networks. arXiv preprint arXiv:1412.8690, 2014.

Pierre Baldi. Autoencoders, unsupervised learning, and deep architectures.
Unsupervised andTransfer Learning Challenges in Machine Learning, Volume 7 , pp. 43, 2012.Pierre Baldi and Kurt Hornik. Neural networks and principal component analysis: Learning fromexamples without local minima. Neural networks , 2(1):53–58, 1989.Akshay Balsubramani and Yoav Freund. Optimally combining classifiers using unlabeled data. InConference on Learning Theory (COLT) , 2015a.Akshay Balsubramani and Yoav Freund. Scalable semi-supervised classifier aggregation. In Advancesin Neural Information Processing Systems (NIPS) , 2015b.9Published as a conference paper at ICLR 2017Figure 3: Top three rows: the reconstructions of random test images from MNIST ( H= 12 ), as inFig. 2. PC-AE achieves loss 105:1here, and AE 111:2. Fourth and fifth rows: visualizations of allthe hidden units of PC-AE and AE, respectively. It is not possible to visualize the PC-AE encodingunits by the image that maximally activates them, as commonly done, because of the form of theENCfunction which depends on Wand lacks explicit encoding weights. So each hidden unit hisdepicted by the visible decoding of the encoded representation which has bit h"on" and all other bits"off." (If this were PCA with a linear decoding layer, this would simply represent hidden unit hby itscorresponding principal component vector, the decoding of the hthcanonical basis vector in RH.)Akshay Balsubramani and Yoav Freund. Optimal binary classifier aggregation for general losses. InAdvances in Neural Information Processing Systems (NIPS) , 2016. arXiv:1510.00452.Akshay Balsubramani, Sanjoy Dasgupta, and Yoav Freund. The fast convergence of incremental pca.InAdvances in Neural Information Processing Systems (NIPS) , pp. 3174–3182, 2013.Peter L Bartlett. The sample complexity of pattern classification with neural networks: the size of theweights is more important than the size of the network. IEEE Transactions on Information Theory ,44(2):525–536, 1998.Yoshua Bengio. Learning deep architectures for ai. Foundations and Trends in Machine Learning , 2(1):1–127, 2009.Yoshua Bengio, Nicolas L Roux, Pascal Vincent, Olivier Delalleau, and Patrice Marcotte. Convexneural networks. In Advances in neural information processing systems (NIPS) , pp. 123–130,2005.Yoshua Bengio, Aaron Courville, and Pierre Vincent. Representation learning: A review and newperspectives. Pattern Analysis and Machine Intelligence, IEEE Transactions on , 35(8):1798–1828,2013a.Yoshua Bengio, Li Yao, Guillaume Alain, and Pascal Vincent. Generalized denoising auto-encodersas generative models. In Advances in Neural Information Processing Systems (NIPS) , pp. 899–907,2013b.Hervé Bourlard and Yves Kamp. Auto-association by multilayer perceptrons and singular valuedecomposition. Biological cybernetics , 59(4-5):291–294, 1988.Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. Interna-tional Conference on Learning Representations (ICLR) , 2016. arXiv preprint arXiv:1509.00519.Nicolo Cesa-Bianchi and Gàbor Lugosi. Prediction, Learning, and Games . Cambridge UniversityPress, New York, NY , USA, 2006.Yann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and YoshuaBengio. Identifying and attacking the saddle point problem in high-dimensional non-convexoptimization. In Advances in neural information processing systems (NIPS) , pp. 2933–2941, 2014.10Published as a conference paper at ICLR 2017John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning andstochastic optimization. 
The Journal of Machine Learning Research , 12:2121–2159, 2011.Yoav Freund and David Haussler. Unsupervised learning of distributions on binary vectors usingtwo layer networks. In Advances in Neural Information Processing Systems (NIPS) , pp. 912–919,1992.Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair,Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in NeuralInformation Processing Systems (NIPS) , pp. 2672–2680, 2014.Jochen Gorski, Frank Pfeuffer, and Kathrin Klamroth. Biconvex sets and optimization with biconvexfunctions: a survey and extensions. Mathematical Methods of Operations Research , 66(3):373–407,2007.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for imagerecognition. arXiv preprint arXiv:1512.03385 , 2015.Geoffrey E Hinton, Peter Dayan, Brendan J Frey, and Radford M Neal. The" wake-sleep" algorithmfor unsupervised neural networks. Science , 268(5214):1158–1161, 1995.Majid Janzamin, Hanie Sedghi, and Anima Anandkumar. Beating the perils of non-convexity:Guaranteed training of neural networks using tensor methods. arXiv preprint arXiv:1506.08473 ,2015.Michael I Jordan. Why the logistic function? a tutorial discussion on probabilities and neuralnetworks, 1995.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprintarXiv:1412.6980 , 2014.Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprintarXiv:1312.6114 , 2013.Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature , 521(7553):436–444,2015.Yujia Li, Kevin Swersky, and Rich Zemel. Generative moment matching networks. In Proceedingsof the 32nd International Conference on Machine Learning (ICML-15) , pp. 1718–1727, 2015.Roi Livni, Shai Shalev-Shwartz, and Ohad Shamir. On the computational efficiency of training neuralnetworks. In Advances in Neural Information Processing Systems (NIPS) , pp. 855–863, 2014.Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. Adversarial autoencoders.arXiv preprint arXiv:1511.05644 , 2015.Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervisedlearning with ladder networks. In Advances in Neural Information Processing Systems , pp. 3546–3554, 2015.David E Rumelhart and James L McClelland. Parallel distributed processing, explorations inthe microstructure of cognition. vol. 1: Foundations. Computational Models of Cognition andPerception, Cambridge: MIT Press , 1987.David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations byback-propagating errors. Nature , 323(6088):533–536, 1986.Ruslan Salakhutdinov and Iain Murray. On the quantitative analysis of deep belief networks. InProceedings of the 25th International Conference on Machine Learning (ICML) , pp. 872–879,2008.Ohad Shamir. Convergence of stochastic gradient descent for pca. International Conference onMachine Learning (ICML) , 2016. arXiv preprint arXiv:1509.09002.11Published as a conference paper at ICLR 2017P Smolensky. Information processing in dynamical systems: foundations of harmony theory. InParallel distributed processing: explorations in the microstructure of cognition, vol. 1 , pp. 194–281.MIT Press, 1986.Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting andcomposing robust features with denoising autoencoders. In Proceedings of the 25th internationalconference on Machine learning (ICML) , pp. 1096–1103. 
ACM, 2008.Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol.Stacked denoising autoencoders: Learning useful representations in a deep network with a localdenoising criterion. The Journal of Machine Learning Research , 11:3371–3408, 2010.Yuchen Zhang, Percy Liang, and Martin J Wainwright. Convexified convolutional neural networks.arXiv preprint arXiv:1609.01000 , 2016.12Published as a conference paper at ICLR 2017A E XPERIMENTAL DETAILSIn addition to MNIST, we use the preprocessed version of the Omniglot dataset found in Burda et al.(2016), split 1 of the Caltech-101 Silhouettes dataset, the small notMNIST dataset, and the UCI Adult(a1a) dataset. The results reported are the mean of 10 Monte Carlo runs, and the PC-AE significanceresults use 95% Monte Carlo confidence intervals. Only notMNIST comes without a predefined split,so the displayed results use 10-fold cross-validation. Non-binarized versions of all datasets (grayscalepixels) resulted in nearly identical PC-AE performance (not shown); this is as expected from itsderivation using expected pairwise correlations, which with high probability are nearly invariantunder binarization (by e.g. Hoeffding bounds).We used minibatches of size 250. All standard autoencoders use the ’Xavier’ initialization and trainedfor 500 epochs or using early stopping on the test set. The “PCA" baseline was run on exactlythe same input data as the others; it finds decodings by mean-centering this input, finding the topHprincipal components with standard PCA, reconstructing the mean-centered input with thesecomponents, adding back the means, and finally thresholding the result to [1;1]V.We did not evaluate against other types of autoencoders which regularize (Kingma & Welling(2013)) or are otherwise not trained for direct reconstruction loss minimization. Also, not shown isthe performance of a standard convolutional autoencoder (32-bit representation, depth-3 64-64-32(en/de)coder) which performs better than the standard autoencoder, but is still outperformed byPC-AE on our image-based datasets. A deeper architecture could quite possibly achieve superiorperformance, but the greater number of channels through which information is propagated makes faircomparison with our flat fully-connected approach difficult. We consider extension of our PC-AEapproach to such architectures to be fascinating future work.A.1 F URTHER RESULTSOur bound on worst-case loss is invariably quite tight, as shown in Fig. 4. Similar results are foundon all datasets. This is consistent with our conclusions about the nature of the PC-AE representations– conveying almost exactly the information available in pairwise correlations.Figure 4: Actual reconstruction loss to real data (red) and slack function [objective function] value(dotted green), during Adagrad optimization to learn Wusing the optimal E;B. Monotonicity isexpected since this is a convex optimization. The objective function value theoretically upper-boundsthe actual loss, and practically tracks it nearly perfectly.A 2D visualization of MNIST is in Fig. 6, showing that even with just two hidden units there isenough information in pairwise correlations for PC-AE to learn a sensible embedding. We alsoinclude more pictures of our autoencoders’ reconstructions, and visualizations of the hidden unitswhenH= 100 in Fig. 5.13Published as a conference paper at ICLR 2017Figure 5: Visualizations of all the hidden units of PC-AE (left) and AE (right) from Omniglot forH= 100 , as in Fig. 
3.

Figure 6: AE (left) and PC-AE (right) visualizations of a random subset of MNIST test data, with $H = 2$ real-valued hidden units, and colors corresponding to class labels (legend at left). PC-AE's loss is 189 here, and that of AE is 179.

A.2 PC-AE INTERPRETATION AND IMPLEMENTATION DETAILS

Here we give some details that are useful for interpretation and implementation of the proposed method.

Figure 7: As Fig. 2, with $H = 100$ on Caltech-101 silhouettes.

Figure 8: As Fig. 2, with $H = 100$ on MNIST.

A.2.1 ENCODING

Proposition 2 defines the encoding function for any data example $x$ as the vector that minimizes the total feature distortion, summed over the bits in the decoding, rewritten here for convenience:
$$\mathrm{ENC}(x^{(i)}; W) := \arg\min_{e^{(i)} \in [-1,1]^H} \sum_{v=1}^{V} \left[ -x_v^{(i)} w_v^\top e^{(i)} + \Psi(w_v^\top e^{(i)}) \right] \qquad (7)$$

Doing this on multiple examples at once (in memory as a minibatch) can be much faster than on each example separately. We can now compute the gradient of the objective function w.r.t. each example $i \in [n]$, writing the gradient w.r.t. example $i$ as column $i$ of a matrix $G \in \mathbb{R}^{H \times n}$. $G$ can be calculated efficiently in a number of ways, for example as follows:

- Compute the matrix of hallucinated data $\bar{X} := \Psi'(WE) \in \mathbb{R}^{V \times n}$.
- Subtract $\bar{X}$ to compute the residuals $R := X - \bar{X} \in \mathbb{R}^{V \times n}$.
- Compute $G = -\frac{1}{n} W^\top R \in \mathbb{R}^{H \times n}$.

Optimization then proceeds with gradient descent using $G$, with the step size found using line search. Note that since the objective function is convex, the optimum $E^*$ leads to optimal residuals $R^*$ such that $G = -\frac{1}{n} W^\top R^* = \mathbf{0}_{H \times n}$, so each column of $R^*$ is in the null space of $W^\top$, which maps the residual vectors to the encoded space. We conclude that although the compression is not perfect (so the optimal residuals $R^* \ne \mathbf{0}_{V \times n}$ in general), each column of $R^*$ is orthogonal to the decoding weights at an equilibrium towards which the convex minimization problem of (7) is guaranteed to stably converge.

A.2.2 DECODING

The decoding step finds $W$ to ensure accurate decoding of the given encodings $E$ with correlations $B$, solving the convex minimization problem:
$$W^* = \arg\min_{W \in \mathbb{R}^{V \times H}} \sum_{v=1}^{V} \left[ -b_v^\top w_v + \frac{1}{n} \sum_{i=1}^{n} \Psi(w_v^\top e^{(i)}) \right] \qquad (8)$$

This can be minimized by first-order convex optimization. The gradient of (8) at $W$ is:
$$-B + \frac{1}{n} \left[ \Psi'(WE) \right] E^\top \qquad (9)$$

The second term can be understood as "hallucinated" pairwise correlations $\bar{B}$, between bits of the encoded examples $E$ and bits of their decodings under the current weights, $\bar{X} := \Psi'(WE)$. The hallucinated correlations can be written as $\bar{B} := \frac{1}{n} \bar{X} E^\top$. Therefore, (9) can be interpreted as the residual correlations $\bar{B} - B$. Since the slack function of (8) is convex, the optimum $W^*$ leads to hallucinated correlations $\bar{B} = B$, which is the limit reached by the optimization algorithm after many iterations.

Figure 9: As Fig. 2, with $H = 32$ on notMNIST.

B ALLOWING RANDOMIZED DATA AND ENCODINGS

In this paper, we represent the bit-vector data in a randomized way in $[-1,1]^V$. Randomizing the data only relaxes the constraints on the adversary in the game we play; so at worst we are working with an upper bound on worst-case loss, instead of the exact minimax loss itself, erring on the conservative side. Here we briefly justify the bound as being essentially tight, which we also see empirically in this paper's experiments.

In the formulation of Section 2, the only information we have about the data is its pairwise correlations with the encoding units. When the data are abundant ($n$ large), then w.h.p. these correlations are close to their expected values over the data's internal randomization, so representing them as continuous values w.h.p. results in the same $B$ and therefore the same solutions for $E, W$.
We are effectivelyallowing the adversary to play each bit’s conditional probability of firing, rather than the binaryrealization of that probability.This allows us to apply minimax theory and duality to considerably simplify the problem to a convexoptimization, when it would otherwise be nonconvex, and computationally hard (Baldi (2012)). Thefact that we are only using information about the data through its expected pairwise correlations withthe hidden units makes this possible.The above also applies to the encodings and their internal randomization, allowing us to learn binaryrandomized encodings by projecting to the convex set [1;1]H.C D ENOISING AUTOENCODER WITH MASKING NOISE : DETAILSThis section elaborates on the discussion of Sec. 5.2.Recall the correlation correction term v;hfrom Sec. 5.2:v;h=1nnXi=1( _x(i)vx(i)v)e(i)hHere, we express this in terms of the known quantities _xv;eh;, and not the unknown denoised dataxv.Consider that( _x(i)vx(i)v)e(i)h=1x(i)v=1( _x(i)vx(i)v)e(i)h+1x(i)v= +1( _x(i)vx(i)v)e(i)h16Published as a conference paper at ICLR 2017Now ifx(i)v=1, then _x(i)v=1, so( _x(i)vx(i)v)e(i)h= 0. Therefore the first term above is zero,and the expression can be simplified:( _x(i)vx(i)v)e(i)h=1x(i)v= +1( _x(i)vx(i)v)e(i)h=1x(i)v= +1^_x(i)v=1(2)e(i)h(10)Now on any example i, independent of the value of e(i)h, afraction of the bits where x(i)v= +1 areflipped to get _x(i)v. Therefore,1nXi=11x(i)v= +1^_x(i)v=1e(i)h11nXi=11x(i)v= +1^_x(i)v= +1e(i)hPutting it all together,v;h=1nnXi=1( _x(i)vx(i)v)e(i)h=2nnXi=11x(i)v= +1^_x(i)v=1e(i)h2n1nXi=11x(i)v= +1^_x(i)v= +1e(i)h=2n1nXi=11_x(i)v= +1e(i)hD P ROOFSProof of Theorem 1. Writing (~x(i)v) :=`(~x(i)v)`+(~x(i)v) = ln1+~x(i)v1~x(i)vfor convenience, wecan simplifyL, using the definition of the loss (2), and Lagrange duality for all VH constraintsinvolving B.This leads to the following chain of equalities, where for brevity the constraint sets are sometimesomitted when clear, and we write Xas shorthand for the data x(1);:::;x(n)and~Xanalogously forthe reconstructions.L=12min~x(1);:::;~x(n)2[1;1]Vmaxx(1);:::;x(n)2[1;1]V;8v2[V]:1nExv=bv1nnXi=1VXv=1h1 +x(i)v`+(~x(i)v) +1x(i)v`(~x(i)v)i=12min~XmaxXminW2RVH"1nnXi=1VXv=1`+(~x(i)v) +`(~x(i)v)x(i)v(~x(i)v)+VXv=1w>v1nExvbv#(a)=12minw1;:::;wV"VXv=1b>vwv+1nmin~XmaxXVXv=1"nXi=1`+(~x(i)v) +`(~x(i)v)x(i)v(~x(i)v)+w>vExv##=12minw1;:::;wV"VXv=1b>vwv+1nmin~XnXi=1VXv=1`+(~x(i)v) +`(~x(i)v) + maxx(i)2[1;1]Vx(i)vw>ve(i)(~x(i)v)#(11)where (a)uses the minimax theorem (Cesa-Bianchi & Lugosi (2006)), which can be applied as inlinear programming, because the objective function is linear in x(i)andwv. Note that the weightsare introduced merely as Lagrange parameters for the pairwise correlation constraints, not as modelassumptions.The strategy x(i)which solves the inner maximization of (11) is to simply match signs with w>ve(i)(~x(i)v)coordinate-wise for each v2[V]. Substituting this into the above,L=12minw1;:::;wV"VXv=1b>vwv+1nnXi=1min~x(i)2[1;1]VVXv=1`+(~x(i)v) +`(~x(i)v) +w>ve(i)(~x(i)v)#=12VXv=1minwv2RH"b>vwv+1nnXi=1min~x(i)v2[1;1]`+(~x(i)v) +`(~x(i)v) +w>ve(i)(~x(i)v)#17Published as a conference paper at ICLR 2017The absolute value breaks down into two cases, so the inner minimization’s objective can be simplified:`+(~x(i)v) +`(~x(i)v) +w>ve(i)(~x(i)v)=(2`+(~x(i)v) +w>ve(i)ifw>ve(i)(~x(i)v)2`(~x(i)v)w>ve(i)ifw>ve(i)<(~x(i)v)(12)Suppose ~x(i)vfalls in the first case of (12), so that w>ve(i)(~x(i)v). 
By definition of $\ell_+(\cdot)$, $2\ell_+(\tilde{x}_v^{(i)}) + w_v^\top e^{(i)}$ is decreasing in $\tilde{x}_v^{(i)}$, so it is minimized for the greatest $\tilde{x}_v^{(i)} \le 1$ s.t. $\Gamma(\tilde{x}_v^{(i)}) \le w_v^\top e^{(i)}$. This means $\Gamma(\tilde{x}_v^{(i)}) = w_v^\top e^{(i)}$, so the minimand (12) is $\ell_+(\tilde{x}_v^{(i)}) + \ell_-(\tilde{x}_v^{(i)})$, where
$$\tilde{x}_v^{(i)} = \frac{1 - e^{-w_v^\top e^{(i)}}}{1 + e^{-w_v^\top e^{(i)}}}.$$

A precisely analogous argument holds if $\tilde{x}_v^{(i)}$ falls in the second case of (12), where $w_v^\top e^{(i)} < \Gamma(\tilde{x}_v^{(i)})$. Putting the cases together, we have shown the form of the summand $\Psi$. We have also shown the dependence of $\tilde{x}_v^{(i)}$ on $w_v^{*\top} e^{(i)}$, where $w_v^*$ is the minimizer of the outer minimization of (11). This completes the proof.

D.1 $L_1$ CORRELATION CONSTRAINTS AND $L_1$ WEIGHT REGULARIZATION

Here we formalize the discussion of Sec. 3.4 with the following result.

Theorem 3.
$$\min_{\tilde{x}^{(1)},\dots,\tilde{x}^{(n)} \in [-1,1]^V} \; \max_{\substack{x^{(1)},\dots,x^{(n)} \in [-1,1]^V, \\ \forall v \in [V]: \, \left\| \frac{1}{n} E x_v - b_v \right\|_\infty \le \epsilon_v}} \; \frac{1}{n} \sum_{i=1}^{n} \ell(x^{(i)}, \tilde{x}^{(i)}) = \frac{1}{2} \sum_{v=1}^{V} \min_{w_v \in \mathbb{R}^H} \left[ -b_v^\top w_v + \frac{1}{n} \sum_{i=1}^{n} \Psi(w_v^\top e^{(i)}) + \epsilon_v \|w_v\|_1 \right]$$

For each $v, i$, the minimizing $\tilde{x}_v^{(i)}$ is a logistic function of the encoding $e^{(i)}$ with weights equal to the minimizing $w_v$ above, exactly as in Theorem 1.

Proof. The proof adapts the proof of Theorem 1, following the result on $L_1$ regularization in Balsubramani & Freund (2016) in a very straightforward way; we describe this here.

We break each $L_\infty$ constraint into two one-sided constraints for each $v$, i.e. $\frac{1}{n} E x_v - b_v \preceq \epsilon_v \mathbf{1}_H$ and $\frac{1}{n} E x_v - b_v \succeq -\epsilon_v \mathbf{1}_H$. These respectively give rise to two sets of Lagrange parameters $\lambda_v, \xi_v \ge \mathbf{0}_H$ for each $v$, replacing the unconstrained Lagrange parameters $w_v \in \mathbb{R}^H$.

The conditions for the minimax theorem apply here just as in the proof of Theorem 1, so that (11) is replaced by
$$\frac{1}{2} \min_{\substack{\lambda_1,\dots,\lambda_V \ge \mathbf{0}_H \\ \xi_1,\dots,\xi_V \ge \mathbf{0}_H}} \Bigg[ \sum_{v=1}^{V} \left( -b_v^\top (\lambda_v - \xi_v) + \epsilon_v \mathbf{1}^\top (\lambda_v + \xi_v) \right) \qquad (13)$$
$$+ \frac{1}{n} \min_{\tilde{X}} \sum_{i=1}^{n} \sum_{v=1}^{V} \left( \ell_+(\tilde{x}_v^{(i)}) + \ell_-(\tilde{x}_v^{(i)}) + \max_{x^{(i)}} \left[ x_v^{(i)} \left( (\lambda_v - \xi_v)^\top e^{(i)} - \Gamma(\tilde{x}_v^{(i)}) \right) \right] \right) \Bigg] \qquad (14)$$

Suppose for some $h \in [H]$ that $\lambda_{v,h} > 0$ and $\xi_{v,h} > 0$. Then subtracting $\min(\lambda_{v,h}, \xi_{v,h})$ from both does not affect the value $[\lambda_v - \xi_v]_h$, but always decreases $[\lambda_v + \xi_v]_h$, and therefore always decreases the objective function. Therefore, we can w.l.o.g. assume that $\forall h \in [H]: \min(\lambda_{v,h}, \xi_{v,h}) = 0$. Defining $w_v = \lambda_v - \xi_v$ (so that $\lambda_{v,h} = [w_{v,h}]_+$ and $\xi_{v,h} = [w_{v,h}]_-$ for all $h$), we see that the term $\epsilon_v \mathbf{1}^\top (\lambda_v + \xi_v)$ in (13) can be replaced by $\epsilon_v \|w_v\|_1$.

Proceeding as in the proof of Theorem 1 gives the result.

E GENERAL RECONSTRUCTION LOSSES

In this section we extend Theorem 1 to a larger class of reconstruction losses for binary autoencoding, of which cross-entropy loss is a special case. This uses techniques recently employed by Balsubramani & Freund (2016) for binary classification.

Since the data $X$ are still randomized binary, we first broaden the definition of (2), rewritten here:
$$\ell(x^{(i)}, \tilde{x}^{(i)}) := \sum_{v=1}^{V} \left[ \left( \frac{1 + x_v^{(i)}}{2} \right) \ell_+(\tilde{x}_v^{(i)}) + \left( \frac{1 - x_v^{(i)}}{2} \right) \ell_-(\tilde{x}_v^{(i)}) \right] \qquad (15)$$

We do this by redefining the partial losses $\ell_\pm(\tilde{x}_v^{(i)})$ to any functions satisfying the following monotonicity conditions.

Assumption 1. Over the interval $(-1,1)$, $\ell_+(\cdot)$ is decreasing and $\ell_-(\cdot)$ is increasing, and both are twice differentiable.

Assumption 1 is a very natural one and includes many non-convex losses (see Balsubramani & Freund (2016) for a more detailed discussion, much of which applies bitwise here). This and the additive decomposability of (15) over the $V$ bits are the only assumptions we make on the reconstruction loss $\ell(x^{(i)}, \tilde{x}^{(i)})$. The latter decomposability assumption is often natural when the loss is a log-likelihood, where it is tantamount to conditional independence of the visible bits given the hidden ones.

Given such a reconstruction loss, define the increasing function $\Gamma(y) := \ell_-(y) - \ell_+(y) : [-1,1] \mapsto \mathbb{R}$, for which there exists an increasing (pseudo)inverse $\Gamma^{-1}$.
Using this we broaden the definition of the potential function $\Psi$ in terms of $\ell$:
$$\Psi(m) := \begin{cases} -m + 2\ell_-(-1) & \text{if } m \le \Gamma(-1) \\ \ell_+(\Gamma^{-1}(m)) + \ell_-(\Gamma^{-1}(m)) & \text{if } m \in (\Gamma(-1), \Gamma(1)) \\ m + 2\ell_+(1) & \text{if } m \ge \Gamma(1) \end{cases}$$

Then we may state the following result, describing the optimal decoding function for a general reconstruction loss.

Theorem 4.
$$\min_{\tilde{x}^{(1)},\dots,\tilde{x}^{(n)} \in [-1,1]^V} \; \max_{\substack{x^{(1)},\dots,x^{(n)} \in [-1,1]^V, \\ \forall v \in [V]: \, \frac{1}{n} E x_v = b_v}} \; \frac{1}{n} \sum_{i=1}^{n} \ell(x^{(i)}, \tilde{x}^{(i)}) = \frac{1}{2} \sum_{v=1}^{V} \min_{w_v \in \mathbb{R}^H} \left[ -b_v^\top w_v + \frac{1}{n} \sum_{i=1}^{n} \Psi(w_v^\top e^{(i)}) \right]$$

For each $v \in [V], i \in [n]$, the minimizing $\tilde{x}_v^{(i)}$ is a sigmoid function of the encoding $e^{(i)}$ with weights equal to the minimizing $w_v$ above, as in Theorem 1. The sigmoid is defined as
$$\tilde{x}_v^{(i)} := \begin{cases} -1 & \text{if } w_v^\top e^{(i)} \le \Gamma(-1) \\ \Gamma^{-1}(w_v^\top e^{(i)}) & \text{if } w_v^\top e^{(i)} \in (\Gamma(-1), \Gamma(1)) \\ 1 & \text{if } w_v^\top e^{(i)} \ge \Gamma(1) \end{cases} \qquad (16)$$

The proof is nearly identical to that of the main theorem of Balsubramani & Freund (2016). That proof is essentially recapitulated here for each bit $v \in [V]$ due to the additive decomposability of the loss, through algebraic manipulations (and one application of the minimax theorem) identical to the proof of Theorem 1, but using the more general specifications of $\Gamma$ and $\Psi$ in this section. So we do not rewrite it in full here.

A notable special case of interest is the Hamming loss, for which $\ell_\pm(\tilde{x}_v^{(i)}) = \frac{1}{2}(1 \mp \tilde{x}_v^{(i)})$, where the reconstructions are allowed to be randomized binary values. In this case, we have $\Psi(m) = \max(|m|, 1)$, and the sigmoid used for each decoding neuron is the clipped linearity $\max(-1, \min(w_v^\top e^{(i)}, 1))$.

F ALTERNATE APPROACHES

We made some technical choices in the derivation of PC-AE, which prompt possible alternatives not explored here for a variety of reasons. Recounting these choices gives more insight into our framework.

The output reconstructions could have restricted pairwise correlations, i.e. $\frac{1}{n} \tilde{X} E^\top = B$. One option is to impose such restrictions instead of the existing constraints on $X$, leaving $X$ unrestricted. However, this is not in the spirit of this paper, because $B$ is our means of indirectly conveying information to the decoder about how $X$ is decoded.

Another option is to restrict both $\tilde{X}$ and $X$. This is possible and may be useful in propagating correlation information between layers of deeper architectures while learning, but its minimax solution does not have the conveniently clean structure of the PC-AE derivation.

In a similar vein, we could restrict $E$ during the encoding phase, using $B$ and $X$. As $B$ is changed only during this phase to better conform to the true data $X$, this tactic fixes $B$ during the optimization, which is not in the spirit of this paper's approach. It also performed significantly worse in our experiments.
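As a final illustration of Appendix C, a small simulation (our own; variable names ours, with $\zeta$ the mask rate as in Sec. 5.2) checks that the masking-noise correction $\Delta_{v,h}$ is recoverable from the corrupted bits alone:

```python
import numpy as np

rng = np.random.default_rng(1)
n, zeta = 200_000, 0.3                         # examples and mask rate

x = np.where(rng.random(n) < 0.6, 1.0, -1.0)   # clean bits x_v^(i)
e = np.where(rng.random(n) < 0.5, 1.0, -1.0)   # hidden bits e_h^(i)
flip = (x == 1.0) & (rng.random(n) < zeta)     # masking flips a zeta fraction of +1s
x_dot = np.where(flip, -1.0, x)                # corrupted bits

# Exact correction (needs the clean data):
delta_true = np.mean((x_dot - x) * e)
# Appendix C estimator (needs only corrupted data and the mask rate): extrapolate
# from the bits left as +1, a (1 - zeta) fraction of the original +1 bits.
delta_est = -(2.0 * zeta / (1.0 - zeta)) * np.mean((x_dot == 1.0) * e)
print(delta_true, delta_est)                   # agree up to O(1 / sqrt(n)) noise
```

The agreement holds because the mask is drawn independently of $e_h$, so the flipped $+1$ bits and the surviving $+1$ bits have the same correlation with $e_h$ in expectation.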
rJY0-Kcll
Published as a conference paper at ICLR 2017OPTIMIZATION AS A MODEL FORFEW-SHOT LEARNINGSachin Raviand Hugo LarochelleTwitter, Cambridge, USAfsachinr,hugog@twitter.comABSTRACTThough deep neural networks have shown great success in the large data domain,they generally perform poorly on few-shot learning tasks, where a classifier has toquickly generalize after seeing very few examples from each class. The generalbelief is that gradient-based optimization in high capacity classifiers requires manyiterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to trainanother learner neural network classifier in the few-shot regime. The parametriza-tion of our model allows it to learn appropriate parameter updates specifically forthe scenario where a set amount of updates will be made, while also learning ageneral initialization of the learner (classifier) network that allows for quick con-vergence of training. We demonstrate that this meta-learning model is competitivewith deep metric-learning techniques for few-shot learning.1 I NTRODUCTIONDeep learning has shown great success in a variety of tasks with large amounts of labeled data inimage classification (He et al., 2015), machine translation (Wu et al., 2016), and speech modeling(Oord et al., 2016). These achievements have relied on the fact that optimization of these deep,high-capacity models requires many iterative updates across many labeled examples. This type ofoptimization breaks down in the small data regime where we want to learn from very few labeledexamples. In this setting, rather than have one large dataset, we have a set of datasets, each with fewannotated examples per class. The motivation for this task lies not only in the fact that humans, evenchildren, can usually generalize after just one example of a given object, but also because modelsexcelling at this task would have many useful applications. Firstly, they would help alleviate datacollection as we would not require millions of labeled examples to attain reasonable performance.Furthermore, in many fields, data exhibits the characteristic of having many different classes but fewexamples per class. Models that are able to generalize from few examples would be able to capturethis type of data effectively.There seem to be two main reasons why gradient-based optimization fails in the face of few la-beled examples. Firstly, the variants of gradient-based optimization algorithms, such as momentum(Nesterov, 1983), Adagrad (Duchi et al., 2011), Adadelta (Zeiler, 2012), and ADAM (Kingma &Ba, 2014), weren’t designed specifically to perform well under the constraint of a set number ofupdates. Specifically when applied to non-convex optimization problems, with a reasonable choiceof hyperparameters these algorithms don’t have very strong guarantees of speed of convergence,beyond that they will eventually converge to a good solution after what could be many millions ofiterations. Secondly, for each separate dataset considered, the network would have to start from arandom initialization of its parameters, which considerably hurts its ability to converge to a goodsolution after a few updates. 
Transfer learning (Caruana, 1995; Bengio et al., 2012; Donahue et al., 2013) can be applied to alleviate this problem by fine-tuning a pre-trained network from another task which has more labelled data; however, it has been observed that the benefit of a pre-trained network greatly decreases as the task the network was trained on diverges from the target task (Yosinski et al., 2014). What is needed is a systematic way to learn a beneficial common initialization that would serve as a good point to start training for the set of datasets being considered. This would provide the same benefits as transfer learning, but with the guarantee that the initialization is an optimal starting point for fine-tuning.

(Work done as an intern at Twitter. Sachin is a PhD student at Princeton University and can be reached at sachinr@princeton.edu.)

Previous work has suggested one manner in which to acquire quick knowledge from few examples, through the idea of meta-learning (Thrun, 1998; Schmidhuber et al., 1997). Meta-learning suggests framing the learning problem at two levels. The first is quick acquisition of knowledge within each separate task presented. This process is guided by the second, which involves slower extraction of information learned across all the tasks.

We present a method here that addresses the weakness of neural networks trained with gradient-based optimization on the few-shot learning problem by framing the problem within a meta-learning setting. We propose an LSTM-based meta-learner optimizer that is trained to optimize a learner neural network classifier. The meta-learner captures both short-term knowledge within a task and long-term knowledge common among all the tasks. By using an objective that directly captures an optimization algorithm's ability to have good generalization performance given only a set number of updates, the meta-learner model is trained to converge a learner classifier to a good solution quickly on each task. Additionally, the formulation of our meta-learner model allows it to learn a task-common initialization for the learner classifier, which captures fundamental knowledge shared among all the tasks.

2 TASK DESCRIPTION

We first begin by detailing the meta-learning formulation we use. In the typical machine learning setting, we are interested in a dataset $D$ and usually split $D$ so that we optimize parameters on a training set $D_{train}$ and evaluate its generalization on the test set $D_{test}$. In meta-learning, however, we are dealing with meta-sets $\mathcal{D}$ containing multiple regular datasets, where each $D \in \mathcal{D}$ has a split of $D_{train}$ and $D_{test}$.

We consider the $k$-shot, $N$-class classification task, where for each dataset $D$, the training set consists of $k$ labelled examples for each of $N$ classes, meaning that $D_{train}$ consists of $kN$ examples, and $D_{test}$ has a set number of examples for evaluation. We note that previous work (Vinyals et al., 2016) has used the term episode to describe each dataset consisting of a training and test set.

In meta-learning, we thus have different meta-sets for meta-training, meta-validation, and meta-testing ($\mathcal{D}_{meta-train}$, $\mathcal{D}_{meta-validation}$, and $\mathcal{D}_{meta-test}$, respectively). On $\mathcal{D}_{meta-train}$, we are interested in training a learning procedure (the meta-learner) that can take as input one of its training sets $D_{train}$ and produce a classifier (the learner) that achieves high average classification performance on its corresponding test set $D_{test}$.
Using $\mathcal{D}_{meta-validation}$ we can perform hyper-parameter selection of the meta-learner and evaluate its generalization performance on $\mathcal{D}_{meta-test}$.

For this formulation to correspond to the few-shot learning setting, each training set in datasets $D \in \mathcal{D}$ will contain few labeled examples (we consider $k = 1$ or $k = 5$), that must be used to generalize to good performance on the corresponding test set. An example of this formulation is given in Figure 1.

Figure 1: Example of meta-learning setup. The top represents the meta-training set $\mathcal{D}_{meta-train}$, where inside each gray box is a separate dataset that consists of the training set $D_{train}$ (left side of dashed line) and the test set $D_{test}$ (right side of dashed line). In this illustration, we are considering the 1-shot, 5-class classification task where for each dataset, we have one example from each of 5 classes (each given a label 1-5) in the training set and 2 examples for evaluation in the test set. The meta-test set $\mathcal{D}_{meta-test}$ is defined in the same way, but with a different set of datasets that cover classes not present in any of the datasets in $\mathcal{D}_{meta-train}$ (similarly, we additionally have a meta-validation set that is used to determine hyper-parameters).

3 MODEL

We now move to the description of our proposed model for meta-learning.

3.1 MODEL DESCRIPTION

Consider a single dataset, or episode, $D \in \mathcal{D}_{meta-train}$. Suppose we have a learner neural net classifier with parameters $\theta$ that we want to train on $D_{train}$. The standard optimization algorithms used to train deep neural networks are some variant of gradient descent, which uses updates of the form
$$\theta_t = \theta_{t-1} - \alpha_t \nabla_{\theta_{t-1}} \mathcal{L}_t, \qquad (1)$$
where $\theta_{t-1}$ are the parameters of the learner after $t-1$ updates, $\alpha_t$ is the learning rate at time $t$, $\mathcal{L}_t$ is the loss optimized by the learner for its $t$-th update, $\nabla_{\theta_{t-1}} \mathcal{L}_t$ is the gradient of that loss with respect to parameters $\theta_{t-1}$, and $\theta_t$ is the updated parameters of the learner.

Our key observation that we leverage here is that this update resembles the update for the cell state in an LSTM (Hochreiter & Schmidhuber, 1997)
$$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, \qquad (2)$$
if $f_t = 1$, $c_{t-1} = \theta_{t-1}$, $i_t = \alpha_t$, and $\tilde{c}_t = -\nabla_{\theta_{t-1}} \mathcal{L}_t$.

Thus, we propose training a meta-learner LSTM to learn an update rule for training a neural network. We set the cell state of the LSTM to be the parameters of the learner, or $c_t = \theta_t$, and the candidate cell state $\tilde{c}_t = -\nabla_{\theta_{t-1}} \mathcal{L}_t$, given how valuable information about the gradient is for optimization. We define parametric forms for $i_t$ and $f_t$ so that the meta-learner can determine optimal values through the course of the updates.

Let us start with $i_t$, which corresponds to the learning rate for the updates. We let
$$i_t = \sigma\left( W_I \cdot \left[ \nabla_{\theta_{t-1}} \mathcal{L}_t, \mathcal{L}_t, \theta_{t-1}, i_{t-1} \right] + b_I \right),$$
meaning that the learning rate is a function of the current parameter value $\theta_{t-1}$, the current gradient $\nabla_{\theta_{t-1}} \mathcal{L}_t$, the current loss $\mathcal{L}_t$, and the previous learning rate $i_{t-1}$. With this information, the meta-learner should be able to finely control the learning rate so as to train the learner quickly while avoiding divergence.

As for $f_t$, it seems possible that the optimal choice isn't the constant 1. Intuitively, what would justify shrinking the parameters of the learner and forgetting part of its previous value would be if the learner is currently in a bad local optima and needs a large change to escape. This would correspond to a situation where the loss is high but the gradient is close to zero. Thus, one proposal for the forget gate is to have it be a function of that information, as well as the previous value of the forget gate:
$$f_t = \sigma\left( W_F \cdot \left[ \nabla_{\theta_{t-1}} \mathcal{L}_t, \mathcal{L}_t, \theta_{t-1}, f_{t-1} \right] + b_F \right).$$

Additionally, notice that we can also learn the initial value of the cell state $c_0$ for the LSTM, treating it as a parameter of the meta-learner. This corresponds to the initial weights of the classifier (that the meta-learner is training). Learning this initial value lets the meta-learner determine the optimal initial weights of the learner so that training begins from a beneficial starting point that allows optimization to proceed rapidly. Lastly, note that though the meta-learner's update rule matches the cell state update of the LSTM, the meta-learner also bears similarity to the GRU (Cho et al., 2014) hidden state update, with the exception that the forget and input gates aren't tied to sum to one.
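To fix ideas, here is a schematic numpy rendering of this cell (our own sketch, not the released code): one shared set of gate weights applied coordinate-wise (anticipating Sec. 3.2), $c_0$ held as a trainable initialization, and the update $c_t = f_t \odot c_{t-1} + i_t \odot (-\nabla_{\theta_{t-1}} \mathcal{L}_t)$. The gates here are left untrained and the preprocessing of Sec. 3.2 is omitted, so this only illustrates the data flow:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class MetaLearnerCell:
    """Per-coordinate cell with c_t playing the role of the learner's parameters
    theta_t and candidate state c~_t = -grad, so i_t acts as a learned,
    state-dependent learning rate and f_t as a learned weight decay."""
    def __init__(self, n_params, rng):
        # Gate inputs per coordinate: [gradient, loss, theta_prev, previous gate].
        self.W_I = 0.01 * rng.standard_normal(4); self.b_I = -4.0  # i_t starts small
        self.W_F = 0.01 * rng.standard_normal(4); self.b_F = 4.0   # f_t starts near 1
        self.c = 0.1 * rng.standard_normal(n_params)               # c_0 (trainable)
        self.i = np.zeros(n_params)
        self.f = np.ones(n_params)

    def step(self, grad, loss):
        loss_vec = np.full_like(grad, loss)
        self.i = sigmoid(self.W_I @ np.stack([grad, loss_vec, self.c, self.i]) + self.b_I)
        self.f = sigmoid(self.W_F @ np.stack([grad, loss_vec, self.c, self.f]) + self.b_F)
        self.c = self.f * self.c + self.i * (-grad)                # Equation (2)
        return self.c

# Toy learner: quadratic loss L(theta) = 0.5 * ||theta - target||^2.
rng = np.random.default_rng(0)
cell = MetaLearnerCell(n_params=5, rng=rng)
target = np.linspace(-1.0, 1.0, 5)
theta = cell.c
for t in range(25):
    grad = theta - target
    theta = cell.step(grad, 0.5 * float(np.sum(grad ** 2)))
print(np.round(theta - target, 3))   # the gap to the target shrinks as i_t admits updates
```

With this bias initialization (cf. Sec. 3.3.2), the cell starts out close to ordinary gradient descent with a small learning rate; meta-training the gate weights and $c_0$ through the objective of Sec. 3.3 is what turns it into a learned optimizer.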
3.2 PARAMETER SHARING & PREPROCESSING

Because we want our meta-learner to produce updates for deep neural networks, which consist of tens of thousands of parameters, to prevent an explosion of meta-learner parameters we need to employ some sort of parameter sharing. Thus as in Andrychowicz et al. (2016), we share parameters across the coordinates of the learner gradient. This means each coordinate has its own hidden and cell state values but the LSTM parameters are the same across all coordinates. This allows us to use a compact LSTM model and additionally has the nice property that the same update rule is used for each coordinate, but one that is dependent on the respective history of each coordinate during optimization. We can easily implement parameter sharing by having the input be a batch of gradient coordinates and loss inputs $(\nabla_{\theta_{t,i}} \mathcal{L}_t, \mathcal{L}_t)$ for each dimension $i$.

Because the different coordinates of the gradients and the losses can be of very different magnitudes, we need to be careful in normalizing the values so that the meta-learner is able to use them properly during training. Thus, we also found that the preprocessing method of Andrychowicz et al. (2016) worked well when applied to both the dimensions of the gradients and the losses at each time step:
$$x \rightarrow \begin{cases} \left( \frac{\log(|x|)}{p}, \; \mathrm{sgn}(x) \right) & \text{if } |x| \ge e^{-p} \\ \left( -1, \; e^{p} x \right) & \text{otherwise} \end{cases}$$

This preprocessing adjusts the scaling of gradients and losses, while also separating the information about their magnitude and their sign (the latter being mostly useful for gradients). We found that the suggested value of $p = 10$ in the above formula worked well in our experiments.

3.3 TRAINING

The question now is how do we train the LSTM meta-learner model to be effective at few-shot learning tasks? As observed in Vinyals et al. (2016), in order to perform well at this task, it is key to have training conditions match those of test time. During evaluation of the meta-learning, for each dataset (episode), $D = (D_{train}, D_{test}) \in \mathcal{D}_{meta-test}$, a good meta-learner model will, given a series of learner gradients and losses on the training set $D_{train}$, suggest a series of updates for the classifier that pushes it towards good performance on the test set $D_{test}$.

Thus to match test time conditions, when considering each dataset $D \in \mathcal{D}_{meta-train}$, the training objective we use is the loss $\mathcal{L}_{test}$ of the produced classifier on $D$'s test set $D_{test}$. While iterating over the examples in $D$'s training set $D_{train}$, at each time step $t$ the LSTM meta-learner receives $(\nabla_{\theta_{t-1}} \mathcal{L}_t, \mathcal{L}_t)$ from the learner (the classifier) and proposes the new set of parameters $\theta_t$.
Theprocess repeats for Tsteps, after which the classifier and its final parameters are evaluated on thetest set to produce the loss that is then used to train the meta-learner. The training algorithm isdescribed in Algorithm 1 and the corresponding computational graph is shown in Figure 2.3.3.1 G RADIENT INDEPENDENCE ASSUMPTIONNotice that our formulation would imply that the losses Ltand gradientsrt1Ltof the learner aredependent on the parameters of the meta-learner. Gradients on the meta-learner’s parameters shouldnormally take this dependency into account. However, as discussed by Andrychowicz et al. (2016),this complicates the computation of the meta-learner’s gradients. Thus, following Andrychowiczet al. (2016), we make the simplifying assumption that these contributions to the gradients aren’timportant and can be ignored, which allows us to avoid taking second derivatives, a considerablyexpensive operation. We were still able to train the meta-learner effectively in spite of this simplify-ing assumption.4Published as a conference paper at ICLR 2017Figure 2: Computational graph for the forward pass of the meta-learner. The dashed line dividesexamples from the training set Dtrain and test setDtest. Each (Xi;Yi)is theithbatch from thetraining set whereas (X;Y)is all the elements from the test set. The dashed arrows indicate that wedo not back-propagate through that step when training the meta-learner. We refer to the learner asM, whereM(X;)is the output of learner Musing parameters for inputs X. We also usertasa shorthand forrt1Lt.3.3.2 I NITIALIZATION OF META-LEARNER LSTMWhen training LSTMs, it is advised to initialize the LSTM with small random weights and to set theforget gate bias to a large value so that the forget gate is initialized to be close to 1, thus enablinggradient flow (Zaremba, 2015). In addition to the forget gate bias setting, we found that we neededto initialize the input gate bias to be small so that the input gate value (and thus the learning rate)used by the meta-learner LSTM starts out being small. With this combined initialization, the meta-learner starts close to normal gradient descent with a small learning rate, which helps initial stabilityof training.3.4 B ATCH NORMALIZATIONBatch Normalization (Ioffe & Szegedy, 2015) is a recently proposed method to stabilize and thusspeed up learning of deep neural networks by reducing internal covariate shift within the learner’shidden layers. This reduction is achieved by normalizing each layer’s pre-activation, by subtractingby the mean and dividing by the standard deviation. During training, the mean and standard devi-ation are estimated using the current batch being trained on, whereas during evaluation a runningaverage of both statistics calculated on the training set is used. We need to be careful with batchnormalization for the learner network in the meta-learning setting, because we do not want to collectmean and standard deviation statistics during meta-testing in a way that allows information to leakbetween different datasets (episodes), being considered. One easy way to prevent this issue is to notcollect statistics at all during the meta-testing phase, but just use our running averages from meta-training. This, however, has a bad impact on performance, because we have changed meta-trainingand meta-testing conditions, causing the meta-learner to learn a method of optimization that relieson batch statistics which it now does not have at meta-testing time. 
In order to keep the two phases as similar as possible, we found that a better strategy was to collect statistics for each dataset $D \in \mathcal{D}$ during $\mathcal{D}_{meta-test}$, but then erase the running statistics when we consider the next dataset. Thus, during meta-training, we use batch statistics for both the training and testing set whereas during meta-testing, we use batch statistics for the training set (and to compute our running averages) but then use the running averages during testing. This does not cause any information to leak between different datasets, but also allows the meta-learner to be trained on conditions that are matched between training and testing. Lastly, because we are doing very few training steps, we computed the running averages so that higher preference is given to the later values.

Algorithm 1 Train Meta-Learner
Input: Meta-training set $\mathcal{D}_{meta-train}$, Learner $M$ with parameters $\theta$, Meta-Learner $R$ with parameters $\Theta$.
1: $\Theta_0 \leftarrow$ random initialization
2:
3: for $d = 1, n$ do
4:    $D_{train}, D_{test} \leftarrow$ random dataset from $\mathcal{D}_{meta-train}$
5:    $\theta_0 \leftarrow c_0$    ▷ Initialize learner parameters
6:
7:    for $t = 1, T$ do
8:       $X_t, Y_t \leftarrow$ random batch from $D_{train}$
9:       $\mathcal{L}_t \leftarrow \mathcal{L}(M(X_t; \theta_{t-1}), Y_t)$    ▷ Get loss of learner on train batch
10:      $c_t \leftarrow R((\nabla_{\theta_{t-1}} \mathcal{L}_t, \mathcal{L}_t); \Theta_{d-1})$    ▷ Get output of meta-learner using Equation (2)
11:      $\theta_t \leftarrow c_t$    ▷ Update learner parameters
12:   end for
13:
14:   $X, Y \leftarrow D_{test}$
15:   $\mathcal{L}_{test} \leftarrow \mathcal{L}(M(X; \theta_T), Y)$    ▷ Get loss of learner on test batch
16:   Update $\Theta_d$ using $\nabla_{\Theta_{d-1}} \mathcal{L}_{test}$    ▷ Update meta-learner parameters
17:
18: end for

4 RELATED WORK

While this work falls within the broad literature of transfer learning in general, we focus here on positioning it relative to previous work on meta-learning and few-shot learning.

4.1 META-LEARNING

Meta-learning has a long history, but has grown to prominence recently as many have advocated for it as a key to achieving human-level intelligence in the future (Lake et al., 2016). The ability to learn at two levels (learning within each task presented, while accumulating knowledge about the similarities and differences between tasks) is seen as being crucial to improving AI. Previous work has used a variety of techniques in the meta-learning setting.

Schmidhuber (1992; 1993) explored using networks that learn how to modify their own weights over a number of computation steps on the input. The updating of the weights is defined in a parametric form that allows the prediction and weight-change process to be differentiable end-to-end. The work of Bengio et al. (1990; 1995) and Bengio (1993) considered learning update rules for neural networks that are biologically plausible. This property is enforced by allowing the parametric form of the update to only have as input local information at each hidden unit to determine the weight change. Different optimization methods, such as genetic programming or simulated annealing, are used to train the learning rule.

In Santoro et al. (2016), a memory-augmented neural network is trained to learn how to store and retrieve memories to use for each classification task. The work of Andrychowicz et al. (2016) uses an LSTM to train a neural network; however, they are interested in learning a general optimization algorithm to train neural networks for large-scale classification, whereas we are interested in the few-shot learning problem. This work also builds upon Hochreiter et al. (2001) and Bosc, both of which used LSTMs to train multi-layer perceptrons to learn on binary classification and time-series prediction tasks. Another related method is the work of Bertinetto et al.
(2016), who train a meta-learner to map a training example to the weights of a neural network that is then used to classify future examples from this class; however, unlike our method the classifier network is directly produced rather than being fine-tuned after multiple training steps. Our work also bears similarity to Maclaurin et al. (2015), who tune the hyperparameters of gradient descent with momentum by backpropagating through the chain of gradient steps to optimize the validation performance.

4.2 FEW-SHOT LEARNING

The best performing methods for few-shot learning have been mainly metric learning methods. Deep siamese networks (Koch, 2015) train a convolutional network to embed examples so that items in the same class are close while items in different classes are far away, according to some distance metric. Matching networks (Vinyals et al., 2016) refine this idea so that training and testing conditions match, by defining a differentiable nearest neighbor loss involving the cosine similarities of embeddings produced by a convolutional network.

5 EVALUATION

In this section, we describe the results of experiments, examining the properties of our model and comparing our method's performance against different approaches. Following Vinyals et al. (2016), we consider the $k$-shot, $N$-class classification setting where a meta-learner trains on many related but small training sets of $k$ examples for each of $N$ classes. We first split the list of all classes in the data into disjoint sets and assign them to each meta-set of meta-training, meta-validation, and meta-testing. To generate each instance of a $k$-shot, $N$-class task dataset $D = (D_{train}, D_{test}) \in \mathcal{D}$, we do the following: we first sample $N$ classes from the list of classes corresponding to the meta-set we consider. We then sample $k$ examples from each of those classes. These $k$ examples together compose the training set $D_{train}$. Then, an additional fixed amount of the rest of the examples are sampled to yield a test set $D_{test}$. We generally have 15 examples per class in the test sets. When training the meta-learner, we iterate by sampling these datasets (episodes) repeatedly. For meta-validation and meta-testing, however, we produce a fixed number of these datasets to evaluate each method. We produce enough datasets to ensure that the confidence interval of the mean accuracy is small.

For the learner, we use a simple CNN containing 4 convolutional layers, each of which is a 3×3 convolution with 32 filters, followed by batch normalization, a ReLU non-linearity, and lastly a 2×2 max-pooling. The network then has a final linear layer followed by a softmax for the number of classes being considered. The loss function $\mathcal{L}$ is the average negative log-probability assigned by the learner to the correct class. For the meta-learner, we use a 2-layer LSTM, where the first layer is a normal LSTM and the second layer is our modified LSTM meta-learner. The gradients and losses are preprocessed and fed into the first layer LSTM, and the regular gradient coordinates are also used by the second layer LSTM to implement the state update rule shown in (1). At each time step, the learner's loss and gradient is computed on a batch consisting of the entire training set $D_{train}$, because we consider training sets with only a total of 5 or 25 examples. We train our LSTM with ADAM using a learning rate of 0.001 and with gradient clipping using a value of 0.25.

5.1 EXPERIMENT RESULTS

The Mini-ImageNet dataset was proposed by Vinyals et al.
(2016) as a benchmark offering the challenges of the complexity of ImageNet images, without requiring the resources and infrastructure necessary to run on the full ImageNet dataset. Because the exact splits used in Vinyals et al. (2016) were not released, we create our own version of the Mini-ImageNet dataset by selecting a random 100 classes from ImageNet and picking 600 examples of each class. We use 64, 16, and 20 classes for training, validation and testing, respectively. We consider 1-shot and 5-shot classification for 5 classes. We use 15 examples per class for evaluation in each test set. We compare against two baselines and a recent metric-learning technique, Matching Networks (Vinyals et al., 2016), which has achieved state-of-the-art results in few-shot learning. The results are shown in Table 1.

The first baseline we use is a nearest-neighbor baseline (Baseline-nearest-neighbor), where we first train a network to classify between all the classes jointly in the original meta-training set. At meta-test time, for each dataset $D$, we embed all the items in the training set using our trained network and then use nearest-neighbor matching among the embedded training examples to classify each test example. The second baseline we use (Baseline-finetune) represents a coarser version of our meta-learner model. As in the first baseline, we start by training a network to classify jointly between all classes in the meta-training set. We then use the meta-validation set to search over SGD hyperparameters, where each training set is used to fine-tune the pre-trained network before evaluating on the test set. We use a fixed number of updates for fine tuning and search over the learning rate and learning rate decay used during the course of these updates. (Code can be found at https://github.com/twitter/meta-learning-lstm.)

Table 1: Average classification accuracies on Mini-ImageNet with 95% confidence intervals. Marked in bold are the best results for each scenario, as well as other results with an overlapping confidence interval.

                                     5-class
Model                          1-shot            5-shot
Baseline-finetune           28.86 ± 0.54%     49.79 ± 0.79%
Baseline-nearest-neighbor   41.08 ± 0.70%     51.04 ± 0.65%
Matching Network            43.40 ± 0.78%     51.09 ± 0.71%
Matching Network FCE        43.56 ± 0.84%     55.31 ± 0.73%
Meta-Learner LSTM (OURS)    43.44 ± 0.77%     60.60 ± 0.71%

For Matching Networks, we implemented our own version of both the basic and the fully-conditional embedding (FCE) versions. In the basic version, a convolutional network is trained to learn independent embeddings for examples in the training and test set. In the FCE version, a bidirectional-LSTM is used to learn an embedding for the training set such that each training example's embedding is also a function of all the other training examples. Additionally, an attention-LSTM is used so that a test example embedding is also a function of all the embeddings of the training set. We do not consider fine-tuning the network using the train set during meta-testing to improve performance as mentioned in Vinyals et al. (2016), but do note that our meta-learner could also be fine-tuned using this data. Note that to remain consistent with Vinyals et al. (2016), our baseline and matching net convolutional networks have 4 layers each with 64 filters. We also added dropout to each convolutional block in matching nets to prevent overfitting.

For our meta-learner, we train different models for the 1-shot and 5-shot tasks, that make 12 and 5 updates, respectively. We noticed that better performance for each task was attained if the meta-learner is explicitly trained to do the set number of updates during meta-training that will be used during meta-testing.
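For completeness, the following minimal sketch (ours, not the released code) shows the episode construction and the confidence-interval bookkeeping behind Table 1; the data and the per-episode accuracy are placeholders, since the learner and meta-learner themselves are elided:

```python
import numpy as np

def sample_episode(class_to_examples, rng, n_way=5, k_shot=1, n_test_per_class=15):
    """Sample one k-shot, N-class dataset D = (D_train, D_test): draw N classes,
    then k training and a fixed number of test examples per class."""
    classes = rng.choice(sorted(class_to_examples), size=n_way, replace=False)
    d_train, d_test = [], []
    for label, c in enumerate(classes):
        idx = rng.permutation(len(class_to_examples[c]))
        chosen = [class_to_examples[c][j] for j in idx[:k_shot + n_test_per_class]]
        d_train += [(x, label) for x in chosen[:k_shot]]
        d_test += [(x, label) for x in chosen[k_shot:]]
    return d_train, d_test

# Toy meta-test set: 20 held-out "classes" with 600 synthetic examples each.
rng = np.random.default_rng(0)
meta_test = {c: list(rng.standard_normal((600, 8)) + c) for c in range(20)}

accs = []
for _ in range(400):                    # fixed number of evaluation episodes
    d_train, d_test = sample_episode(meta_test, rng, n_way=5, k_shot=1)
    # ... run the meta-learner for T updates on d_train, then score on d_test ...
    accs.append(rng.random())           # placeholder accuracy for illustration only
half_width = 1.96 * np.std(accs) / np.sqrt(len(accs))
print(f"accuracy: {np.mean(accs):.4f} +/- {half_width:.4f} (95% CI)")
```

Enough episodes are drawn that the reported confidence interval is small, matching the protocol described at the start of Sec. 5.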
For our meta-learner, we train different models for the 1-shot and 5-shot tasks, which make 12 and 5 updates, respectively. We noticed that better performance for each task was attained if the meta-learner is explicitly trained to do the set number of updates during meta-training that will be used during meta-testing.

We attain results that are much better than the baselines discussed and competitive with Matching Networks. For 5-shot, we are able to do much better than Matching Networks, whereas for 1-shot, the confidence interval for our performance intersects the interval for Matching Networks. Again, we note that the numbers do not match the ones provided by Vinyals et al. (2016) simply because we created our version of the dataset and implemented our own versions of their model. It is interesting to note that the fine-tuned baseline is worse than the nearest-neighbor baseline. Because we are not regularizing the classifier, with very few updates the fine-tuning model overfits, especially in the 1-shot case. This propensity to overfit speaks to the benefit of meta-training the initialization of the classifier end-to-end, as is done in the meta-learning LSTM.

5.2 VISUALIZATION OF META-LEARNER

We also visualize the optimization strategy learned by the meta-learner, in Figure 3. We can look at the i_t and f_t gate values in Equation 2 at each update step, to try to get an understanding of how the meta-learner updates the learner during training. We visualize the gate values while training on different datasets D_train, to observe whether there are variations between training sets. We consider both 1-shot and 5-shot classification settings, where the meta-learner is making 10 and 5 updates, respectively. For the forget gate values for both tasks, the meta-learner seems to adopt a simple weight-decay strategy that appears consistent across different layers. The input gate values are harder to interpret to glean the meta-learner's strategy. However, there seems to be a lot of variability between different datasets, indicating that the meta-learner isn't simply learning a fixed optimization strategy. Additionally, there seem to be differences between the two tasks, suggesting that the meta-learner has adopted different methods to deal with the different conditions of each setting.

Figure 3: Visualization of the input and forget gate values output by the meta-learner during the course of its updates: (a) forget gate values for the 1-shot meta-learner; (b) input gate values for the 1-shot meta-learner; (c) forget gate values for the 5-shot meta-learner; (d) input gate values for the 5-shot meta-learner. Layers 1–4 represent the values for a randomly selected parameter from each of the 4 convolutional layers, and layer 5 represents the values for a random parameter from the fully-connected layer. The different curves represent training steps on different datasets.
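For concreteness, the following is a schematic sketch of the gated parameter update discussed above, under the simplifying assumption that each gate is a logistic function of the per-coordinate gradient, the loss, and the previous parameter value; the exact gate inputs and preprocessing in our model differ, so this is illustrative rather than a faithful implementation.

```python
import torch

def meta_update(theta_prev, grad, loss, W_f, b_f, W_i, b_i):
    # Per-coordinate gate inputs: gradient, loss, and previous parameter value.
    # Assumed shapes: theta_prev, grad: (P,); loss: scalar; W_f, W_i: (3, 1);
    # b_f, b_i: scalars.
    feats = torch.stack([grad, loss.expand_as(grad), theta_prev], dim=-1)
    f_t = torch.sigmoid(feats @ W_f + b_f).squeeze(-1)  # forget gate (weight decay)
    i_t = torch.sigmoid(feats @ W_i + b_i).squeeze(-1)  # input gate (learning rate)
    # New parameters: shrink the old ones, then step along the negative gradient.
    return f_t * theta_prev + i_t * (-grad)
```

A forget gate near 1 with a small input gate recovers ordinary gradient descent with weight decay, which matches the behavior seen in the forget-gate plots.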
6 CONCLUSION

We described an LSTM-based model for meta-learning, which is inspired by the parameter updates suggested by gradient descent optimization algorithms. Our LSTM meta-learner uses its state to represent the learning updates of the parameters of a classifier. It is trained to discover both a good initialization for the learner's parameters, as well as a successful mechanism for updating the learner's parameters to a given small training set for some new classification task. Our experiments demonstrate that our approach outperforms natural baselines and is competitive with the state-of-the-art in metric learning for few-shot learning.

In this work, we focused our study on the few-shot, few-classes setting. However, it would be more valuable to train meta-learners that can perform well across a full spectrum of settings, i.e. for few or many training examples and for few or many possible classes. Our future work will thus consider moving towards this more challenging scenario.

ACKNOWLEDGMENTS

We thank Jake Snell, Kevin Swersky, and Oriol Vinyals for helpful discussions of this work.

REFERENCES

Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. CoRR, abs/1606.04474, 2016. URL http://arxiv.org/abs/1606.04474.

Samy Bengio. Optimisation d'une règle d'apprentissage pour réseaux de neurones artificiels. PhD thesis, Département d'Informatique et Recherche Opérationnelle, Université de Montréal, 1993.

Samy Bengio, Yoshua Bengio, and Jocelyn Cloutier. On the search for new learning rules for ANNs. Neural Processing Letters, 2(4):26–30, 1995.

Yoshua Bengio, Samy Bengio, and Jocelyn Cloutier. Learning a synaptic learning rule. Université de Montréal, Département d'informatique et de recherche opérationnelle, 1990.

Yoshua Bengio et al. Deep learning of representations for unsupervised and transfer learning. ICML Unsupervised and Transfer Learning, 27:17–36, 2012.

Luca Bertinetto, João F. Henriques, Jack Valmadre, Philip H. S. Torr, and Andrea Vedaldi. Learning feed-forward one-shot learners. CoRR, abs/1606.05233, 2016. URL http://arxiv.org/abs/1606.05233.

Tom Bosc. Learning to learn neural networks.

Rich Caruana. Learning many related tasks at the same time with backpropagation. Advances in Neural Information Processing Systems, pp. 657–664, 1995.

Kyunghyun Cho, Bart van Merrienboer, Çağlar Gülçehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. CoRR, abs/1406.1078, 2014. URL http://arxiv.org/abs/1406.1078.

Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. CoRR, abs/1310.1531, 2013. URL http://arxiv.org/abs/1310.1531.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res., 12:2121–2159, July 2011. ISSN 1532-4435. URL http://dl.acm.org/citation.cfm?id=1953048.2021068.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. URL http://arxiv.org/abs/1512.03385.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.

Sepp Hochreiter, A. Steven Younger, and Peter R. Conwell. Learning to learn using gradient descent. In Lecture Notes in Computer Science 2130, Proc. Intl. Conf. on Artificial Neural Networks (ICANN-2001), pp. 87–94. Springer, 2001.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015. URL http://arxiv.org/abs/1502.03167.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980.

Gregory Koch. Siamese neural networks for one-shot image recognition. PhD thesis, University of Toronto, 2015.
Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. Building machines that learn and think like people. CoRR, abs/1604.00289, 2016. URL http://arxiv.org/abs/1604.00289.

Dougal Maclaurin, David Duvenaud, and Ryan P. Adams. Gradient-based hyperparameter optimization through reversible learning. In Proceedings of the 32nd International Conference on Machine Learning, 2015.

Yurii Nesterov. A method of solving a convex programming problem with convergence rate O(1/k^2). 1983.

Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.

Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy P. Lillicrap. One-shot learning with memory-augmented neural networks. CoRR, abs/1605.06065, 2016. URL http://arxiv.org/abs/1605.06065.

Jürgen Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1):131–139, 1992.

Jürgen Schmidhuber. A neural network that embeds its own meta-levels. In Neural Networks, 1993., IEEE International Conference on, pp. 407–412. IEEE, 1993.

Jürgen Schmidhuber, Jieyu Zhao, and Marco Wiering. Shifting inductive bias with success-story algorithm, adaptive Levin search, and incremental self-improvement. Machine Learning, 28(1):105–130, 1997.

Sebastian Thrun. Lifelong learning algorithms. In Learning to Learn, pp. 181–209. Springer, 1998.

Oriol Vinyals, Charles Blundell, Timothy P. Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. CoRR, abs/1606.04080, 2016. URL http://arxiv.org/abs/1606.04080.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.

Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? CoRR, abs/1411.1792, 2014. URL http://arxiv.org/abs/1411.1792.

Wojciech Zaremba. An empirical exploration of recurrent network architectures. 2015.

Matthew D. Zeiler. ADADELTA: An adaptive learning rate method. CoRR, abs/1212.5701, 2012. URL http://arxiv.org/abs/1212.5701.
r1PRvK9el
Under review as a conference paper at ICLR 2017

IMPLICIT REASONETS: MODELING LARGE-SCALE STRUCTURED RELATIONSHIPS WITH SHARED MEMORY

Yelong Shen*, Po-Sen Huang*, Ming-Wei Chang, Jianfeng Gao
Microsoft Research, Redmond, WA, USA
{yeshen,pshuang,minchang,jfgao}@microsoft.com
*Equal contribution.

ABSTRACT

Recent studies on knowledge base completion, the task of recovering missing relationships based on recorded relations, demonstrate the importance of learning embeddings from multi-step relations. However, due to the size of knowledge bases, learning multi-step relations directly on top of observed instances could be costly. In this paper, we propose Implicit ReasoNets (IRNs), which are designed to perform large-scale inference implicitly through a search controller and shared memory. Unlike previous work, IRNs use training data to learn to perform multi-step inference through the shared memory, which is also jointly updated during training. While the inference procedure does not operate on top of observed instances, our proposed model outperforms all previous approaches on the popular FB15k benchmark by more than 5.7%.

1 INTRODUCTION

Knowledge bases such as WordNet (Fellbaum, 1998), Freebase (Bollacker et al., 2008), or Yago (Suchanek et al., 2007) contain many real-world facts expressed as triples, e.g., (Bill Gates, FounderOf, Microsoft). These knowledge bases are useful for many downstream applications such as question answering (Berant et al., 2013; Yih et al., 2015) and information extraction (Mintz et al., 2009). However, despite the formidable size of knowledge bases, many important facts are still missing. For example, West et al. (2014) showed that 21% of the 100K most frequent PERSON entities have no recorded nationality in a recent version of Freebase. We seek to infer unknown relations based on the observed triples. Thus, the knowledge base completion (KBC) task has emerged as an important open research problem (Nickel et al., 2011).

Neural-network based methods have been very popular for solving the KBC task. Following Bordes et al. (2013), one of the most popular approaches for KBC is to learn vector-space representations of entities and relations during training, and then apply linear or bi-linear operations to infer the missing relations at test time. However, several recent papers demonstrate limitations of prior approaches relying upon vector-space models alone. By themselves, there is no straightforward way to capture the structured relationships between multiple triples adequately (Guu et al., 2015; Toutanova et al., 2016; Lin et al., 2015a). For example, assume that we want to fill in the missing relation for the triple (Obama, NATIONALITY, ?); a multi-step search procedure might be needed to discover the evidence in observed triples such as (Obama, BORNIN, Hawaii) and (Hawaii, PARTOF, U.S.A). To address this issue, Guu et al. (2015), Toutanova et al. (2016), and Lin et al. (2015a) propose different approaches for injecting structured information by directly operating on the observed triplets. Unfortunately, due to the size of knowledge bases, these newly proposed approaches suffer from some limitations, as most paths are not informative for inferring missing relations, and it is prohibitive to consider all possible paths during training with expressive models.

In this paper, we take a different approach from prior work on KBC by addressing the challenges of performing large-scale inference through the design of a search controller and shared memory.
Our inference procedure centers around the search controller, which only operates on the shared memory instead of directly manipulating the observed triples in the knowledge base. IRNs use training data to learn to perform multi-step inference through the shared memory. First, the input module generates a representation of the query. Then, the search controller repeatedly interacts with the shared memory and checks the termination gate. After each iteration, if the termination condition is met, the model stops the search process and calls the output module to generate a prediction. The shared memory is designed to store key information about the overall structures it learned during training, and hence the search controller only needs to access the shared memory instead of operating on the observed triples.

Figure 1: An IRN architecture, consisting of an input module that produces the query vector q, a search controller with attention (f_a) and termination (f_tc) gates operating over internal states s_t, a shared memory M, and an output module f_o.

There are several advantages of using IRNs. First, the cost of inference can be controlled because the search controller only needs to access the shared memory. Second, all the modules, including the search controller and memory, are jointly trained, and hence alleviate the need to inject structured relationships between instances manually. Finally, we can easily extend IRNs to other tasks that require modeling structured relationships between instances by switching the input and output modules.

The main contributions of our paper are as follows:

- We propose Implicit ReasoNets (IRNs), which use a shared memory guided by a search controller to model large-scale structured relationships implicitly.
- We evaluate IRNs and demonstrate that our proposed model achieves state-of-the-art results on the popular FB15k benchmark, surpassing prior approaches by more than 5.7%.
- We analyze the behavior of IRNs for shortest path synthesis. We show that IRNs outperform a standard sequence-to-sequence model and execute meaningful multi-step inference.

2 REASONETS FOR IMPLICIT INFERENCE

In this section, we describe the general architecture of IRNs in a way that is agnostic to KBC. IRNs are composed of four main components: an input component, an output component, a shared memory, and a search controller, as shown in Figure 1. In this section, we briefly describe each component.

Input/Output Modules: These two modules are task-dependent. The input module takes a query and converts it into a vector representation q. The output module is a function f_o, which converts the hidden state received from the search controller (s) into an output O. We optimize the whole model using the output prediction O with respect to a ground-truth target using a task-specified loss function.

Shared Memory: The shared memory is denoted as M. It consists of a list of memory vectors, M = {m_i}_{i=1...I}, where m_i is a fixed-dimensional vector. The memory vectors are randomly initialized and automatically updated through back-propagation. The shared memory component is shared across all instances.

Search Controller: The search controller is a recurrent neural network that controls the search process by keeping internal state sequences to track the current search process and history.
The search controller uses an attention mechanism to fetch information from relevant memory vectors in M, and decides whether the model should output a prediction or continue to generate the next possible output.

- Internal State: The internal state of the search controller is denoted as S, which is a vector representation of the search process. The initial state s_1 is usually the vector representation of the input vector q. The internal state at the t-th time step is represented by s_t. The sequence of internal states is modeled by an RNN: s_{t+1} = RNN(s_t, x_t; θ_s).

- Attention to memory: The attention vector x_t at the t-th time step is generated based on the current internal state s_t and the shared memory M: x_t = f_att(s_t, M; θ_x). Specifically, the attention score a_{t,i} on a memory vector m_i given a state s_t is computed as a_{t,i} = softmax_{i=1,...,|M|} λ cos(W_1 m_i, W_2 s_t), where λ is set to 10 in our experiments and the weight matrices W_1 and W_2 are learned during training. The attention vector x_t can then be written as x_t = f_att(s_t, M; θ_x) = Σ_{i}^{|M|} a_{t,i} m_i.

- Termination Control: The termination gate produces a stochastic random variable according to the current internal state, t_t ~ p(·|f_tc(s_t; θ_tc)). t_t is a binary random variable. If t_t is true, the IRN finishes the search process, and the output module executes at time step t; otherwise the IRN generates the next attention vector x_{t+1} and feeds it into the state network to update the next internal state s_{t+1}. In our experiments, the termination variable is modeled by a logistic regression: f_tc(s_t; θ_tc) = sigmoid(W_tc s_t + b_tc), where the weight matrix W_tc and bias vector b_tc are learned during training.
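A minimal sketch of the attention and termination computations just described is shown below, following the equations above: attention weights are a softmax over λ-scaled cosine similarities between projected memory vectors and the projected state, and the termination probability is a logistic regression on the state. The tensor shapes are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def attend(memory, s_t, W1, W2, lam=10.0):
    # memory: (I, d_m); s_t: (d_s,); W1: (d, d_m); W2: (d, d_s)
    keys = F.normalize(memory @ W1.t(), dim=1)   # unit vectors W1 m_i
    query = F.normalize(W2 @ s_t, dim=0)         # unit vector W2 s_t
    scores = lam * (keys @ query)                # lambda * cos(W1 m_i, W2 s_t)
    a_t = F.softmax(scores, dim=0)               # attention over memory slots
    return a_t @ memory                          # x_t = sum_i a_{t,i} m_i

def terminate_prob(s_t, W_tc, b_tc):
    # p(t_t = 1 | s_t) = sigmoid(W_tc s_t + b_tc), with W_tc: (d_s,), b_tc scalar.
    return torch.sigmoid(W_tc @ s_t + b_tc)
```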
Comparing IRNs to Memory Networks (MemNN) (Weston et al., 2014; Sukhbaatar et al., 2015) and Neural Turing Machines (NTM) (Graves et al., 2014; 2016), the biggest difference between our model and these existing frameworks is the search controller and the use of the shared memory. We build upon our previous work (Shen et al., 2016), which uses a search controller module to dynamically perform multi-step inference depending on the complexity of the instance. MemNN and NTM explicitly store inputs (such as graph definitions and supporting facts) in the memory. In contrast, in IRNs, we do not explicitly store all the observed inputs in the shared memory. Instead, we operate directly on the shared memory, which models the structured relationships implicitly. We randomly initialize the memory and update it with respect to task-specific objectives. The idea of exploiting a shared memory was proposed by Munkhdalai & Yu (2016) independently. Despite using the same term, the goal and the operations used by IRNs differ from those in Munkhdalai & Yu (2016), as IRNs allow the model to perform multiple steps for each instance dynamically.

2.1 STOCHASTIC INFERENCE PROCESS

The inference process of an IRN is as follows. First, the model converts a task-dependent input to a vector representation through the input module. Then, the model uses the input representation to initialize the search controller. At every time step, the search controller determines whether the process is finished by sampling from the distribution given by the termination gate. If the outcome is termination, the output module generates a task-dependent prediction given the search controller states. If the outcome is continuation, the search controller moves on to the next time step and creates an attention vector based on the current search controller state and the shared memory. Intuitively, we design the whole process to mimic a search procedure that iteratively finds its target through a structure and outputs its prediction when a satisfying answer is found. The detailed inference process is described in Algorithm 1.

Algorithm 1: Stochastic Inference Process in an IRN
Input: Randomly initialized shared memory M; input vector q; maximum step T_max
Output: Output vector o
1: Define s_1 = q; t = 1
2: Sample t_t from the distribution p(·|f_tc(s_t; θ_tc))
3: If t_t is false, go to Step 4; otherwise go to Step 7
4: Generate an attention vector x_t = f_att(s_t, M; θ_x)
5: Update the internal state s_{t+1} = RNN(s_t, x_t; θ_s)
6: Set t = t + 1; if t < T_max go to Step 2; otherwise go to Step 7
7: Generate output o_t = f_o(s_t; θ_o)
8: Return o = o_t

The inference process of an IRN can be considered a Partially Observable Markov Decision Process (POMDP) (Kaelbling et al., 1998) in the reinforcement learning (RL) literature. The IRN produces the output vector o_T at the T-th step, which implies termination gate variables t_{1:T} = (t_1 = 0, t_2 = 0, ..., t_{T-1} = 0, t_T = 1), and then takes a prediction action p_T according to the probability distribution given o_T. Therefore, the IRN learns a stochastic policy π((t_{1:T}, p_T) | q; θ) with parameters θ to obtain a distribution over termination actions and over prediction actions. The termination step T varies from instance to instance. The parameters θ of the IRN are given by the parameters of the embedding matrices W for the input/output modules, the shared memory M, the attention network θ_x, the search controller RNN network θ_s, the output generation network θ_o, and the termination gate network θ_tc. The parameters θ = {W, M, θ_x, θ_s, θ_o, θ_tc} are trained to maximize the total expected reward that the IRN receives when interacting with the environment. The expected reward for an instance is defined as:

\[ J(\theta) = \mathbb{E}_{\pi(t_{1:T},\, p_T;\, \theta)}\Big[\sum_{t=1}^{T} r_t\Big] \]

The reward can only be received at the final termination step, when a prediction action p_T is performed. The rewards at intermediate steps are zero, {r_t = 0}_{t=1...T-1}.

We employ the approach from our previous work (Shen et al., 2016), the REINFORCE (Williams, 1992) based Contrastive Reward method, to maximize the expected reward. The gradient of J can be written as:

\[ \nabla_\theta J(\theta) = \sum_{(t_{1:T},\, p_T) \in A^\dagger} \pi(t_{1:T}, p_T; \theta)\Big[\nabla_\theta \log \pi(t_{1:T}, p_T; \theta)\Big(\frac{r_T}{b_i} - 1\Big)\Big] \]

where A^\dagger is the set of all possible episodes, and the baseline b_i = Σ_{(t_{1:T}, p_T) ∈ A^\dagger} π(t_{1:T}, p_T; θ) r_T is the expected reward over the |A^\dagger| episodes for the i-th training instance.
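Putting the pieces together, a sketch of Algorithm 1's forward pass is shown below; `rnn` (a callable taking the previous state and the attended vector), `attend`, `terminate_prob`, and `output_module` stand in for the trained components, and termination is sampled stochastically as in the text.

```python
import torch

def irn_infer(q, memory, rnn, attend, terminate_prob, output_module, t_max=5):
    s_t = q                                         # Step 1: s_1 = query vector
    for _ in range(t_max - 1):
        t_t = torch.bernoulli(terminate_prob(s_t))  # Step 2: sample termination gate
        if bool(t_t):                               # Step 3: stop if the gate fires
            break
        x_t = attend(memory, s_t)                   # Step 4: fetch from shared memory
        s_t = rnn(s_t, x_t)                         # Step 5: update internal state
    return output_module(s_t)                       # Steps 7-8: o = f_o(s_T)
```

At training time, the sampled termination steps and the final prediction action are scored with the contrastive REINFORCE gradient above, rather than being back-propagated through directly.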
3 APPLYING IRNS TO KNOWLEDGE BASE COMPLETION

The goal of KBC tasks (Bordes et al., 2013) is to predict a head or a tail entity given the relation type and the other entity, i.e. predicting h given (?, r, t) or predicting t given (h, r, ?), where ? denotes the missing entity. For a KBC task, the input to our model is a subject entity (a head or tail entity) and a relation. The task-dependent input module first extracts the embedding vectors for the entity and relation from an embedding matrix. We then represent the query vector q for an IRN as the concatenation of the two vectors. We randomly initialize the shared memory component. At each step, a training triplet is processed through the model by Algorithm 1, where no explicit path information is given. The IRN updates the shared memory implicitly with respect to the objective function.

For the task-dependent output module, we use a nonlinear projection to project the search controller state into an output vector o: f_o(s_t; θ_o) = tanh(W_o s_t + b_o), where W_o and b_o are the weight matrix and bias vector, respectively. We define the ground-truth target (object) entity embedding as y, and use the L1 distance measure between the output o and the target entity y, namely d(o, y) = |o - y|_1. We sample a set of incorrect entity embeddings N = {y_i}_{i=1}^{|N|} as negative examples. The probability of selecting a prediction ŷ ∈ D can be approximated as

\[ p(\hat{y} \mid o) = \frac{\exp(-\gamma\, d(o, \hat{y}))}{\sum_{y_k \in D} \exp(-\gamma\, d(o, y_k))} \]

where D = N ∪ {y}. We set |N| and γ to 20 and 5, respectively, for the experiments on the FB15k and WN18 datasets. The IRN performs a prediction action p_T by selecting ŷ with probability p(ŷ | o). We define the reward of the prediction action as one if the ground-truth entity is selected, and zero otherwise.
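The following sketch implements this scoring step, following the equations above: L1 distances from the output vector to the candidate set (target plus negatives) are converted into a prediction distribution with temperature γ = 5.

```python
import torch
import torch.nn.functional as F

def prediction_distribution(o, candidates, gamma=5.0):
    # o: (d,); candidates: (|D|, d), the target embedding plus |N| negatives.
    dists = (candidates - o).abs().sum(dim=1)   # L1 distance d(o, y_k)
    return F.softmax(-gamma * dists, dim=0)     # p(y_k | o)
```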
4 EXPERIMENTAL RESULTS

In this section, we evaluate the performance of our model on the benchmark FB15k and WN18 datasets for KBC tasks (Bordes et al., 2013). These datasets contain multiple relations between head and tail entities. Given a head entity and a relation, the model produces a ranked list of entities according to the score of each entity being the tail entity of the triple. To evaluate the ranking, we report mean rank (MR), the mean rank of the correct entity across the test examples, and hits@10, the proportion of correct entities ranked in the top-10 predictions. Lower MR or higher hits@10 indicates better prediction performance. We follow the evaluation protocol in Bordes et al. (2013) and report filtered results, where negative examples N are removed from the dataset. In this case, we avoid some negative examples being valid and ranked above the target triplet.

We use the same hyper-parameters of our model for both the FB15k and WN18 datasets. Entity embeddings (which are not shared between input and output modules) and relation embeddings are both 100-dimensional. We use the input module and output module to encode subject and object entities, respectively. There are 64 memory vectors with 200 dimensions each, initialized by random vectors with unit L2-norm. We use a single-layer GRU with 200 cells as the search controller. We set the maximum inference step of the IRN to 5. We randomly initialize all model parameters, and use SGD as the training algorithm with a mini-batch size of 64. We set the learning rate to a constant value, 0.01. To prevent the model from learning a trivial solution by increasing entity embedding norms, we follow Bordes et al. (2013) and enforce the L2-norm of the entity embeddings to be 1. We use hits@10 as the validation metric for the IRN. Following Lin et al. (2015a), we add reverse relations to the training triplet set to increase the training data.

Following Nguyen et al. (2016), we divide the results of previous work into two groups. The first group contains models that directly optimize a scoring function for the triples in a knowledge base without using extra information. The second group of models makes use of additional information from multi-step relations. For example, the RTransE (García-Durán et al., 2015) and PTransE (Lin et al., 2015a) models are extensions of the TransE (Bordes et al., 2013) model that explicitly explore multi-step relations in the knowledge base to regularize the trained embeddings. The NLFeat model (Toutanova et al., 2015) is a log-linear model that makes use of simple node and link features.

Table 1 presents the experimental results. According to the table, our model significantly outperforms previous baselines, regardless of whether previous approaches use additional information or not. Specifically, on FB15k, the MR of our model surpasses all previous results by 12, and our hits@10 outperforms others by 5.7%. On WN18, the IRN obtains the highest hits@10 while maintaining similar MR results compared to previous work.¹

¹ Nguyen et al. (2016) reported two results on WN18, where the first is obtained by choosing to optimize hits@10 on the validation set, and the second by choosing to optimize MR on the validation set. We list both in Table 1.

Table 1: The knowledge base completion (link prediction) results on WN18 and FB15k.

Model | Additional Information | WN18 Hits@10 (%) | WN18 MR | FB15k Hits@10 (%) | FB15k MR
SE (Bordes et al., 2011) | NO | 80.5 | 985 | 39.8 | 162
Unstructured (Bordes et al., 2014) | NO | 38.2 | 304 | 6.3 | 979
TransE (Bordes et al., 2013) | NO | 89.2 | 251 | 47.1 | 125
TransH (Wang et al., 2014) | NO | 86.7 | 303 | 64.4 | 87
TransR (Lin et al., 2015b) | NO | 92.0 | 225 | 68.7 | 77
CTransR (Lin et al., 2015b) | NO | 92.3 | 218 | 70.2 | 75
KG2E (He et al., 2015) | NO | 93.2 | 348 | 74.0 | 59
TransD (Ji et al., 2015) | NO | 92.2 | 212 | 77.3 | 91
TATEC (García-Durán et al., 2015) | NO | - | - | 76.7 | 58
NTN (Socher et al., 2013) | NO | 66.1 | - | 41.4 | -
DISTMULT (Yang et al., 2014) | NO | 94.2 | - | 57.7 | -
STransE (Nguyen et al., 2016) | NO | 94.7 (93) | 244 (206) | 79.7 | 69
RTransE (García-Durán et al., 2015) | Path | - | - | 76.2 | 50
PTransE (Lin et al., 2015a) | Path | - | - | 84.6 | 58
NLFeat (Toutanova et al., 2015) | Node + Link Features | 94.3 | - | 87.0 | -
Random Walk (Wei et al., 2016) | Path | 94.8 | - | 74.7 | -
IRN | NO | 95.3 | 249 | 92.7 | 38

To better understand the behavior of IRNs, we report the results of IRNs with different memory sizes and different T_max on FB15k in Table 2. We find that the performance of IRNs increases significantly as the number of inference steps increases. Note that an IRN with T_max = 1 corresponds to an IRN without the shared memory. Interestingly, given T_max = 5, IRNs are not sensitive to memory size. In particular, a larger memory always improves the MR score, but the best hits@10 is obtained with |M| = 64 memory vectors. A possible reason is that the best memory size is determined by the complexity of the task.

Table 2: The performance of IRNs with different memory sizes and inference steps on FB15k.

Number of memory vectors | Maximum inference step | Hits@10 (%) | MR
|M| = 64 | T_max = 1 | 80.7 | 55.7
|M| = 64 | T_max = 2 | 87.4 | 49.2
|M| = 64 | T_max = 5 | 92.7 | 38.0
|M| = 64 | T_max = 8 | 88.8 | 32.9
|M| = 32 | T_max = 5 | 90.1 | 38.7
|M| = 64 | T_max = 5 | 92.7 | 38.0
|M| = 128 | T_max = 5 | 92.2 | 36.1
|M| = 512 | T_max = 5 | 90.0 | 35.3
|M| = 4096 | T_max = 5 | 88.7 | 34.7

We analyze the hits@10 results on FB15k with respect to the relation categories. Following the evaluation in Bordes et al. (2013), we evaluate performance on four types of relation: 1-1 if a head entity can appear with at most one tail entity, 1-Many if a head entity can appear with many tail entities, Many-1 if multiple heads can appear with the same tail entity, and Many-Many if multiple head entities can appear with multiple tail entities.
The IRNsignificantly improves the hits@10 results in the Many-1 category on predicting the head entity(18:8%), the 1-Many category on predicting the tail entity ( 16:5%), and the Many-Many category(over 8%in average).To analyze the behavior of IRNs , we pick some examples for the tail entity prediction in Table 4.Interestingly, we observed that the model can gradually increase the ranking score of the correct tailentity during the inference process.5 A NALYSIS : APPLYING IRN STO A SHORTEST PATH SYNTHESIS TASKWe construct a synthetic task, shortest path synthesis, to evaluate the inference capability over ashared memory. The motivations of applying our model to this task are as follows. First, we wantto evaluate IRNs on another task requiring multi-step inference. Second, we select the sequencegeneration task so that we are able to analyze the inference capability of IRNs in details.In the shortest path synthesis task, as illustrated in Figure 2, a training instance consists of a startnode and an end node (e.g., 215 493) of an underlying weighted directed graph that is unknown tomodels. The output of each instance is the shortest path between the given start and end nodes of theunderlying graph (e.g., 215!101!493). Specifically, models can only observe the start-end node6Under review as a conference paper at ICLR 2017Table 3: Hits@10 (%) in the relation category on FB15k. ( Mstands for Many )ModelPredicting head h Predicting tail t1-1 1-M M-1 M-M 1-1 1-M M-1 M-MSE (Bordes et al., 2011) 35.6 62.6 17.2 37.5 34.9 14.6 68.3 41.3Unstructured (Bordes et al., 2014) 34.5 2.5 6.1 6.6 34.3 4.2 1.9 6.6TransE (Bordes et al., 2013) 43.7 65.7 18.2 47.2 43.7 19.7 66.7 50.0TransH (Wang et al., 2014) 66.8 87.6 28.7 64.5 65.5 39.8 83.3 67.2TransR (Lin et al., 2015b) 78.8 89.2 34.1 69.2 79.2 37.4 90.4 72.1CTransR (Lin et al., 2015b) 81.5 89.0 34.7 71.2 80.8 38.6 90.1 73.8KG2E (He et al., 2015) 92.3 94.6 66.0 69.6 92.6 67.9 94.4 73.4TransD (Ji et al., 2015) 86.1 95.5 39.8 78.5 85.4 50.6 94.4 81.2TATEC (García-Durán et al., 2015) 79.3 93.2 42.3 77.2 78.5 51.5 92.7 80.7STransE (Nguyen et al., 2016) 82.8 94.2 50.4 80.1 82.4 56.9 93.4 83.1PTransE (Lin et al., 2015a) 91.0 92.8 60.9 83.8 91.2 74.0 88.9 86.4IRN 87.2 96.1 84.8 92.9 86.9 90.5 95.3 94.1Table 4: Test examples in FB15k dataset, given a head entity and a relation, the IRN predicts the tailentity with multiple search steps.Input : (Dean Koontz , /PEOPLE /PERSON /PROFESSION )Target :Film ProducerStep Termination Prob. Rank Predict top-3 entities1 0.018 9 Author TV. Director Songwriter2 0.052 7 Actor Singer Songwriter3 0.095 4 Actor Singer Songwriter4 0.132 4 Actor Singer Songwriter5 0.702 3 Actor Singer Film ProducerInput : (War and Peace , /FILM /FILM /PRODUCED _BY)Target :Carlo PontiStep Termination Prob. Rank Predict top-3 entities1 0.001 13 Scott Rudin Stephen Woolley Hal B. Wallis2 5.8E-13 7 Billy Wilder William Wyler Elia Kazan3 0.997 1 Carlo Ponti King Vidor Hal B. Wallispairs as input and their shortest path as output. The whole graph is unknown to the models and theedge weights are not revealed in the training data. At test time, a path sequence is considered correctif it connects the start node and the end node of the underlying graph, and the cost of the predictedpath is the same as the optimal path.Note that the task is very difficult and cannot be solved by dynamic programming algorithms since theweights on the edges are not revealed to the algorithms or the models. 
To recover some of the shortest paths at test time, the model needs to infer the correct path from the observed instances. For example, assume that we observe two instances in the training data, "A D: A → B → G → D" and "B E: B → C → E". In order to answer the shortest path between A and E, the model needs to infer that "A → B → C → E" is a possible path between A and E. If there are multiple possible paths, the model has to decide which one is the shortest using statistical information.

In the experiments, we construct a graph with 500 nodes and randomly assign pairs of nodes to form edges. We split 20,000 instances for training, 10,000 instances for validation, and 10,000 instances for testing. We create the training and testing instances carefully, so that the model needs to perform inference to recover the correct path. We describe the details of the graph and data construction in the appendix. A sub-graph of the data is shown in Figure 2.

For the settings of the IRN, we switch the output module to a GRU decoder for the sequence generation task. We assign reward r_T = 1 if all the predicted symbols are correct and 0 otherwise. We use a 64-dimensional embedding vector for input symbols, a GRU controller with 128 cells, and a GRU decoder with 128 cells. We set the maximum inference step T_max to 5.

Step | Termination Probability | Distance | Prediction
1 | 0.001 | N/A | 215 → 158 → 89 → 458 → 493
2 | 0 | N/A | 215 → 479 → 277 → 353 → 493
3 | 0 | N/A | 215 → 49 → 493
4 | 0 | 0.77 | 215 → 140 → 493
5 | 0.999 | 0.70 | 215 → 101 → 493

Figure 2: An example from the shortest path synthesis dataset, given the input "215 493" (answer: 215 → 101 → 493). Note that we only show the nodes that are related to this example. The corresponding termination probabilities and prediction results are shown in the table. The model terminates at step 5.

We compare the IRN with two baseline approaches: dynamic programming without edge-weight information and a standard sequence-to-sequence model (Sutskever et al., 2014) using a similar parameter size to our model. Without knowing the edge weights, dynamic programming recovers only 589 correct paths at test time. The sequence-to-sequence model recovers 904 correct paths. The IRN outperforms both baselines, recovering 1,319 paths. Furthermore, 76.9% of the predicted paths from the IRN are valid paths, where a path is valid if it connects the start and end nodes of the underlying graph. In contrast, only 69.1% of the predicted paths from the sequence-to-sequence model are valid.

To further understand the inference process of the IRN, Figure 2 shows the inference process on a test instance. Interestingly, to make the correct prediction on this instance, the model has to perform fairly complicated inference.² We observe that the model cannot find a connected path in the first three steps. Finally, the model finds a valid path at the fourth step and predicts the correct shortest path sequence at the fifth step.

² In the example, to find the right path, the model needs to search over the observed instances "215 448: 215 → 101 → 448" and "76 493: 76 → 308 → 101 → 493", and to figure out that the distance of "140 → 493" is longer than "101 → 493" (there are four shortest paths through 101 → 493 and three shortest paths through 140 → 493 in the training set).
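The evaluation criterion above can be sketched as follows: a predicted path is valid if every consecutive pair is an edge of the held-out underlying graph, and correct if it additionally connects the queried endpoints with the optimal cost. Here `graph` is assumed to map each node to a dict of neighbor → edge weight, and `optimal_cost` would come from running a shortest-path algorithm on the full weighted graph.

```python
def path_cost(graph, path):
    # Total edge cost of `path`, or None if it uses a non-existent edge.
    cost = 0.0
    for u, v in zip(path, path[1:]):
        if v not in graph.get(u, {}):
            return None
        cost += graph[u][v]
    return cost

def is_correct(graph, path, start, end, optimal_cost, tol=1e-9):
    if not path or path[0] != start or path[-1] != end:
        return False
    cost = path_cost(graph, path)
    return cost is not None and abs(cost - optimal_cost) <= tol
```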
6 RELATED WORK

Link Prediction and Knowledge Base Completion. Given that r is a relation, h is the head entity, and t is the tail entity, most embedding models for link prediction focus on finding a scoring function f_r(h, t) that represents the implausibility of a triple (Bordes et al., 2011; 2014; 2013; Wang et al., 2014; Ji et al., 2015; Nguyen et al., 2016). In many studies, the scoring function f_r(h, t) is linear or bi-linear. For example, in TransE (Bordes et al., 2013), the function is implemented as f_r(h, t) = ||h + r − t||, where h, r and t are the corresponding vector representations.

Recently, different studies (Guu et al., 2015; Lin et al., 2015a; Toutanova et al., 2016) demonstrate the importance for models to also learn from multi-step relations. Learning from multi-step relations injects the structured relationships between triples into the model. However, this also poses the technical challenge of considering exponential numbers of multi-step relationships. Prior approaches address this issue by designing path-mining algorithms (Lin et al., 2015a) or by considering all possible paths using a dynamic programming algorithm, with the restriction of using only linear or bi-linear models (Toutanova et al., 2016). Toutanova & Chen (2015) show the effectiveness of using simple node and link features that encode structured information on FB15k and WN18. In our work, the IRN outperforms prior results and shows that similar information can be captured by the model without explicitly designed features.

Studies such as Riedel et al. (2013) show that incorporating textual information can further improve knowledge base completion. It would be interesting to incorporate information from outside the knowledge bases into our model in the future.

Neural Frameworks. Sequence-to-sequence models (Sutskever et al., 2014; Cho et al., 2014) have been shown to be successful in many applications such as machine translation and conversation modeling (Sordoni et al., 2015). While sequence-to-sequence models are powerful, recent work has shown the necessity of incorporating an external memory to perform inference in simple algorithmic tasks (Graves et al., 2014; 2016).

7 CONCLUSION

In this paper, we propose Implicit ReasoNets (IRNs), which perform inference over a shared memory that models large-scale structured relationships implicitly. The inference process is guided by a search controller that accesses the memory, which is shared across instances. We demonstrate and analyze the multi-step inference capability of IRNs on knowledge base completion tasks and a shortest path synthesis task. Our model, without using any explicit knowledge base information in the inference procedure, outperforms all prior approaches on the popular FB15k benchmark by more than 5.7%.

For future work, we aim to further extend IRNs in two ways. First, inspired by Ribeiro et al. (2016), we would like to develop techniques to generate human-understandable reasoning interpretations from the shared memory. Second, we plan to apply IRNs to infer relationships in unstructured data such as natural language. For example, given a natural language query such as "are rabbits animals?", the model could infer a natural language answer implicitly in the shared memory without performing inference directly on top of a huge amount of observed sentences such as "all mammals are animals" and "rabbits are animals".
We believe the ability to perform inference implicitly is crucial for modeling large-scale structured relationships.

ACKNOWLEDGMENTS

We thank Scott Wen-tau Yih, Kristina Toutanova, Jian Tang and Zachary Lipton for their thoughtful feedback and discussions.

REFERENCES

Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. Semantic parsing on Freebase from question-answer pairs. In Proceedings of EMNLP, 2013.

Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of SIGMOD-08, pp. 1247–1250, 2008.

Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio. Learning structured embeddings of knowledge bases. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, pp. 301–306, 2011.

Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems, pp. 2787–2795, 2013.

Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. A semantic matching energy function for learning with multi-relational data. Machine Learning, 94(2):233–259, 2014.

Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

C. Fellbaum. WordNet: An Electronic Lexical Database. MIT Press, 1998.

Alberto García-Durán, Antoine Bordes, and Nicolas Usunier. Composing relationships with translations. In EMNLP, pp. 286–290, 2015.

Alberto García-Durán, Antoine Bordes, Nicolas Usunier, and Yves Grandvalet. Combining two and three-way embedding models for link prediction in knowledge bases. CoRR, abs/1506.00999, 2015.

Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.

Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 2016.

Kelvin Guu, John Miller, and Percy Liang. Traversing knowledge graphs in vector space. arXiv preprint arXiv:1506.01094, 2015.

Shizhu He, Kang Liu, Guoliang Ji, and Jun Zhao. Learning to represent knowledge graphs with Gaussian embedding. In Proceedings of the 24th ACM International Conference on Information and Knowledge Management, pp. 623–632, 2015.

Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and Jun Zhao. Knowledge graph embedding via dynamic mapping matrix. In ACL, pp. 687–696, 2015.

Leslie Pack Kaelbling, Michael L. Littman, and Anthony R. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101:99–134, 1998.

Yankai Lin, Zhiyuan Liu, Huanbo Luan, Maosong Sun, Siwei Rao, and Song Liu. Modeling relation paths for representation learning of knowledge bases. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2015a.

Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI'15, pp. 2181–2187, 2015b.
Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. Distant supervision for relation extraction without labeled data. In Proceedings of ACL-IJCNLP-09, pp. 1003–1011, 2009.

Tsendsuren Munkhdalai and Hong Yu. Neural semantic encoders. CoRR, abs/1607.04315, 2016.

Dat Quoc Nguyen, Kairit Sirts, Lizhen Qu, and Mark Johnson. STransE: a novel embedding model of entities and relationships in knowledge bases. In NAACL, pp. 460–466, 2016.

Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 809–816, 2011.

Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why should I trust you?": Explaining the predictions of any classifier. In KDD, 2016.

Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. Relation extraction with matrix factorization and universal schemas. In HLT-NAACL, pp. 74–84, 2013.

Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. ReasoNet: Learning to stop reading in machine comprehension. CoRR, abs/1609.05284, 2016.

Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Y. Ng. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems, 2013.

Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. A neural network approach to context-sensitive generation of conversational responses. arXiv preprint arXiv:1506.06714, 2015.

F. M. Suchanek, G. Kasneci, and G. Weikum. Yago: A core of semantic knowledge. In WWW, 2007.

Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pp. 2440–2448, 2015.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104–3112, 2014.

Kristina Toutanova and Danqi Chen. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pp. 57–66, 2015.

Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. Representing text for joint embedding of text and knowledge bases. In EMNLP, 2015.

Kristina Toutanova, Xi Victoria Lin, Scott Wen-tau Yih, Hoifung Poon, and Chris Quirk. Compositional learning of embeddings for relation paths in knowledge bases and text. In ACL, 2016.

Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, pp. 1112–1119, 2014.

Zhuoyu Wei, Jun Zhao, and Kang Liu. Mining inference formulas by goal-directed random walks. In EMNLP, 2016.

Robert West, Evgeniy Gabrilovich, Kevin Murphy, Shaohua Sun, Rahul Gupta, and Dekang Lin. Knowledge base completion via search-based question answering. In Proceedings of the 23rd International Conference on World Wide Web, pp. 515–526. ACM, 2014.

Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprint arXiv:1410.3916, 2014.

Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256, 1992.
Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and relations for learning and inference in knowledge bases. CoRR, abs/1412.6575, 2014.

Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proc. of ACL, 2015.

A DETAILS OF THE GRAPH CONSTRUCTION FOR THE SHORTEST PATH SYNTHESIS TASK

We construct the underlying graph as follows: on a three-dimensional unit sphere, we randomly generate a set of nodes. For each node, we connect its K nearest neighbors and use the Euclidean distance between two nodes as the edge weight. We randomly sample two nodes and compute the shortest path between them if they are connected. Given the fact that all the sub-paths within a shortest path are themselves shortest paths, we incrementally create the dataset and remove instances that are a sub-path of a previously selected path or a super-set of a previously selected path. In this way, no shortest path can be answered by directly copying from another instance. In addition, all the weights in the graph are hidden and not shown in the training data, which increases the difficulty of the task. We set K = 50 as the default value.
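A minimal sketch of this construction in NumPy, assuming directed K-nearest-neighbor connectivity and Euclidean distances as edge weights; the node count follows the experiments (500) and K = 50 follows the text.

```python
import numpy as np

def build_graph(n_nodes=500, k=50, seed=0):
    rng = np.random.default_rng(seed)
    pts = rng.normal(size=(n_nodes, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # points on the unit sphere
    dists = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
    graph = {u: {} for u in range(n_nodes)}
    for u in range(n_nodes):
        for v in np.argsort(dists[u])[1:k + 1]:         # K nearest, skipping self
            graph[u][int(v)] = float(dists[u, v])
    return graph
```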
rJXTf9Bxg
Under review as a conference paper at ICLR 2017

CONDITIONAL IMAGE SYNTHESIS WITH AUXILIARY CLASSIFIER GANS

Augustus Odena*, Christopher Olah & Jonathon Shlens
Google Brain
{augustusodena,colah,shlens}@google.com
*Work completed as a participant in the 2016–2017 Google Brain Residency program.

ABSTRACT

Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128×128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128×128 samples are more than twice as discriminable as artificially resized 32×32 samples. In addition, 84.7% of the classes have samples exhibiting diversity comparable to real ImageNet data.

1 INTRODUCTION

Characterizing the structure of natural images has been a rich research endeavor. Natural images obey intrinsic invariances and exhibit multi-scale statistical structures that have historically been difficult to quantify (Simoncelli & Olshausen, 2001). Recent advances in machine learning offer an opportunity to substantially improve the quality of image models. Improved image models advance the state-of-the-art in image denoising (Ballé et al., 2015), compression (Toderici et al., 2016), in-painting (van den Oord et al., 2016a), and super-resolution (Ledig et al., 2016). Better models of natural images also improve performance in semi-supervised learning tasks (Kingma et al., 2014; Springenberg, 2015; Odena, 2016; Salimans et al., 2016) and reinforcement learning problems (Blundell et al., 2016).

One method for understanding natural image statistics is to build a system that synthesizes images de novo. There are several promising approaches for building image synthesis models. Variational autoencoders (VAEs) maximize a variational lower bound on the log-likelihood of the training data (Kingma & Welling, 2013; Rezende et al., 2014). VAEs are straightforward to train but introduce potentially restrictive assumptions about the approximate posterior distribution (but see Rezende & Mohamed (2015); Kingma et al. (2016)). Autoregressive models dispense with latent variables and directly model the conditional distribution over pixels (van den Oord et al., 2016a;b). These models produce convincing samples but are costly to sample from and do not provide a latent representation. Invertible density estimators transform latent variables directly using a series of parameterized functions constrained to be invertible (Dinh et al., 2016). This technique allows for exact log-likelihood computation and exact inference, but the invertibility constraint is restrictive.

Generative adversarial networks (GANs) offer a distinct and promising approach that focuses on a game-theoretic formulation for training an image synthesis model (Goodfellow et al., 2014). Recent work has shown that GANs can produce convincing image samples on datasets with low variability and low resolution (Denton et al., 2015; Radford et al., 2015).
However, GANs struggle to generate globally coherent, high resolution samples, particularly from datasets with high variability. Moreover, a theoretical understanding of GANs is an on-going research topic (Uehara et al., 2016; Mohamed & Lakshminarayanan, 2016).

Figure 1: 128×128 resolution samples from 5 classes (monarch butterfly, goldfinch, daisy, grey whale, redshank) taken from an AC-GAN trained on the ImageNet dataset. Note that the classes shown have been selected to highlight the success of the model and are not representative. Samples from all ImageNet classes are in the Appendix.

In this work we demonstrate that adding more structure to the GAN latent space along with a specialized cost function results in higher quality samples. We exhibit 128×128 pixel samples from all classes of the ImageNet dataset (Russakovsky et al., 2015) with increased global coherence (Figure 1). Importantly, we demonstrate quantitatively that our high resolution samples are not just naive resizings of low resolution samples. In particular, downsampling our 128×128 samples to 32×32 leads to a 50% decrease in visual discriminability. We also introduce a new metric for assessing the variability across image samples and employ this metric to demonstrate that our synthesized images exhibit diversity comparable to training data for a large fraction (84.7%) of ImageNet classes.

2 BACKGROUND

A generative adversarial network (GAN) consists of two neural networks trained in opposition to one another. The generator G takes as input a random noise vector z and outputs an image X_fake = G(z). The discriminator D receives as input either a training image or a synthesized image from the generator and outputs a probability distribution P(S | X) = D(X) over possible image sources. The discriminator is trained to maximize the log-likelihood it assigns to the correct source:

\[ L = \mathbb{E}[\log P(S = \mathrm{real} \mid X_{\mathrm{real}})] + \mathbb{E}[\log P(S = \mathrm{fake} \mid X_{\mathrm{fake}})] \]

The generator is trained to minimize that same quantity.

The basic GAN framework can be augmented using side information. One strategy is to supply both the generator and discriminator with class labels in order to produce class-conditional samples (Mirza & Osindero, 2014). Class-conditional synthesis can significantly improve the quality of generated samples (van den Oord et al., 2016b). Richer side information such as image captions and bounding box localizations may improve sample quality further (Reed et al., 2016a;b).

Instead of feeding side information to the discriminator, one can task the discriminator with reconstructing side information. This is done by modifying the discriminator to contain an auxiliary decoder network¹ that outputs the class label for the training data (Odena, 2016; Salimans et al., 2016) or a subset of the latent variables from which the samples are generated (Chen et al., 2016). Forcing a model to perform additional tasks is known to improve performance on the original task (e.g. Sutskever et al. (2014); Szegedy et al. (2014); Ramsundar et al. (2016)). In addition, an auxiliary decoder could leverage pre-trained discriminators (e.g. image classifiers) to further improve the synthesized images (Nguyen et al., 2016). Motivated by these considerations, we introduce a model that combines both strategies for leveraging side information.

¹ Alternatively, one can force the discriminator to work with the joint distribution (X, z) and train a separate inference network that computes q(z | X) (Dumoulin et al., 2016; Donahue et al., 2016).
That is, the model proposed below is class conditional, but with an auxiliary decoder that is tasked with reconstructing class labels.

Figure 2: A comparison of several GAN architectures (Conditional GAN (Mirza & Osindero, 2014), Semi-Supervised GAN (Odena, 2016; Salimans et al., 2016), InfoGAN (Chen et al., 2016)) with the proposed AC-GAN architecture.

3 AC-GANS

We propose a variant of the GAN architecture which we call an auxiliary classifier GAN (or AC-GAN; see Figure 2). In the AC-GAN, every generated sample has a corresponding class label, c ~ p_c, in addition to the noise z. G uses both to generate images X_fake = G(c, z). The discriminator gives both a probability distribution over sources and a probability distribution over the class labels, P(S | X), P(C | X) = D(X). The objective function has two parts: the log-likelihood of the correct source, L_S, and the log-likelihood of the correct class, L_C:

\[ L_S = \mathbb{E}[\log P(S = \mathrm{real} \mid X_{\mathrm{real}})] + \mathbb{E}[\log P(S = \mathrm{fake} \mid X_{\mathrm{fake}})] \]
\[ L_C = \mathbb{E}[\log P(C = c \mid X_{\mathrm{real}})] + \mathbb{E}[\log P(C = c \mid X_{\mathrm{fake}})] \]

D is trained to maximize L_S + L_C while G is trained to maximize L_C − L_S. AC-GANs learn a representation for z that is independent of class label (e.g. Kingma et al. (2014)).

Early experiments demonstrated that increasing the number of classes trained on while holding the model fixed decreased the quality of the model outputs (Appendix B). The structure of the AC-GAN model permits separating large datasets into subsets by class and training a generator and discriminator for each subset. We exploit this property in our experiments to train across the entire ImageNet data set.
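A sketch of these objectives, assuming a discriminator that returns a real/fake probability and class logits; the binary and categorical cross-entropies implement the log-likelihood terms, with the common non-saturating form used as a surrogate for the generator's source term. Network definitions are omitted.

```python
import torch
import torch.nn.functional as F

def d_loss(D, x_real, y_real, x_fake, y_fake):
    # src_* are probabilities in (0, 1); cls_* are class logits.
    src_real, cls_real = D(x_real)
    src_fake, cls_fake = D(x_fake.detach())
    l_s = (F.binary_cross_entropy(src_real, torch.ones_like(src_real)) +
           F.binary_cross_entropy(src_fake, torch.zeros_like(src_fake)))
    l_c = F.cross_entropy(cls_real, y_real) + F.cross_entropy(cls_fake, y_fake)
    return l_s + l_c            # minimizing this maximizes L_S + L_C

def g_loss(D, x_fake, y_fake):
    src_fake, cls_fake = D(x_fake)
    l_s = F.binary_cross_entropy(src_fake, torch.ones_like(src_fake))
    l_c = F.cross_entropy(cls_fake, y_fake)
    return l_s + l_c            # non-saturating surrogate for maximizing L_C - L_S
```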
Figure 3: Generating high resolution images improves discriminability. Top: Training data and synthesized images from the zebra class resized to a lower spatial resolution (indicated above) and subsequently artificially resized to the original resolution. Inception accuracy is shown below the corresponding images (at 16×16, 32×32, 64×64, 128×128 and 256×256: real 0%, 0%, 42%, 76%, 76%; fake 0%, 7%, 62%, 94%, 94%). Bottom Left: Summary of accuracies across varying spatial resolutions for training data and image samples from 64×64 and 128×128 models. Error bar measures standard deviation across 10 subsets of images. Dashed lines highlight the accuracy at the output spatial resolution of the model. The training data (clipped) achieves accuracies of 24%, 54%, 81% and 81% at resolutions of 32, 64, 128, and 256 respectively. Bottom Right: Comparison of accuracy scores at 128×128 and 32×32 spatial resolutions (x and y axis, respectively). Each point represents an ImageNet class. 84.4% of the classes are below the line of equality. The green dot corresponds to the zebra class.

4.1 GENERATING HIGH RESOLUTION IMAGES IMPROVES DISCRIMINABILITY

Building a class-conditional image synthesis model necessitates measuring the extent to which synthesized images appear to belong to the intended class. In particular, we would like to know that a high resolution sample is not just a naive resizing of a low resolution sample. Consider a simple experiment: pretend there exists a model that synthesizes 32×32 images. One can trivially increase the resolution of synthesized images by performing bilinear interpolation. This would yield higher resolution images, but these images would just be blurry versions of the low resolution images that are not discriminable. Hence, the goal of an image synthesis model is not simply to produce high resolution images, but to produce high resolution images that are more discriminable than low resolution images.

To measure discriminability, we feed synthesized images to a pre-trained Inception network (Szegedy et al., 2015) and report the fraction of the samples for which the Inception network assigned the correct label². We calculate this accuracy measure on a series of real and synthesized images which have had their spatial resolution artificially decreased by bilinear interpolation (Figure 3, top panels). Note that as the spatial resolution is decreased, the accuracy decreases, indicating that the resulting images contain less class information (Figure 3, scores below top panels). We summarized this finding across all 1000 ImageNet classes for the ImageNet training data (black), a 128×128 resolution AC-GAN (red) and a 64×64 resolution AC-GAN (blue) in Figure 3 (bottom, left). The black curve (clipped) provides an upper bound on the discriminability of real images.

The goal of this analysis is to show that synthesizing higher resolution images leads to increased discriminability. The 128×128 model achieves an accuracy of 10.1% ± 2.0%, versus 7.0% ± 2.0% with samples resized to 64×64 and 5.0% ± 2.0% with samples resized to 32×32. In other words, downsizing the outputs of the AC-GAN to 32×32 and 64×64 decreases visual discriminability by 50% and 38% respectively. Furthermore, 84.4% of the ImageNet classes have higher accuracy at 128×128 than at 32×32 (Figure 3, bottom left).

We performed the same analysis on an AC-GAN trained to 64×64 spatial resolution. This model achieved less discriminability than the 128×128 AC-GAN model. Accuracies from the 64×64 model plateau at a 64×64 spatial resolution, consistent with previous results. Finally, the 64×64 resolution model achieves less discriminability at 64×64 spatial resolution than the 128×128 model.

²One could also use the Inception score (Salimans et al., 2016), but our method has several advantages: accuracy figures are easier to interpret than exponentiated KL-divergences; accuracy may be assessed for individual classes; accuracy measures whether a class-conditional model generated samples from the intended class. To compute the Inception accuracy, we modified a version of Inception-v3 supplied in https://github.com/openai/improved-gan/.

4.2 MEASURING THE DIVERSITY OF GENERATED IMAGES

An image synthesis model is not very interesting if it only outputs one image. Indeed, a well-known failure mode of GANs is that the generator will collapse and output a single prototype that maximally fools the discriminator (Goodfellow et al., 2014; Salimans et al., 2016). A class-conditional model of images is not very interesting if it only outputs one image per class. The Inception accuracy cannot measure whether a model has collapsed: a model that simply memorized one example from each ImageNet class would do very well by this metric. Thus, we seek a complementary metric to explicitly evaluate the intra-class diversity of samples generated by the AC-GAN.

Several methods exist for quantitatively evaluating image similarity by attempting to predict human perceptual similarity judgements. The most successful of these is multi-scale structural similarity (MS-SSIM) (Wang et al., 2004b; Ma et al., 2016). MS-SSIM is a multi-scale variant of a well-characterized perceptual similarity metric that attempts to discount aspects of an image that are not important for human perception (Wang et al., 2004a). MS-SSIM values range between 0.0 and 1.0; higher MS-SSIM values correspond to perceptually more similar images. As a proxy for image diversity, we measure the MS-SSIM scores between randomly chosen pairs of images within a given class. Samples from classes that have higher diversity result in lower mean MS-SSIM scores (Figure 4, left columns); samples from classes with lower diversity have higher mean MS-SSIM scores (Figure 4, right columns). Training images from the ImageNet training data contain a variety of mean MS-SSIM scores across the classes, indicating the variability of image diversity in ImageNet classes (Figure 5, left panel, x-axis). Note that the highest mean MS-SSIM score (indicating the least variability) is 0.25 for the training data.

We calculate the mean MS-SSIM score for all 1000 ImageNet classes generated by the AC-GAN model. We track this value during training to identify whether the generator has collapsed (Figure 5, right panel, red curve). We also employ this metric to compare the diversity of the training images to the samples from the GAN model after training has completed. Figure 5 (left) plots the mean MS-SSIM values for image samples and training data broken up by class. The blue line is the line of equality. Out of the 1000 classes, we find that 847 have mean sample MS-SSIM scores below that of the maximum MS-SSIM for the training data. In other words, 84.7% of classes have sample variability that exceeds that of the least variable class from the ImageNet training data.

Figure 4: Examples of different MS-SSIM scores for the classes hot dog, artichoke, promontory and green apple (row MS-SSIM values: 0.11, 0.29, 0.41, 0.90 and 0.05, 0.15, 0.08, 0.04). The top and bottom rows contain AC-GAN samples and training data, respectively.

Figure 5: (Left) Comparison of the mean MS-SSIM scores between pairs of images within a given class for ImageNet training data and samples from the GAN (blue line is equality). The horizontal red line marks the maximum MS-SSIM value across all ImageNet classes. Each point is an individual class. The mean standard deviation of scores across the training data and the samples was 0.06 and 0.08 respectively. Scores below the red line (84.7% of classes) arise from classes where GAN training largely succeeded. (Right) Intra-class MS-SSIM for selected ImageNet classes throughout a training run. Classes that successfully train tend to have decreasing mean MS-SSIM scores, to a point.
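The diversity proxy can be sketched as follows. Since MS-SSIM implementations vary, this example substitutes scikit-image's single-scale SSIM for MS-SSIM, so absolute scores will differ from the paper's; `images` is an assumed list of HxWx3 uint8 arrays from one class, and all names are illustrative.

```python
import random

import numpy as np
from skimage.metrics import structural_similarity

# Mean similarity over randomly chosen pairs of images from a single class,
# used as an intra-class diversity proxy (lower mean score = more diverse).

def mean_pairwise_ssim(images, num_pairs=100, seed=0):
    rng = random.Random(seed)
    scores = []
    for _ in range(num_pairs):
        a, b = rng.sample(range(len(images)), 2)
        scores.append(structural_similarity(
            images[a], images[b], channel_axis=-1))  # needs skimage >= 0.19
    return float(np.mean(scores))

# Example with random stand-in images:
imgs = [np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
        for _ in range(20)]
print(mean_pairwise_ssim(imgs))

# Heuristic from the paper: a class whose mean MS-SSIM stays above 0.25 (the
# highest mean observed on ImageNet training data) is a candidate for
# mode collapse.
```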
4.3 GENERATED IMAGES ARE BOTH DIVERSE AND DISCRIMINABLE

We have presented quantitative metrics demonstrating that AC-GAN samples may be diverse and discriminable, but we have yet to examine how these metrics interact. Figure 6 shows the joint distribution of Inception accuracies and MS-SSIM scores across all classes. Inception accuracy and MS-SSIM are anti-correlated ($r^2 = -0.16$). In fact, 74% of the classes with low diversity (MS-SSIM ≥ 0.25) contain Inception accuracies ≤ 1%. These results suggest that GANs that drop modes are most likely to produce low quality images. Conversely, 78% of classes with high diversity (MS-SSIM < 0.25) have Inception accuracies that exceed 1%. In comparison, the Inception-v3 model achieves 78.8% accuracy on average across all 1000 classes (Szegedy et al., 2015). For a fraction of the classes, AC-GAN samples reach this level of accuracy. This indicates an opportunity for future image synthesis models.

Figure 6: Inception accuracy vs MS-SSIM for all 1000 ImageNet classes ($r^2 = -0.16$). Samples from AC-GAN models do not achieve variability at the expense of discriminability.

4.4 COMPARISON TO PREVIOUS RESULTS

Previous quantitative results for image synthesis models trained on ImageNet are reported in terms of log-likelihood (van den Oord et al., 2016a;b). Log-likelihood is a coarse and potentially inaccurate measure of sample quality (Theis et al., 2015). Additionally, log-likelihood is intractable to compute for GANs. Instead we compare with previous state-of-the-art results on CIFAR-10 using a lower spatial resolution (32×32). Following the procedure in Salimans et al. (2016), we compute the Inception score³ for 50000 samples from an AC-GAN with resolution 32×32, split into 10 groups at random. We also compute the Inception score for 25000 extra samples, split into 5 groups at random. We select the best model based on the first score and report the second score. Performing a grid search across 27 hyperparameter configurations, we are able to achieve a score of 8.25 ± 0.07, compared to the state of the art of 8.09 ± 0.07 (Salimans et al., 2016). Moreover, we accomplish this without employing any of the new techniques introduced in that work (i.e. virtual batch normalization, minibatch discrimination, and label smoothing). This provides additional evidence that AC-GANs are effective even without the benefit of class splitting (Appendix B).

4.5 SEARCHING FOR SIGNATURES OF OVERFITTING

One possibility that must be investigated is that the AC-GAN has overfit on the training data. As a first check that the network does not memorize the training data, we identify the nearest neighbors of image samples in the training data, measured by L1 distance in pixel space (Figure 7). The nearest neighbors from the training data do not resemble the corresponding samples. This provides evidence that the AC-GAN is not merely memorizing the training data.
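The nearest-neighbor check in Section 4.5 reduces to an argmin over L1 distances in pixel space. A minimal numpy sketch, with `samples` and `train` as illustrative array names on a common scale:

```python
import numpy as np

# For each generated sample, find its nearest training image under L1
# distance in pixel space, as in the memorization check of Section 4.5.

def nearest_neighbors_l1(samples, train):
    s = samples.reshape(len(samples), -1)
    t = train.reshape(len(train), -1)
    nn_idx = np.empty(len(s), dtype=np.int64)
    for i in range(len(s)):
        d = np.abs(t - s[i]).sum(axis=1)  # L1 distance to every training image
        nn_idx[i] = int(np.argmin(d))
    return nn_idx

# Example with random stand-in data; inspect train[idx] next to samples,
# as in Figure 7.
samples = np.random.rand(4, 32, 32, 3)
train = np.random.rand(100, 32, 32, 3)
idx = nearest_neighbors_l1(samples, train)
```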
Figure 7: Nearest neighbor analysis. (Left) Samples from a single ImageNet class. (Right) Corresponding nearest neighbor (L1 distance) in training data for each sample.

A more sophisticated method for understanding the degree of overfitting in a model is to explore that model's latent space by interpolation. In an overfit model one might observe discrete transitions in the interpolated images and regions in latent space that do not correspond to meaningful images (Bengio et al., 2012; Radford et al., 2015; Dinh et al., 2016). Figure 8 (left) highlights interpolations in the latent space between several image samples. Notably, the generator learned that certain combinations of dimensions correspond to semantically meaningful features (e.g. size of the arch, length of a bird's beak) and there are no discrete transitions or 'holes' in the latent space. A second method for exploring the latent space of the AC-GAN is to exploit the structure of the model. The AC-GAN factorizes its representation into class information and a class-independent latent representation $z$. Sampling the AC-GAN with $z$ fixed but altering the class label corresponds to generating samples with the same 'style' across multiple classes (Kingma et al., 2014). Figure 8 (right) shows samples from 8 bird classes. Elements of the same row have the same $z$. Although the class changes for each column, elements of the global structure (e.g. position, layout, background) are preserved, indicating that the AC-GAN can represent certain types of 'compositionality'.

Figure 8: (Left) Latent space interpolations for selected ImageNet classes. The left-most and right-most columns show three pairs of image samples, each pair from a distinct class. Intermediate columns highlight linear interpolations in the latent space between these three pairs of images. (Right) Class-independent information contains global structure about the synthesized image. Each column is a distinct bird class while each row corresponds to a fixed latent code $z$.

³The Inception score is given by $\exp\left(\mathbb{E}_x\left[D_{KL}(p(y \mid x) \,\|\, p(y))\right]\right)$, where $x$ is a particular image, $p(y \mid x)$ is the conditional output distribution over the classes in a pre-trained Inception network (Szegedy et al., 2014) given $x$, and $p(y)$ is the marginal distribution over the classes.

5 DISCUSSION

This work introduced the AC-GAN architecture and demonstrated that AC-GANs can generate globally coherent ImageNet samples. We provided a new quantitative metric for image discriminability as a function of spatial resolution. Using this metric we demonstrated that our samples are more discriminable than those from a model that generates lower resolution images and performs a naive resize operation. We also analyzed the diversity of our samples with respect to the training data and provided some evidence that the image samples from the majority of classes are comparable in diversity to ImageNet training data. We hope that these metrics might provide quantitative measures of sample quality for evaluating and improving future image synthesis models.

Several directions exist for building upon this work. Much work needs to be done to improve the visual discriminability of the 128×128 resolution model. Although some synthesized image classes exhibit high Inception accuracies, the average Inception accuracy of the model (10.1% ± 2.0%) is still far below that of real training data at 81%.
One immediate opportunity for addressing this is toaugment the discriminator with a pre-trained model to perform additional supervised tasks (e.g.image segmentation, Ronneberger et al. (2015)). Such techniques might allow for the synthesis ofeven higher resolution images with global coherence and meaningful visual content.Improving the robustness and reliability of training a GAN is an ongoing research topic. Only 84.7%of the ImageNet classes avoided mode dropping and exhibited a diversity comparable to real trainingdata. Training stability was vastly aided by dividing up 1000 ImageNet classes across 100 AC-GANmodels. Building a single unified model that could generate diverse samples from all 1000 classeswould be an important step forward.Image synthesis models provide a unique opportunity for performing semi-supervised learning.Namely, these models build a rich prior over natural image statistics that can be leveraged by clas-sifiers to improve predictions on datasets for which few labels exist. The AC-GAN model canperform semi-supervised learning by simply ignoring the component of the loss arising from classlabels when a label is unavailable for a given training image. Interestingly, prior work suggeststhat achieving good sample quality might be independent of success in semi-supervised learning(Salimans et al., 2016).ACKNOWLEDGMENTSWe thank the developers of TensorFlow (Abadi et al., 2016). We thank Luke Metz and VincentDumoulin for extensive and helpful comments on drafts. We also thank Ben Poole, Sam Schoenholz,Barret Zoph, Mart ́ın Abadi, Manjunath Kudlur and Jascha Sohl-Dickstein for helpful discussions.8Under review as a conference paper at ICLR 2017REFERENCESAbadi et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. CoRR ,abs/1603.04467, 2016. URL http://arxiv.org/abs/1603.04467 .Johannes Ball ́e, Valero Laparra, and Eero P. Simoncelli. Density modeling of images using a gen-eralized normalization transformation. CoRR , abs/1511.06281, 2015. URL http://arxiv.org/abs/1511.06281 .Yoshua Bengio, Gr ́egoire Mesnil, Yann Dauphin, and Salah Rifai. Better mixing via deep represen-tations. CoRR , abs/1207.4404, 2012. URL http://arxiv.org/abs/1207.4404 .C. Blundell, B. Uria, A. Pritzel, Y . Li, A. Ruderman, J. Z Leibo, J. Rae, D. Wierstra, and D. Hassabis.Model-Free Episodic Control. ArXiv e-prints , June 2016.X. Chen, Y . Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. InfoGAN: InterpretableRepresentation Learning by Information Maximizing Generative Adversarial Nets. ArXiv e-prints , June 2016.Emily L. Denton, Soumith Chintala, Arthur Szlam, and Robert Fergus. Deep generative imagemodels using a laplacian pyramid of adversarial networks. CoRR , abs/1506.05751, 2015. URLhttp://arxiv.org/abs/1506.05751 .Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real NVP. CoRR ,abs/1605.08803, 2016. URL http://arxiv.org/abs/1605.08803 .J. Donahue, P. Kr ̈ahenb ̈uhl, and T. Darrell. Adversarial Feature Learning. ArXiv e-prints , May 2016.V . Dumoulin, I. Belghazi, B. Poole, A. Lamb, M. Arjovsky, O. Mastropietro, and A. Courville.Adversarially Learned Inference. ArXiv e-prints , June 2016.I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, andY . Bengio. Generative Adversarial Networks. ArXiv e-prints , June 2014.D. P Kingma and M. Welling. Auto-Encoding Variational Bayes. ArXiv e-prints , December 2013.Diederik P. Kingma, Danilo Jimenez Rezende, Shakir Mohamed, and Max Welling. 
Semi-supervised learning with deep generative models. CoRR , abs/1406.5298, 2014. URL http://arxiv.org/abs/1406.5298 .Diederik P. Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverseautoregressive flow. CoRR , abs/1606.04934, 2016. URL http://arxiv.org/abs/1606.04934 .C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi.Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. ArXive-prints , September 2016.Kede Ma, Qingbo Wu, Zhou Wang, Zhengfang Duanmu, Hongwei Yong, Hongliang Li, and LeiZhang. Group mad competition - a new methodology to compare objective image quality models.InThe IEEE Conference on Computer Vision and Pattern Recognition (CVPR) , June 2016.Andrew Maas, Awni Hannun, and Andrew Ng. Rectifier nonlinearities improve neural networkacoustic models. In Proceedings of The 33rd International Conference on Machine Learning ,2013.Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. CoRR , abs/1411.1784,2014. URL http://arxiv.org/abs/1411.1784 .Shakir Mohamed and Balaji Lakshminarayanan. Learning in implicit generative models. arXivpreprint arXiv:1610.03483 , 2016.Anh Mai Nguyen, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. Synthe-sizing the preferred inputs for neurons in neural networks via deep generator networks. CoRR ,abs/1605.09304, 2016. URL http://arxiv.org/abs/1605.09304 .9Under review as a conference paper at ICLR 2017A. Odena. Semi-Supervised Learning with Generative Adversarial Networks. ArXiv e-prints , June2016.Augustus Odena, Vincent Dumoulin, and Chris Olah. Deconvolution and checkerboard artifacts.http://distill.pub/2016/deconv-checkerboard/, 2016.Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deepconvolutional generative adversarial networks. CoRR , abs/1511.06434, 2015. URL http://arxiv.org/abs/1511.06434 .Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, and VijayPande. Massively multitask networks for drug discovery. In Proceedings of The 33rd Inter-national Conference on Machine Learning , 2016.Scott Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, and Honglak Lee. Learn-ing what and where to draw. arXiv preprint arXiv:1610.02454 , 2016a.Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee.Generative adversarial text-to-image synthesis. In Proceedings of The 33rd International Confer-ence on Machine Learning , 2016b.D. Rezende and S. Mohamed. Variational Inference with Normalizing Flows. ArXiv e-prints , May2015.D. Rezende, S. Mohamed, and D. Wierstra. Stochastic Backpropagation and Approximate Inferencein Deep Generative Models. ArXiv e-prints , January 2014.Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomed-ical image segmentation. CoRR , abs/1505.04597, 2015. URL http://arxiv.org/abs/1505.04597 .Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, ZhihengHuang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei.ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision(IJCV) , 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y.T. Salimans, I. Goodfellow, W. Zaremba, V . Cheung, A. Radford, and X. Chen. Improved Tech-niques for Training GANs. ArXiv e-prints , June 2016.Eero Simoncelli and Bruno Olshausen. Natural image statistics and neural representation. 
AnnualReview of Neuroscience , 24:1193–1216, 2001.J. T. Springenberg. Unsupervised and Semi-supervised Learning with Categorical Generative Ad-versarial Networks. ArXiv e-prints , November 2015.Ilya Sutskever, Oriol Vinyals, and Le Quoc V . Sequence to sequence learning with neural networks.InNeural Information Processing Systems , 2014.Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott E. Reed, Dragomir Anguelov,Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions.CoRR , abs/1409.4842, 2014. URL http://arxiv.org/abs/1409.4842 .Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Re-thinking the inception architecture for computer vision. CoRR , abs/1512.00567, 2015. URLhttp://arxiv.org/abs/1512.00567 .L. Theis, A. van den Oord, and M. Bethge. A note on the evaluation of generative models. ArXive-prints , November 2015.George Toderici, Damien Vincent, Nick Johnston, Sung Jin Hwang, David Minnen, Joel Shor, andMichele Covell. Full resolution image compression with recurrent neural networks. CoRR ,abs/1608.05148, 2016. URL http://arxiv.org/abs/1608.05148 .M. Uehara, I. Sato, M. Suzuki, K. Nakayama, and Y . Matsuo. Generative Adversarial Nets from aDensity Ratio Estimation Perspective. ArXiv e-prints , October 2016.10Under review as a conference paper at ICLR 2017A ̈aron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks.CoRR , abs/1601.06759, 2016a. URL http://arxiv.org/abs/1601.06759 .A ̈aron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and KorayKavukcuoglu. Conditional image generation with pixelcnn decoders. CoRR , abs/1606.05328,2016b. URL http://arxiv.org/abs/1606.05328 .Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment:from error visibility to structural similarity. IEEE transactions on image processing , 13(4):600–612, 2004a.Zhou Wang, Eero P Simoncelli, and Alan C Bovik. Multiscale structural similarity for image qualityassessment. In Signals, Systems and Computers, 2004. Conference Record of the Thirty-SeventhAsilomar Conference on , volume 2, pp. 1398–1402. Ieee, 2004b.11Under review as a conference paper at ICLR 2017A H YPERPARAMETERSOperation Kernel Strides Feature maps BN? Dropout NonlinearityGx(z)–11011inputLinear N/A N/A 768 0.0 ReLUTransposed Convolution 55 22 384p0.0 ReLUTransposed Convolution 55 22 256p0.0 ReLUTransposed Convolution 55 22 192p0.0 ReLUTransposed Convolution 55 22 3 0.0 TanhD(x)–12833inputConvolution 33 22 16 0.5 Leaky ReLUConvolution 33 11 32p0.5 Leaky ReLUConvolution 33 22 64p0.5 Leaky ReLUConvolution 33 11 128p0.5 Leaky ReLUConvolution 33 22 256p0.5 Leaky ReLUConvolution 33 11 512p0.5 Leaky ReLULinear N/A N/A 11 0.0 Soft-SigmoidOptimizer Adam ( = 0:0002 ,1= 0:5,2= 103)Batch size 100Iterations 50000Leaky ReLU slope 0.2Weight, bias initialization Isotropic gaussian ( = 0,= 0:02), Constant( 0)Table 1: Model hyperparameters. A Soft-Sigmoid refers to an operation over K+ 1output units where weapply a Softmax activation to Kof the units and a Sigmoid activation to the remaining unit. We also useactivation noise in the discriminator as suggested in Salimans et al. (2016).12Under review as a conference paper at ICLR 2017B M EASURING THE EFFECT OF CLASS SPLITS ON IMAGE SAMPLE QUALITY .Class conditional image synthesis affords the opportunity to divide up a dataset based on image label.In our final model we divide 1000 ImageNet classes across 100 AC-GAN models. 
In this section we describe early experiments that highlight the benefit of cutting down the diversity of classes for training an AC-GAN. We employed an ordering of the labels and divided it into contiguous groups of 10. This ordering can be seen in the following section, where we display samples from all 1000 classes. Two aspects of the split merit discussion: the number of classes per split and the intra-split diversity.

We find that training a fixed model on more classes harms the model's ability to produce compelling samples (Figure 9). Performance on larger splits can be improved by giving the model more parameters. However, using a small split is not sufficient to achieve good performance. We were unable to train a GAN (Goodfellow et al., 2014) to converge reliably even for a split size of 1.

Figure 9: Mean pairwise MS-SSIM values for 10 ImageNet classes plotted against the number of ImageNet classes used during training. We fix everything except the number of classes trained on, using values from 10 to 100. We only report the MS-SSIM values for the first 10 classes to keep the scores comparable. MS-SSIM quickly goes above 0.25 (the red line) as the class count increases. These scores were computed using 9 random restarts per class count, using the same number of training steps for each model.

This raises the question of whether it is easier to train a model on a diverse set of classes than on a similar set of classes. We were unable to find conclusive evidence that the selection of classes in a split significantly affects sample quality.

C SAMPLES FROM ALL 1000 IMAGENET CLASSES

The following is a link to 10 samples from each of the 1000 ImageNet classes:
https://goo.gl/photos/8bgHBkCwDEVTXAPaA
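Returning to the split scheme of Appendix B, the mechanics are simple: a fixed ordering of the 1000 labels is divided into contiguous groups of 10, with one AC-GAN trained per group. A self-contained sketch, where `ordered_labels` is an illustrative stand-in for that fixed ordering:

```python
# Divide a fixed label ordering into contiguous groups, one AC-GAN per group.

def contiguous_splits(ordered_labels, split_size=10):
    return [ordered_labels[i:i + split_size]
            for i in range(0, len(ordered_labels), split_size)]

splits = contiguous_splits(list(range(1000)))   # 100 splits of 10 classes
assert len(splits) == 100 and len(splits[0]) == 10
# Each split then gets its own generator/discriminator pair, trained for the
# same number of steps (50000 mini-batches of size 100 in Section 4).
```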
Under review as a conference paper at ICLR 2017REFERENCE -AWARE LANGUAGE MODELSZichao Yang1, Phil Blunsom2;3, Chris Dyer1;2, and Wang Ling21Carnegie Mellon University,2DeepMind, and3University of Oxfordzichaoy@cs.cmu.edu, fpblunsom,cdyer,lingwang g@google.comABSTRACTWe propose a general class of language models that treat reference as an explicitstochastic latent variable. This architecture allows models to create mentions ofentities and their attributes by accessing external databases (required by, e.g., di-alogue generation and recipe generation) and internal state (required by, e.g. lan-guage models which are aware of coreference). This facilitates the incorporationof information that can be accessed in predictable locations in databases or dis-course context, even when the targets of the reference may be rare words. Ex-periments on three tasks show our model variants outperform models based ondeterministic attention.1 I NTRODUCTIONReferring expressions (REs) in natural language are noun phrases (proper nouns, common nouns,and pronouns) that identify objects, entities, and events in an environment. REs occur frequentlyand they play a key role in communicating information efficiently. While REs are common, previ-ous works neglect to model REs explicitly, either treating REs as ordinary words in the model orreplacing them with special tokens. Here we propose a language modeling framework that explicitlyincorporates reference decisions.In Figure 1we list examples of REs in the context of the three tasks that we consider in this work.Firstly, reference to a database is crucial in many applications. One example is in task orienteddialogue where access to a database is necessary to answer a user’s query ( Young et al. ,2013 ;Liet al. ,2016 ;Vinyals & Le ,2015 ;Wen et al. ,2015 ;Sordoni et al. ,2015 ;Serban et al. ,2016 ;Bordes& Weston ,2016 ;Williams & Zweig ,2016 ;Shang et al. ,2015 ;Wen et al. ,2016 ). Here we considerthe domain of restaurant recommendation where a system refers to restaurants (name) and theirattributes (address, phone number etc) in its responses. When the system says “ the nirala is anice restaurant”, it refers to the restaurant name the nirala from the database. Secondly, manymodels need to refer to a list of items ( Kiddon et al. ,2016 ;Wen et al. ,2015 ). In the task of recipegeneration from a list of ingredients ( Kiddon et al. ,2016 ), the generation of the recipe will frequentlyreference these items. As shown in Figure 1, in the recipe “Blend soy milk and . . . ”, soy milkrefers to the ingredient summaries. Finally, we address references within a document ( Mikolov et al. ,2010 ;Ji et al. ,2015 ;Wang & Cho ,2015 ), as the generation of words will ofter refer to previouslygenerated words. For instance the same entity will often be referred to throughout a document. InFigure 1, the entity you refers to Iin a previous utterance.In this work we develop a language model that has a specific module for generating REs. A series oflatent decisions (should I generate a RE? If yes, which entity in the context should I refer to? Howshould the RE be rendered?) augment a traditional recurrent neural network language model andthe two components are combined as a mixture model. Selecting an entity in context is similar tofamiliar models of attention ( Bahdanau et al. 
,2014 ), but rather than being a deterministic functionthat reweights representations of elements in the context, it is treated as a distribution over contextualelements which are stochastically selected and then copied or, if the task warrants it, transformed(e.g., a pronoun rather than a proper name is produced as output). Two variants are possible forupdating the RNN state: one that only looks at the generated output form; and a second that looksat values of the latent variables. The former admits trivial unsupervised learning, latent decisionsare conditionally independent of each other given observed context, whereas the latter enables moreWork completed at DeepMind.1Under review as a conference paper at ICLR 2017referenceexampledialoguerecipecoreferenceM: the nirala is a nice restuarantthe niralamoderate...1 cpu plain soy milk...tableingredientsBlend soy milk and ...[I]1 [Linda]2 [you]1...um and [I]1 think ... [you]1 ...corefFigure 1: Reference-aware language models.expressive models that can extract information from the entity that is being referred to. In each ofthe three tasks, we demonstrate our reference aware model’s efficacy in evaluations against modelsthat do not explicitly include a reference operation.Our contributions are as follows:We propose a general framework to model reference in language and instantiate it in thecontext of dialogue modeling, recipe generation and coreference based language models.We build three data sets to test our models. There lack existing data sets that satisfy ourneed, so we build these data sets ourselves. These data sets are either built on top existingdata set (we constructed the table for DSTC2 data set for dialogue evaluation), crawledfrom websites (we crawled all recipes in www.allrecipes.com ) or annotated withNLP tools (we annotate the coreference with Gigaword corpus for our evaluation).We perform comprehensive evaluation of our models on the three data sets and verify ourmodels perform better than strong baselines.2 R EFERENCE -AWARE LANGUAGE MODELSHere we propose a general framework for reference-aware language models.We denote each document as a series of tokens x1; : : : ; x L, where Lis the number of tokens in thedocument. Our goal is to maximize the probabilities p(xijci), for each word in the document basedon its previous context ci=x1; : : : ; x i1. In contrast to traditional neural language models, weintroduce a variable at each position zi, which controls the decision on which source xiis generatedfrom. The token conditional probably is then obtained by:p(xijci) =p(xijzi; ci)p(zijci): (1)In dialogue modeling and recipe generation, ziwill simply taken on values in f0;1g. Where zi= 1denotes that xiis generated as a reference, either to a database entry or an item in a list. However,zican also be defined as a distribution over previous entities, allowing the model to predict xiconditioned on its a previous mention word. This will be the focus of the coreference languagemodel. When ziis not observed (which it generally will not be), we will train our model to maximizethe marginal probability in Eq. 1directly.2.1 D IALOGUE MODEL WITH DATABASE SUPPORTWe first apply our model on task-oriented dialogue systems in the domain of restaurant recommenda-tions, and work on the data set from the second Dialogue State Tracking Challenge (DSTC2) ( Hen-derson et al. ,2014 ). Table. 
1is one example dialogue from this dataset.We can observe from this example, users get recommendations of restaurants based on queriesthat specify the area, price and food type of the restaurant. We can support the system’s decisionsby incorporating a mechanism that allows the model to query the database allowing the model tofind restaurants that satisfy the users queries. Thus, we crawled TripAdvisor for restaurants in the2Under review as a conference paper at ICLR 2017M: Hello , welcome to the Cambridge restaurant system? You can ask for restaurants byarea ,price range orfood type . How may I help you?U: i want a moderately priced restaurant in the north townM: the nirala is a nice restaurant in the north of town in the moderate price rangeU: what is the addressM: Sure , the nirala is on7 Milton Road ChestertonTable 1: Example dialogue, M stands for Machine and U stands for UserNAME PRICE RANGE FOOD AREA ADDRESS POST CODE PHONEali baba moderate lebanese centre 59 Hills Road City Cen-treCB 2, 1 NT 01462 432565theniralamoderate indian north 7 Milton Road Chester-tonCB 4, 1 UY 01223 360966Table 2: Fragment of database for dialogue system.Cambridge area, where the dialog dataset was collected. Then, we remove restaurants that do notappear in the data set and create a database with 109 entries with restaurants and their attributes (e.g.food type). A sample of our database is shown in Table. 2. We can observe that each restaurantcontains 6 attributes that are generally referred in the dialogue dataset. As such, if the user requestsa restaurant that serves “indian” food, we wish to train a model that can search for entries whose“food” column contains “indian”. Now, we describe how we deploy a model that fulfills theserequirements.2.1.1 D IALOGUE MODELMUMUsentence encoderturn encoderdecoderattnFigure 2: Hierarchical RNN Seq2Seq modelWe build a model based on the hierarchical RNN model described in ( Serban et al. ,2016 ), as indialogues, the generation of the response is not only dependent on the previous sentence, but on allsentences leading to the response. We assume that a dialogue is alternated between a machine and auser. An illustration of the model is shown in Figure 2.Consider a dialogue with Tturns, and the utterance from a user is denoted as X=fxigTi=1, whereiis the i-th utterance, whereas the utterance from a machine is denoted as Y=fyigTi=1, where iis the i-th utterance. We define xi=fxijgjxijj=1,yi=fyivgjyijv=1, where xijdenotes the j-th tokenin the i-th utterance from the user, whereas yivdenotes the v-th token in the i-th utterance fromthe machine. Finally, jxijandjyijdenote the number of tokens in the user and machine utterances,respectively. The dialogue sequence starts with machine utterance fy1; x1; y2; x2; : : : ; y T; xTg. Wewould like to model the utterances from the machinep(y1; y2; : : : ; y Tjx1; x2; : : : ; x T) =∏ip(yijy<i; x<i) =∏i;vp(yi;vjyi;<v; y<i; x<i);where y<idenotes all the utterances before iandyi;<v denotes the first v1tokens in the i-thutterance of the machine. A neural model is employed to predict p(yi;vjyi;<v; y<i; x<i), whichoperates as follows:Sentence Encoder : We first encode previous utterances y<iandx<iinto continuous space by gen-erating employing a LSTM encoder. 
Thus, for a given utterance $x_i$, we start with the initial LSTM state $h^x_{i,0}$ and apply the recursion $h^x_{i,j} = \mathrm{LSTM}_E(W_E x_{i,j}, h^x_{i,j-1})$, where $W_E x_{i,j}$ denotes a word embedding lookup for the token $x_{i,j}$, and $\mathrm{LSTM}_E$ denotes the LSTM transition function described in Hochreiter & Schmidhuber (1997). The representation of the user utterance is given by the final LSTM state $h^x_i = h^x_{i,|x_i|}$. The same process is applied to obtain the machine utterance representation $h^y_i = h^y_{i,|y_i|}$.

Turn Encoder: We then combine the representations of all the utterances with a second LSTM, which encodes the sequence $\{h^y_1, h^x_1, \dots, h^y_i, h^x_i\}$ into a continuous vector. Once again, we start with an initial state $u_0$ and feed each utterance representation to obtain the following LSTM state, until the final state is obtained. For simplicity, we shall refer to this final state as $u_i$, which can be seen as the hierarchical encoding of the previous $i$ utterances.

Seq2Seq Decoder: As for decoding, in order to generate each utterance $y_i$, we feed $u_{i-1}$ into the decoder LSTM as the initial state $s_{i,0} = u_{i-1}$ and decode each token in $y_i$. Thus, we can express the decoder as:

$s^y_{i,v} = \mathrm{LSTM}_D(W_E y_{i,v-1}, s_{i,v-1}),$
$p^y_{i,v} = \mathrm{softmax}(W s^y_{i,v}),$

where the desired probability $p(y_{i,v} \mid y_{i,<v}, y_{<i}, x_{<i})$ is expressed by $p^y_{i,v}$.

Attention based decoder: We can also incorporate the attention mechanism in our hierarchical model. An attention model builds a representation $d$ by averaging over a set of vectors $p$. We define the attention function as $a = \mathrm{ATTN}(p, q)$, where $a$ is a probability distribution over the set of vectors $p$, conditioned on an input representation $q$. A full description of this operation is given in Bahdanau et al. (2014). For each generated token $y_{i,v}$, we compute the attention $a_{i,v}$ conditioned on the current decoder state $s^y_{i,v}$, obtaining the attention over the input tokens from the previous turn ($i-1$). We denote the vector of all tokens in the previous turn as $h^{x,y}_{i-1} = [\{h^x_{i-1,j}\}_{j=1}^{|x_{i-1}|}, \{h^y_{i-1,v}\}_{v=1}^{|y_{i-1}|}]$. Let $K = |h^{x,y}_{i-1}|$ be the number of tokens in the previous turn. We obtain the attention probabilities over all previous tokens as $a_{i,v} = \mathrm{ATTN}(s^y_{i,v}, h^{x,y}_{i-1})$. Then, the weighted sum is computed over these probabilities, $d_{i,v} = \sum_{k \in K} a_{i,v,k} h^{x,y}_{i-1,k}$, where $a_{i,v,k}$ is the probability of aligning to the $k$-th token from the previous turn. The resulting vector $d_{i,v}$ is used to obtain the probability of the following word $p^y_{i,v}$. Thus, we express the decoder as:

$s^y_{i,v} = \mathrm{LSTM}_D([W_E y_{i,v-1}, d_{i,v-1}], s_{i,v-1}),$
$a_{i,v} = \mathrm{ATTN}(h^{x,y}_{i-1}, s^y_{i,v}),$
$d_{i,v} = \sum_{k \in K} a_{i,v,k} h^{x,y}_{i-1,k},$
$p^y_{i,v} = \mathrm{softmax}(W[s^y_{i,v}, d_{i,v}]).$

2.1.2 INCORPORATING TABLE ATTENTION

Figure 3: Table based decoder. (a) Decoder with table attention (Step 1: attribute attn; Step 2: weighted column; Step 3: row attn). (b) Decoder with table pointer (Step 1: attribute attn; Step 2: weighted column; Step 3: row attn; Step 4: weighted row; Step 5: column attn), with a switch between $p^{vocab}$ and $p^{copy}$.

We now extend the attention model in order to allow the attention to be computed over a table, allowing the model to condition the generation on a database.

We denote a table with $R$ rows and $C$ columns as $\{f_{r,c}\}$, $r \in [1, R]$, $c \in [1, C]$, where $f_{r,c}$ is the cell in row $r$ and column $c$. The attribute of each column is denoted as $s_c$, where $c$ is the $c$-th attribute. $f_{r,c}$ and $s_c$ are one-hot vectors.

Table Encoding: To encode the table, we build an attribute vector $g_c$ for each column.
For each cell $f_{r,c}$ of the table, we concatenate it with the corresponding attribute $g_c$ and then feed it through a one-layer MLP as follows: $g_c = W_E s_c$ and then $e_{r,c} = \tanh(W[W_E f_{r,c}, g_c])$.

Table Attention: The diagram for table attention is shown in Figure 3a. The attention over cells in the table is conditioned on a given vector $q$, similarly to the attention model for sequences $\mathrm{ATTN}(p, q)$. However, rather than a sequence $p$, we now operate over a table $f$. Our attention model computes an attribute attention followed by a row attention over the table. We first use the attention mechanism on the attributes to find out which attribute the user asks about. Suppose a user says cheap; then we should focus on the price attribute. After we get the attention probability $p^a = \mathrm{ATTN}(\{g_c\}, q)$ over the attributes, we calculate the weighted representation for each row, $e_r = \sum_c p^a_c e_{r,c}$, conditioned on $p^a$. Then $e_r$ carries the price information of each row. We further use the attention mechanism on $e_r$ and get the probability $p^r = \mathrm{ATTN}(\{e_r\}, q)$ over the rows. Then restaurants with a cheap price will be picked. Finally, using the probabilities $p^r$, we compute the weighted average over all rows, $e^c = \sum_r p^r_r e_{r,c}$, which is used in the decoder. The detailed process is:

$p^a = \mathrm{ATTN}(\{g_c\}, q),$ (2)
$e_r = \sum_c p^a_c e_{r,c} \quad \forall r,$ (3)
$p^r = \mathrm{ATTN}(\{e_r\}, q),$ (4)
$e^c = \sum_r p^r_r e_{r,c} \quad \forall c.$ (5)

This is embedded in the decoder by replacing the conditioning state $q$ with the current decoder state $s^y_{i,0}$ and then, at each step, conditioning the prediction of $y_{i,v}$ on $\{e^c\}$ using the attention mechanism. The detailed diagram of table attention is shown in Figure 3a.

2.1.3 INCORPORATING TABLE POINTER NETWORKS

We now describe the mechanism used to refer to specific database entries during decoding. At each timestep, the model needs to decide whether to generate the next token from an entry of the database or from the word softmax. This is performed as follows.

Pointer Switch: We use $z_{i,v} \in \{0, 1\}$ to denote the decision of whether to copy one cell from the table. We compute this probability as follows:

$p(z_{i,v} \mid s_{i,v}) = \mathrm{sigmoid}(W[s_{i,v}, d_{i,v}]).$

Thus, if $z_{i,v} = 1$, the next token $y_{i,v}$ will be generated from the database, whereas if $z_{i,v} = 0$, the following token is generated from a softmax. We shall now describe how we generate tokens from the database.

Table Pointer: If $z_{i,v} = 1$, the token is generated from the table. The detailed process of calculating the probability distribution over the table is shown in Figure 3b. This is similar to the attention mechanism, except that we perform a column attention to compute the probabilities of copying from each column after Equation 5. More formally:

$p^c = \mathrm{ATTN}(\{e^c\}, q),$ (6)
$p^{copy} = p^r \otimes p^c,$ (7)

where $p^c$ is a probability distribution over columns, whereas $p^r$ is a probability distribution over rows. In order to compute a matrix with the probability of copying each cell, we simply compute the outer product $p^{copy} = p^r \otimes p^c$.

Objective: As we treat $z_i$ as a latent variable, we wish to maximize the marginal probability of the sequence $y_i$ over all possible values of $z_i$. Thus, our objective function is defined as:

$p(y_{i,v} \mid s_{i,v}) = p^{vocab} \, p(0 \mid s_{i,v}) + p^{copy} \, p(1 \mid s_{i,v}) = p^{vocab} (1 - p(1 \mid s_{i,v})) + p^{copy} \, p(1 \mid s_{i,v}).$ (8)

The model can also be trained in a fully supervised fashion if $z_{i,v}$ is observed; in such cases, we simply maximize the likelihood of $p(z_{i,v} \mid s_{i,v})$ based on the observations, rather than using the marginal probability over $z_{i,v}$.

2.2 RECIPE GENERATION

Table 3: Ingredients and recipe for Spinach and Banana Power Smoothie.

ingredients | recipe
1 cup plain soy milk | Blend soy milk and spinach leaves together in a blender until smooth. Add banana and pulse until thoroughly blended.
3/4 cup packed fresh spinach leaves |
1 large banana, sliced |

Next, we consider the task of recipe generation conditioned on an ingredient list. In this task, we must generate the recipe from a list of ingredients. Table 3 illustrates the ingredient list and recipe for Spinach and Banana Power Smoothie. We can see that the ingredients soy milk, spinach leaves, and banana occur in the recipe.

Figure 4: Recipe pointer. The decoder switches between $p^{vocab}$ and $p^{copy}$ over the encoded ingredient tokens.

Let the ingredients of a recipe be $X = \{x_i\}_{i=1}^T$, where each ingredient contains $L$ tokens, $x_i = \{x_{ij}\}_{j=1}^L$. The corresponding recipe is $y = \{y_v\}_{v=1}^K$. We first use an LSTM to encode each ingredient:

$h_{i,j} = \mathrm{LSTM}_E(W_E x_{ij}, h_{i,j-1}) \quad \forall i.$

Then, we sum the resulting states of the ingredients to obtain the starting LSTM state of the decoder. Once again we use an attention based decoder:

$s_v = \mathrm{LSTM}_D(s_{v-1}, d_{v-1}, W_E y_{v-1}),$
$p^{copy}_v = \mathrm{ATTN}(\{\{h_{i,j}\}_{i=1}^T\}_{j=1}^L, s_v),$
$d_v = \sum_{ij} p_{v,i,j} h_{i,j},$
$p(z_v \mid s_v) = \mathrm{sigmoid}(W[s_v, d_v]),$
$p^{vocab}_v = \mathrm{softmax}(W[s_v, d_v]).$

Similar to the previous task, the decision to copy from the ingredient list or generate a new word from the softmax is performed using a switch, denoted as $p(z_v \mid s_v)$. We obtain a probability distribution for copying each of the words in the ingredients by computing $p^{copy}_v = \mathrm{ATTN}(\{\{h_{i,j}\}_{i=1}^T\}_{j=1}^L, s_v)$ in the attention mechanism. For training, we optimize the marginal likelihood function employed in the previous task.
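A sketch of one decoder step with the copy switch just described (PyTorch). Module and variable names are illustrative, and dot-product attention stands in for the Bahdanau-style ATTN used in the paper; `h_ing` is assumed to hold the LSTM encodings of all ingredient tokens.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# One decoder step: compute the copy distribution over ingredient tokens,
# the vocabulary softmax, and the switch p(z_v | s_v, d_v). Training would
# maximize the marginal p_vocab * (1 - p_z) + p_copy * p_z, as in Eq. 8.

class RecipeDecoderStep(nn.Module):
    def __init__(self, dim, vocab_size):
        super().__init__()
        self.cell = nn.LSTMCell(2 * dim, dim)   # input: [word emb, context]
        self.switch = nn.Linear(2 * dim, 1)     # p(z_v | s_v, d_v)
        self.vocab = nn.Linear(2 * dim, vocab_size)

    def forward(self, y_prev_emb, d_prev, state, h_ing):
        # y_prev_emb, d_prev: (1, dim); h_ing: (num_tokens, dim)
        s, c = self.cell(torch.cat([y_prev_emb, d_prev], -1), state)
        # Copy distribution over ingredient tokens (dot-product scoring here).
        p_copy = F.softmax(h_ing @ s.squeeze(0), dim=0)
        d = (p_copy.unsqueeze(1) * h_ing).sum(0, keepdim=True)
        sd = torch.cat([s, d], -1)
        p_z = torch.sigmoid(self.switch(sd))         # copy vs. generate
        p_vocab = F.softmax(self.vocab(sd), dim=-1)  # softmax over words
        return p_vocab, p_copy, p_z, (s, c), d

# step = RecipeDecoderStep(dim=128, vocab_size=10000)  # example sizes
```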
2.3 COREFERENCE BASED LANGUAGE MODEL

Finally, we build a language model that uses coreference links to point to previous words. Before generating a word, we first make the decision on whether it is an entity mention. If so, we decide which entity this mention belongs to, and then we generate the word based on that entity. Denote the document as $X = \{x_i\}_{i=1}^L$ and the entities as $E = \{e_i\}_{i=1}^N$; each entity has $M_i$ mentions, $e_i = \{m_{ij}\}_{j=1}^{M_i}$, such that $\{x_{m_{ij}}\}_{j=1}^{M_i}$ refer to the same entity. We use an LSTM to model the document; the hidden state of each token is $h_i = \mathrm{LSTM}(W_E x_i, h_{i-1})$. We use a set $h^e = \{h^e_0, h^e_1, \dots, h^e_M\}$ to keep track of the entity states, where $h^e_j$ is the state of entity $j$.

Figure 5: Coreference based language model; example taken from Wiseman et al. (2016): "um and [I]_1 think that is whats - Go ahead [Linda]_2. Well and thanks goes to [you]_1 and to [the media]_3 to help [us]_4 ... So [our]_4 hat is off to all of [you]_5 ...". The diagram shows the entity state update process (push state for a new entity, update state for an existing one) and the attention over entity states.

Word generation: At each time step, before generating the next word, we predict whether the word is an entity mention:

$p_{coref}(v_i \mid h_{i-1}, h^e) = \mathrm{ATTN}(h^e, h_{i-1}),$
$d_i = \sum_{v_i} p(v_i) h^e_{v_i},$
$p(z_i \mid h_{i-1}) = \mathrm{sigmoid}(W[d_i, h_{i-1}]),$

where $z_i$ denotes whether the next word is an entity and, if yes, $v_i$ denotes which entity the next word corefers to. If the next word is an entity mention, then $p(x_i \mid v_i, h_{i-1}, h^e) = \mathrm{softmax}(W_1 \tanh(W_2 [h^e_{v_i}, h_{i-1}]))$; otherwise $p(x_i \mid h_{i-1}) = \mathrm{softmax}(W_1 h_{i-1})$. Altogether,

$p(x_i \mid x_{<i}) = \begin{cases} p(x_i \mid h_{i-1}) \, p(z_i \mid h_{i-1}, h^e) & \text{if } z_i = 0, \\ p(x_i \mid v_i, h_{i-1}, h^e) \, p_{coref}(v_i \mid h_{i-1}, h^e) \, p(z_i \mid h_{i-1}, h^e) & \text{if } z_i = 1. \end{cases}$ (9)

Entity state update: We update the entity states $h^e$ at each time step. In the beginning, $h^e = \{h^e_0\}$, where $h^e_0$ denotes the state of a virtual empty entity and is a learnable variable. If $z_i = 1$ and $v_i = 0$, the next word is a new entity mention, so in the next step we append $h_i$ to $h^e$, i.e., $h^e = \{h^e, h_i\}$; if $v_i > 0$, we update the corresponding entity state with the new hidden state, $h^e[v_i] = h_i$. Another way to update the entity state would be to use an LSTM to encode the mention states and produce the new entity state; here we use the latest entity mention state as the new entity state for simplicity. The detailed update process is shown in Figure 5.

3 EXPERIMENTS

4 DATA SETS AND PREPROCESSING

Dialogue: We use the DSTC2 data set, from which we only extract the dialogue transcripts. There are about 3,200 dialogues in total. Since this is a small data set, we use 5-fold cross validation and report the average result over the 5 partitions. There may be multiple tokens in each table cell; for example, in Table 2 the name, address, post code and phone number have multiple tokens, and we replace them with one special token: for the name, address, post code and phone number of the $j$-th row, we replace the tokens in each cell with NAME_j, ADDR_j, POSTCODE_j, PHONE_j. If a table cell is empty, we replace it with an empty token EMPTY. We do a string match in the transcript and replace the corresponding tokens from the table with the special tokens. Each dialogue on average has 8 turns (16 sentences). We use a vocabulary size of 900, including about 400 table tokens and 500 words.

Recipes: We crawl all recipes from www.allrecipes.com. There are about 31,000 recipes in total, and every recipe has an ingredient list and a corresponding recipe. We exclude recipes that have fewer than 10 tokens or more than 500 tokens; those recipes make up about 0.1% of the data set. On average each recipe has 118 tokens and 9 ingredients. We randomly shuffle the whole data set and take 80% for training and 10% each for validation and test. We use a vocabulary size of 10,000 in the model.

Coref LM: We use the Xinhua News data set from Gigaword Fifth Edition and sample 100,000 documents from it with lengths in the range from 100 to 500. Each document has on average 234 tokens, so there are 23 million tokens in total. We use a tool to annotate all the entity mentions and use the annotations in training. We take 80% for training and 10% each for validation and test. We ignore entities that have only one mention, and for mentions that have multiple tokens, we take the token that is most frequent across all the mentions of that entity. After preprocessing, tokens that are entity mentions make up about 10% of all tokens. We use a vocabulary size of 50,000 in the model.

4.1 MODEL TRAINING AND EVALUATION

We train all models with simple stochastic gradient descent with gradient clipping. We use a one-layer LSTM for all RNN components. Hyper-parameters are selected using grid search based on the validation set. We use dropout after the input embedding and the LSTM output. The learning rate is selected from [0.1, 0.2, 0.5, 1], the maximum gradient norm is selected from [1, 2, 5, 10] and the dropout ratio is selected from [0.2, 0.3, 0.5]. The batch size and LSTM dimension are slightly different for different tasks so as to make the models fit into memory. The number of epochs to train is different for each task, and we drop the learning rate after reaching a given number of epochs. We report the per-word perplexity for all tasks; specifically, we report the perplexity of all words, of words that can be generated from a reference, and of non-reference words.
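Given each token's marginal probability under the mixture of Eq. 8 or Eq. 9, the evaluation in Section 4.1 is a per-word perplexity, optionally restricted to reference or non-reference tokens. A small numpy sketch under those assumptions; `log_probs` and `is_ref` are illustrative inputs, not names from the paper.

```python
import numpy as np

# Per-word perplexity: exp of the mean negative log-probability, computed
# over all tokens and over reference / non-reference subsets.

def perplexity(log_probs, mask=None):
    lp = np.asarray(log_probs)
    if mask is not None:
        lp = lp[np.asarray(mask)]
    return float(np.exp(-lp.mean()))

# Example: natural logs of each token's marginal probability, and a flag
# marking tokens that can be generated from a reference (e.g. table tokens).
log_probs = np.log([0.2, 0.05, 0.5, 0.1])
is_ref = np.array([False, True, False, True])

ppl_all = perplexity(log_probs)                         # "all" column
ppl_ref = perplexity(log_probs, is_ref)                 # reference tokens
ppl_word = perplexity(log_probs, ~is_ref)               # non-reference tokens
```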
For recipe generation, we also generate the recipe using a beam size of 10 and evaluate the generated recipes with BLEU.

Table 4: Dialogue perplexity results ("all" means all tokens, "table" means tokens from the table, "table oov" denotes table tokens that do not appear in the training set, "word" means non-table tokens). "+ sentence attn" denotes that we use the attention mechanism over tokens from the past turn. Table pointer and table latent differ in that for table pointer we provide a supervised signal on when to generate a table token, while in table latent it is a latent decision.

model | all | table | table oov | word
seq2seq | 1.35±0.01 | 4.98±0.38 | 1.99E7±7.75E6 | 1.23±0.01
table attn | 1.37±0.01 | 5.09±0.64 | 7.91E7±1.39E8 | 1.24±0.01
table pointer | 1.33±0.01 | 3.99±0.36 | 1360±2600 | 1.23±0.01
table latent | 1.36±0.01 | 4.99±0.20 | 3.78E7±6.08E7 | 1.24±0.01
+ sentence attn: | | | |
seq2seq | 1.28±0.01 | 3.31±0.21 | 2.83E9±4.69E9 | 1.19±0.01
table attn | 1.28±0.01 | 3.17±0.21 | 1.67E7±9.5E6 | 1.20±0.01
table pointer | 1.27±0.01 | 2.99±0.19 | 82.86±110 | 1.20±0.01
table latent | 1.28±0.01 | 3.26±0.25 | 1.27E7±1.41E7 | 1.20±0.01

Table 5: Recipe results, evaluated in perplexity and BLEU score ("ing" denotes tokens from the recipe that appear in the ingredients).

model | val ppl all | val ppl ing | val ppl word | val BLEU | test ppl all | test ppl ing | test ppl word | test BLEU
seq2seq | 5.60 | 11.26 | 5.00 | 14.07 | 5.52 | 11.26 | 4.91 | 14.39
attn | 5.25 | 6.86 | 5.03 | 14.84 | 5.19 | 6.92 | 4.95 | 15.15
pointer | 5.15 | 5.86 | 5.04 | 15.06 | 5.11 | 6.04 | 4.98 | 15.29
latent | 5.02 | 5.10 | 5.01 | 14.87 | 4.97 | 5.19 | 4.94 | 15.41

Table 6: Coreference based LM results (perplexity). "pointer + init" means we initialize the model with the LM weights.

model | val all | val entity | val word | test all | test entity | test word
lm | 33.08 | 44.52 | 32.04 | 33.08 | 43.86 | 32.10
pointer | 32.57 | 32.07 | 32.62 | 32.62 | 32.07 | 32.69
pointer + init | 30.43 | 28.56 | 30.63 | 30.42 | 28.56 | 30.66

4.2 RESULTS AND ANALYSIS

The results for dialogue, recipe generation and the coref language model are shown in Tables 4, 5 and 6, respectively. We can see from Table 4 that models that condition on the table perform better at predicting table tokens in general. Table pointer has the lowest perplexity for tokens in the table. Since table tokens appear rarely in the dialogues, the overall perplexities do not differ much and the non-table token perplexities are similar. With the attention mechanism over the table, the perplexity of table tokens improves over the basic seq2seq model, but not as much as directly pointing to cells in the table. As expected, using sentence attention improves significantly over models without sentence attention. Surprisingly, table latent performs much worse than table pointer. We also measure the perplexity of table tokens that appear only in the test set: for models other than table pointer, because these tokens never appear in the training set, the perplexity is quite high, while table pointer can predict them much more accurately. The recipe results in Table 5 in general follow the findings from the dialogue task, but here the latent model performs better than the pointer model, since tokens in the recipe that match the ingredients do not necessarily come from the ingredients; imposing a supervised signal gives wrong information to the model and hence makes the result worse, whereas with a latent decision the model learns when to copy and when to generate from the vocabulary. The coref LM results are shown in Table 6. We find that the coref based LM performs much better on entity perplexity but is slightly worse on non-entity words. We found this to be an optimization problem; perhaps the model gets stuck in a local optimum. So we initialize the pointer
So we initialize the pointermodel with the weights learned from LM, the pointer model performs better than LM both for entityperplexity and non-entity words perplexity.5 R ELATED WORKRecently, there has been great progresses in modeling languages based on neural network, includinglanguage modeling ( Mikolov et al. ,2010 ;Jozefowicz et al. ,2016 ), machine translation ( Sutskeveret al. ,2014 ;Bahdanau et al. ,2014 ), question answering ( Hermann et al. ,2015 ) etc. Based on thesuccess of seq2seq models, neural networks are applied in modeling chit-chat dialogue ( Li et al. ,2016 ;Vinyals & Le ,2015 ;Sordoni et al. ,2015 ;Serban et al. ,2016 ;Shang et al. ,2015 ) and taskoriented dialogue ( Wen et al. ,2015 ;Bordes & Weston ,2016 ;Williams & Zweig ,2016 ;Wen et al. ,2016 ). Most of the chit-chat neural dialogue models are simply applying the seq2seq models. Forthe task oriented dialogues, most of them embed the seq2seq model in traditional dialogue systems,in which the table query part is not differentiable. while our model queries the database directly.Recipe generation was proposed in ( Kiddon et al. ,2016 ). Their model extents previous work onattention models ( Allamanis et al. ,2016 ) to checklists, whereas our work models explicit referencesto those checklists. Context dependent language models ( Mikolov et al. ,2010 ;Ji et al. ,2015 ;Wang& Cho ,2015 ) are proposed to capture long term dependency of text. There are also lots of workson coreference resolution ( Haghighi & Klein ,2010 ;Wiseman et al. ,2016 ). We are the first tocombine coreference with language modeling, to the best of our knowledge. Much effort has beeninvested in embedding a copying mechanism for neural models ( G ̈ulc ̧ehre et al. ,2016 ;Gu et al. ,2016 ;Ling et al. ,2016 ). In general, a gating mechanism is employed to combine the softmax overobserved words and a pointer network ( Vinyals et al. ,2015 ). These gates can be trained either bymarginalizing over both outcomes, or using heuristics (e.g. copy low frequency words). Our modelsare similar to models proposed in ( Ahn et al. ,2016 ;Merity et al. ,2016 ), where the generation ofeach word can be conditioned on a particular entry in knowledge lists and previous words. In ourwork, we describe a model with broader applications, allowing us to condition, on databases, listsand dynamic lists.9Under review as a conference paper at ICLR 20176 C ONCLUSIONWe introduce reference-aware language models which explicitly model the decision of from whereto generate the token at each step. Our model can also learns the decision by treating it as a latentvariable. We demonstrate on three tasks, table based dialogue modeling, recipe generation and corefbased LM, that our model performs better than attention based model, which does not incorporatethis decision explicitly. There are several directions to explore further based on our framework. Thecurrent evaluation method is based on perplexity and BLEU. In task oriented dialogues, we can alsotry human evaluation to see if the model can reply users’ query accurately. It is also interesting touse reinforcement learning to learn the actions in each step.REFERENCESSungjin Ahn, Heeyoul Choi, Tanel P ̈arnamaa, and Yoshua Bengio. A neural knowledge languagemodel. CoRR , abs/1608.00318, 2016.Miltiadis Allamanis, Hao Peng, and Charles A. Sutton. A convolutional attention network for ex-treme summarization of source code. CoRR , abs/1602.03001, 2016. URL http://arxiv.org/abs/1602.03001 .Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 
Antoine Bordes and Jason Weston. Learning end-to-end goal-oriented dialog. arXiv preprint arXiv:1605.07683, 2016.

Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. Incorporating copying mechanism in sequence-to-sequence learning. CoRR, abs/1603.06393, 2016. URL http://arxiv.org/abs/1603.06393.

Çağlar Gülçehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. Pointing the unknown words. CoRR, abs/1603.08148, 2016. URL http://arxiv.org/abs/1603.08148.

Aria Haghighi and Dan Klein. Coreference resolution in a modular, entity-centered model. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pp. 385–393. Association for Computational Linguistics, 2010.

Matthew Henderson, Blaise Thomson, and Jason Williams. Dialog state tracking challenge 2 & 3, 2014.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1693–1701, 2015.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735–1780, November 1997. ISSN 0899-7667. doi: 10.1162/neco.1997.9.8.1735. URL http://dx.doi.org/10.1162/neco.1997.9.8.1735.

Yangfeng Ji, Trevor Cohn, Lingpeng Kong, Chris Dyer, and Jacob Eisenstein. Document context language models. arXiv preprint arXiv:1511.03962, 2015.

Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.

Chloé Kiddon, Luke Zettlemoyer, and Yejin Choi. Globally coherent text generation with neural checklist models. In Proc. EMNLP, 2016.

Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. Deep reinforcement learning for dialogue generation. In Proc. EMNLP, 2016.

Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, Andrew Senior, Fumin Wang, and Phil Blunsom. Latent predictor networks for code generation. In Proc. ACL, 2016.

Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016.

Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Černocký, and Sanjeev Khudanpur. Recurrent neural network based language model. In Interspeech, volume 2, pp. 3, 2010.

Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI-16), 2016.

Lifeng Shang, Zhengdong Lu, and Hang Li. Neural responding machine for short-text conversation. arXiv preprint arXiv:1503.02364, 2015.

Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Meg Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. A neural network approach to context-sensitive generation of conversational responses. In Proc. NAACL, 2015.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104–3112, 2014.

Oriol Vinyals and Quoc V. Le. A neural conversational model. In Proc. ICML Deep Learning Workshop, 2015.
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Proc. NIPS, 2015.

Tian Wang and Kyunghyun Cho. Larger-context language modelling. arXiv preprint arXiv:1511.03729, 2015.

Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Pei-hao Su, David Vandyke, and Steve J. Young. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In Proc. EMNLP, 2015.

Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, David Vandyke, and Steve Young. A network-based end-to-end trainable task-oriented dialogue system. arXiv preprint arXiv:1604.04562, 2016.

Jason D. Williams and Geoffrey Zweig. End-to-end LSTM-based dialog control optimized with supervised and reinforcement learning. arXiv preprint arXiv:1606.01269, 2016.

Sam Wiseman, Alexander M. Rush, and Stuart M. Shieber. Learning global features for coreference resolution. arXiv preprint arXiv:1604.03035, 2016.

Steve Young, Milica Gašić, Blaise Thomson, and Jason D. Williams. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160–1179, 2013.

A VISUALIZATION

M: Hello, welcome to the Cambridge restaurant system? You can ask for restaurants by area, price range or food type. How may I help you?
U: and expensive restaurant that serves tuscan food
M: Sorry there is no tuscan restaurant in the expensive price range
U: french food
M: cote is a great restaurant serving french food and it is in the expensive price range
U: may i have the phone number
M: The phone number of cote is 01223 311053.
U: thank you good bye

(a) Dialogue script
[Attention heat map over the restaurant table; rows are candidate restaurants (charlie chan, chiquito restaurant bar, city stop, clowns cafe, cocum, cote, curry garden, curry king, curry prince) with columns NAME, PRICE RANGE, FOOD, AREA, ADDRESS, POST CODE, PHONE.]
(b) Attention heat map: cote is a great restaurant serving french food and it is in the expensive price range.
[Attention heat map over the same restaurant table.]
(c) Attention heat map: The phone number of cote is 01223 311053.
Table 7: Dialogue visualization 1

M: Hello, welcome to the Cambridge restaurant system? You can ask for restaurants by area, price range or food type. How may I help you?
U: cheap restaurant
M: What kind of food would you like?
U: in the west part of town
M: la margherita is a nice restaurant in the west of town in the cheap price range
U: address
M: Sure, la margherita is on 15 Magdalene Street City Centre
U: thank you
M: la margherita is a nice restaurant in the west of town in the cheap price range
U: good bye

(a) Dialogue script
[Attention heat map over the restaurant table; rows are candidate restaurants (india house, j restaurant, jinling noodle bar, kohinoor, kymmoy, la margherita, la mimosa, la raza, la tasca, lan hong house) with columns NAME, PRICE RANGE, FOOD, AREA, ADDRESS, POST CODE, PHONE.]
(b) Attention heat map: la margherita is a nice restaurant in the west of town in the cheap price range
[Attention heat map over the same restaurant table.]
(c) Attention heat map: Sure, la margherita is on 15 Magdalene Street City Centre.

Table 8: Dialogue visualization 2

[Recipe attention heat maps, parts 1 and 2.]
Figure 6: Recipe heat map example 1. The ingredient tokens appear on the left while the recipe tokens appear on the top. The first row is p(z_v | s_v).

[Recipe attention heat maps, parts 1-3.]
Figure 7: Recipe heat map example 2.
B1ewdt9xe
Published as a conference paper at ICLR 2017DEEP PREDICTIVE CODING NETWORKS FOR VIDEOPREDICTION AND UNSUPERVISED LEARNINGWilliam Lotter, Gabriel Kreiman & David CoxHarvard UniversityCambridge, MA 02215, USAflotter,davidcox g@fas.harvard.edugabriel.kreiman@tch.harvard.eduABSTRACTWhile great strides have been made in using deep learning algorithms to solvesupervised learning tasks, the problem of unsupervised learning — leveraging un-labeled examples to learn about the structure of a domain — remains a difficultunsolved challenge. Here, we explore prediction of future frames in a video se-quence as an unsupervised learning rule for learning about the structure of thevisual world. We describe a predictive neural network (“PredNet”) architecturethat is inspired by the concept of “predictive coding” from the neuroscience lit-erature. These networks learn to predict future frames in a video sequence, witheach layer in the network making local predictions and only forwarding deviationsfrom those predictions to subsequent network layers. We show that these networksare able to robustly learn to predict the movement of synthetic (rendered) objects,and that in doing so, the networks learn internal representations that are usefulfor decoding latent object parameters (e.g. pose) that support object recognitionwith fewer training views. We also show that these networks can scale to com-plex natural image streams (car-mounted camera videos), capturing key aspectsof both egocentric movement and the movement of objects in the visual scene,and the representation learned in this setting is useful for estimating the steer-ing angle. Altogether, these results suggest that prediction represents a powerfulframework for unsupervised learning, allowing for implicit learning of object andscene structure.1 I NTRODUCTIONMany of the most successful current deep learning architectures for vision rely on supervised learn-ing from large sets of labeled training images. While the performance of these networks is un-doubtedly impressive, reliance on such large numbers of training examples limits the utility of deeplearning in many domains where such datasets are not available. Furthermore, the need for largenumbers of labeled examples stands at odds with human visual learning, where one or a few viewsof an object is often all that is needed to enable robust recognition of that object across a wide rangeof different views, lightings and contexts. The development of a representation that facilitates suchabilities, especially in an unsupervised way, is a largely unsolved problem.In addition, while computer vision models are typically trained using static images, in the real world,visual objects are rarely experienced as disjoint snapshots. Instead, the visual world is alive withmovement, driven both by self-motion of the viewer and the movement of objects within the scene.Many have suggested that temporal experience with objects as they move and undergo transforma-tions can serve as an important signal for learning about the structure of objects (F ̈oldi ́ak, 1991;Softky, 1996; Wiskott & Sejnowski, 2002; George & Hawkins, 2005; Palm, 2012; O’Reilly et al.,2014; Agrawal et al., 2015; Goroshin et al., 2015a; Lotter et al., 2015; Mathieu et al., 2016; Srivas-tava et al., 2015; Wang & Gupta, 2015; Whitney et al., 2016). For instance, Wiskott and Sejnowskiproposed “slow feature analysis” as a framework for exploiting temporal structure in video streams(Wiskott & Sejnowski, 2002). 
Their approach attempts to build feature representations that extract slowly-varying parameters, such as object identity, from parameters that produce fast changes in the image, such as movement of the object. While approaches that rely on temporal coherence have arguably not yet yielded representations as powerful as those learned by supervised methods, they nonetheless point to the potential of learning useful representations from video (Mohabi et al., 2009; Sun et al., 2014; Goroshin et al., 2015a; Maltoni & Lomonaco, 2015; Wang & Gupta, 2015).

Here, we explore another potential principle for exploiting video for unsupervised learning: prediction of future image frames (Softky, 1996; Palm, 2012; O'Reilly et al., 2014; Goroshin et al., 2015b; Srivastava et al., 2015; Mathieu et al., 2016; Patraucean et al., 2015; Finn et al., 2016; Vondrick et al., 2016). A key insight here is that in order to be able to predict how the visual world will change over time, an agent must have at least some implicit model of object structure and the possible transformations objects can undergo. To this end, we have designed a neural network architecture, which we informally call a "PredNet," that attempts to continually predict the appearance of future video frames, using a deep, recurrent convolutional network with both bottom-up and top-down connections. (Code and video examples can be found at: https://coxlab.github.io/prednet/) Our work here builds on previous work in next-frame video prediction (Ranzato et al., 2014; Michalski et al., 2014; Srivastava et al., 2015; Mathieu et al., 2016; Lotter et al., 2015; Patraucean et al., 2015; Oh et al., 2015; Finn et al., 2016; Xue et al., 2016; Vondrick et al., 2016; Brabandere et al., 2016), but we take particular inspiration from the concept of "predictive coding" from the neuroscience literature (Rao & Ballard, 1999; Rao & Sejnowski, 2000; Lee & Mumford, 2003; Friston, 2005; Summerfield et al., 2006; Egner et al., 2010; Bastos et al., 2012; Spratling, 2012; Chalasani & Principe, 2013; Clark, 2013; O'Reilly et al., 2014; Kanai et al., 2015). Predictive coding posits that the brain is continually making predictions of incoming sensory stimuli (Rao & Ballard, 1999; Friston, 2005). Top-down (and perhaps lateral) connections convey these predictions, which are compared against actual observations to generate an error signal. The error signal is then propagated back up the hierarchy, eventually leading to an update of the predictions.

We demonstrate the effectiveness of our model for both synthetic sequences, where we have access to the underlying generative model and can investigate what the model learns, as well as natural videos. Consistent with the idea that prediction requires knowledge of object structure, we find that these networks successfully learn internal representations that are well-suited to subsequent recognition and decoding of latent object parameters (e.g. identity, view, rotation speed, etc.). We also find that our architecture can scale effectively to natural image sequences, by training using car-mounted camera videos. The network is able to successfully learn to predict both the movement of the camera and the movement of objects in the camera's view. Again supporting the notion of prediction as an unsupervised learning rule, the model's learned representation in this setting supports decoding of the current steering angle.

Figure 1: Predictive Coding Network (PredNet).
Left: Illustration of information flow within twolayers. Each layer consists of representation neurons ( Rl), which output a layer-specific prediction ateach time step ( ^Al), which is compared against a target ( Al) (Bengio, 2014) to produce an error term(El), which is then propagated laterally and vertically in the network. Right: Module operations forcase of video sequences.2Published as a conference paper at ICLR 20172 T HEPREDNETMODELThe PredNet architecture is diagrammed in Figure 1. The network consists of a series of repeatingstacked modules that attempt to make local predictions of the input to the module, which is thensubtracted from the actual input and passed along to the next layer. Briefly, each module of thenetwork consists of four basic parts: an input convolutional layer ( Al), a recurrent representationlayer (Rl), a prediction layer ( ^Al), and an error representation ( El). The representation layer, Rl, isa recurrent convolutional network that generates a prediction, ^Al, of what the layer input, Al, willbe on the next frame. The network takes the difference between Aland^Aland outputs an errorrepresentation, El, which is split into separate rectified positive and negative error populations. Theerror,El, is then passed forward through a convolutional layer to become the input to the next layer(Al+1). The recurrent prediction layer Rlreceives a copy of the error signal El, along with top-downinput from the representation layer of the next level of the network ( Rl+1). The organization of thenetwork is such that on the first time step of operation, the “right” side of the network ( Al’s andEl’s)is equivalent to a standard deep convolutional network. Meanwhile, the “left” side of the network(theRl’s) is equivalent to a generative deconvolutional network with local recurrence at each stage.The architecture described here is inspired by that originally proposed by (Rao & Ballard, 1999), butis formulated in a modern deep learning framework and trained end-to-end using gradient descent,with a loss function implicitly embedded in the network as the firing rates of the error neurons. Ourwork also shares motivation with the Deep Predictive Coding Networks of Chalasani & Principe(2013); however, their framework is based upon sparse coding and a linear dynamical system withgreedy layer-wise training, whereas ours is rooted in convolutional and recurrent neural networkstrained with backprop.While the architecture is general with respect to the kinds of data it models, here we focus on imagesequence (video) data. Consider a sequence of images, xt. The target for the lowest layer is setto the the actual sequence itself, i.e. At0=xt8t. The targets for higher layers, Atlforl >0, arecomputed by a convolution over the error units from the layer below, Etl1, followed by rectifiedlinear unit (ReLU) activation and max-pooling. For the representation neurons, we specificallyuse convolutional LSTM units (Hochreiter & Schmidhuber, 1997; Shi et al., 2015). In our setting,theRtlhidden state is updated according to Rt1l,Et1l, as well as Rtl+1, which is first spatiallyupsampled (nearest-neighbor), due to the pooling present in the feedforward path. The predictions,^Atlare made through a convolution of the Rtlstack followed by a ReLU non-linearity. For thelowest layer, ^Atlis also passed through a saturating non-linearity set at the maximum pixel value:SatLU (x;pmax):= min(pmax;x). 
Finally, the error response, $E_l^t$, is calculated from the difference between $\hat{A}_l^t$ and $A_l^t$ and is split into ReLU-activated positive and negative prediction errors, which are concatenated along the feature dimension. As discussed in Rao & Ballard (1999), although not explicit in their model, the separate error populations are analogous to the existence of on-center, off-surround and off-center, on-surround neurons early in the visual system.

The full set of update rules is listed in Equations (1) to (4). The model is trained to minimize the weighted sum of the activity of the error units. Explicitly, the training loss is formalized in Equation (5), with weighting factors by time, $\lambda_t$, and layer, $\lambda_l$, and where $n_l$ is the number of units in the $l$th layer. With error units consisting of subtraction followed by ReLU activation, the loss at each layer is equivalent to an L1 error. Although not explored here, other error unit implementations, potentially even probabilistic or adversarial (Goodfellow et al., 2014), could also be used.

$$A_l^t = \begin{cases} x^t & \text{if } l = 0 \\ \text{MAXPOOL}(\text{RELU}(\text{CONV}(E_{l-1}^t))) & \text{if } l > 0 \end{cases} \tag{1}$$

$$\hat{A}_l^t = \text{RELU}(\text{CONV}(R_l^t)) \tag{2}$$

$$E_l^t = [\text{RELU}(A_l^t - \hat{A}_l^t);\ \text{RELU}(\hat{A}_l^t - A_l^t)] \tag{3}$$

$$R_l^t = \text{CONVLSTM}(E_l^{t-1}, R_l^{t-1}, \text{UPSAMPLE}(R_{l+1}^t)) \tag{4}$$

$$L_{\text{train}} = \sum_t \lambda_t \sum_l \frac{\lambda_l}{n_l} \sum_{n_l} E_l^t \tag{5}$$

Algorithm 1 Calculation of PredNet states
Require: x^t
 1: A_0^t <- x^t
 2: E_l^0, R_l^0 <- 0
 3: for t = 1 to T do
 4:   for l = L to 0 do            {update R_l^t states}
 5:     if l = L then
 6:       R_L^t = CONVLSTM(E_L^{t-1}, R_L^{t-1})
 7:     else
 8:       R_l^t = CONVLSTM(E_l^{t-1}, R_l^{t-1}, UPSAMPLE(R_{l+1}^t))
 9:   for l = 0 to L do            {update \hat{A}_l^t, A_l^t, E_l^t states}
10:     if l = 0 then
11:       \hat{A}_0^t = SATLU(RELU(CONV(R_0^t)))
12:     else
13:       \hat{A}_l^t = RELU(CONV(R_l^t))
14:     E_l^t = [RELU(A_l^t - \hat{A}_l^t); RELU(\hat{A}_l^t - A_l^t)]
15:     if l < L then
16:       A_{l+1}^t = MAXPOOL(CONV(E_l^t))

The order in which each unit in the model is updated must also be specified, and our implementation is described in Algorithm 1. Updating of states occurs through two passes: a top-down pass where the $R_l^t$ states are computed, and then a forward pass to calculate the predictions, errors, and higher-level targets. A last detail of note is that $R_l$ and $E_l$ are initialized to zero, which, due to the convolutional nature of the network, means that the initial prediction is spatially uniform.
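As a concrete companion to Equations (1)-(5) and Algorithm 1, the following is a compact PyTorch sketch of the two-pass update and the weighted training loss. It is a minimal reimplementation for illustration, not the authors' code (the paper credits Keras); the class names, channel sizes, and the toy training snippet at the bottom are our own assumptions.

```python
import torch
import torch.nn.functional as F
from torch import nn

class ConvLSTMCell(nn.Module):
    """Convolutional LSTM (Shi et al., 2015): one conv produces all four gates."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, h, c):
        i, f, o, g = torch.chunk(self.conv(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class PredNet(nn.Module):
    def __init__(self, ch=(1, 32, 64), pixel_max=1.0):  # toy channel sizes
        super().__init__()
        self.ch, self.pixel_max = ch, pixel_max
        L = len(ch)
        self.cells = nn.ModuleList()  # R_l units
        self.ahat = nn.ModuleList()   # conv producing \hat{A}_l from R_l
        self.aconv = nn.ModuleList()  # conv from E_l to A_{l+1}
        for l in range(L):
            top = ch[l + 1] if l + 1 < L else 0      # upsampled R_{l+1} input
            self.cells.append(ConvLSTMCell(2 * ch[l] + top, ch[l]))
            self.ahat.append(nn.Conv2d(ch[l], ch[l], 3, padding=1))
            if l + 1 < L:
                self.aconv.append(nn.Conv2d(2 * ch[l], ch[l + 1], 3, padding=1))

    def forward(self, frames):               # frames: (T, B, ch[0], H, W)
        T, B, _, H, W = frames.shape         # H, W assumed divisible by 2**(L-1)
        L = len(self.ch)
        R = [frames.new_zeros(B, c, H >> l, W >> l) for l, c in enumerate(self.ch)]
        C = [r.clone() for r in R]
        E = [frames.new_zeros(B, 2 * c, H >> l, W >> l) for l, c in enumerate(self.ch)]
        layer_errors = []
        for t in range(T):
            for l in reversed(range(L)):     # top-down pass: update R_l
                inp = E[l]
                if l + 1 < L:
                    up = F.interpolate(R[l + 1], scale_factor=2, mode="nearest")
                    inp = torch.cat([inp, up], dim=1)
                R[l], C[l] = self.cells[l](inp, R[l], C[l])
            A = frames[t]
            for l in range(L):               # forward pass: predict, compute errors
                a_hat = F.relu(self.ahat[l](R[l]))
                if l == 0:                   # SatLU at the pixel layer
                    a_hat = torch.clamp(a_hat, max=self.pixel_max)
                E[l] = torch.cat([F.relu(A - a_hat), F.relu(a_hat - A)], dim=1)
                if l + 1 < L:
                    A = F.max_pool2d(F.relu(self.aconv[l](E[l])), 2)
            layer_errors.append(torch.stack([e.mean() for e in E]))
        return torch.stack(layer_errors)     # (T, L): mean error per time and layer

model = PredNet()
frames = torch.rand(10, 4, 1, 64, 64)        # toy sequence
err = model(frames)                          # (T, L)
lambda_t = torch.tensor([0.] + [1.] * 9)     # zero weight on the first time step
lambda_l = torch.tensor([1., 0., 0.])        # the paper's L0 setting
loss = lambda_t @ err @ lambda_l
loss.backward()
```

Because the errors are non-negative after the ReLU split, the per-layer mean used above is the L1 loss of Equation (5) up to the constant normalization by unit count.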
3 EXPERIMENTS

3.1 RENDERED IMAGE SEQUENCES

To gain an understanding of the representations learned in the proposed framework, we first trained PredNet models using synthetic images, for which we have access to the underlying generative stimulus model and all latent parameters. We created sequences of rendered faces rotating with two degrees of freedom, along the "pan" (out-of-plane) and "roll" (in-plane) axes. The faces start at a random orientation and rotate at a random constant velocity for a total of 10 frames. A different face was sampled for each sequence. The images were processed to be grayscale, with values normalized between 0 and 1, and 64x64 pixels in size. We used 16K sequences for training and 800 for both validation and testing.

Predictions generated by a PredNet model are shown in Figure 2. The model is able to accumulate information over time to make accurate predictions of future frames. Since the representation neurons are initialized to zero, the prediction at the first time step is uniform. On the second time step, with no motion information yet, the prediction is a blurry reconstruction of the first time step. After further iterations, the model adapts to the underlying dynamics to generate predictions that closely match the incoming frame.

For choosing the hyperparameters of the model, we performed a random search and chose the model that had the lowest L1 error in frame prediction averaged over time steps 2-10 on a validation set. Given this selection criterion, the best performing models tended to have a loss solely concentrated at the lowest layer (i.e. λ0 = 1, λl>0 = 0), which is the case for the model shown. Using an equal loss at each layer considerably degraded predictions, but enforcing a moderate loss on upper layers that was one magnitude smaller than the lowest layer (i.e. λ0 = 1, λl>0 = 0.1) led to only slightly worse predictions, as illustrated in Figure 9 in the Appendix. In all cases, the time loss weight, λt, was set to zero for the first time step and then one for all time steps after. As for the remaining hyperparameters, the model shown has 5 layers with 3x3 filter sizes for all convolutions, max-pooling of stride 2, and a number of channels per layer, for both A_l and R_l units, of (1, 32, 64, 128, 256). Model weights were optimized using the Adam algorithm (Kingma & Ba, 2014).

Figure 2: PredNet next-frame predictions for sequences of rendered faces rotating with two degrees of freedom. Faces shown were not seen during training.

Quantitative evaluation of generative models is a difficult, unsolved problem (Theis et al., 2016), but here we report prediction error in terms of mean-squared error (MSE) and the Structural Similarity Index Measure (SSIM) (Wang et al., 2004). SSIM is designed to be more correlated with perceptual judgments, and ranges from -1 to 1, with a larger score indicating greater similarity.

Table 1: Evaluation of next-frame predictions on the Rotating Faces Dataset (test set).

model                 MSE      SSIM
PredNet L0            0.0152   0.937
PredNet Lall          0.0157   0.921
CNN-LSTM Enc.-Dec.    0.0180   0.907
Copy Last Frame       0.125    0.631

We compare the PredNet to the trivial solution of copying the last frame, as well as a control model that shares the overall architecture and training scheme of the PredNet, but that sends forward the layer-wise activations (A_l) rather than the errors (E_l). This model thus takes the form of a more traditional encoder-decoder pair, with a CNN encoder that has lateral skip connections to a convolutional LSTM decoder. The performance of all models on the rotating faces dataset is summarized in Table 1, where the scores were calculated as an average over all predictions after the first frame. We report results for the PredNet model trained with loss only on the lowest layer, denoted PredNet L0, as well as the model trained with a 0.1 weight on upper layers, denoted PredNet Lall. Both PredNet models outperformed the baselines on both measures, with the L0 model slightly outperforming Lall, as expected when evaluating the pixel-level predictions.

Synthetic sequences were chosen as the initial training set in order to better understand what is learned in different layers of the model, specifically with respect to the underlying generative model (Kulkarni et al., 2015). The rotating faces were generated using the FaceGen software package (Singular Inversions, Inc.), which internally generates 3D face meshes by a principal component analysis in "face space", derived from a corpus of 3D face scans.
Thus, the latent parameters of the imagesequences used here consist of the initial pan and roll angles, the pan and roll velocities, and the prin-cipal component (PC) values, which control the “identity” of the face. To understand the informationcontained in the trained models, we decoded the latent parameters from the representation neurons(Rl) in different layers, using a ridge regression. The Rlstates were taken at the earliest possibleinformative time steps, which, in the our notation, are the second and third steps, respectively, forthe static and dynamic parameters. The regression was trained using 4Ksequences with 500forvalidation and 1Kfor testing. For a baseline comparison of the information implicitly embeddedin the network architecture, we compare to the decoding accuracies of an untrained network withrandom initial weights. Note that in this randomly initialized case, we still expect above-chance de-coding performance, given past theoretical and empirical work with random networks (Pinto et al.,2009; Jarrett et al., 2009; Saxe et al., 2010).5Published as a conference paper at ICLR 2017Latent variable decoding accuracies of the pan and roll velocities, pan initial angle, and first PC areshown in the left panel of Figure 3. There are several interesting patterns. First, the trained modelslearn a representation that generally permits a better linear decoding of the underlying latent factorsthan the randomly initialized model, with the most striking difference in terms of the the pan rotationspeed (pan). Second, the most notable difference between the LallandL0versions occurs withthe first principle component, where the model trained with loss on all layers has a higher decodingaccuracy than the model trained with loss only on the lowest layer.Figure 3: Information contained in PredNet representation for rotating faces sequences. Left: De-coding of latent variables using a ridge regression ( pan: pan (out-of-frame) angular velocity, pan:pan angle, PC-1: first principal component of face, roll: roll (in-frame) angular velocity). Right:Orientation-invariant classification of static faces.The latent variable decoding analysis suggests that the model learns a representation that may gen-eralize well to other tasks for which it was not explicitly trained. To investigate this further, weassessed the models in a classification task from single, static images. We created a dataset of 25previously unseen FaceGen faces at 7pan angles, equally spaced between [2;2], and 8roll angles,equally spaced between [0;2). There were therefore 78 = 56 orientations per identity, whichwere tested in a cross-validated fashion. A linear SVM to decode face identity was fit on a model’srepresentation of a random subset of orientations and then tested on the remaining angles. For eachsize of the SVM training set, ranging from 1-40orientations per face, 50different random splitswere generated, with results averaged over the splits.For the static face classification task, we compare the PredNets to a standard autoencoder and avariant of the Ladder Network (Valpola, 2015; Rasmus et al., 2015). Both models were constructedto have the same number of layers and channel sizes as the PredNets, as well as a similar alternat-ing convolution/max-pooling, then upsampling/convolution scheme. As both networks are autoen-coders, they were trained with a reconstruction loss, with a dataset consisting of all of the individualframes from the sequences used to train the PredNets. 
For the Ladder Network, which is a denois-ing autoencoder with lateral skip connections, one must also choose a noise parameter, as well asthe relative weights of each layer in the total cost. We tested noise levels ranging from 0to0:5in increments of 0:1, with loss weights either evenly distributed across layers, solely concentratedat the pixel layer, or 1at the bottom layer and 0:1at upper layers (analogous to the PredNet Lallmodel). Shown is the model that performed best for classification, which consisted of 0:4noise andonly pixel weighting. Lastly, as in our architecture, the Ladder Network has lateral and top-downstreams that are combined by a combinator function. Inspired by (Pezeshki et al., 2015), where alearnable MLP improved results, and to be consistent in comparing to the PredNet, we used a purelyconvolutional combinator. Given the distributed representation in both networks, we decoded froma concatenation of the feature representations at all layers, except the pixel layer. For the PredNets,the representation units were used and features were extracted after processing one input frame.6Published as a conference paper at ICLR 2017Face classification accuracies using the representations learned by the L0andLallPredNets, a stan-dard autoencoder, and a Ladder Network variant are shown in the right panel of Figure 3. BothPredNets compare favorably to the other models at all sizes of the training set, suggesting they learna representation that is relatively tolerant to object transformations. Similar to the decoding accu-racy of the first principle component, the PredNet Lallmodel actually outperformed the L0variant.Altogether, these results suggest that predictive training with the PredNet can be a viable alternativeto other models trained with a more traditional reconstructive or denoising loss, and that the relativelayer loss weightings ( l’s) may be important for the particular task at hand.3.2 N ATURAL IMAGE SEQUENCESWe next sought to test the PredNet architecture on complex, real-world sequences. As a testbed, wechose car-mounted camera videos, since these videos span across a wide range of settings and arecharacterized by rich temporal dynamics, including both self-motion of the vehicle and the motionof other objects in the scene (Agrawal et al., 2015). Models were trained using the raw videos fromthe KITTI dataset (Geiger et al., 2013), which were captured by a roof-mounted camera on a cardriving around an urban environment in Germany. Sequences of 10frames were sampled from the“City”, “Residential”, and “Road” categories, with 57recording sessions used for training and 4used for validation. Frames were center-cropped and downsampled to 128x160pixels. In total, thetraining set consisted of roughly 41K frames.A random hyperparameter search, with model selection based on the validation set, resulted in a 4layer model with 3x3convolutions and layer channel sizes of (3;48;96;192) . Models were againtrained with Adam (Kingma & Ba, 2014) using a loss either solely computed on the lowest layer(L0) or with a weight of 1on the lowest layer and 0:1on the upper layers ( Lall). Adam parameterswere initially set to their default values ( = 0:001,1= 0:9,2= 0:999) with the learning rate, ,decreasing by a factor of 10halfway through training. 
To assess that the network had indeed learned a robust representation, we tested on the CalTech Pedestrian dataset (Dollár et al., 2009), which consists of videos from a dashboard-mounted camera on a vehicle driving around Los Angeles. Testing sequences were made to match the frame rate of the KITTI dataset and again cropped to 128x160 pixels. Quantitative evaluation was performed on the entire CalTech test partition, split into sequences of 10 frames.

Sample PredNet predictions (for the L0 model) on the CalTech Pedestrian dataset are shown in Figure 4, and example videos can be found at https://coxlab.github.io/prednet/. The model is able to make fairly accurate predictions in a wide range of scenarios. In the top sequence of Fig. 4, a car is passing in the opposite direction, and the model, while not perfect, is able to predict its trajectory, as well as fill in the ground it leaves behind. Similarly in Sequence 3, the model is able to predict the motion of a vehicle completing a left turn. Sequences 2 and 5 illustrate that the PredNet can judge its own movement, as it predicts the appearance of shadows and a stationary vehicle as they approach. The model makes reasonable predictions even in difficult scenarios, such as when the camera-mounted vehicle is turning. In Sequence 4, the model predicts the position of a tree, as the vehicle turns onto a road. The turning sequences also further illustrate the model's ability to "fill in", as it is able to extrapolate sky and tree textures as unseen regions come into view. As an additional control, we show a sequence at the bottom of Fig. 4, where the input has been temporally scrambled. In this case, the model generates blurry frames, which mostly just resemble the previous frame. Finally, although the PredNet shown here was trained to predict one frame ahead, it is also possible to predict multiple frames into the future, by feeding back predictions as the inputs and recursively iterating. We explore this in Appendix 5.3.

Figure 4: PredNet predictions for car-cam videos. The first rows contain ground truth and the second rows contain predictions. The sequence below the red line was temporally scrambled. The model was trained on the KITTI dataset and sequences shown are from the CalTech Pedestrian dataset.

Table 2: Evaluation of next-frame predictions on the CalTech Pedestrian Dataset.

model                 MSE          SSIM
PredNet L0            3.13×10⁻³    0.884
PredNet Lall          3.33×10⁻³    0.875
CNN-LSTM Enc.-Dec.    3.67×10⁻³    0.865
Copy Last Frame       7.95×10⁻³    0.762

Quantitatively, the PredNet models again outperformed the CNN-LSTM Encoder-Decoder. To ensure that the difference in performance was not simply because of the choice of hyperparameters, we trained models with four other sets of hyperparameters, which were sampled from the initial random search over the number of layers, filter sizes, and number of filters per layer. For each of the four additional sets, the PredNet L0 had the best performance, with an average error reduction of 14.7% and 14.9% for MSE and SSIM, respectively, compared to the CNN-LSTM Encoder-Decoder. More details, as well as a thorough investigation of systematically simplified models on the continuum between the PredNet and the CNN-LSTM Encoder-Decoder, can be found in Appendix 5.1. Briefly, the elementwise subtraction operation in the PredNet seems to be beneficial, and the nonlinearity of positive/negative splitting also adds modest improvements.
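For reference, next-frame MSE and SSIM of the kind reported in Tables 1 and 2 can be computed with a few lines of NumPy and scikit-image. The exact SSIM settings used in the paper are not specified, so treat this as a plausible reconstruction rather than the authors' evaluation code.

```python
import numpy as np
from skimage.metrics import structural_similarity

def evaluate(pred, actual):
    # pred, actual: (T, H, W) grayscale sequences with values in [0, 1].
    # The first time step is skipped: with zero-initialized states, the
    # model's first prediction is uninformative (spatially uniform).
    mse = float(np.mean((pred[1:] - actual[1:]) ** 2))
    ssim = float(np.mean([structural_similarity(a, p, data_range=1.0)
                          for a, p in zip(actual[1:], pred[1:])]))
    return mse, ssim

# "Copy last frame" baseline: predict frame t as frame t - 1.
actual = np.random.rand(10, 128, 160)
copy_last = np.concatenate([actual[:1], actual[:-1]], axis=0)
print(evaluate(copy_last, actual))
```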
Finally, while these experiments measure the benefits of each component of our model, we also directly compare against recent work in a similar car-cam setting, by reporting results on a 64x64 pixel, grayscale car-cam dataset released by Brabandere et al. (2016). Our PredNet model outperforms the model by Brabandere et al. (2016) by 29%. Details can be found in Appendix 5.2. Also in Appendix 5.2, we present results for the Human3.6M (Ionescu et al., 2014) dataset, as reported by Finn et al. (2016). Without re-optimizing hyperparameters, our model underperforms the concurrently developed DNA model by Finn et al. (2016), but outperforms the model by Mathieu et al. (2016).

To test the implicit encoding of latent parameters in the car-cam setting, we used the internal representation in the PredNet to estimate the car's steering angle (Bojarski et al., 2016; Biasini et al., 2016). We used a dataset released by Comma.ai (Biasini et al., 2016) consisting of 11 videos totaling about 7 hours of mostly highway driving. We first trained networks for next-frame prediction and then fit a linear fully-connected layer on the learned representation to estimate the steering angle, using an MSE loss. We again concatenate the R_l representation at all layers, but first spatially average pool lower layers to match the spatial size of the upper layer, in order to reduce dimensionality. Steering angle estimation results, using the representation on the 10th time step, are shown in Figure 5. Given just 1K labeled training examples, a simple linear readout on the PredNet L0 representation explains 74% of the variance in the steering angle and outperforms the CNN-LSTM Enc.-Dec. by 35%. With 25K labeled training examples, the PredNet L0 has an MSE (in degrees^2) of 2.14. As a point of reference, a CNN model designed to predict the steering angle (Biasini et al., 2016), albeit from a single frame instead of multiple frames, achieves an MSE of ~4 when trained end-to-end using 396K labeled training examples. Details of this analysis can be found in Appendix 5.4. Interestingly, in this task, the PredNet Lall model actually underperformed the L0 model and slightly underperformed the CNN-LSTM Enc.-Dec., again suggesting that the λ_l parameters can affect the representation learned, and different values may be preferable in different end tasks. Nonetheless, the readout from the Lall model still explained a substantial proportion of the steering angle variance and strongly outperformed the random initial weights. Overall, this analysis again demonstrates that a representation learned through prediction, and particularly with the PredNet model with appropriate hyperparameters, can contain useful information about underlying latent parameters.

Figure 5: Steering angle estimation accuracy on the Comma.ai dataset (Biasini et al., 2016). Left: Example steering angle curve with model estimations for a segment in the test set. Decoding was performed using a fully-connected readout on the PredNet representation trained with 25K labeled training examples. The PredNet representation was trained for next-frame prediction on the Comma.ai training set. Right: Mean-squared error of steering angle estimation.

4 DISCUSSION

Above, we have demonstrated a predictive coding inspired architecture that is able to predict future frames in both synthetic and natural image sequences.
Importantly, we have shown that learning topredict how an object or scene will move in a future frame confers advantages in decoding latentparameters (such as viewing angle) that give rise to an object’s appearance, and can improve recog-nition performance. More generally, we argue that prediction can serve as a powerful unsupervisedlearning signal, since accurately predicting future frames requires at least an implicit model of theobjects that make up the scene and how they are allowed to move. Developing a deeper understand-ing of the nature of the representations learned by the networks, and extending the architecture, by,for instance, allowing sampling, are important future directions.9Published as a conference paper at ICLR 2017ACKNOWLEDGMENTSWe would like to thank Rasmus Berg Palm for fruitful discussions and early brainstorming. Wewould also like to thank the developers of Keras (Chollet, 2016). This work was supported by IARPA(contract D16PC00002), the National Science Foundation (NSF IIS 1409097), and the Center forBrains, Minds and Machines (CBMM, NSF STC award CCF-1231216).REFERENCESPulkit Agrawal, Jo ̃ao Carreira, and Jitendra Malik. Learning to see by moving. CoRR , 2015.Andre M. Bastos, W. Martin Usrey, Rick A. Adams, George R. Mangun, Pascal Fries, and Karl J.Friston. Canonical microcircuits for predictive coding. Neuron , 2012.Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequenceprediction with recurrent neural networks. CoRR , 2015.Yoshua Bengio. How auto-encoders could provide credit assignment in deep networks via targetpropagation. CoRR , 2014.Riccardo Biasini, George Hotz, Sam Khalandovsky, Eder Santana, and Niel van der Westhuizen.Comma.ai research, 2016. URL https://github.com/commaai/research .Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, PrasoonGoyal, Lawrence D. Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, Xin Zhang, Jake Zhao,and Karol Zieba. End to end learning for self-driving cars. CoRR , 2016.Bert De Brabandere, Xu Jia, Tinne Tuytelaars, and Luc Van Gool. Dynamic filter networks. CoRR ,2016.Rakesh Chalasani and Jose C. Principe. Deep predictive coding networks. CoRR , 2013.Franc ̧ois Chollet. Comma.ai, 2016. URL http://keras.io/ .Andy Clark. Whatever next? predictive brains, situated agents, and the future of cognitive science.Behavioral and Brain Sciences , 2013.Piotr Doll ́ar, Christian Wojek, Bernt Schiele, and Pietro Perona. Pedestrian detection: A benchmark.InCVPR , 2009.Tobias Egner, Jim M. Monti, and Christopher Summerfield. Expectation and surprise determineneural population responses in the ventral visual stream. J Neurosci , 2010.Chelsea Finn, Ian J. Goodfellow, and Sergey Levine. Unsupervised learning for physical interactionthrough video prediction. CoRR , 2016.Peter F ̈oldi ́ak. Learning invariance from transformation sequences. Neural Computation , 1991.Karl Friston. A theory of cortical responses. Philos Trans R Soc Lond B Biol Sci , 2005.Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. Vision meets robotics: Thekitti dataset. International Journal of Robotics Research (IJRR) , 2013.Dileep George and Jeff Hawkins. A hierarchical bayesian model of invariant pattern recognitionin the visual cortex. In Proceedings of the International Joint Conference on Neural Networks.IEEE , 2005.Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair,Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS . 
2014.Ross Goroshin, Joan Bruna, Jonathan Tompson, David Eigen, and Yann LeCun. Unsupervisedlearning of spatiotemporally coherent metrics. CoRR , 2015a.Ross Goroshin, Micha ̈el Mathieu, and Yann LeCun. Learning to linearize under uncertainty. CoRR ,2015b.10Published as a conference paper at ICLR 2017Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation , 1997.Catalin Ionescu, Dragos Papava, Vlad Olaru, and Cristian Sminchisescu. Human3.6m: Large scaledatasets and predictive methods for 3d human sensing in natural environments. IEEE Transactionson Pattern Analysis and Machine Intelligence , 2014.Kevin Jarrett, Koray Kavukcuoglu, MarcAurelio Ranzato, and Yann LeCun. What is the best multi-stage architecture for object recognition? In ICCV . 2009.Ryota Kanai, Yutaka Komura, Stewart Shipp, and Karl Friston. Cerebral hierarchies : predictiveprocessing , precision and the pulvinar. Philos Trans R Soc Lond B Biol Sci , 2015.Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR , 2014.Tejas D. Kulkarni, Will Whitney, Pushmeet Kohli, and Joshua B. Tenenbaum. Deep convolutionalinverse graphics network. CoRR , 2015.Tai Sing Lee and David Mumford. Hierarchical bayesian inference in the visual cortex. J Opt SocAm A Opt Image Sci Vis , 2003.William Lotter, Gabriel Kreiman, and David Cox. Unsupervised learning of visual structure usingpredictive generative networks. CoRR , 2015.Davide Maltoni and Vincenzo Lomonaco. Semi-supervised tuning from temporal coherence. CoRR ,2015.Micha ̈el Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyondmean square error. ICLR , 2016.Vincent Michalski, Roland Memisevic, and Kishore Konda. Modeling deep temporal dependencieswith recurrent ”grammar cells”. In NIPS . 2014.Hossein Mohabi, Ronan Collobert, and Jason Weston. Deep learning from temporal coherence invideo. In ICML . 2009.Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L. Lewis, and Satinder P. Singh. Action-conditional video prediction using deep networks in atari games. CoRR , 2015.Randall C. O’Reilly, Dean Wyatte, and John Rohrlich. Learning through time in the thalamocorticalloops. CoRR , 2014.Rasmus Berg Palm. Prediction as a candidate for learning deep hierarchical models of data. Master’sthesis, Technical University of Denmark , 2012.Viorica Patraucean, Ankur Handa, and Roberto Cipolla. Spatio-temporal video autoencoder withdifferentiable memory. CoRR , 2015.Mohammad Pezeshki, Linxi Fan, Philemon Brakel, Aaron C. Courville, and Yoshua Bengio. De-constructing the ladder network architecture. CoRR , 2015.Nicolas Pinto, David Doukhan, James J. DiCarlo, and David D. Cox. A high-throughput screeningapproach to discovering good forms of biologically inspired visual representation. PLoS ComputBiol, 2009.Marc’Aurelio Ranzato, Arthur Szlam, Joan Bruna, Micha ̈el Mathieu, Ronan Collobert, and SumitChopra. Video (language) modeling: a baseline for generative models of natural videos. CoRR ,2014.Rajesh P. N. Rao and Dana H. Ballard. Predictive coding in the visual cortex: a functional interpre-tation of some extra-classical receptive-field effects. Nature Neuroscience , 1999.Rajesh P. N. Rao and T. J. Sejnowski. Predictive sequence learning in recurrent neocortical circuits.NIPS , 2000.11Published as a conference paper at ICLR 2017Antti Rasmus, Harri Valpola, Mikko Honkala, Mathias Berglund, and Tapani Raiko. Semi-supervised learning with ladder network. CoRR , 2015.Eder Santana and George Hotz. Learning a driving simulator. 
CoRR , 2016.Andrew Saxe, Maneesh Bhand, Zhenghao Chen, Pang Wei Koh, Bipin Suresh, and Andrew Y .Ng. On random weights and unsupervised feature learning. In Workshop: Deep Learning andUnsupervised Feature Learning (NIPS) . 2010.Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo.Convolutional LSTM network: A machine learning approach for precipitation nowcasting. CoRR ,2015.Singular Inversions, Inc. FaceGen. http://facegen.com .William R. Softky. Unsupervised pixel-prediction. NIPS , 1996.M. W. Spratling. Unsupervised learning of generative and discriminative weights encoding elemen-tary image components in a predictive coding model of cortical function. Neural Computation ,2012.Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised learning of videorepresentations using lstms. CoRR , 2015.Christopher Summerfield, Tobias Egner, Matthew Greene, Etienne Koechlin, Jennifer Mangels, andJoy Hirsch. Predictive codes for forthcoming perception in the frontal cortex. Science , 314, 2006.Lin Sun, Kui Jia, Tsung-Han Chan, Yuqiang Fang, Gang Wang, and Shuicheng Yan. Dl-sfa: Deeply-learned slow feature analysis for action recognition. CVPR , 2014.Lucas Theis, Aaron van den Oord, and Matthias Bethge. A note on the evaluation of generativemodels. ICLR , 2016.Harri Valpola. From neural pca to deep unsupervised learning. CoRR , 2015.Carl V ondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics.CoRR , 2016.Xiaolong Wang and Abhinav Gupta. Unsupervised learning of visual representations using videos.CoRR , 2015.Zhou Wang, Alan Bovik, Hamid Sheikh, and Eero Simoncelli. Image quality assessment: Fromerror visibility to structural similarity. IEEE Transactions on Image Processing , 2004.William F. Whitney, Michael Chang, Tejas D. Kulkarni, and Joshua B. Tenenbaum. Understandingvisual concepts with continuation learning. CoRR , 2016.Laurenz Wiskott and Terrence J. Sejnowski. Learning invariance from transformation sequences.Neural Computation , 2002.Tianfan Xue, Jiajun Wu, Katherine L. Bouman, and William T. Freeman. Visual dynamics: Proba-bilistic future frame synthesis via cross convolutional networks. CoRR , 2016.12Published as a conference paper at ICLR 20175 A PPENDIX5.1 A DDITIONAL CONTROL MODELSTable 3 contains results for additional variations of the PredNet and CNN-LSTM Encoder-Decoderevaluated on the CalTech Pedestrian Dataset after being trained on KITTI. We evaluate the modelsin terms of pixel prediction, thus using the PredNet model trained with loss only on the lowest layer(PredNetL0) as the base model. In addition to mean-squared error (MSE) and the Structural Sim-ilarity Index Measure (SSIM), we include calculations of the Peak Signal-To-Noise Ratio (PSNR).For each model, we evaluate it with the original set of hyperparameters (controlling the number oflayers, filter sizes, and number of filters per layer), as well as with the four additional sets of hy-perparameters that were randomly sampled from the initial random search (see main text for moredetails). Below is an explanation of the additional control models:•PredNet (no E split) : PredNet model except the error responses ( El) are simply linear(^AlAl) instead of being split into positive and negative rectifications.•CNN-LSTM Enc.-Dec. (2x Alfilts) : CNN-LSTM Encoder-Decoder model ( Al’s arepassed instead of El’s) except the number of filters in Alis doubled. 
This controls for the total number of filters in the model compared to the PredNet, since the PredNet has filters to produce \hat{A}_l at each layer, which is integrated into the model's feedforward response.

• CNN-LSTM Enc.-Dec. (except pass E0): CNN-LSTM Encoder-Decoder model except the error is passed at the lowest layer. All remaining layers pass the activations A_l. With training loss taken at only the lowest layer, this variation allows us to determine if the "prediction" subtraction operation in upper layers, which is essentially unconstrained and learnable in the L0 case, aids in the model's performance.

• CNN-LSTM Enc.-Dec. (+/- split): CNN-LSTM Encoder-Decoder model except the activations A_l are split into positive and negative populations before being passed to other layers in the network. This isolates the effect of the additional nonlinearity introduced by this procedure.

Table 3: Quantitative evaluation of additional controls for next-frame prediction on the CalTech Pedestrian Dataset after training on KITTI. The first number indicates the score with the original hyperparameters; the number in parentheses indicates the score averaged over a total of five different hyperparameter sets.

model                                  MSE (×10⁻³)    PSNR          SSIM
PredNet                                3.13 (3.33)    25.8 (25.5)   0.884 (0.878)
PredNet (no E_l split)                 3.20 (3.37)    25.6 (25.4)   0.883 (0.878)
CNN-LSTM Enc.-Dec.                     3.67 (3.91)    25.0 (24.6)   0.865 (0.856)
CNN-LSTM Enc.-Dec. (2x A_l filts)      3.82 (3.97)    24.8 (24.6)   0.857 (0.853)
CNN-LSTM Enc.-Dec. (except pass E0)    3.41 (3.61)    25.4 (25.1)   0.873 (0.866)
CNN-LSTM Enc.-Dec. (+/- split)         3.71 (3.84)    24.9 (24.7)   0.861 (0.857)
Copy Last Frame                        7.95           20.0          0.762

Equalizing the number of filters in the CNN-LSTM Encoder-Decoder (2x A_l filts) cannot account for its performance difference with the PredNet, and actually leads to overfitting and a decrease in performance. Passing the error at the lowest layer (E0) in the CNN-LSTM Enc.-Dec. improves performance, but still does not match the PredNet, where errors are passed at all layers. Finally, splitting the activations A_l into positive and negative populations in the CNN-LSTM Enc.-Dec. does not help, and the PredNet with linear error activation ("no E_l split") performs slightly worse than the original split version. Together, these results suggest that the PredNet's error passing operation can lead to improvements in next-frame prediction performance.

5.2 COMPARING AGAINST OTHER MODELS

While our main comparison in the text was a control model that isolates the effects of the more unique components in the PredNet, here we directly compare against other published models. We report results on a 64x64 pixel, grayscale car-cam dataset and the Human3.6M dataset (Ionescu et al., 2014) to compare against the two concurrently developed models by Brabandere et al. (2016) and Finn et al. (2016), respectively. For both comparisons, we use a model with the same hyperparameters (number of layers, number of filters, etc.) as the PredNet L0 model trained on KITTI, but train from scratch on the new datasets. The only modification we make is to train using an L2 loss instead of the effective L1 loss, since both models train with an L2 loss and report results using L2-based metrics (MSE for Brabandere et al. (2016) and PSNR for Finn et al. (2016)). That is, we keep the original PredNet model intact but directly optimize using MSE between actual and predicted frames. We measure next-frame prediction performance after inputting 3 frames and 10 frames, respectively, for the 64x64 car-cam and Human3.6M datasets, to be consistent with the published works.
We also include the results using a feedforward multi-scale network, similar to the model of Mathieu et al. (2016), on Human3.6M, as reported by Finn et al. (2016).

Table 4: Evaluation of next-frame predictions on the 64x64 Car-Cam Dataset.

model                                   MSE (per-pixel)
DFN (Brabandere et al., 2016)           1.71×10⁻³
PredNet                                 1.16×10⁻³
Copy Last Frame                         3.58×10⁻³

Table 5: Evaluation of next-frame predictions on Human3.6M.

model                                   PSNR
DNA (Finn et al., 2016)                 42.1
PredNet                                 38.9
FF multi-scale (Mathieu et al., 2016)   26.7
Copy Last Frame                         32.0

On a dataset similar to KITTI, our model outperforms the model proposed by Brabandere et al. (2016). On Human3.6M, our model outperforms a model similar to Mathieu et al. (2016), but underperforms Finn et al. (2016), although we note we did not perform any hyperparameter optimization.

5.3 MULTIPLE TIMESTEP PREDICTION

Figure 6: Extrapolation sequences generated by feeding PredNet predictions back into the model. Left of the orange line: normal t+1 predictions; right: generated by recursively using the predictions as input. First row: ground truth sequences. Second row: generated frames of the original model, trained to solely predict t+1. Third row: model fine-tuned for extrapolation.

While the models presented here were originally trained to predict one frame ahead, they can be made to predict multiple frames by treating predictions as actual input and recursively iterating. Examples of this process are shown in Figure 6 for the PredNet L0 model. Although the next-frame predictions are reasonably accurate, the model naturally breaks down when extrapolating further into the future. This is not surprising, since the predictions will unavoidably have different statistics than the natural images the model was trained to handle (Bengio et al., 2015). If we additionally train the model to process its own predictions, the model is better able to extrapolate. The third row for every sequence shows the output of the original PredNet fine-tuned for extrapolation. Starting from the trained weights, the model was trained with a loss over 15 time steps, where the actual frame was inputted for the first 10 and then the model's predictions were used as input to the network for the last 5. For the first 10 time steps, the training loss was calculated on the E_l activations as usual, and for the last 5, it was calculated directly as the mean absolute error with respect to the ground truth frames. Despite eventual blurriness (which might be expected to some extent due to uncertainty), the fine-tuned model captures some key structure in its extrapolations after the tenth time step. For instance, in the first sequence, the model estimates the general shape of an upcoming shadow, despite minimal information in the last seen frame. In the second sequence, the model is able to extrapolate the motion of a car moving to the right. The reader is again encouraged to visit https://coxlab.github.io/prednet/ to view the predictions in video form. Quantitatively, the MSE of the model's predictions stays well below the trivial solution of copying the last seen frame, as illustrated in Fig. 7. The MSE increases fairly linearly from time steps 2-10, even though the model was only trained for up to t+5 prediction.
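The recursive extrapolation procedure is simple to express in code. The sketch below assumes a hypothetical single-step interface, model.step(frame, state) -> (prediction, new_state), which the PredNet sketch in Section 2 would need a small refactor to expose; it is illustrative only.

```python
import torch

@torch.no_grad()
def extrapolate(model, frames, n_future):
    # Prime the recurrent state on the observed frames, then feed the
    # model's own prediction back in as the next input, n_future times.
    state, pred = None, None
    for frame in frames:
        pred, state = model.step(frame, state)   # hypothetical interface
    preds = []
    for _ in range(n_future):
        preds.append(pred)
        pred, state = model.step(pred, state)
    return torch.stack(preds)
```

Fine-tuning for extrapolation then amounts to running this loop with gradients enabled for the final steps and adding the mean absolute error against the ground-truth frames to the loss.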
The MSE increases fairly linearly from time steps 2-10, even though the model was only trained for up to t+5 prediction.

[Figure 7: mean-squared error (0 to 0.035) versus number of time steps ahead (1 to 10), with curves for Copy Last Seen Frame, PredNet, and the t+5 fine-tuned model.]

Figure 7: MSE of PredNet predictions as a function of the number of time steps ahead predicted. The model was fine-tuned for up to t+5 prediction.

5.4 ADDITIONAL STEERING ANGLE ANALYSIS

In Figure 8, we show the steering angle estimation accuracy on the Comma.ai (Biasini et al., 2016) dataset using the representation learned by the PredNet L_0 model, as a function of the number of frames inputted into the model. The PredNet's representation at all layers was concatenated (after spatially pooling lower layers to a common spatial resolution) and a fully-connected readout was fit using MSE. For each level of the number of training examples, we average over 10 cross-validation splits. To serve as points of reference, we include results for two static models. The first model is an autoencoder trained on single-frame reconstruction with appropriately matching hyperparameters. A fully-connected layer was fit on the autoencoder's representation to estimate the steering angle in the same fashion as the PredNet. The second model is the default model in the posted Comma.ai code (Biasini et al., 2016), which is a five-layer CNN. This model is trained end-to-end to estimate the steering angle given the current frame as input, with an MSE loss. In addition to 25K examples, we trained a version using all of the frames in the Comma dataset (~396K). For all models, the final weights were chosen at the minimum validation error during training. Given the relatively small number of videos in the dataset compared to the average duration of each video, we used 5% of each video for validation and testing, chosen as a random continuous chunk, and discarded the 10 frames before and after the chosen segments from the training set.

Figure 8: Steering angle estimation accuracy as a function of the number of input frames.

As illustrated in Figure 8, the PredNet's performance gets better over time, as one might expect, as the model is able to accumulate more information. Interestingly, it performs reasonably well after just one time step, a regime that is orthogonal to the training procedure of the PredNet since there are no dynamics. Altogether, these results again point to the usefulness of the model in learning underlying latent parameters.

5.5 PREDNET L_all NEXT-FRAME PREDICTIONS

Figures 9 and 10 compare next-frame predictions by the PredNet L_all model, trained with a prediction loss on all layers (λ_0 = 1, λ_{l>0} = 0.1), and the PredNet L_0 model, trained with a loss only on the lowest layer. At first glance, the differences in predictions seem fairly minor, and indeed, in terms of MSE, the L_all model only underperformed the L_0 version by 3% and 6%, respectively, for the rotating faces and CalTech Pedestrian datasets. Upon careful inspection, however, it is apparent that the L_all predictions lack some of the finer details of the L_0 predictions and are more blurry in regions of high variance. For instance, with the rotating faces, the facial features are less defined, and with CalTech, details of approaching shadows and cars are less precise.

[Figure 9: rows labeled Actual, PredNet L_0, PredNet L_all, and Error L_all − L_0, advancing left to right in time, for two sequences.]

Figure 9: Next-frame predictions of the PredNet L_all model on the rotating faces dataset and comparison to the L_0 version.
The "Error L_all − L_0" visualization shows where the pixel error was smaller for the L_0 model than for the L_all model. Green regions correspond to where L_0 was better and red corresponds to where L_all was better.

[Figure 10: rows labeled Actual, PredNet L_0, PredNet L_all, and Error L_all − L_0, advancing left to right in time, for two sequences.]

Figure 10: Next-frame predictions of the PredNet L_all model on the CalTech Pedestrian dataset and comparison to the L_0 version. The "Error L_all − L_0" visualization shows where the pixel error was smaller for the L_0 model than for the L_all model. Green regions correspond to where L_0 was better and red corresponds to where L_all was better.
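The L_0 and L_all variants compared above differ only in the layer weights λ_l applied to the training loss. The following is a minimal sketch of that weighting, not the full training objective: the per-time-step weighting is omitted, and errors[l] stands for the non-negative error activations E_l at one time step in whatever framework is used.

def prednet_loss(errors, layer_weights):
    # L_0 configuration:   layer_weights = [1.0, 0.0, 0.0, 0.0]
    # L_all configuration: layer_weights = [1.0, 0.1, 0.1, 0.1]
    total = 0.0
    for E_l, lam in zip(errors, layer_weights):
        if lam > 0.0:
            total += lam * E_l.mean()   # unit-averaged rectified error at layer l
    return total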
Published as a conference paper at ICLR 2017TOPOLOGY AND GEOMETRY OF HALF-RECTIFIEDNETWORK OPTIMIZATIONC. Daniel FreemanDepartment of PhysicsUniversity of California at BerkeleyBerkeley, CA 94720, USAdaniel.freeman@berkeley.eduJoan BrunaCourant Institute of Mathematical SciencesNew York UniversityNew York, NY 10011, USAbruna@cims.nyu.eduABSTRACTThe loss surface of deep neural networks has recently attracted interest in theoptimization and machine learning communities as a prime example of high-dimensional non-convex problem. Some insights were recently gained using spinglass models and mean-field approximations, but at the expense of strongly sim-plifying the nonlinear nature of the model.In this work, we do not make any such assumption and study conditions on the datadistribution and model architecture that prevent the existence of bad local minima.Our theoretical work quantifies and formalizes two important folklore facts: (i) thelandscape of deep linear networks has a radically different topology from that ofdeep half-rectified ones, and (ii) that the energy landscape in the non-linear caseis fundamentally controlled by the interplay between the smoothness of the datadistribution and model over-parametrization. Our main theoretical contributionis to prove that half-rectified single layer networks are asymptotically connected,and we provide explicit bounds that reveal the aforementioned interplay.The conditioning of gradient descent is the next challenge we address. We studythis question through the geometry of the level sets, and we introduce an algo-rithm to efficiently estimate the regularity of such sets on large-scale networks.Our empirical results show that these level sets remain connected throughout allthe learning phase, suggesting a near convex behavior, but they become exponen-tially more curvy as the energy level decays, in accordance to what is observed inpractice with very low curvature attractors.1 I NTRODUCTIONOptimization is a critical component in deep learning, governing its success in different areas ofcomputer vision, speech processing and natural language processing. The prevalent optimizationstrategy is Stochastic Gradient Descent, invented by Robbins and Munro in the 50s. The empiricalperformance of SGD on these models is better than one could expect in generic, arbitrary non-convexloss surfaces, often aided by modifications yielding significant speedups Duchi et al. (2011); Hintonet al. (2012); Ioffe & Szegedy (2015); Kingma & Ba (2014). This raises a number of theoreticalquestions as to why neural network optimization does not suffer in practice from poor local minima.The loss surface of deep neural networks has recently attracted interest in the optimization and ma-chine learning communities as a paradigmatic example of a hard, high-dimensional, non-convexproblem. Recent work has explored models from statistical physics such as spin glasses Choroman-ska et al. (2015), in order to understand the macroscopic properties of the system, but at the expenseof strongly simplifying the nonlinear nature of the model. Other authors have advocated that the realdanger in high-dimensional setups are saddle points rather than poor local minima Dauphin et al.(2014), although recent results rigorously establish that gradient descent does not get stuck on saddlepoints Lee et al. (2016) but merely slowed down. Other notable recent contributions are Kawaguchi(2016), which further develops the spin-glass connection from Choromanska et al. 
(2015) and re-solves the linear case by showing that no poor local minima exist; Sagun et al. (2014) which alsoCurrently on leave from UC Berkeley.1Published as a conference paper at ICLR 2017discusses the impact of stochastic vs plain gradient, Soudry & Carmon (2016), that studies Empir-ical Risk Minimization for piecewise multilayer neural networks under overparametrization (whichneeds to grow with the amount of available data), and Goodfellow et al. (2014), which provided in-sightful intuitions on the loss surface of large deep learning models and partly motivated our work.Additionally, the work Safran & Shamir (2015) studies some topological properties of homogeneousnonlinear networks and shows how overparametrization acts upon these properties, and the pioneer-ing Shamir (2016) studied the distribution-specific hardness of optimizing non-convex objectives.Lastly, several papers submitted concurrently and independently of this one deserve note, particu-larly Swirszcz et al. (2016) which analyzes the explicit criteria under which sigmoid-based neuralnetworks become trapped by poor local minima, as well as Tian (2017), which offers a complemen-tary study of two layer ReLU based networks, and their learning dynamics.In this work, we do not make any linearity assumption and study conditions on the data distributionand model architecture that prevent the existence of bad local minima. The loss surface F()ofa given model can be expressed in terms of its level sets , which contain for each energy levelall parameters yielding a loss smaller or equal than . A first question we address concernsthe topology of these level sets, i.e. under which conditions they are connected. Connected levelsets imply that one can always find a descent direction at each energy level, and therefore that nopoor local minima can exist. In absence of nonlinearities, deep (linear) networks have connectedlevel sets Kawaguchi (2016). We first generalize this result to include ridge regression (in the twolayer case) and provide an alternative, more direct proof of the general case. We then move to thehalf-rectified case and show that the topology is intrinsically different and clearly dependent on theinterplay between data distribution and model architecture. Our main theoretical contribution is toprove that half-rectified single layer networks are asymptotically connected, and we provide explicitbounds that reveal the aforementioned interplay.Beyond the question of whether the loss contains poor local minima or not, the immediate follow-upquestion that determines the convergence of algorithms in practice is the local conditioning of theloss surface. It is thus related not to the topology but to the shape or geometry of the level sets.As the energy level decays, one expects the level sets to exhibit more complex irregular structures,which correspond to regions where F()has small curvature. In order to verify this intuition, weintroduce an efficient algorithm to estimate the geometric regularity of these level sets by approx-imating geodesics of each level set starting at two random boundary points. Our algorithm usesdynamic programming and can be efficiently deployed to study mid-scale CNN architectures onMNIST, CIFAR-10 and RNN models on Penn Treebank next word prediction. Our empirical resultsshow that these models have a nearly convex behavior up until their lowest test errors, with a singleconnected component that becomes more elongated as the energy decays. The rest of the paper isstructured as follows. 
Section 2 presents our theoretical results on the topological connectednessof multilayer networks. Section 3 presents our path discovery algorithm and Section 4 covers thenumerical experiments.2 T OPOLOGY OF LEVEL SETSLetPbe a probability measure on a product space XY , where we assume XandYare Euclideanvector spaces for simplicity. Let f(xi;yi)gibe an iid sample of size Ldrawn from Pdefining thetraining set. We consider the classic empirical risk minimization of the formFe() =1LLXl=1k(xi;)yik2+R(); (1)where (x;)encapsulates the feature representation that uses parameters 2RSandR()is aregularization term. In a deep neural network, contains the weights and biases used in all layers.For convenience, in our analysis we will also use the oracle risk minimization:Fo() =E(X;Y)Pk(X;)Yk2+R(): (2)Our setup considers the case where Rconsists on either `1or`2norms, as we shall describe below.They correspond to well-known sparse and ridge regularization respectively.2Published as a conference paper at ICLR 20172.1 P OOR LOCAL MINIMA CHARACTERIZATION FROM TOPOLOGICAL CONNECTEDNESSWe define the level set of F()asF() =f2RS;F()g: (3)The first question we study is the structure of critical points of Fe()andFo()when is a mul-tilayer neural network. For simplicity, we consider first a strict notion of local minima: 2RSisa strict local minima of Fif there is >0withF(0)> F()for all02B(;)and06=.In particular, we are interested to know whether Fehas local minima which are not global minima.This question is answered by knowing whether F()is connected at each energy level :Proposition 2.1. IfF()is connected for all then every local minima of F()is a global minima.Strict local minima implies that rF() = 0 andHF()0, but avoids degenerate cases whereFis constant along a manifold intersecting . In that scenario, if Udenotes that manifold, ourreasoning immediately implies that if F()are connected, then for all >0there exists0withdist(0;U)andF(0)<F(). In other words, some element at the boundary of Umust be asaddle point. A stronger property that eliminates the risk of gradient descent getting stuck at Uisthatallelements at the boundary of Uare saddle points. This can be guaranteed if one can showthat there exists a path connecting any to the lowest energy level such that Fis strictly decreasingalong it.Such degenerate cases arise in deep linear networks in absence of regularization. If =(W1;:::;WK)denotes any parameter value, with N1;:::NKdenoting the hidden layer sizes, andFk2GL+Nk(R)are arbitrary elements of the general linear group of invertible NkNkmatriceswith positive determinant, thenU=fW1F11;F1W2F12;:::;FKWK;Fk2GL+Nk(R)g:In particular,Uhas a Lie Group structure. In the half-rectified nonlinear case, the general lineargroup is replaced by the Lie group of homogeneous invertible matrices Fk=diag(1;:::;Nk)withj>0.This proposition shows that a sufficient condition to prevent the existence of poor local minima ishaving connected level sets, but this condition is not necessary: one can have isolated local minimalying at the same energy level. This can be the case in systems that are defined up to a discretesymmetry group, such as multilayer neural networks. 
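The invariance that generates the manifold U is easy to verify numerically. The sketch below does so for a small three-layer half-rectified network, using positive diagonal matrices F_k and the standard telescoping form of the rescaling (the layer sizes and this particular parameterization are our choices for illustration):

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(3)
W1, W2, W3 = (rng.standard_normal((4, 3)),
              rng.standard_normal((4, 4)),
              rng.standard_normal((2, 4)))
F1 = np.diag(rng.uniform(0.5, 2.0, size=4))   # positive diagonal: homogeneous group
F2 = np.diag(rng.uniform(0.5, 2.0, size=4))

relu = lambda z: np.maximum(0.0, z)
original = W3 @ relu(W2 @ relu(W1 @ x))
reparam = (W3 @ np.linalg.inv(F2)) @ relu(
    (F2 @ W2 @ np.linalg.inv(F1)) @ relu((F1 @ W1) @ x))
print(np.allclose(original, reparam))         # True: the loss is constant along U

Dropping the ReLUs, the same check passes for arbitrary invertible F_k, which is the deep linear case discussed above.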
However, as we shall see next, this case putsthe system in a brittle position, since one needs to be able to account for all the local minima (andthere can be exponentially many of them as the parameter dimensionality increases) and verify thattheir energy is indeed equal.2.2 T HELINEAR CASEWe first consider the particularly simple case where Fis a multilayer network defined by(x;) =WK:::W 1x; = (W1;:::;WK): (4)and the ridge regression R() =kk2. This model defines a non-convex (and non-concave) lossFe(). When= 0, it has been shown in Saxe et al. (2013) and Kawaguchi (2016) that in this case,every local minima is a global minima. We provide here an alternative proof of that result that usesa somewhat simpler argument and allows for >0in the caseK= 2.Proposition 2.2. LetW1;W2;:::;WKbe weight matrices of sizes nknk+1,k < K , and letFe(),Fo()denote the risk minimizations using as in (4). Assume that njmin(n1;nK)forj= 2:::K1. Then Fe()(and Fo) is connected for all and allKwhen= 0, and for>0whenK= 2; and therefore there are no poor local minima in these cases. Moreover, any can be connected to the lowest energy level with a strictly decreasing path.Let us highlight that this result is slightly complementary than that of Kawaguchi (2016), Theorem2.3. Whereas we require njmin(n1;nK)forj= 2:::K1and our analysis does not informabout the order of the saddle points, we do not need full rank assumptions on Xnor the weightsWk.3Published as a conference paper at ICLR 2017This result does also highlight a certain mismatch between the picture of having no poor local min-ima and generalization error. Incorporating regularization drastically changes the topology, and thefact that we are able to show connectedness only in the two-layer case with ridge regression is pro-found; we conjecture that extending it to deeper models requires a different regularization, perhapsusing more general atomic norms Bach (2013). But we now move our interest to the nonlinear case,which is more relevant to our purposes.2.3 H ALF-RECTIFIED NONLINEAR CASEWe now study the setting given by(x;) =WKWK1:::W 1x; = (W1;:::;WK); (5)where(z) = max(0 ;z). The biases can be implemented by replacing the input vector xwithx= (x;1)and by rebranding each parameter matrix asWi=Wibi01;wherebicontains the biases for each layer. For simplicity, we continue to use Wiandxin thefollowing.2.3.1 N ONLINEAR MODELS ARE GENERALLY DISCONNECTEDOne may wonder whether the same phenomena of global connectedness also holds in the half-rectified case. A simple motivating counterexample shows that this is not the case in general. Con-sider a simple setup with X2R2drawn from a mixture of two Gaussians N1andN1, and letY= (XZ)Z, whereZis the (hidden) mixture component taking f1;1gvalues. Let^Y= (X;fW1;W2g)be a single-hidden layer ReLU network, with two hidden units. Let Abea configuration that bisects the two mixture components, and let Bthe same configuration, butswapping the bisectrices. One can verify that they can both achieve arbitrarily small risk by lettingthe covariance of the mixture components go to 0. However, any path that connects AtoBmustnecessarily pass through a point in which W1has rank 1, which leads to an estimator with risk atleast1=2.In fact, it is easy to see that this counter-example can be extended to any generic half-rectified ar-chitecture, if one is allowed to adversarially design a data distribution. 
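Before generalizing this construction, note that the obstruction is easy to observe numerically. The sketch below is our own variant of the two-unit example (the mixture means, the noise level, and the target Y = Z are assumptions made for clarity): configurations A and B are permutations of one another, so both attain near-zero risk, yet on the linear path between them W1 collapses (here all the way to the zero matrix at the midpoint) and the risk rises to E|Y|^2 = 1.

import numpy as np

rng = np.random.default_rng(0)
n = 100000
Z = rng.choice([-1.0, 1.0], size=n)                 # hidden mixture component
X = np.stack([Z + 0.01 * rng.standard_normal(n),    # tight Gaussians around (+/-1, 0)
              0.01 * rng.standard_normal(n)], axis=1)
Y = Z                                               # target: recover the component

def risk(W1, alpha):
    return np.mean((np.maximum(0.0, X @ W1.T) @ alpha - Y) ** 2)

W_A, a_A = np.array([[1.0, 0.0], [-1.0, 0.0]]), np.array([1.0, -1.0])
W_B, a_B = W_A[::-1].copy(), a_A[::-1].copy()       # same function, units swapped

for t in [0.0, 0.25, 0.5, 0.75, 1.0]:               # risk along the linear path A -> B
    print(t, risk((1 - t) * W_A + t * W_B, (1 - t) * a_A + t * a_B))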
For any given (X;)witharbitrary architecture and current parameters = (Wi), letP=fA1;:::;ASgbe the underly-ing tessellation of the input space given by our current choice of parameters; that is, (X;)ispiece-wise linear and Pcontains those pieces. Now let Xbe any arbitrary distribution with densityp(x)>0for allx2Rn, for example a Gaussian, and let YjXd= (X;). Since is invariantunder a subgroup of permutations of its hidden layers, it is easy to see that one can find two pa-rameter values A=andB=such thatFo(A) =Fo(B) = 0 , but any continuous path (t)fromAtoBwill have a different tessellation and therefore won’t satisfy Fo((t)) = 0 . Moreover,one can build on this counter-example to show that not only the level sets are disconnected, but alsothat there exist poor local minima. Let 0be a different set of parameters, and Y0jXd= (X;0)be a different target distribution. Now consider the data distribution given by the mixtureXjp(x); zBernoulli (); YjX;zd=z(X;) + (1z)(X;0):By adjusting the mixture component we can clearly change the risk at and0and make themdifferent, but we conjecture that this preserves the status of local minima of and0. Appendix Econstructs a counter-example numerically.This illustrates an intrinsic difficulty in the optimization landscape if one is after universal guaranteesthat do not depend upon the data distribution. This difficulty is non-existent in the linear case andnot easy to exploit in mean-field approaches such as Choromanska et al. (2015), and shows thatin general we should not expect to obtain connected level sets. However, connectedness can berecovered if one is willing to accept a small increase of energy and make some assumptions on thecomplexity of the regression task. Our main result shows that the amount by which the energy isallowed to increase is upper bounded by a quantity that trades-off model overparametrization andsmoothness in the data distribution.4Published as a conference paper at ICLR 2017For that purpose, we start with a characterization of the oracle loss, and for simplicity let us assumeY2Rand let us first consider the case with a single hidden layer and `1regularization:R() =kk1.2.3.2 P RELIMINARIESBefore proving our main result, we need to introduce preliminary notation and results. We firstdescribe the case with a single hidden layer of size m.We definee(m) = minW12Rmn;kW1(i)k21;W22RmEfj(X;)Yj2g+kW2k1: (6)to be the oracle risk using mhidden units with norm 1and using sparse regression. It is a wellknown result by Hornik and Cybenko that a single hidden layer is a universal approximator undervery mild assumptions, i.e. limm!1e(m) = 0 . This result merely states that our statistical setup isconsistent, and it should not be surprising to the reader familiar with classic approximation theory.A more interesting question is the rate at which e(m)decays, which depends on the smoothness ofthe joint density (X;Y )Prelative to the nonlinear activation family we have chosen.For convenience, we redefine W=W1and=W2andZ(W) = max(0;WX ). We also writez(w) = max(0;hw;Xi)where (X;Y )Pandw2RNis any deterministic vector. Let X=EPXXT2RNNbe the covariance operator of the random input X. We assumekXk<1.A fundamental property that will be essential to our analysis is that, despite the fact that Zisnonlinear, the quantity [w1;w2]Z:=EPfz(w1)z(w2)gis locally equivalent to the linear metrichw1;w2iX=EPfwT1XXTw2g=hw1;Xw2i, and that the linearization error decreases with theangle between w1andw2. 
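This local equivalence can be checked by direct simulation. The sketch below is an empirical companion to the bound proved next (Proposition 2.3) for the special case X ~ N(0, I_N), chosen only so that Σ_X = I_N keeps the example short; it estimates [w1, w2]_Z and the upper bound (1 + cos θ)/2 · ‖w_m‖²_Z by Monte Carlo.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500000, 10))                   # X ~ N(0, I), so Sigma_X = I

def dot_Z(w1, w2):
    # Monte Carlo estimate of [w1, w2]_Z = E{ z(w1) z(w2) }, z(w) = max(0, <w, X>)
    return np.mean(np.maximum(0.0, X @ w1) * np.maximum(0.0, X @ w2))

e1, e2 = np.eye(10)[0], np.eye(10)[1]
for theta in [0.01, 0.1, 0.5, 1.0]:
    w1, w2 = e1, np.cos(theta) * e1 + np.sin(theta) * e2  # unit vectors at angle theta
    wm = (w1 + w2) / np.linalg.norm(w1 + w2)              # the unitary bisector
    print(theta, dot_Z(w1, w2), (1 + np.cos(theta)) / 2 * dot_Z(wm, wm))

For small θ the two printed columns agree closely, and the gap grows with the angle, in line with the linearization error described above.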
Without loss of generality, we assume here that kw1k=kw2k= 1, andwe writekwk2Z=Efjz(w)j2g.Proposition 2.3. Let= cos1(hw1;w2i)be the angle between unitary vectors w1andw2and letwm=w1+w2kw1+w2kbe their unitary bisector. Then1 + cos2kwmk2Z2kXk1cos2+ sin2[w1;w2]Z1 + cos2kwmk2Z: (7)The termkXkis overly pessimistic: we can replace it by the energy of Xprojected into thesubspace spanned by w1andw2(which is bounded by 2kXk). Whenis small, a Taylor expansionof the trigonometric terms reveals that23kXkhw1;w2i=23kXkcos=23kXk(122+O(4))(12=4)kwmk2ZkXk(2=4 +2) +O(4)[w1;w2]Z+O(4);and similarly[w1;w2]Zhw1;w2ikwmk2ZkXkhw1;w2i:The local behavior of parameters w1;w2on our regression problem is thus equivalent to that of hav-ing a linear layer, provided w1andw2are sufficiently close to each other. This result can be seen asaspoiler of what is coming: increasing the hidden layer dimensionality mwill increase the chancesto encounter pairs of vectors w1;w2with small angle; and with it some hope of approximating theprevious linear behavior thanks to the small linearization error.In order to control the connectedness, we need a last definition. Given a hidden layer of size mwithcurrent parameters W2Rnm, we define a “robust compressibility” factor asW(l;;m) = minkk0l;supij\( ~wi;wi)jEfjYZ(~W)j2+kk1g;(lm): (8)This quantity thus measures how easily one can compress the current hidden layer representation,by keeping only a subset of lits units, but allowing these units to move by a small amount controlledby. It is a form of n-width similar to Kolmogorov width Donoho (2006) and is also related torobust sparse coding from Tang et al. (2013); Ekanadham et al. (2011).5Published as a conference paper at ICLR 20172.3.3 M AIN RESULTOur main result considers now a non-asymptotic scenario given by some fixed size mof the hid-den layer. Given two parameter values A= (WA1;WA2)2 W andB= (WB1;WB2)withFo(fA;Bg), we show that there exists a continuous path : [0;1]! W connectingAandBsuch that its oracle risk is uniformly bounded by max(;), wheredecreases with modeloverparametrization.Theorem 2.4. For anyA;B2W and2RsatisfyingFo(fA;Bg), there exists a continuouspath: [0;1]!W such that(0) =A,(1) =BandFo((t))max(;);with (9)= infl;maxne(l);WA1(m;0;m);WA1(ml;;m); (10)WB1(m;0;m);WB1(ml;;m)o+C1+O(2); (11)whereC1is an absolute constant depending only on andP.Some remarks are in order. First, our regularization term is currently a mix between `2norm con-straints on the first layer and `1norm constraints on the second layer. We believe this is an artifact ofour proof technique, and we conjecture that more general regularizations yield similar results. Next,this result uses the data distribution through the oracle bound e(m)and the covariance term. Theextension to empirical risk is accomplished by replacing the probability measure Pby the empiricalmeasure ^P=1LPl((x;y)(xl;yl)). However, our asymptotic analysis has to be carefully re-examined to take into account and avoid the trivial regime when MoutgrowsL. A consequence ofTheorem 2.4 is that as mincreases, the model becomes asymptotically connected, as proven in thefollowing corollary.Corollary 2.5. Asmincreases, the energy gap satisfies=O(m1n)and therefore the level setsbecome connected at all energy levels.This is consistent with the overparametrization results from Safran & Shamir (2015); Shamir (2016)and the general common knowledge amongst deep learning practitioners. 
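The robust compressibility factor of Eq. (8) can also be estimated in practice. With α = 0 it is a sparse regression on fixed ReLU features; the sketch below substitutes an ℓ1 (lasso) relaxation followed by a least-squares refit on the selected support, which is our own heuristic for the ℓ0 constraint (note also that scikit-learn scales its penalty internally, so alpha = kappa is only approximate):

import numpy as np
from sklearn.linear_model import Lasso

def compressibility(W, X, Y, l, kappa):
    # Approximate delta_W(l, 0; m): keep at most l of the m hidden units.
    Z = np.maximum(0.0, X @ W.T)                 # fixed first-layer ReLU features
    beta = Lasso(alpha=kappa, fit_intercept=False).fit(Z, Y).coef_
    keep = np.argsort(-np.abs(beta))[:l]         # support: the l largest coefficients
    gamma = np.zeros_like(beta)
    gamma[keep] = np.linalg.lstsq(Z[:, keep], Y, rcond=None)[0]
    return np.mean((Y - Z @ gamma) ** 2) + kappa * np.abs(gamma).sum()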
Our next sections ex-plore this question, and refine it by considering not only topological properties but also some roughgeometrical measure of the level sets.3 G EOMETRY OF LEVEL SETS3.1 T HEGREEDY ALGORITHMThe intuition behind our main result is that, for smooth enough loss functions and for sufficientoverparameterization, it should be “easy” to connect two equally powerful models—i.e., two modelswithFoA;B. A sensible measure of this ease-of-connectedness is the normalized lengthof the geodesic connecting one model to the other: jA;B(t)j=jABj. This length representsapproximately how far of an excursion one must make in the space of models relative to the euclideandistance between a pair of models. Thus, convex models have a geodesic length of 1, becausethe geodesic is simply linear interpolation between models, while more non-convex models havegeodesic lengths strictly larger than 1.Because calculating the exact geodesic is difficult, we approximate the geodesic paths via a dynamicprogramming approach we call Dynamic String Sampling. We comment on alternative algorithmsin Appendix A.For a pair of models with network parameters i,j, each withFe()below a threshold L0, we aimto efficienly generate paths in the space of weights where the empirical loss along the path remainsbelowL0. These paths are continuous curves belonging to F()–that is, the level sets of the lossfunction of interest.6Published as a conference paper at ICLR 2017Algorithm 1 Greedy Dynamic String Sampling1:L0 Threshold below which path will be found2:1 randomly initialize 1, train (xi1)toL03:2 randomly initialize 2, train (xi2)toL04:BeadList (1;2)5:Depth 06:procedure FINDCONNECTION (1;2)7:t t such thatd(1;2;t)dtt= 0 ORt= 0:58: 3 train(xi;t1+ (1t)2)toL09: BeadList insert(3, after 1, BeadList)10:MaxError 1 maxt(Fe(t3+ (1t)1))11:MaxError 2 maxt(Fe(t2+ (1t)3))12: ifMaxError 1>L 0then return FindConnection (1;3)13: ifMaxError 2>L 0then return FindConnection (3;2)14: Depth Depth +1The algorithm recursively builds a string of models in the space of weights which continuouslyconnectitoj. Models are added and trained until the pairwise linearly interpolated loss, i.e.max tFe(ti+ (1t)j)fort2(0;1), is below the threshold, L0, for every pair of neighboringmodels on the string. We provide a cartoon of the algorithm in Appendix C.3.2 F AILURE CONDITIONS AND PRACTICALITIESWhile the algorithm presented will faithfully certify two models are connected if the algorithmconverges, it is worth emphasizing that the algorithm does not guarantee that two models are dis-connected if the algorithm fails to converge. In general, the problem of determining if two modelsare connected can be made arbitrarily difficult by choice of a particularly pathological geometry forthe loss function, so we are constrained to heuristic arguments for determining when to stop run-ning the algorithm. Thankfully, in practice, loss function geometries for problems of interest are notintractably difficult to explore. We comment more on diagnosing disconnections more carefully inAppendix E.Further, if the MaxError exceedsL0for every new recursive branch as the algorithm progresses,the worst case runtime scales as O(exp(Depth )). 
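In code, the recursion of Algorithm 1 is compact. The sketch below is a simplification rather than our exact implementation: it assumes two externally supplied routines, loss(theta) and train_to(theta, L0) (SGD from theta until the empirical loss falls below L0), fixes t* = 0.5, checks the interpolated loss on a finite grid, and also includes the normalized-length statistic used in Section 4.

import numpy as np

def interpolate(a, b, t):
    return [(1 - t) * wa + t * wb for wa, wb in zip(a, b)]   # blend each weight tensor

def max_interp_loss(a, b, loss, n_grid=19):
    return max(loss(interpolate(a, b, t)) for t in np.linspace(0.05, 0.95, n_grid))

def find_connection(a, b, L0, loss, train_to):
    # Beads strictly between a and b; the recursion bottoms out once every
    # adjacent pair interpolates below the threshold L0.
    if max_interp_loss(a, b, loss) <= L0:
        return []
    mid = train_to(interpolate(a, b, 0.5), L0)               # lines 7-9 of Algorithm 1
    return (find_connection(a, mid, L0, loss, train_to) + [mid]
            + find_connection(mid, b, L0, loss, train_to))

def normalized_length(beads):
    # Path length of the converged string over the endpoint distance;
    # exactly 1.0 for a convex problem, larger as the level set curves.
    pts = [np.concatenate([w.ravel() for w in b]) for b in beads]
    path = sum(np.linalg.norm(q - p) for p, q in zip(pts[:-1], pts[1:]))
    return path / np.linalg.norm(pts[-1] - pts[0])

The full string is then [theta_i] + find_connection(theta_i, theta_j, L0, loss, train_to) + [theta_j].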
Empirically, we find that the number of new models added at each depth does grow, but eventually saturates, and falls for a wide variety of models and architectures, so that the typical runtime is closer to O(poly(Depth))—at least up until a critical value of L0.

To aid convergence, either of the choices in line 7 of the algorithm works in practice—choosing t* at a local maximum can provide a modest increase in algorithm runtime, but can be unstable if the calculated interpolated loss is particularly flat or noisy. t* = 0.5 is more stable, but slower. Finally, we find that training θ3 to γL0 for γ < 1 in line 8 of the algorithm tends to aid convergence without noticeably impacting our numerics. We provide further implementation details in Section 4.

4 NUMERICAL EXPERIMENTS

For our numerical experiments, we calculated normalized geodesic lengths for a variety of regression and classification tasks. In practice, this involved training a pair of randomly initialized models to the desired test loss value/accuracy/perplexity, and then attempting to connect that pair of models via the Dynamic String Sampling algorithm. We also tabulated the average number of "beads", or the number of intermediate models needed by the algorithm to connect two initial models. For all of the below experiments, the reported losses and accuracies are on a restricted test set. For more complete architecture and implementation details, see our GitHub page.

The results are broadly organized by increasing model complexity and task difficulty, from easiest to hardest. Throughout, and remarkably, we were able to easily connect models for every dataset and architecture investigated except the one explicitly constructed counterexample discussed in Appendix E.1. Qualitatively, all of the models exhibit a transition from a highly convex regime at high loss to a non-convex regime at low loss, as demonstrated by the growth of the normalized length as well as the monotonic increase in the number of required "beads" to form a low-loss connection.

4.1 POLYNOMIAL REGRESSION

We studied a 1-4-4-1 fully connected multilayer perceptron style architecture with sigmoid nonlinearities and RMSProp/ADAM optimization. For ease of analysis, we restricted the training and test data to be strictly contained in the interval x ∈ [0, 1] and f(x) ∈ [0, 1]. The number of required beads, and thus the runtime of the algorithm, grew approximately as a power law, as demonstrated in Table 1, Fig. 1. We also provide a visualization of a representative connecting path between two models of equivalent power in Appendix D.

[Figure 1 panels: average normalized geodesic length (column a) and average number of beads (column b), plotted against L0 for the regression tasks, against % error on the test set for MNIST and CIFAR10, and against test-set perplexity for PTB.]

Figure 1: (Column a) Average normalized geodesic length and (Column b) average number of beads versus loss. (1) A quadratic regression task. (2) A cubic regression task.
(3) A convnet for MNIST. (4) A convnet inspired by Krizhevsky for CIFAR10. (5) An RNN inspired by Zaremba for PTB next-word prediction.

The cubic regression task exhibits an interesting feature around L0 = 0.15 in Table 1, Fig. 2, where the normalized length spikes but the number of required beads remains low. Up until this point, the cubic model is strongly convex, so this first spike seems to indicate the onset of non-convex behavior and a concomitant radical change in the geometry of the loss surface for lower loss.

4.2 CONVOLUTIONAL NEURAL NETWORKS

To test the algorithm on larger architectures, we ran it on the MNIST handwritten digit recognition task as well as the CIFAR10 image recognition task, indicated in Table 1, Figs. 3 and 4. Again, the data exhibit strong qualitative similarity with the previous models: the normalized length remains low until a threshold loss value, after which it grows approximately as a power law. Interestingly, the MNIST dataset exhibits very low normalized length, even for models nearly at the state of the art in classification power, in agreement with the folk understanding that MNIST is highly convex and/or "easy". The CIFAR10 dataset, however, exhibits large non-convexity, even at the modest test accuracy of 80%.

4.3 RECURRENT NEURAL NETWORKS

To gauge the generalizability of our algorithm, we also applied it to an LSTM architecture for solving the next-word prediction task on the PTB dataset, depicted in Table 1, Fig. 5. Notably, even for a radically different architecture, loss function, and dataset, the normalized lengths produced by the DSS algorithm recapitulate the same qualitative features seen in the above datasets—i.e., models can be easily connected at high perplexity, and the normalized length grows at lower and lower perplexity after a threshold value, indicating an onset of increased non-convexity of the loss surface.

5 DISCUSSION

We have addressed the problem of characterizing the loss surface of neural networks from the perspective of gradient descent algorithms. We explored two angles – topological and geometrical aspects – that build on top of each other.

On the one hand, we have presented new theoretical results that quantify the amount of uphill climbing that is required in order to progress to lower energy configurations in single hidden-layer ReLU networks, and proved that this amount converges to zero with overparametrization under mild conditions. On the other hand, we have introduced a dynamic programming algorithm that efficiently approximates geodesics within each level set, providing a tool that not only verifies the connectedness of level sets, but also estimates the geometric regularity of these sets. Thanks to this information, we can quantify how 'non-convex' an optimization problem is, and verify that the optimization of quintessential deep learning tasks – CIFAR-10 and MNIST classification using CNNs, and next-word prediction using LSTMs – behaves in a nearly convex fashion up until high accuracy levels are reached.

That said, there are some limitations to our framework. In particular, we do not address saddle-point issues that can greatly affect the actual convergence of gradient descent methods. There are also a number of open questions; amongst those, in the near future we shall concentrate on:

• Extending Theorem 2.4 to the multilayer case.
We believe this is within reach, since themain analytic tool we use is that small changes in the parameters result in small changes inthe covariance structure of the features. That remains the case in the multilayer case.Empirical versus Oracle Risk . A big limitation of our theory is that right now it does notinform us on the differences between optimizing the empirical risk versus the oracle risk.Understanding the impact of generalization error and stochastic gradient in the ability to dosmall uphill climbs is an open line of research.Influence of symmetry groups . Under appropriate conditions, the presence of discrete sym-metry groups does not prevent the loss from being connected, but at the expense of increas-ing the capacity. An important open question is whether one can improve the asymptoticproperties by relaxing connectedness to being connected up to discrete symmetry.Improving numerics with Hyperplane method . Our current numerical experiments employ agreedy (albeit faster) algorithm to discover connected components and estimate geodesics.We plan to perform experiments using the less greedy algorithm described in Appendix A.9Published as a conference paper at ICLR 2017ACKNOWLEDGMENTSWe would like to thank Mark Tygert for pointing out the reference to the -nets and Kolmogorovcapacity, and Martin Arjovsky for spotting several bugs in early version of the results. We wouldalso like to thank Maithra Raghu and Jascha Sohl-Dickstein for enlightening discussions, as well asYasaman Bahri for helpful feedback on an early version of the manuscript. CDF was supported bythe NSF Graduate Research Fellowship under Grant DGE-1106400.REFERENCESFrancis Bach. Convex relaxations of structured matrix factorizations. arXiv preprintarXiv:1309.3117 , 2013.Anna Choromanska, Mikael Henaff, Michael Mathieu, G ́erard Ben Arous, and Yann LeCun. Theloss surfaces of multilayer networks. In Proc. AISTATS , 2015.Yann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and YoshuaBengio. Identifying and attacking the saddle point problem in high-dimensional non-convex op-timization. In Advances in Neural Information Processing Systems , pp. 2933–2941, 2014.David L Donoho. Compressed sensing. IEEE Transactions on information theory , 52(4):1289–1306, 2006.John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning andstochastic optimization. Journal of Machine Learning Research , 12(Jul):2121–2159, 2011.Chaitanya Ekanadham, Daniel Tranchina, and Eero P Simoncelli. Recovery of sparse translation-invariant signals with continuous basis pursuit. IEEE transactions on signal processing , 59(10):4735–4744, 2011.Ian J Goodfellow, Oriol Vinyals, and Andrew M Saxe. Qualitatively characterizing neural networkoptimization problems. arXiv preprint arXiv:1412.6544 , 2014.Geoffrey Hinton, N Srivastava, and Kevin Swersky. Lecture 6a overview of mini–batch gradientdescent. Coursera Class , 2012.Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training byreducing internal covariate shift. arXiv preprint arXiv:1502.03167 , 2015.Kenji Kawaguchi. Deep learning without poor local minima. arXiv preprint arXiv:1605.07110 ,2016.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprintarXiv:1412.6980 , 2014.Jason D Lee, Max Simchowitz, Michael I Jordan, and Benjamin Recht. Gradient descent convergesto minimizers. University of California, Berkeley , 1050:16, 2016.Itay Safran and Ohad Shamir. 
On the quality of the initial basin in overspecified neural networks.arXiv preprint arXiv:1511.04210 , 2015.Levent Sagun, V Ugur Guney, Gerard Ben Arous, and Yann LeCun. Explorations on high dimen-sional landscapes. arXiv preprint arXiv:1412.6615 , 2014.Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynam-ics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120 , 2013.Ohad Shamir. Distribution-specific hardness of learning neural networks. arXiv:1609.01037 , 2016.Daniel Soudry and Yair Carmon. No bad local minima: Data independent training error guaranteesfor multilayer neural networks. arXiv preprint arXiv:1605.08361 , 2016.Grzegorz Swirszcz, Wojciech Marian Czarnecki, and Razvan Pascanu. Local minima in training ofneural networks. arXiv preprint arXiv:1611.06310 , 2016.10Published as a conference paper at ICLR 2017Gongguo Tang, Badri Narayan Bhaskar, Parikshit Shah, and Benjamin Recht. Compressed sensingoff the grid. IEEE Transactions on Information Theory , 59(11):7465–7490, 2013.Yuandong Tian. Symmetry-breaking convergence analysis of certain two-layered neural networkswith relu nonlinearity. ICLR Workshop 2017 , 2017.Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv preprintarXiv:1011.3027 , 2010.A C ONSTRAINED DYNAMIC STRING SAMPLINGWhile the algorithm presented in Sec. 3.1 is fast for sufficiently smooth families of loss surfaceswith few saddle points, here we present a slightly modified version which, while slower, providesmore control over the convergence of the string. We did not use the algorithm presented in thissection for our numerical studies.Instead of training intermediate models via full SGD to a desired accuracy as in step 8of the al-gorithm, intermediate models are be subject to a constraint that ensures they are “close” to theneighboring models on the string. Specifically, intermediate models are constrained to the uniquehyperplane in weightspace equidistant from its two neighbors. This can be further modified by ad-ditional regularization terms to control the “springy-ness” of the string. These heuristics could bechosen to try to more faithfully sample the geodesic between two models.In practice, for a given model on the string, i, these two regularizations augment the standard lossby:~F() =F() +(ki1ik+ki+1ik) +k(i1i+1)=2k(i1i+1)=2k(i(i1i+1)=2)k(i(i1i+1)=2)kk. Theregularization term controls the “springy-ness” of the weightstring, and the regularization termcontrols how far off the hyperplane a new model can deviate.Because adapting DSS to use this constraint is straightforward, here we will describe an alternative“breadth-first” approach wherein models are trained in parallel until convergence. This alternativeapproach has the advantage that it will indicate a disconnection between two models “sooner” intraining. The precise geometry of the loss surface will dictate which approach to use in practice.Given two random models iandjwherejijj< L 0, we aim to follow the evolution ofthe family of models connecting itoj. Intuitively, almost every continuous path in the space ofrandom models connecting itojhas, on average, the same (high) loss. For simplicity, we chooseto initialize the string to the linear segment interpolating between these two models. 
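Written out, the augmented loss above takes the following form. Because the printed expression is typographically damaged, the cosine form of the hyperplane penalty (and the small guard against a zero denominator) is our reconstruction, and the two coefficients are named zeta and kappa purely for concreteness:

import numpy as np

def augmented_loss(F, thetas, i, zeta, kappa):
    # Spring-regularized loss for bead i; thetas are flattened parameter vectors.
    prev, cur, nxt = thetas[i - 1], thetas[i], thetas[i + 1]
    spring = zeta * (np.linalg.norm(prev - cur) + np.linalg.norm(nxt - cur))
    normal = (prev - nxt) / 2.0              # normal of the equidistant hyperplane
    offset = cur - (prev + nxt) / 2.0        # bead's deviation from the midpoint
    cosine = abs(normal @ offset) / (
        np.linalg.norm(normal) * np.linalg.norm(offset) + 1e-12)
    return F(cur) + spring + kappa * cosine  # kappa pulls the bead onto the hyperplane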
If this entiresegment is evolved via gradient descent, the segment will either evolve into a string which is entirelycontained in a basin of the loss surface, or some number of points will become fixed at a higher loss.These fixed points are difficult to detect directly, but will be indirectly detected by the persistence ofa large interpolated loss between two adjacent models on the string.The algorithm proceeds as follows:(0.) Initialize model string to have two models, iandj.1. Begin training all models to the desired loss, keeping the instantaneous loss, L0(t), of all modelsbeing trained approximately constant.2. If the pairwise interpolated loss between nandn+1exceedsL0(t), insert a new model at themaximum of the interpolated loss (or halfway) between these two models.3. Repeat steps (1) and (2) until all models (and interpolated errors) are below a threshold lossL0(tnal) :=L0, or until a chosen failure condition (see 3.2).B P ROOFSB.1 P ROOF OF PROPOSITION 2.1Suppose that 1is a local minima and 2is a global minima, but F(1)> F(2). If=F(1),then clearly 1and2both belong to F(). Suppose now that F()is connected. Then we11Published as a conference paper at ICLR 2017could find a smooth (i.e. continuous and differentiable) path (t)with(0) =1,(1) =2andF((t))=F(1). But this contradicts the strict local minima status of 1, and therefore F()cannot be connected .B.2 P ROOF OF PROPOSITION 2.2Let us first consider the case with = 0. We proceed by induction over the number of layers K.ForK= 1, the lossF()is convex. Let A,Bbe two arbitrary points in a level set . ThusF(A)andF(B). By definition of convexity, a linear path is sufficient in that case toconnectAandB:F((1t)A+tB)(1t)F(A) +tF(B):Suppose the result is true for K1. LetA= (WA1;:::;WAK)andB= (WB1;:::;WBK)withF(A),F(B). Sincenjmin(n1;nK)forj= 2:::K1, we can find k=f1;K1gsuch thatnkmin(nk1;nk+1). For eachW1;:::;WK, we denote ~Wj=Wjforj6=k;k1and~Wk=Wk1Wk. By induction hypothesis, the loss expressed in terms of~= (~W1;:::; ~WK1)is connected between ~Aand~B. Let ~Wk(t)the corresponding linear pathprojected in the layer k. We need to produce a path in the variables Wk1(t),Wk(t)such that:iWk1(0) =WAk1,Wk1(1) =WBk1,iiWk(0) =WAk,Wk(1) =WBk,iiiWk(t)Wk1(t) =~Wk1(t)fort2(0;1).We construct it as follows. LetWk(t) =tWBk+ (1t)WAk+t(1t)V ;Wk1(t) =Wk(t)y~Wk1(t);whereWk(t)y= (Wk(t)TWk(t))1Wk(t)Tdenotes the pseudoinverse and Vis ank1nkmatrix drawn from a iid distribution. Conditions (i) and (ii) are immediate from the definition, andcondition (iii) results from the fact thatWk(t)Wk(t)y=INk;sinceWk(t)has full rank for all t2(0;1).Finally, let us prove that the result is also true when K= 2 and > 0. We construct the pathusing the variational properties of atomic norms Bach (2013). When we pick the ridge regressionregularization, the corresponding atomic norm is the nuclear norm:kXk= minUVT=X12(kUk2+kVk2):The path is constructed by exploiting the convexity of the variational norm kXk. LetA=(WA1;WA2)andB= (WB1;WB2), and we define ~W=W1W2. Since ~WfA;Bg=WfA;Bg1WfA;Bg2 , it results thatk~WfA;Bgk12(kWfA;Bg1k2+kWfA;Bg2k2): (12)From (12) it results that the loss Fo(W1;W2)can be minored by another loss expressed in terms of~Wof the formEfjY~WXj2g+ 2k~Wk;which is convex with respect to ~W. Thus a linear path in ~Wfrom ~WAto~WBis guaranteed to bebelowFo(fA;Bg). 
Let us define8t; W 1(t);W2(t) = arg minUVT=~W(t)(kUk2+kVk2):One can verify that we can first consider a path (A1(s);A2(s))from (WA1;WA2)to(W1(0);W2(0)such that8s1(s)2(s) =~WAandk1(s)k2+k2(s)k2decreases;12Published as a conference paper at ICLR 2017and similarly for (WB1;WB2)to(W1(1);W2(1). The path (Af1;2g(s);Wf1;2g(t);Bf1;2g(s))satisfies(i-iii) by definition. We also verify thatkW1(t)k2+kW2(t)k2= 2k~W(t)k2(1t)k~W(0)k+ 2tk~W(1)k(1t)(kWk21(0) +kWk22(0)) +t(kWk21(1) +kWk22(1)):Finally, we verify that the paths we have just created, when applied to Aarbitrary and B=aglobal minimum, are strictly decreasing, again by induction. For K= 1, this is again an immediateconsequence of convexity. For K > 1, our inductive construction guarantees that for any 0<t< 1,the path(t) = (Wk(t))kKsatisfiesFo((t))<Fo(A). This concludes the proof .B.3 P ROOF OF PROPOSITION 2.3LetA(w1;w2) =fx2Rn;hx;w 1i0;hx;w 2i0g:By definition, we havehw1;w2iZ=Efmax(0;hX;w 1i) max(0;hX;w 2i)g (13)=ZA(w1;w2)hx;w 1ihx;w 2idP(x); (14)=ZQ(A(w1;w2))hQ(x);w1ihQ(x);w2i(dP(Q(x))); (15)whereQis the orthogonal projection onto the space spanned by w1andw2anddP(x) =dP(x1;x2)is the marginal density on that subspace. Since this projection does not interfere with the rest of theproof, we abuse notation by dropping the Qand still referring to dP(x)as the probability density.Now, letr=12kw1+w2k=1+cos()2andd=w2w12. By construction we havew1=rwmd; w 2=rwm+d;and thushx;w 1ihx;w 2i=r2jhx;wmij2jhx;dij2: (16)By denoting C(wm) =fx2Rn;hx;wmi0g, observe that A(w1;w2)C(wm). Let us denotebyB=C(wm)nA(w1;w2)the disjoint complement. It results thathw1;w2iZ=ZA(w1;w2)hx;w 1ihx;w 2idP(x)=ZC(wm)[r2jhx;wmij2jhx;dij2]dP(x)r2ZBjhx;wmij2dP(x) +ZBjhx;dij2dP(x)= r2kwmk2Zr2ZBjhx;wmij2dP(x)|{z}E1ZA(w1;w2)jhx;dij2dP(x)|{z}E2:(17)We conclude by bounding each error term E1andE2separately:0E1r2jsin()j2ZBkxk2dP(x)r2jsin()j22kXk; (18)since every point in Bby definition has angle greater than =2fromwm. Also,0E2kdk2ZA(w1;w2)kxk2dP(x)1cos()22kXk (19)by direct application of Cauchy-Schwartz. The proof is completed by plugging the bounds from(18) and (19) into (17) .13Published as a conference paper at ICLR 2017B.4 P ROOF OF THEOREM 2.4Consider a generic andlm. A path from AtoBwill be constructed by concatenating thefollowing paths:1. fromAtolA, the best linear predictor using the same first layer as A,2. fromlAtosA, the best (ml)-term approximation using perturbed atoms from A,3. fromsAtothe oraclelterm approximation,4. fromtosB, the best (ml)-term approximation using perturbed atoms from B,5. fromsBtolB, the best linear predictor using the same first layer as B,6. fromlBtoB.The proof will study the increase in the loss along each subpath and aggregate the resulting increaseinto a common bound.Subpaths (1) and (6) only involve changing the parameters of the second layer while leaving the first-layer weights fixed, which define a convex loss. Therefore a linear path is sufficient to guaranteethat the loss along that path will be upper bounded by on the first end and WA1(m;0;m)on theother end.Concerning subpaths (3) and (4), we notice that they can also be constructed using only parametersof the second layer, by observing that one can fit into a single nmparameter matrix both the(ml)-term approximation and the oracle l-term approximation. Indeed, let us describe subpath(3) in detail ( subpath (4) is constructed analogously by replacing the role of sAwithsB). 
Let ~WAthe first-layer parameter matrix associated with the ml-sparse solution sA, and letAdenoteits second layer coefficients, which is a m-dimensional vector with at most mlnon-zero coeffi-cients. LetWbe the first-layer matrix of the l-term oracle approximation, and the correspondingsecond-layer coefficients. Since there are only mlcolumns of ~WAthat are used, correspondingto the support of A, we can consider a path that replaces the remaining lcolumns with thosefromWwhile keeping the second-layer vector Afixed. Since the modified columns correspondto zeros inA, such paths have constant loss. Call Wthe resulting first-layer matrix, containingboth the active mlactive columns of ~WAand thelcolumns ofWin the positions determined bythe zeros of A. Now we can consider the linear subpath that interpolates between Aandwhilekeeping the first layer fixed at W. Since again this is a linear subpath that only moves second-layercoefficients, it is non-increasing thanks to the convexity of the loss while fixing the first layer. Weeasily verify that at the end of this linear subpath we are using the oracle l-term approximation,which has loss e(l), and therefore subpath (3) incurs in a loss that is bounded by its extremal valuesWA1(ml;;m )ande(l).Finally, we need to show how to construct the subpaths (2) and (5), which are the most delicatestep since they cannot be bounded using convexity arguments as above. Let ~WAbe the resultingperturbed first-layer parameter matrix with mlsparse coefficients A. Let us consider an auxiliaryregression of the formW= [WA;~WA]2Rn2m:and regression parameters1= [1; 0];2= [0;A]:ClearlyEfjY1Wj2g+k1k1=EfjY1WAj2g+k1k1and similarly for 2. By convexity, the augmented linear path (t) = (1t)1+t2thus satisfies8t;L(t) =EfjY(t)Wj2g+k(t)k1max(L(0);L(1)):Let us now approximate this augmented linear path with a path in terms of first and second layerweights. We consider1(t) = (1t)WA+t~WA;and2(t) = (1t)1+tA:14Published as a conference paper at ICLR 2017We have thatFo(f1(t);2(t)g) =EfjY2(t)Z(1(t))j2g+k2(t)k1 (20)EfjY2(t)Z(1(t))j2g+((1t)k1k1+tkAk1)=L(t) +EfjY2(t)Z(1(t))j2gEfjY(1t)1Z(WA)tAZ(~WA)j2g: (21)Finally, we verify thatEfjY2(t)Z(1(t))j2gEfjY(1t)1Z(WA)tAZ(~WA)j2g (22)4max(EjYj2;pEjY2j)kXk(1=2+pEjY2j1) +o(2):Indeed, from Proposition 2.3, and using the fact that8iM; t2[0;1];\((1t)wAi+t~wAi;wAi);\((1t)wAi+t~wAi; ~wAi)we can write(1t)1;iz(wAi) +tA;iz( ~wAi)d=2(t)iz(1(t)i) +ni;withEfjnij2g4j2(t)ij2kXk2+O(4)andEjnij2j2(t)ijpkXkusing concavity ofthe moments. ThusEfjY2(t)Z(1(t))j2gEfjY(1t)1Z(WA)tAZ(~WA)j2g2E(Xi(Y2(t)Z(1(t)))ni)+E(jXinij2)4pEjY2jkXkk2k+2(k2k1)2kXk4max(1;pEjY2j)kXk(k2k1+k2k21) +o(2)4max(pEjY2j;EjY2j)kXk(1+pEjY2j2) +o(2);which proves (22).We have just constructed a path from AtoB, in which all subpaths except (2) and (5) have energymaximized at the extrema due to convexity, given respectively by ,W1A(m;0;m),W1A(ml;;m ),e(l),W1B(ml;;m ), andW1B(m;0;m). For the two subpaths (2) and (5), (22) showsthat it is sufficient to add the corresponding upper bound to the linear subpath, which is of the formC+o(2)whereCis an explicit constant independent of . Sincelandare arbitrary, we arefree to pick the infimum, which concludes the proof. B.5 P ROOF OF COROLLARY 2.5Let us consider a generic first layer weight matrix W2Rnm. Without loss of generality, we canassume thatkwkk= 1for allk, since increasing the norm of kwkkwithin the unit ball has no penaltyin the loss, and we can compensate this scaling in the second layer thanks to the homogeneity ofthe half-rectification. 
Since this results in an attenuation of these second layer weights, they too areguaranteed not to increase the loss.From Vershynin (2010) [Lemma 5.2] we verify that the covering number N(Sn1;)of the Eu-clidean unit sphere Sn1satisfiesN(Sn1;)1 +2n;which means that we can cover the unit sphere with an -net of sizeN(Sn1;).Let0< < n1(1 +n1)1, and let us pick, for each m,m=m1n. Let us consider itscorresponding -net of sizeum=N(Sn1;m)'1 +2mn'm1:15Published as a conference paper at ICLR 2017Since we have mvectors in the unit sphere, it results from the pigeonhole principle that at least oneelement of the net will be associated with at least vm=mu1m'mvectors; in other words, weare guaranteed to find amongst our weight vector Wa collection Qmofvm'mvectors that areall at an angle at most 2mapart. Let us now apply Theorem 2.4 by picking n=vmand=m.We need to see that the terms involved in the bound all converge to 0asm!1 .The contribution of the oracle error e(vm)e(m)goes to zero as m! 1 by the fact thatlimm!1e(m)exists (it is a decreasing, positive sequence) and that vm!1 .Let us now verify that (mvm;m;m)also converges to zero. We are going to prune the firstlayer by removing one by one the vectors in Qm. Removing one of these vectors at a time incurs inan error of the order of m. Indeed, let wkbe one of such vectors and let 0be the solution ofmin0E(0) = min0=(f;k)2RkEfjYTfZ(Wk)kz(wk)j2g+(kfk1+jkj);whereWkis a shorthand for the matrix containing the rest of the vectors that have not been dis-carded yet. Removing the vector wkfrom the first layer increases the loss by a factor that is upperbounded by E(p)E(), where(p)j=0jforj <k1;0k1+0kotherwise.;since nowpis a feasible solution for the pruned first layer.Let us finally bound E(p)E().Since\(wk;wk1)m, it results from Proposition 2.3 thatz(wk)d=z(wk1) +n;withEfjnj2gC2for some constant Cindependent of m. By redefining p1=YTpZ(Wk)12nandp2=12n, we haveEfjYTpZ(Wk)j2gEfjY0TZ(Wk)kz(wk)j2g=Efjp1+p2j2gEfjp1p2j2g= 4Efjp1p2jgvuutE(YTpZ(Wk)12n2)pEfjnj2g(C+)'m;whereConly depends on EfjYj2g. We also verify that kpk1k0k1.It results that removing jQmjof such vectors incurs an increase of the loss at most jQmjm'mm1n=m+1n. Since we picked such that+1n<0, this term converges to zero. Theproof is finished.C C ARTOON OF ALGORITHMRefer to Fig. 2.D V ISUALIZATION OF CONNECTIONBecause the weight matrices are anywhere from high to extremely high dimensional, for the pur-poses of visualization we projected the models on the connecting path into a three dimensionsal sub-space. Snapshots of the algorithm in progress for the quadratic regression task are indicated in Fig.3. This was done by vectorizing all of the weight matrices for all the beads for a given connectingpath, and then performing principal component analysis to find the three highest weight projectionsfor the collection of models that define the endpoints of segments for a connecting path—i.e., the16Published as a conference paper at ICLR 2017θθii θθjj θθii θθjj γγθθii,θθjj tt∗ θθii θθjj θθii,jj θθii θθjj θθii,jj tt∗ θθii θθjj θθii,jj θθ θθii θθjj θθii,jj θθii θθjj θθii,jj θθ θθii,iijj θθii,iijj aa) bb) cc) dd) ee) ff) gg) Figure 2: A cartoon of the algorithm. 
a) The initial two models with approximately the same loss, L0. b) The interpolated loss curve, in red, and its global maximum, occurring at t = t*. c) The interpolated model θ(i, j, t*) is added and labeled θ_{i,j}. d) Stochastic gradient descent is performed on the interpolated model until its loss is below L0. e) New interpolated loss curves are calculated between the models, pairwise on a chain. f) As in step c), a new model is inserted at the maxima of the interpolated loss curve between θ_i and θ_{i,j}. g) As in step d), gradient descent is performed until the model has low enough loss.

Figure 3: Snapshots of Dynamic String Sampling in action for the quadratic regression task. The string's coordinates are its projections onto the three most important principal axes of the fully converged string. (Top Left) One step into the algorithm; note the high loss between all of the vertices of the path. (Top Right) An intermediate step of the algorithm: portions of the string have converged, but there are still regions with high interpolated loss. (Bottom Left) Near the end of the algorithm, almost the entire string has converged to low loss. (Bottom Right) The algorithm has finished: a continuous path between the models has been found with low loss.

These endpoint models are the θ_i discussed in the algorithm. We then projected the connecting string of models onto these three directions. The color of the strings was chosen to be representative of the test loss under a log mapping, so that extremely high test loss mapped to red, whereas test loss near the threshold mapped to blue. An animation of the connecting path can be seen on our GitHub page. Finally, projections onto pairs of principal components are indicated by the black curves.

E A DISCONNECTION

E.1 A DISCONNECTION

As a sanity check for the algorithm, we also applied it to a problem for which we know that it is not possible to connect models of equivalent power by the arguments of Section 2.3.1. The input data is 3 points in R², and the task is to permute the datapoints, i.e. map {x1, x2, x3} → {x2, x3, x1}. This map requires at least 12 parameters in general for the three linear maps which take xi → xj for (i, j) ∈ {(1, 2), (2, 3), (3, 1)}. Our architecture was a 2-3-2 fully connected neural network with a single ReLU nonlinearity after the hidden layer—a model which clearly has 12 free parameters by construction. The two models we tried to connect were a single model, θ, and a copy of θ with the first two neurons in the hidden layer permuted, θ̃. The algorithm fails to converge when initialized with these two models. We provide a visualization of the string of models produced by the algorithm in Fig. 4.

In general, a persistent high interpolated loss between two neighboring beads on the string of models could arise either from a slowly converging, connected pair of models or from a truly disconnected pair of models. "Proving" a disconnection at the level of numerical experiments is intractable in general, but a collection of negative results—i.e., failures to converge—is highly suggestive of a true disconnection.

Figure 4: These three figures are projections of the components of the 12-dimensional weight matrices which comprise the models on the string produced by the DSS algorithm. The axes are the principal components of the weight matrices, and the colors indicate test error for the model. For more details on the figure generation, see Appendix D. (Left) The string of models after 1 step.
Figure 4: These three figures are projections of the components of the 12-dimensional weight matrices which comprise the models on the string produced by the DSS algorithm. The axes are the principal components of the weight matrices, and the colors indicate test error for the model. For more details on the figure generation, see Appendix D. (Left) The string of models after 1 step. Note the high error at all points except the middle and the endpoints. (Middle) An intermediate stage of the algorithm. Part of the string has converged, but a persistent high-error segment still exists. (Right) Even after running for many steps, the error persists, and the algorithm does not converge.
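To make the disconnection check concrete, here is a minimal sketch of the first step of such an experiment: build the 2-3-2 ReLU network, permute two hidden units to get theta~, and scan the loss along the straight-line interpolation between theta and theta~. The dataset, the random initialization, and the loss threshold are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 2))                      # 3 datapoints in R^2
Y = X[[1, 2, 0]]                                 # the cyclic permutation target

# 2-3-2 fully connected net with one ReLU after the hidden layer (12 parameters).
W1, W2 = rng.normal(size=(2, 3)), rng.normal(size=(3, 2))

def loss(W1, W2):
    return np.mean((np.maximum(X @ W1, 0.0) @ W2 - Y) ** 2)

# theta~: the same model with the first two hidden neurons swapped.
perm = [1, 0, 2]
W1p, W2p = W1[:, perm], W2[perm, :]

# Interpolated loss curve along the straight segment between theta and theta~.
ts = np.linspace(0.0, 1.0, 51)
curve = [loss((1 - t) * W1 + t * W1p, (1 - t) * W2 + t * W2p) for t in ts]
print(f"endpoint losses: {curve[0]:.4f} / {curve[-1]:.4f}, max along path: {max(curve):.4f}")
# DSS would now insert a new bead at the argmax and descend; a barrier that never
# relaxes below the threshold is the "failure to converge" discussed above.
```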
Under review as a conference paper at ICLR 2017

COMMUNICATING HIERARCHICAL NEURAL CONTROLLERS FOR LEARNING ZERO-SHOT TASK GENERALIZATION

Junhyuk Oh, Satinder Singh, Honglak Lee
University of Michigan
Ann Arbor, MI, USA
{junhyuk,baveja,honglak}@umich.edu

Pushmeet Kohli
Microsoft Research
Redmond, WA, USA
pkohli@microsoft.com

ABSTRACT

The ability to generalize from past experience to solve previously unseen tasks is a key research challenge in reinforcement learning (RL). In this paper, we consider RL tasks defined as a sequence of high-level instructions described by natural language and study two types of generalization: to unseen and longer sequences of previously seen instructions, and to sequences where the instructions themselves were previously not seen. We present a novel hierarchical deep RL architecture that consists of two interacting neural controllers: a meta controller that reads instructions and repeatedly communicates subtasks to a subtask controller that in turn learns to perform such subtasks. To generalize better to unseen instructions, we propose a regularizer that encourages the learning of subtask embeddings that capture correspondences between similar subtasks. We also propose a new differentiable neural network architecture in the meta controller that learns temporal abstractions, which makes learning more stable under delayed reward. Our architecture is evaluated on a stochastic 2D grid world and a 3D visual environment where the agent should execute a list of instructions. We demonstrate that the proposed architecture is able to generalize well over unseen instructions as well as longer lists of instructions.

1 INTRODUCTION

Humans can often generalize to novel tasks even without any additional learning by leveraging past learning experience. We would like our artificial agents to have similar "zero-shot" generalization capabilities. For example, after learning to solve tasks with instructions such as 'Go to X (or Y)' and 'Pick up Y (or Z)', our agent should be able to infer the underlying goal of new tasks with instructions like 'Go to Z', which requires disentangling the verbs ('Go to/Pick up') and the nouns/objects ('X, Y, Z'). Furthermore, we would like our agents to learn to compose policies to solve novel tasks composed of sequences of seen and unseen instructions. Developing the ability to achieve such generalizations is a key challenge in artificial intelligence and the subfield of reinforcement learning (RL).

Figure 1: Example of grid-world and instructions. The agent is tasked to execute longer sequences of instructions after being trained on short sequences of instructions; in addition, previously unseen instructions can be given during evaluation (blue text). The agent can get more rewards if it deals with randomly appearing enemies (red outlined box) regardless of current instructions.

In this paper, we study the problem of zero-shot task generalization in RL by introducing the "instruction execution" problem where the agent is required to learn through interaction with its environment how to achieve an overall task specified by a list of high-level instructions (see Figure 1). As motivation for this problem, consider a human owner training its new household robot to execute complex tasks specified by natural language text that decompose the task into a sequence of instructions.
Given that it is infeasible to explicitly train the robot on all possible instruction sequences, this problem involves two types of generalization: to unseen and longer sequences of previously seen instructions, and to sequences where some of the instructions themselves were previously not seen. Of course, the usual RL problem of learning policies through interaction to accomplish the goals of an instruction remains part of the problem as well. We assume that the agent does not receive any signal on completing or failing to complete individual instructions from the environment/owner, and so the informative reward signal is delayed until the end. Furthermore, there can be random events in the environment that require the agent to interrupt whatever it is doing and deviate from the instructions to maintain some background task as described in Figure 1. Altogether this makes for a challenging zero-shot task generalization RL problem.

Brief Background: RL tasks composed of sequences of subtasks have been studied before, and a number of hierarchical RL approaches have been designed for them. Typically these have the form of a meta controller and a set of lower-level controllers for subtasks (Sutton et al., 1999; Dietterich, 2000; Parr and Russell, 1997). The meta controller is limited to selecting one from a set of lower-level controllers to employ at any time. This makes it impossible for the low-level controller to generalize to new subtasks without training a new low-level controller separately. Much of the previous work also assumes that the overall task is fixed (e.g., Taxi domain (Dietterich, 2000; Ghavamzadeh and Mahadevan, 2003)). Transfer learning across multiple compositional tasks has typically been studied in RL formulations in which new tasks are only presented via a new reward function from the environment (Singh, 1991; 1992), and so there is no opportunity for fast model-free generalization. To the best of our knowledge, zero-shot model-free generalization to new or longer tasks as well as unseen tasks has not been well-studied in the RL setting.

Our Architecture: This paper presents a hierarchical deep RL architecture (see Figure 2) that consists of two interacting neural controllers: a meta controller that repeatedly chooses an instruction and, conditioned on the current state of the environment, translates it into subtask-arguments (details on this in later sections) and communicates those to the subtask controller that in turn chooses primitive actions given the subtask. This makes the subtask controller a parameterized option (Sutton et al., 1999) module in which the parameters are the subtask-arguments mentioned above. On top of the subtask controller, the meta controller is trained to select proper subtask-arguments depending on observations from the environment, feedback from the subtask controller about termination, and the task instructions. In order to generalize over unseen instructions, we propose analogy-making regularization (discussed in Section 4.1), which encourages the learning of subtask embeddings that capture correspondences between similar subtasks.
In addition, we propose a new differentiable neural architecture in the meta controller that implicitly learns temporal abstractions so that it can operate at a larger time-scale and update the subtask-arguments to the subtask controller only when needed.

Our Results: We developed a 2D grid world environment where the agent can interact with many objects, as illustrated in Figure 1, based on MazeBase (Sukhbaatar et al., 2015) (see Section 6.1 for details). The empirical results show that the meta controller's ability to learn temporal abstractions and a form of analogy-making regularization were all key in allowing our hierarchical architecture to generalize in a zero-shot fashion to unseen tasks. We also demonstrated that the same architecture can generalize to unseen and longer instructions in a 3D visual environment.

2 RELATED WORK

Hierarchical Reinforcement Learning. In addition to the hierarchical RL described in Section 1, there is a line of work on portable options for solving sequential tasks (Konidaris et al., 2012; Konidaris and Barto, 2007). They proposed agent-space options that can be re-used to deal with new problems. However, the optimal sequence of options (e.g., picking up a key followed by opening a door) is fixed throughout training and evaluation in their problem. On the other hand, the agent is required to perform new sequences of tasks depending on given instructions in our work. Our work is also closely related to Programmable HAM (PHAM) (Andre and Russell, 2000; 2002) in that PHAM is designed to execute a given program. However, the program explicitly specifies the policy in PHAM, which effectively reduces the state-action space. In contrast, a list of instructions is a partial description of the task in our work, which means that the policy is not forced to follow the instructions but to use them as a guide to maximize its reward. For example, interrupt conditions need to be manually specified by the program in PHAM, while they are not specified in the instructions but should be learned by the agent in our framework.

Hierarchical RL has recently been combined with deep learning. Kulkarni et al. (2016) proposed hierarchical Deep Q-Learning and demonstrated improved exploration in a challenging Atari game. Tessler et al. (2016) proposed a similar architecture that allows the high-level controller to choose primitive actions directly. Bacon and Precup (2015) proposed the option-critic architecture, which learns options without any domain knowledge, and demonstrated that it can learn distinct options in Atari games. Vezhnevets et al. (2016) proposed a deep architecture that automatically learns macro-actions. Unlike these recent works that aim to solve a single task, the goal of our work is to build a multi-task policy that can generalize over many different sequences of tasks.

Zero-shot Task Generalization and Parameterized Option. There have been only a few studies that aim to generalize over new tasks in a zero-shot fashion (i.e., without additional learning). da Silva et al. (2012) proposed the concept of parameterized skill, which maps a set of task descriptions to policies. Similarly, Isele et al. (2016) proposed a method for zero-shot task generalization which uses task descriptors to predict the parameters of the policy, and proposed coupled dictionary learning with sparsity constraints to enable zero-shot learning.
Schaul et al. (2015) proposed universal value function approximators (UVFA) that learn a value function given a state and goal pair and showed that their framework can generalize over unseen goals. Borsa et al. (2016) proposed to learn a representation of state and action shared across different tasks. However, the proposed approach lacks the ability to solve new tasks in a zero-shot way. Our subtask controller implements the idea of parameterized skill or universal option. Unlike the previous works, however, we propose to build a high-level controller (meta controller) on top of the subtask controller to deal with sequential tasks.

Instruction Execution. There has been a line of work on building agents that can execute natural language instructions: Tellex et al. (2011; 2014) for robotics and MacMahon et al. (2006); Chen and Mooney (2011); Mei et al. (2015) for a simulated environment. However, these approaches focus on natural language understanding to map instructions to a sequence of actions or groundings in a supervised setting. In contrast, we focus on generalization to different sequences of instructions without any supervision for language understanding or for actions. Branavan et al. (2009) also tackle a similar problem of mapping from natural language instructions to a sequence of actions through RL. However, the agent is given a single sentence at a time from the environment, while the agent has to deal with a full list of instructions in our problem. In addition, they do not discuss how to deal with unseen instructions, which is the main focus of our paper.

3 OVERVIEW

Figure 2: Overview of our architecture.

Goal. We aim to learn a multi-task policy which is a mapping $\pi: \mathcal{S} \times \mathcal{M} \to \mathcal{A}$, where $\mathcal{S}$ is a set of states (or observations), $\mathcal{M}$ is a set of lists of instructions, and $\mathcal{A}$ is a set of primitive actions. More importantly, since $\mathcal{M}$ can be arbitrarily large, our goal is to find an optimal policy $\pi$ on a very small set of lists of instructions $\mathcal{M}' \subset \mathcal{M}$ such that $\pi$ is also optimal on the entire set of lists of instructions $\mathcal{M}$.

Hierarchical Structure and Communication Protocol. As illustrated in Figure 2, the proposed architecture consists of a meta controller which selects a subtask and a subtask controller which executes the given subtask. The subtask is further decomposed into several arguments. More specifically, a space of subtasks $\mathcal{G}$ is defined using the Cartesian product of their arguments $\mathcal{G}^{(1)} \times \cdots \times \mathcal{G}^{(n)}$, where $\mathcal{G}^{(i)}$ is a set of the $i$-th arguments (e.g., $\mathcal{G} = \{\text{Visit}, \text{Pick up}\} \times \{A, B\}$). In addition, the subtask controller provides useful information to the meta controller by giving a terminal signal for the given subtask. This communication protocol allows each controller not only to focus on its own independent role but also to communicate with the other to learn a complex closed-loop policy.

Subtask Controller. The subtask controller is a mapping $\mathcal{S} \times \mathcal{G} \to \mathcal{A} \times \mathcal{B}$ which maps a state and a subtask to an action and a termination signal ($\mathcal{B} = \{0, 1\}$) indicating whether the subtask is finished or not. The subtask controller is trained prior to training the meta controller. The main challenge for the subtask controller is that only a subset of subtasks ($\mathcal{U} \subset \mathcal{G}$) is observed during training, and it should be able to generalize over unseen subtasks without experiencing them. Section 4 describes how to construct the subtask architecture parameterized by a neural network and discusses how to generalize over unseen subtasks.
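The communication protocol above is easy to pin down as an interface. The following sketch shows one plausible way to type the two controllers and their interaction loop; the class and method names, and the `env` object, are ours, not the paper's.

```python
from dataclasses import dataclass
from typing import List, Tuple

Action = int
Subtask = Tuple[str, str]          # e.g. ("Pick up", "X"): one value per argument set G(i)

@dataclass
class SubtaskController:
    """Parameterized option: maps (state, subtask) -> (action, termination)."""
    def step(self, state, g: Subtask) -> Tuple[Action, bool]:
        raise NotImplementedError   # pi(a|s,g) and beta(b|s,g) live here

@dataclass
class MetaController:
    """Maps (state, instructions, current subtask, termination) -> next subtask."""
    def step(self, state, instructions: List[str], g: Subtask, done: bool) -> Subtask:
        raise NotImplementedError

def run_episode(env, meta: MetaController, sub: SubtaskController, instructions: List[str]):
    state, g, done = env.reset(), ("Visit", "A"), True   # placeholder initial subtask
    while not env.terminal():
        g = meta.step(state, instructions, g, done)      # may keep or switch the subtask
        action, done = sub.step(state, g)                # subtask controller acts and
        state = env.step(action)                         # reports termination upward
```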
Meta Controller. The meta controller is a mapping $\mathcal{S} \times \mathcal{M} \times \mathcal{G} \times \mathcal{B} \to \mathcal{G}$ which decides a subtask from a state, a list of instructions, the subtask that is currently being executed, and whether that subtask is finished. Thus, the meta controller should understand natural language instructions and pass proper subtask arguments to the subtask controller.

Figure 3: Proposed neural network architectures. (a) Subtask controller: observation and subtask embedding in; action and termination signal out (recurrent). (b) Meta controller: observation, context, retrieved instruction, and subtask termination in; the subtask updater reads the instruction memory and outputs subtask arguments. See text for details.

It is important to note that natural language instructions are not directly subtasks; indeed, there is not a one-to-one correspondence between instructions and subtask-arguments. This is due to a number of important reasons. First, instructions such as 'Pick up all X' are executed by repeatedly solving a subtask [Pick up, X]. Second, the meta controller sometimes needs to interrupt ongoing subtasks and replace them with other subtasks that are not relevant to the instruction because of the background task based on the stochastic events, as described in Figure 1.

Another challenge for the meta controller is that it should deal with the partial observability induced by the list of instructions. This is because the agent is not given which instruction to execute at each time-step from the environment but given just a full list of instructions. Thus, the meta controller should remember how many instructions it has executed and decide when to move to the next instruction. Section 5.1 describes how to construct a memory-based neural network to deal with this challenge.

Finally, it is desirable for the meta controller to operate at a larger time-scale due to the fact that a subtask does not change frequently once it is chosen. We describe a novel way to implicitly learn such a temporal scale of the meta controller through neural networks in Section 5.2.

4 SUBTASK CONTROLLER

Given an observation $s_t \in \mathcal{S}$ and subtask arguments $g = \langle g^{(1)}, \ldots, g^{(n)} \rangle \in \mathcal{G}$, the subtask controller is defined as the following functions:
$$\text{Policy: } \pi_\phi(a_t \mid s_t, g) \qquad \text{Termination: } \beta_\phi(b_t \mid s_t, g) = P(s_t \in \mathcal{T}_g)$$
where $\pi_\phi$ is the policy optimized for the subtask, and $\beta_\phi$ is a termination function, which gives the probability that the state is terminal or not for the given subtask. $\mathcal{T}_g$ is the set of terminal states. The subtask controller is parameterized by $\phi$, which is represented by a neural network as illustrated in Figure 3a. The network learns a representation of the subtask $\varphi(g)$, and it is used to condition the entire network through multiplicative interactions as suggested by Memisevic and Hinton (2010); Lei Ba et al. (2015); Bertinetto et al. (2016). Further details are described in Appendix F.
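As a concrete illustration of conditioning through multiplicative interactions, here is one common form of the idea from the citations above (a factored joint layer). The layer sizes and the use of a simple elementwise product are our assumptions; the exact design is in Appendix F of the paper.

```python
import torch
import torch.nn as nn

class MultiplicativeConditioning(nn.Module):
    """Condition a hidden layer on a subtask embedding phi(g) via an
    elementwise (multiplicative) interaction: h' = relu(W_o (W_h h * W_g phi_g))."""
    def __init__(self, hidden_dim: int, subtask_dim: int, joint_dim: int = 128):
        super().__init__()
        self.proj_h = nn.Linear(hidden_dim, joint_dim)
        self.proj_g = nn.Linear(subtask_dim, joint_dim)
        self.out = nn.Linear(joint_dim, hidden_dim)

    def forward(self, h: torch.Tensor, phi_g: torch.Tensor) -> torch.Tensor:
        joint = self.proj_h(h) * self.proj_g(phi_g)   # multiplicative interaction
        return torch.relu(self.out(joint))

# Usage: state features (batch x 256) modulated by a subtask embedding (batch x 32).
layer = MultiplicativeConditioning(hidden_dim=256, subtask_dim=32)
h = layer(torch.randn(4, 256), torch.randn(4, 32))
```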
4.1 LEARNING TO GENERALIZE BY ANALOGY-MAKING

When learning a non-linear subtask embedding from arguments, $\varphi(g)$, it is desirable for the network to learn prior knowledge about the relationship between different subtask arguments in order to infer the goal of unseen configurations of arguments. To this end, we propose a novel analogy-making regularizer inspired by Reed et al. (2015); Hadsell et al. (2006); Reed et al. (2014). The main idea is to learn correspondences between subtasks. For example, if target objects and 'Visit/Pick up' tasks are independent, we can enforce [Visit, X] : [Visit, Y] :: [Pick up, X] : [Pick up, Y] for any X and Y in the embedding space, so that the agent learns to perform [Pick up, Y] as it performs [Pick up, X] and vice versa.

More specifically, we define several constraints as follows:
$$\|\varphi(g_A) - \varphi(g_B) - \varphi(g_C) + \varphi(g_D)\| \approx 0 \quad \text{if } g_A : g_B :: g_C : g_D \qquad (1)$$
$$\|\varphi(g_A) - \varphi(g_B) - \varphi(g_C) + \varphi(g_D)\| \geq \tau_{dis} \quad \text{if } g_A : g_B \neq g_C : g_D \qquad (2)$$
$$\|\varphi(g_A) - \varphi(g_B)\| \geq \tau_{diff} \quad \text{if } g_A \neq g_B \qquad (3)$$
where $g_k = \langle g_k^{(1)}, g_k^{(2)}, \ldots, g_k^{(n)} \rangle \in \mathcal{G}$ are subtask arguments. Eq. (1) represents the analogy-making relationship, while Eq. (2) and Eq. (3) prevent trivial solutions. To satisfy the above constraints, we propose the following objective functions based on contrastive loss (Hadsell et al., 2006):
$$\mathcal{L}_{sim} = \mathbb{E}_{(g_A, g_B, g_C, g_D) \sim \mathcal{G}_{sim}}\left[\|\varphi(g_A) - \varphi(g_B) - \varphi(g_C) + \varphi(g_D)\|^2\right] \qquad (4)$$
$$\mathcal{L}_{dis} = \mathbb{E}_{(g_A, g_B, g_C, g_D) \sim \mathcal{G}_{dis}}\left[\max\left(0, \tau_{dis} - \|\varphi(g_A) - \varphi(g_B) - \varphi(g_C) + \varphi(g_D)\|\right)^2\right] \qquad (5)$$
$$\mathcal{L}_{diff} = \mathbb{E}_{(g_A, g_B) \sim \mathcal{G}_{diff}}\left[\max\left(0, \tau_{diff} - \|\varphi(g_A) - \varphi(g_B)\|\right)^2\right] \qquad (6)$$
where $\mathcal{G}_{sim}, \mathcal{G}_{dis}, \mathcal{G}_{diff}$ consist of subtask arguments satisfying the conditions in Eq. (1), Eq. (2) and Eq. (3) respectively. $\tau_{dis}, \tau_{diff}$ are threshold distances (hyperparameters). The final analogy-making regularizer is the weighted sum of the above three objectives.
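The regularizer is straightforward to implement; the sketch below spells out Eqs. (4)-(6) for a batch of argument tuples. The embedding network, the thresholds, and the mixing weights are placeholders.

```python
import torch

def analogy_losses(phi, sim, dis, diff, tau_dis=1.0, tau_diff=0.5):
    """phi: embedding network; sim/dis: (B,4) index tensors of subtask tuples
    drawn from G_sim/G_dis; diff: (B,2) index tensor drawn from G_diff.
    Returns (L_sim, L_dis, L_diff) of Eqs. (4)-(6)."""
    def analogy_gap(idx):                      # ||phi(gA)-phi(gB)-phi(gC)+phi(gD)||
        eA, eB, eC, eD = (phi(idx[:, i]) for i in range(4))
        return (eA - eB - eC + eD).norm(dim=1)

    l_sim = analogy_gap(sim).pow(2).mean()                                   # Eq. (4)
    l_dis = (tau_dis - analogy_gap(dis)).clamp(min=0).pow(2).mean()          # Eq. (5)
    gap = (phi(diff[:, 0]) - phi(diff[:, 1])).norm(dim=1)
    l_diff = (tau_diff - gap).clamp(min=0).pow(2).mean()                     # Eq. (6)
    return l_sim, l_dis, l_diff

# Usage with a toy embedding table over 45 subtasks:
phi = torch.nn.Embedding(45, 16)
sim = torch.randint(0, 45, (32, 4)); dis = torch.randint(0, 45, (32, 4))
diff = torch.randint(0, 45, (32, 2))
l_sim, l_dis, l_diff = analogy_losses(phi, sim, dis, diff)
loss = l_sim + 0.5 * l_dis + 0.5 * l_diff      # weighted sum; weights illustrative
```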
Analogies Under Non-independence. Although we use the analogy-making regularizer so that all configurations of subtask arguments are valid and independent from each other throughout the main experiment, our analogy-making regularizer can also be used to inject prior knowledge so that the agent generalizes to unseen subtasks in a specific way. For example, if some objects should be handled in a different way given the same subtask, we can apply the analogy-making regularizer so that Eq. (1) is satisfied only between the same type of objects. This is further discussed in Appendix B.

4.2 TRAINING

The subtask controller is trained on a subset of subtasks ($\mathcal{U} \subset \mathcal{G}$) by directly providing subtask arguments. The policy of the subtask controller is trained through the actor-critic method (Konda and Tsitsiklis, 1999) with generalized advantage estimation (GAE) (Schulman et al., 2015). We also found that pre-training the subtask controller through policy distillation (Rusu et al., 2015; Parisotto et al., 2015) gives slightly better results. The idea of policy distillation is to train separate policies for each subtask and use them to provide action labels to train the subtask controller. Throughout training, the subtask controller is also made to predict whether the current state is terminal or not through a binary classification objective, and the analogy-making regularizer is applied to the subtask embedding separately. The full details of the learning objectives are described in Appendix E.1.

5 META CONTROLLER

The role of the meta controller is to decide subtask arguments $g_t \in \mathcal{G}$ from an observation $s_t \in \mathcal{S}$, a list of instructions $M \in \mathcal{M}$, the previously selected subtask $g_{t-1}$, and its termination signal ($b$) from the subtask controller. Section 5.1 describes the overall architecture of the meta controller for dealing with the partial observability induced by the list of instructions, as discussed in Section 3. We describe a novel way to learn the time-scale of the meta controller so that it can implicitly operate at a large time-scale in Section 5.2.

5.1 ARCHITECTURE

In order to keep track of its progress on instruction execution, the meta controller maintains its internal state by computing a context vector (described in Section 5.1.1) and by focusing on one instruction at a time from the list of instructions $M$ (described in Section 5.1.2). The entire architecture is illustrated in Figure 3b, and further details are described in Appendix F.

5.1.1 CONTEXT

Given the sentence embedding $r_{t-1}$ retrieved at the previous time-step from the instructions (described in Section 5.1.2), the previously selected subtask $g_{t-1}$, and the subtask termination $b_t \sim \beta(b_t \mid s_t, g_{t-1})$, the meta controller computes the context vector ($h_t$) through a neural network:
$$h_t = f_\theta(s_t, r_{t-1}, g_{t-1}, b_t)$$
where $f_\theta$ is a neural network parameterized by $\theta$. Intuitively, $g_{t-1}$ and $b_t$ provide information about which subtask was being solved by the subtask controller and whether it has been finished or not. Note that the subtask does not necessarily match the retrieved instruction ($r_{t-1}$), e.g., when the agent is dealing with the background task. By combining all the information, $h_t$ encodes the spatio-temporal context which is used to determine which instruction to solve and the next subtask.

5.1.2 SUBTASK UPDATER

The meta controller has a subtask updater that constructs a memory structure from the list of instructions, retrieves an instruction by maintaining a pointer into the memory structure, and computes the subtask arguments.

Instruction Memory. Given instructions as a list of sentences $M = (m_1, m_2, \ldots, m_K)$, where each sentence consists of a list of words, $m_i = (w_1, \ldots, w_{|m_i|})$, the subtask updater constructs memory blocks $\mathbf{M} \in \mathbb{R}^{E \times K}$, where each column is an $E$-dimensional embedding of a sentence. The subtask updater maintains a memory pointer defined over memory locations, $p_t \in \mathbb{R}^K$, which is used for instruction retrieval. Memory construction and retrieval are formally described as:
$$\text{Memory: } \mathbf{M} = [\varphi^w(m_1), \varphi^w(m_2), \ldots, \varphi^w(m_K)] \qquad \text{Retrieval: } r_t = \mathbf{M} p_t.$$
Here $\varphi^w(m_i) \in \mathbb{R}^E$ is the embedding of the $i$-th sentence (e.g., bag-of-words). The memory pointer $p_t$ is a non-negative vector which sums up to 1. $r_t \in \mathbb{R}^E$ is the retrieved sentence embedding which is used for computing the subtask-arguments. Intuitively, if the memory pointer is a one-hot vector, $r_t$ indicates a single instruction from the whole list of instructions. The meta controller should learn to manage $p_t$ so that it can focus on the correct instruction at each time-step, which is further described below.

Location-based Memory Addressing. Since instructions should be executed sequentially, we use a location-based memory addressing mechanism (Zaremba and Sutskever, 2015; Graves et al., 2014) to manage the memory pointer. Specifically, the subtask updater shifts the memory pointer by $[-1, 1]$ as:
$$p_t = l_t * p_{t-1} \quad \text{where } l_t \sim \text{Softmax}\left(\varphi^{shift}(h_t)\right) \qquad (7)$$
where $*$ is a convolution operator and $\varphi^{shift}$ is a multi-layer perceptron (MLP). $l_t \in \mathbb{R}^3$ is an internal action that shifts the memory pointer ($p_t$) by either -1, 0, or +1. This mechanism is illustrated in Figure 9b.

Subtask Arguments. The subtask updater takes the context ($h_t$), updates the memory pointer ($p_t$), retrieves a sentence embedding ($r_t$), and finally computes the subtask-arguments as follows:
$$\pi\left(g_t \mid h_t, r_t\right) = \prod_i \pi\left(g_t^{(i)} \mid h_t, r_t\right) \quad \text{where } \pi\left(g_t^{(i)} \mid h_t, r_t\right) \propto \exp\left(\varphi^{goal_i}(h_t, r_t)\right)$$
where $\varphi^{goal_i}$ is an MLP for the $i$-th subtask argument.
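Eq. (7) is a one-dimensional convolution of the previous pointer with a 3-way shift distribution. A minimal sketch in numpy (names ours; the boundary handling, dropping shifted-out mass and renormalizing, is an assumption):

```python
import numpy as np

def shift_pointer(p_prev: np.ndarray, l: np.ndarray) -> np.ndarray:
    """Eq. (7): convolve the memory pointer p_{t-1} (length K, sums to 1) with a
    shift distribution l = (P[shift -1], P[shift 0], P[shift +1])."""
    K = len(p_prev)
    p = np.zeros(K)
    for k in range(K):
        for shift, w in zip((-1, 0, 1), l):
            j = k + shift
            if 0 <= j < K:          # probability mass shifted past the ends is dropped
                p[j] += w * p_prev[k]
    return p / p.sum()              # renormalize to keep a valid distribution

p = np.array([0.0, 1.0, 0.0, 0.0])                   # pointer on instruction 2 of 4
print(shift_pointer(p, np.array([0.0, 0.1, 0.9])))   # mostly moves to instruction 3
```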
5.2 DIFFERENTIABLE TEMPORAL ABSTRACTIONS

Although the subtask updater can update the memory pointer and compute correct subtask-arguments in principle, making a decision at every time-step can be inefficient because subtasks do not change very frequently. Instead, having temporally-extended actions can be useful for dealing with delayed reward by operating at a larger time-scale (Sutton et al., 1999). Although one could use the termination signal of the subtask controller to define the temporal scale of the meta controller, this approach would result in an open-loop policy that is not able to interrupt ongoing subtasks, which is necessary to deal with stochastic events.

To address this challenge, we introduce an internal binary action $c_t$ which decides whether to invoke the subtask updater or not. This action is defined as: $c_t \sim \varphi^{update}(h_t)$. If $c_t = 1$, the subtask updater updates the memory pointer, retrieves an instruction, and updates the subtask arguments. Otherwise, the meta controller continues communicating the current subtask arguments without involving the subtask updater. During training of the update decision, we use L1 regularization on the probability of update to penalize frequent updates, as in Vezhnevets et al. (2016). The entire scheme is described in Algorithm 1.

Algorithm 1 Subtask update (Hard)
Input: $h_t, p_{t-1}, r_{t-1}, g_{t-1}$
Output: $p_t, r_t, g_t$
  $c_t \sim \varphi^{update}(h_t)$
  if $c_t = 1$ then  (Update)
    $l_t \sim \text{Softmax}(\varphi^{shift}(h_t))$
    $p_t \leftarrow l_t * p_{t-1}$  (Shift)
    $r_t \leftarrow \mathbf{M} p_t$  (Retrieve)
    $g_t \sim \pi(g_t \mid h_t, r_t)$  (Subtask)
  else
    $p_t \leftarrow p_{t-1}$; $r_t \leftarrow r_{t-1}$; $g_t \leftarrow g_{t-1}$
  end if

However, the update decision introduces a non-differentiable variable which is known to be difficult to optimize in practice. Thus, we propose a differentiable relaxation of the update decision. The key idea is to take the weighted sum of both the 'update' and 'no update' scenarios. This idea is described in Algorithm 2. We found that training the meta controller using Algorithm 2 followed by fine-tuning using Algorithm 1 is crucial for training the meta controller. Note that Algorithm 2 reduces to Algorithm 1 if we sample $c_t$ and $l_t$ instead of taking the weighted sum, which justifies our initialization trick.

Algorithm 2 Subtask update (Soft)
Input: $h_t, p_{t-1}, r_{t-1}, g_{t-1}$
Output: $p_t, r_t, g_t$
  $c_t \leftarrow \varphi^{update}(h_t)$
  $l_t \leftarrow \text{Softmax}(\varphi^{shift}(h_t))$
  $\tilde{p}_t \leftarrow l_t * p_{t-1}$
  $\tilde{r}_t \leftarrow \mathbf{M} \tilde{p}_t$
  $p_t \leftarrow c_t \tilde{p}_t + (1 - c_t) p_{t-1}$
  $r_t \leftarrow c_t \tilde{r}_t + (1 - c_t) r_{t-1}$
  $\pi(g_t^{(i)}) \leftarrow c_t\, \pi(g_t^{(i)} \mid h_t, \tilde{r}_t) + (1 - c_t)\, \pi(g_{t-1}^{(i)}) \quad \forall i$
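The soft relaxation is just a convex combination of the 'update' and 'no update' branches, which keeps everything differentiable. A compact sketch in PyTorch (names and tensor shapes are ours; the shift convolution reuses the idea from Eq. (7)):

```python
import torch
import torch.nn.functional as F

def soft_subtask_update(h, p_prev, r_prev, g_probs_prev, M,
                        phi_update, phi_shift, phi_goal):
    """Algorithm 2: blend the updated and carried-over pointer/retrieval/subtask
    with the differentiable update gate c_t, instead of sampling it."""
    c = torch.sigmoid(phi_update(h))                     # update probability, shape (1,)
    l = F.softmax(phi_shift(h), dim=-1)                  # shift distribution over {-1,0,+1}
    p_tilde = F.conv1d(p_prev.view(1, 1, -1),            # Eq. (7): convolve the pointer
                       l.flip(-1).view(1, 1, 3), padding=1).view(-1)
    p_tilde = p_tilde / p_tilde.sum()
    r_tilde = M @ p_tilde                                # retrieve instruction embedding
    g_tilde = F.softmax(phi_goal(torch.cat([h, r_tilde])), dim=-1)
    # Convex combination of the two branches (reduces to hard updates if c is 0/1).
    p = c * p_tilde + (1 - c) * p_prev
    r = c * r_tilde + (1 - c) * r_prev
    g_probs = c * g_tilde + (1 - c) * g_probs_prev
    return p, r, g_probs
```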
5.3 TRAINING

The meta controller is trained on a training set of lists of instructions. The actor-critic method is used to update the parameters of the meta controller, while a pre-trained subtask controller is given and fixed. Since the meta controller also learns a subtask embedding $\varphi(g_{t-1})$ and has to deal with unseen subtasks during evaluation, we applied analogy-making regularization to its embedding. More details of the objective functions are provided in Appendix E.

6 EXPERIMENTS AND RESULTS

Our experiments were designed to explore the following hypotheses: that our proposed hierarchical architecture will generalize better than a non-hierarchical controller, and that analogy-making regularization and learning temporal abstractions in the meta controller will both separately be beneficial for task generalization. We are also interested in understanding the qualitative properties of our agent's behavior. The demo videos are available at the following website: https://sites.google.com/a/umich.edu/junhyuk-oh/task-generalization.

6.1 EXPERIMENTAL SETTING

Environment. We developed a 2D grid world based on MazeBase (Sukhbaatar et al., 2015) where the agent can interact with many objects, as illustrated in Figure 1. Unlike the original MazeBase, an observation is represented as a binary 3D tensor: $x_t \in \mathbb{R}^{18 \times 10 \times 10}$, where 18 is the number of object types and $10 \times 10$ is the size of the grid world. Each channel is a binary mask indicating the presence of each object type. There are agent, blocks, water, and 15 types of objects with which the agent can interact (see Appendix D), and all of them are randomly placed for each episode.

The agent has 13 primitive actions: No-operation, Move (North/South/West/East, referred to as "NSWE"), Pick up (NSWE), and Transform (NSWE). Move actions move the agent by one cell in the specified direction. Pick up actions remove the adjacent object in the corresponding relative position, and depending on the object type, Transform actions either remove it or transform it into another object.

The agent receives a time penalty ($-0.1$) for each time-step. Water cells act as obstacles which give $-0.3$ when the agent visits them. The agent receives $+1$ reward when it finishes all instructions in the correct order. Throughout the episode, an enemy randomly appears, moves, and disappears after 10 steps. Transforming an enemy gives $+0.9$ reward. More details are described in Appendix D.

Subtasks and Instructions. The subtask space is defined as the Cartesian product of two arguments: $\mathcal{G} = \{\text{Visit}, \text{Pick up}, \text{Transform}\} \times \{X_1, X_2, \ldots, X_{15}\}$, where $X_i$ is an object type. The agent should be on the same cell as the target object to finish the 'Visit' task. For 'Pick up' and 'Transform' tasks, the agent should perform the corresponding primitive action on the target object. If there are multiple target objects in the world, the agent can perform the action on any of the target objects.

The instructions are represented as a sequence of sentences, each of which is one of the following: Visit X, Pick up X, Transform X, Pick up all X, and Transform all X, where 'X' is the target object type. While the first three instructions require the agent to perform the corresponding subtask, the last two instructions require the agent to repeat the same subtask until the target objects completely disappear from the world.
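The observation encoding is a standard one-hot grid; the sketch below builds one such tensor for a toy placement (the object ids and positions are made up):

```python
import numpy as np

N_TYPES, H, W = 18, 10, 10          # 18 object types on a 10x10 grid

def encode_observation(placements):
    """placements: list of (object_type_id, row, col). Returns the binary
    3D tensor x_t in {0,1}^(18 x 10 x 10): one presence mask per object type."""
    x = np.zeros((N_TYPES, H, W), dtype=np.uint8)
    for obj, r, c in placements:
        x[obj, r, c] = 1
    return x

# e.g. agent (type 0) at (4,5), a block (type 1) at (2,2), water (type 2) at (7,3):
x_t = encode_observation([(0, 4, 5), (1, 2, 2), (2, 7, 3)])
assert x_t.shape == (18, 10, 10) and x_t.sum() == 3
```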
Task Split. Among the 45 subtasks in $\mathcal{G}$, only 30 subtasks are presented to the subtask controller during training. 3 subtasks from the training subtasks and 3 subtasks from the unseen subtasks were selected as the validation set to pick the best-performing subtask controller. For training the meta controller, we created four sets of sequences of instructions: training, validation, and two test sets. The training tasks consist of sequences of up to 4 instructions sampled from the set of training instructions. The validation set consists of sequences of 7 instructions with small overlaps with the training instructions and unseen instructions. The two test sets consist of 20 seen and unseen instructions respectively. More details of the task split are described in Appendix D.

Flat Controller. To understand the advantage of using the communicating hierarchical structure of our controllers, we trained a flat controller which is almost identical to the meta controller architecture except that it directly chooses primitive actions without using the subtask controller. Details of the flat controller architecture are described in Appendix F. The flat controller is pre-trained on the training set of subtasks. To be specific, we removed the instruction memory and fed a single instruction as an additional input (i.e., $r_t$ is fixed throughout the episode). We found that the flat controller could not learn any reasonable policy without this pre-training step, which requires modification of the architecture based on domain knowledge. After pre-training, we fine-tuned the flat controller with the instruction memory on lists of instructions. Note that the flat controller is also capable, in principle, of executing instructions as well as dealing with random events.

6.2 TRAINING DETAILS

The subtask controller consists of 3 convolution layers and 2 fully-connected layers and takes the last 2 observations concatenated through channels as input. Each subtask argument ($g^{(i)}$) is linearly transformed and multiplied with the others to compute the joint subtask embedding. This is further linearly transformed into the weight of the first convolution layer and the weight of the first fully-connected layer. The meta controller takes the current observation as input and has 2 convolution layers and 2 fully-connected layers, where the parameters of the first convolution layer and the first fully-connected layer are predicted by the joint embedding of $r_{t-1}$, $\varphi(g_{t-1})$, and $b_t$.

We implemented synchronous actor-critic with 16 CPU threads based on MazeBase (Sukhbaatar et al., 2015), each of which samples a mini-batch of episodes ($K$) in parallel. The parameters are updated after $16 \times K$ episodes. The details of architectures and hyperparameters are described in Appendix F.

Curriculum Learning via a Forgiving World. We conducted curriculum training by changing the size of the grid world, the density of objects, and the number of instructions according to the agent's success rate. In addition, we trained the soft-architectures in an easier, forgiving environment which generates target objects whenever they do not exist. Crucially, this allows the agent to recover from past mistakes in which it removed needed target objects. The soft-architectures are fine-tuned on the original (and far more unforgiving) environment, which does not regenerate target objects in the middle of the episode. Training directly in the original environment without first training in the forgiving environment leads to too much failure at executing the task, and the agent does not learn successfully. Finally, the hard-architectures are initialized by the soft-architectures and further fine-tuned on the original environment.
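A success-rate-driven curriculum of this kind can be expressed as a small scheduler; the thresholds and increments below are illustrative guesses, not the paper's settings:

```python
class Curriculum:
    """Grow world size, object density, and instruction count as the agent's
    recent success rate crosses a threshold (all values here are illustrative)."""
    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.grid, self.density, self.n_instr = 5, 0.05, 1   # easy initial stage

    def update(self, success_rate: float) -> None:
        if success_rate >= self.threshold:
            self.grid = min(self.grid + 1, 10)
            self.density = min(self.density + 0.02, 0.2)
            self.n_instr = min(self.n_instr + 1, 4)          # training caps at 4

cur = Curriculum()
for rate in [0.5, 0.85, 0.9]:        # e.g. success rates from successive evaluations
    cur.update(rate)
print(cur.grid, cur.density, cur.n_instr)
```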
6.3 EVALUATION OF SUBTASK CONTROLLER

To see how well the subtask controller performs separately from the meta controller, we evaluated it on the training set of subtasks and on unseen subtasks in Table 1. It is shown that analogy-making regularization is crucial for generalization to unseen subtasks. This result suggests that analogy-making regularization plays an important role in learning the relationship between different subtasks and enabling generalization to unseen subtasks.

Agent       | Train: Reward / Success / Accuracy | Unseen: Reward / Success / Accuracy
w/o Analogy | 0.56 / 99.9% / 100.0%              | -1.88 / 60.8% / 49.6%
w/ Analogy  | 0.56 / 99.9% / 100.0%              |  0.55 / 99.8% / 99.6%

Table 1: Performance of the subtask controller. 'Analogy' indicates analogy-making regularization. 'Accuracy' represents termination prediction accuracy. We assume a termination prediction is correct only if predictions are correct throughout the whole episode.

In addition, we observed that the subtask controller learned a non-trivial policy by exploiting causal relationships. For example, when [Pick up, egg] is given as the subtask arguments but a duck is very close to the agent, it learned to transform the duck and pick up the resulting egg, because transforming the duck turns it into an egg in our environment. More analysis of the subtask controller and the effect of analogy-making regularization is discussed in Appendices A and B.

6.4 EVALUATION OF META CONTROLLER

We evaluated the meta controller separately from the subtask controller by providing the best-performing subtask controller during training and evaluation. The results are summarized in Table 2 and Figure 4.

                                     | Train          | Test #1         | Test #2         | Test #3         | Test #4
Set of instructions                  | Seen           | Seen            | Unseen          | Seen w/o all    | Unseen w/o all
Num of instructions                  | 4              | 20              | 20              | 20              | 20
Forgiving: Shortest Path             | -1.56 (99.6%)  | -11.94 (99.1%)  | --              | -9.62 (99.1%)   | --
Forgiving: Near-Optimal              | -0.96 (99.6%)  | -9.99 (99.1%)   | --              | -8.19 (99.1%)   | --
Forgiving: Flat                      | -1.64 (85.8%)  | -14.53 (65.9%)  | -17.25 (23.7%)  | -12.38 (60.4%)  | -14.18 (16.7%)
Forgiving: Hierarchical-TA-Analogy   | -1.05 (92.4%)  | -11.06 (86.2%)  | -13.69 (51.2%)  | -8.54 (91.9%)   | -9.91 (75.2%)
Original: Shortest Path              | -1.62 (99.7%)  | -11.94 (99.4%)  | --              | -8.72 (99.6%)   | --
Original: Near-Optimal               | -1.34 (99.5%)  | -10.30 (99.3%)  | --              | -7.62 (99.4%)   | --
Original: Flat                       | -2.38 (76.0%)  | -18.83 (0.1%)   | -18.92 (0.0%)   | -15.09 (0.0%)   | -15.17 (0.0%)
Original: Hierarchical               | -2.04 (72.8%)  | -16.85 (16.6%)  | -17.66 (6.9%)   | -10.99 (49.4%)  | -11.40 (47.4%)
Original: Hierarchical-Analogy       | -1.74 (81.0%)  | -15.89 (28.0%)  | -17.23 (11.3%)  | -10.11 (61.8%)  | -10.66 (57.7%)
Original: Hierarchical-TA            | -1.38 (92.6%)  | -12.96 (62.9%)  | -17.19 (13.0%)  | -9.11 (74.4%)   | -10.37 (61.2%)
Original: Hierarchical-TA-Analogy    | -1.26 (95.5%)  | -11.30 (81.3%)  | -14.75 (40.3%)  | -8.24 (85.5%)   | -9.51 (73.9%)

Table 2: Performance of the meta controller. Each column corresponds to a different evaluation set of instructions, while each row corresponds to a different configuration of our architecture or the flat controller. Test #3 and Test #4 do not include 'Transform/Pick up all X' instructions. 'TA' indicates the meta controller with temporal abstraction. Each entry in the table represents reward, with success rate in parentheses, averaged over the 10 best runs among 20 independent runs. 'Shortest Path' is a hand-designed policy which executes instructions optimally based on the shortest path but ignores enemies. 'Near-Optimal' is a near-optimal policy that executes instructions based on the shortest path and transforms enemies when they are close to the agent. 'Forgiving' rows show the result from the forgiving environment used for curriculum learning, where target objects are regenerated whenever they do not exist in the world.

Figure 4: Performance per number of instructions (5 to 20), for Shortest-Heuristic, Flat, Hierarchy, Hierarchy-Analogy, Hierarchy-TA, and Hierarchy-TA-Analogy. From left to right, the plots show reward, success rate, the number of steps, and the average number of instructions completed, respectively. Solid and dashed curves show the performances on seen instructions and unseen instructions respectively.
Note that there is a discrepancy between reward and success rate, because success rate is measured only based on the instruction execution, while reward takes into account the background task (i.e., handling the randomly appearing enemy) as well as the instruction execution.

Overall performance. Table 2 shows that our hierarchical agent with temporal abstraction and analogy-making regularization, denoted Hierarchical-TA-Analogy in the table, can handle 20 seen instructions (Test #1) and 20 unseen instructions (Test #2) correctly with reasonably high success rates. In addition, that agent learned to deal with enemies whenever they appear, and thus it outperforms the 'Shortest Path' policy, which is near-optimal in executing instructions while ignoring enemies. We further investigated how the number of instructions affects the performance in Figure 4. Although the performance degrades as the number of instructions increases, our architecture finishes 18 out of 20 seen instructions and 12 out of 20 unseen instructions on average. These results show that our agent is able to generalize to longer compositions of instructions as well as to unseen instructions by just learning to solve short sequences of a subset of instructions.

Flat vs. Hierarchy. All our hierarchical controllers outperform the flat controller both on the training tasks and on longer/unseen instructions (see Table 2). We observed that the flat controller learned a sub-optimal policy which assumes that 'Transform/Pick up X' instructions are identical to 'Transform/Pick up all X' instructions. In other words, it always transforms or picks up all existing targets. Although this simple strategy is a reasonable sub-optimal policy, because such wrong actions are not explicitly penalized in our environment other than through the accumulating penalty per
But, it completely fails on longer instructions in the originalenvironment because the entire task becomes unsolvable when target objects are removed in error.This implies that the flat controller struggles with detecting when a subtask is finished precisely,whereas our hierarchical controllers can easily detect when a subtask is done, because the subtaskcontroller in our communicating architecture provides a termination signal to the meta controller.In addition, the flat controller tends to ignore enemies, while the hierarchical controllers try to dealwith enemies whenever they exist by changing the subtask-arguments communicated by the metacontroller to the subtask controller, which is a better strategy to maximize the reward. The flatcontroller instead has to use primitive actions to deal with both instructions and enemies. Thisimplies that our communicating hierarchical controllers have more advantages for context switchingbetween different sources of tasks (i.e., executing instructions and dealing with enemies).Finally, we observed that the flat controller often makes many mistakes on unseen instructions (e.g.,transform X given ‘Visit X’ as instruction). In contrast, the hierarchical controllers do not make suchmistakes as the subtask controller generalizes well to unseen instructions as discussed in Section 6.3.Effect of Analogy-making. Table 2 shows that analogy-making significantly improves general-ization performance especially on Test #2 (Hierarchical-Analogy outperforms Hierarchical, andHierarchical-TA-Analogy outperforms Hierarchical-TA). This implies that given an unseen targetobject for the ‘Transform/Pick up all’ instruction, the meta controller without analogy-making tendsto fail to check if the target object exists or not. On the other hand, there is almost no improvementby using analogy-making on Test #3 and Test #4 where there are no ‘all’ instruction. This is becausethe meta controller can simply rely on the subtask termination ( bt) given by the subtask controllerto check if the current instruction is finished for non-‘all’ instructions, and the subtask controller(trained with analogy-making) successfully generalizes to unseen subtasks and provides accuratetermination signals to the meta controller. The empirical results showing that analogy-making con-sistently improves generalization performance in both non-analogy-making controllers suggests thatanalogy-making is crucial for generalization to unseen tasks.Effect of Temporal Abstraction. To see the effect of temporal abstractions, we trained a baselinethat updates the memory pointer and the subtask at every time-step which is shown as ‘Hierarchical’and ‘Hierarchical-Analogy’ in Table 2. It turns out that the agent without temporal abstractionsperforms much worse both on the training tasks and testing tasks. We hypothesize that temporalcredit assignment becomes easier with temporal abstractions because the subtask updater (describedin Section 5.1.2) can operate at a larger time-scale by decoupling the update decision from the10Under review as a conference paper at ICLR 2017ABCPick up brownVisit blueVisit redPick up yellowTransform redTransform purplePick up yellowPick up purplePick up yellowTransform purpleVisit yellowVisit redPick up brownVisit yellowPick up purpleTransform blueTransform brownVisit bluePick up purpleTransform blueFirst-person-view(Observation)Top-down-view(Not visible)DABCDFigure 6: Learned policy in 3D environment. 
subtask selection. In particular, given 'all' instructions, the agent should repeat the same subtask while not changing the memory pointer for a long time, and the reward is even more delayed. This can possibly confuse the subtask updater without temporal abstractions, because it should make the same decision for all the time-steps of such instructions. In contrast, the subtask updater with temporal abstractions can get direct feedback from the long-term future, since one decision made by the subtask updater results in multiple primitive actions. We conjecture that this is why the agents learn more stably with temporal abstractions under delayed reward.

Analysis of The Learned Policy. We visualized our agent's behavior on a task with a long list of instructions in Figure 5. We observed that our meta controller learned to communicate the correct subtask-arguments to the subtask controller and learned to move precisely to the next instruction by shifting the memory pointer whenever the instruction is finished. More interestingly, whenever an enemy appears, our meta controller immediately changes the subtask to [Transform, enemy] regardless of the instruction and resumes executing the instruction after dealing with the enemy. Throughout the background task and the 'all' instructions, the meta controller keeps the memory pointer unchanged, as illustrated in (B-D) in the figure. In addition, the agent learned to update the memory pointer and the subtask-argument almost only when it is needed, which provides the subtask updater with temporally-extended actions. This is not only computationally efficient but also useful for learning a better policy, as discussed above.

Figure 5: Analysis of the learned policy. 'Update' shows our agent's internal update decision. 'Shift' shows our agent's memory-shift decision, which is either -1, 0, or +1 from top to bottom. The bottom text shows the instruction indicated by the memory pointer, while the top text shows the subtask chosen by the meta controller. (A) the agent transforms the pig given the 'Transform Pig' instruction and decides to update the subtask (Update is true) and move to the next instruction. (B) an enemy (red) appears while the agent is executing the 'Pick up all meat' instruction (green boxes for meat). The agent changes the subtask to [Transform, enemy]. (C) the agent successfully transforms the enemy and sets the subtask to [Pick up, meat] to resume executing the instruction. (D) the agent picks up the last meat in the world, moves the memory pointer to the next instruction, and sets a new subtask according to the next instruction.

6.5 EVALUATION IN 3D VISUAL ENVIRONMENT

We developed a similar set of tasks in a Minecraft environment based on Oh et al. (2016), as shown in Figure 6. In this environment, the agent can observe only first-person-view images, which naturally involves partial observability. In this environment, even executing a simple instruction (e.g., Visit X) requires the agent to explore the topology to find the target.

Figure 6: Learned policy in the 3D environment. The agent observes 'First-person-view' images, while the 'Top-down-view' is not available to the agent. The right text shows the list of instructions. (A) The agent cannot see the target block (blue) at this point, due to the partially observable nature of the environment and the randomness of the topology. The agent learned to explore the map to find the target block. (B) Although the current instruction is 'Transform purple', the agent decides to transform the green block, because transforming a green block gives a large positive reward (stochastic event). (C) After dealing with the stochastic event, the agent resumes executing the instruction (Transform purple). (D) The agent finishes the whole list of instructions.

An observation is represented as a $64 \times 64$ RGB image ($x_t \in \mathbb{R}^{3 \times 64 \times 64}$). There are 7 different types of colored blocks: red, blue, green, yellow, brown, purple, and black, which correspond to different types of objects in the grid world experiment. Like the 2D grid world environment, the topology of
A wall not only acts as anobstacle but also occludes the objects behind it as shown in Figure 6, which makes the task morechallenging.The agent has 9 actions: Look (Left/Right/Up/Down), Move (Forward/Backward), Pick up ,Trans-form , and No operation .Look left /right actions change the yaw of the agent by 90 degree, whileLook up /down actions change the pitch of the agent by 45 degree. Move forward /backward actionsmove the agent by one block according to the agent’s looking direction. Pick up removes the blockin front of the agent, and Transform changes the block in front of the agent to the black-coloredblock.We used the same reward function used in the 2D grid world experiment. In addition, a green blockrandomly appears and transforming a green block gives +0:9reward regardless of instructions,which acts as a stochastic event. Each instruction is one of the following: Visit X, Pick up X, andTransform X where ‘X’ is the target color. We excluded ‘all’ instructions in this environment becausewe found that solving ‘all’ instructions given a limited amount of time is extremely challenging evenfor humans due to the partial observability.We used almost the same architectures used in the 2D grid world experiment except that a longshort-term memory (Hochreiter and Schmidhuber, 1997) is added on top of the final convolutionlayer both in the subtask controller and the meta controller, as it is one of the simplest ways to dealwith partial observability (Hausknecht and Stone, 2015; Mnih et al., 2016; Oh et al., 2016). Wefollowed the same training scheme used in the 2D grid world experiment.Table 3 shows that our proposed architecture significantly outperforms the flat controller baselineespecially on the test sets of instructions. We observed that the flat controller tends to strugglewith detecting when an instruction is finished and completely fails on unseen instructions, while ourarchitecture performs well on unseen and longer instructions. As shown in Figure 6, our architecturelearned to find the target blocks, detect when an instruction is finished, and deal with the stochasticevent. This result demonstrates that the proposed approach can also be applied to a more complexvisual environment.7 C ONCLUSIONIn this paper, we explored zero-shot task generalization in RL with a new problem where the agentis required to execute a sequence of instructions and to generalize over longer sequences of (un-seen) instructions without additional learning. To solve the problem, we presented a hierarchicaldeep RL architecture in which a meta controller learns a closed-loop policy of subtask-argumentcommunications to a subtask controller which executes the given subtask and communicates its ac-complishment back to the meta controller. Our architecture not only generalizes to unseen tasksafter training but also deals with random events relevant to a background task. In addition, we pro-posed several techniques that led to improvements in both training and generalization performance.First, analogy-making regularization turned out to be crucial for generalization to unseen subtasks.Second, learning temporal abstractions improved the performance by making the subtask updateroperate at a larger time-scale. One interesting line of future work would be to define and solvericher task instructions such as conditional statements (i.e., IF-THEN-ELSE) and loop instructions(i.e., collect 3 target objects). 
Moreover, end-to-end training of the whole hierarchy and discovering the subtask decomposition would be important future work.

REFERENCES

D. Andre and S. J. Russell. Programmable reinforcement learning agents. In NIPS, 2000.

D. Andre and S. J. Russell. State abstraction for programmable reinforcement learning agents. In AAAI/IAAI, 2002.

P.-L. Bacon and D. Precup. The option-critic architecture. In NIPS Deep Reinforcement Learning Workshop, 2015.

L. Bertinetto, J. F. Henriques, J. Valmadre, P. H. Torr, and A. Vedaldi. Learning feed-forward one-shot learners. arXiv preprint arXiv:1606.05233, 2016.

D. Borsa, T. Graepel, and J. Shawe-Taylor. Learning shared representations for value functions in multi-task reinforcement learning. 2016.

S. R. K. Branavan, H. Chen, L. S. Zettlemoyer, and R. Barzilay. Reinforcement learning for mapping instructions to actions. In ACL/IJCNLP, 2009.

D. L. Chen and R. J. Mooney. Learning to interpret natural language navigation instructions from observations. In Proceedings of the 25th AAAI Conference on Artificial Intelligence (AAAI-2011), 2011.

B. C. da Silva, G. Konidaris, and A. G. Barto. Learning parameterized skills. In ICML, 2012.

T. G. Dietterich. Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research (JAIR), 13:227-303, 2000.

M. Ghavamzadeh and S. Mahadevan. Hierarchical policy gradient algorithms. In ICML, pages 226-233, 2003.

A. Graves, G. Wayne, and I. Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014.

R. Hadsell, S. Chopra, and Y. LeCun. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), volume 2, pages 1735-1742. IEEE, 2006.

M. Hausknecht and P. Stone. Deep recurrent Q-learning for partially observable MDPs. arXiv preprint arXiv:1507.06527, 2015.

S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

D. Isele, M. Rostami, and E. Eaton. Using task features for zero-shot knowledge transfer in lifelong learning. In IJCAI, 2016.

V. R. Konda and J. N. Tsitsiklis. Actor-critic algorithms. In NIPS, volume 13, pages 1008-1014, 1999.

G. Konidaris and A. G. Barto. Building portable options: Skill transfer in reinforcement learning. In IJCAI, 2007.

G. Konidaris, I. Scheidwasser, and A. G. Barto. Transfer in reinforcement learning via shared features. Journal of Machine Learning Research, 13:1333-1371, 2012.

T. D. Kulkarni, K. R. Narasimhan, A. Saeedi, and J. B. Tenenbaum. Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. arXiv preprint arXiv:1604.06057, 2016.

J. Lei Ba, K. Swersky, S. Fidler, et al. Predicting deep zero-shot convolutional neural networks using textual descriptions. In Proceedings of the IEEE International Conference on Computer Vision, pages 4247-4255, 2015.

M. MacMahon, B. Stankiewicz, and B. Kuipers. Walk the talk: Connecting language, knowledge, and action in route instructions. In Proceedings of the 21st National Conference on Artificial Intelligence (AAAI-2006), 2006.

H. Mei, M. Bansal, and M. R. Walter. Listen, attend, and walk: Neural mapping of navigational instructions to action sequences. arXiv preprint arXiv:1506.04089, 2015.

R. Memisevic and G. E. Hinton. Learning to represent spatial transformations with factored higher-order Boltzmann machines. Neural Computation, 22(6):1473-1492, 2010.

V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P.
Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.

J. Oh, V. Chockalingam, S. Singh, and H. Lee. Memory-based control of active perception and action in Minecraft. In ICML, 2016.

E. Parisotto, J. L. Ba, and R. Salakhutdinov. Actor-mimic: Deep multitask and transfer reinforcement learning. arXiv preprint arXiv:1511.06342, 2015.

R. Parr and S. J. Russell. Reinforcement learning with hierarchies of machines. In NIPS, 1997.

S. Reed, K. Sohn, Y. Zhang, and H. Lee. Learning to disentangle factors of variation with manifold interaction. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1431-1439, 2014.

S. E. Reed, Y. Zhang, Y. Zhang, and H. Lee. Deep visual analogy-making. In Advances in Neural Information Processing Systems, pages 1252-1260, 2015.

A. A. Rusu, S. G. Colmenarejo, C. Gulcehre, G. Desjardins, J. Kirkpatrick, R. Pascanu, V. Mnih, K. Kavukcuoglu, and R. Hadsell. Policy distillation. arXiv preprint arXiv:1511.06295, 2015.

T. Schaul, D. Horgan, K. Gregor, and D. Silver. Universal value function approximators. In Proceedings of The 32nd International Conference on Machine Learning, pages 1312-1320, 2015.

J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015.

S. P. Singh. The efficient learning of multiple task sequences. In NIPS, 1991.

S. P. Singh. Transfer of learning by composing solutions of elemental sequential tasks. Machine Learning, 8(3-4):323-339, 1992.

S. Sukhbaatar, A. Szlam, G. Synnaeve, S. Chintala, and R. Fergus. MazeBase: A sandbox for learning from games. arXiv preprint arXiv:1511.07401, 2015.

R. S. Sutton, D. Precup, and S. Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1):181-211, 1999.

S. Tellex, T. Kollar, S. Dickerson, M. R. Walter, A. G. Banerjee, S. J. Teller, and N. Roy. Understanding natural language commands for robotic navigation and mobile manipulation. In AAAI, 2011.

S. Tellex, R. A. Knepper, A. Li, D. Rus, and N. Roy. Asking for help using inverse semantics. In Robotics: Science and Systems, 2014.

C. Tessler, S. Givony, T. Zahavy, D. J. Mankowitz, and S. Mannor. A deep hierarchical approach to lifelong learning in Minecraft. CoRR, abs/1604.07255, 2016.

A. S. Vezhnevets, V. Mnih, J. Agapiou, S. Osindero, A. Graves, O. Vinyals, K. Kavukcuoglu, et al. Strategic attentive writer for learning macro-actions. arXiv preprint arXiv:1606.04695, 2016.

W. Zaremba and I. Sutskever. Reinforcement learning neural Turing machines. arXiv preprint arXiv:1505.00521, 2015.

A LEARNED VALUE FUNCTION VISUALIZATION

We visualized the value function learned by the critic network of the subtask controller in Figure 7. As expected from its generalization performance, our subtask controller trained with analogy-making regularization learned high values around the target objects given unseen subtasks.

Figure 7: Value function visualization given unseen subtasks: (a) Observation, (b) Visit egg, (c) Pick up cow, (d) Transform meat. (b-d) visualize learned values for each position of the agent in the grid world shown in (a).
The agent estimates high values around the target object in the world.

B INJECTING PRIOR KNOWLEDGE THROUGH ANALOGY-MAKING

As discussed in Section 4.1, the assumption that subtask arguments are independent from each other may not hold in the real world. In this experiment, we simulate such a case by introducing a new subtask, Interact with X, which requires the agent to perform either 'Pick up' or 'Transform' depending on the object type. We divided objects into two groups: Group A should be picked up given 'Interact with' subtasks, while Group B should be transformed.

Although it is impossible to generalize to unseen target objects in this setting, humans can still easily generalize if someone teaches them by saying 'Interact with X as you do with Y', where X is unseen but Y is seen. We claim that our analogy-making regularizer can be used to mimic such a generalization scenario. To empirically verify this, we presented only a subset of target objects to the agent for 'Interact with X' subtasks during training, while the agent observes all target objects for the original subtasks (i.e., Visit, Pick up, Transform). In the meantime, we applied analogy-making regularization only within Group A and within Group B separately.

The result in Table 4 shows that the subtask controller successfully generalizes to unseen target objects by picking up target objects in Group A and transforming those in Group B. This result suggests that analogy-making can also be used as a tool for injecting (minimal but sufficient) prior knowledge so that the agent generalizes to unseen tasks in a specific way without having any experience on such tasks.

Agent          Train: Reward / Success / Accuracy    Unseen: Reward / Success / Accuracy
w/o Analogy    0.55 / 99.9% / 99.9%                  -3.23 / 42.1% / 44.1%
w/ Analogy     0.55 / 99.9% / 99.9%                   0.55 / 99.8% / 99.6%
Table 4: Injecting prior knowledge through analogy-making. The 'Unseen' columns show performances on unseen 'Interact with X' subtasks. 'Reward', 'Success', and 'Accuracy' represent reward, success rate, and termination prediction accuracy, respectively.

C HARD VS. SOFT

Table 5 compares the hard-architecture described in Algorithm 1 against the soft-architecture described in Algorithm 2. The hard-architecture outperforms the soft-architecture on unseen and longer instructions, while the soft-architecture performs as well as the hard-architecture on the training set of instructions. This is because the soft-architecture tends to diffuse the memory pointer over memory locations when it is not certain about its decision. In fact, there is no advantage to using the soft-architecture in this problem because the agent should focus on one instruction at a time. Nevertheless, training the soft-architecture is very important because it is used to initialize the hard-architecture. Otherwise, we observed that it is difficult to train the hard-architecture from scratch because its non-differentiable operations make optimization difficult.

                      Train          Test #1         Test #2         Test #3        Test #4
Set of instructions   Seen           Seen            Unseen          Seen w/o all   Unseen w/o all
Num of instructions   4              20              20              20             20
Soft                  -1.27 (95.1%)  -11.80 (74.8%)  -16.24 (22.0%)  -7.93 (88.9%)  -9.53 (72.6%)
Hard                  -1.26 (95.5%)  -11.30 (81.3%)  -14.75 (40.3%)  -8.24 (85.5%)  -9.51 (73.9%)
Table 5: Comparison of the hard-architecture and the soft-architecture.
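To make the difference concrete, the sketch below contrasts a soft shift, which can diffuse a one-hot memory pointer across instruction slots, with a hard shift that samples a single move. This is our own NumPy illustration of the mechanism described above, not the paper's code; shift_probs and both function names are hypothetical.

import numpy as np

def soft_shift(pointer, shift_probs):
    # Soft update: mix the {-1, 0, +1} shifts of the pointer by their
    # probabilities. When the shift decision is uncertain, probability
    # mass diffuses over several memory locations.
    shifted = sum(p * np.roll(pointer, k) for k, p in zip((-1, 0, 1), shift_probs))
    return shifted / shifted.sum()

def hard_shift(pointer, shift_probs, rng=np.random.default_rng()):
    # Hard update: sample one shift, so the pointer stays one-hot.
    # This operation is non-differentiable, which is why the
    # hard-architecture is initialized from the trained soft one.
    return np.roll(pointer, rng.choice((-1, 0, 1), p=shift_probs))

pointer = np.array([0.0, 1.0, 0.0, 0.0])   # focused on the second instruction
uncertain = (0.4, 0.3, 0.3)                # an uncertain shift decision
print(soft_shift(pointer, uncertain))      # mass spreads over three slots
print(hard_shift(pointer, uncertain))      # remains one-hot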
D ENVIRONMENT AND TASKS

Environment. The types of objects are illustrated in Figure 8. The 'Transform' action either transforms an object into a different object or removes it, depending on its type, as described in Figure 8.

Figure 8: Example of a grid world with the object specification (objects include Block, Water, Agent, Cow, Pig, Rock, Tree, Box, Duck, Enemy, Candy, Milk, Meat, Stone, Wood, Diamond, Egg, and Heart). The arrows represent the outcome of object transformation. Objects without arrows disappear when transformed. The agent is not allowed to go through blocks and gets a penalty for going through water.

Task Split. For training and evaluating the subtask controller, we constructed a training set of subtasks and a validation set for selecting the best-performing agent. These sets are also used to pre-train the flat controller. The details of the sets of subtasks are described in Table 6. For training the meta controller, we constructed a training set of instructions and a validation set of instructions, described in Table 7. By sampling instructions from these sets, we generated different sets of sequences of instructions for training, validation and evaluation in Table 8.

(Columns: Train (Seen): Visit / Pick up / Transform; Validation: Visit / Pick up / Transform)
Cow X X X
Enemy X X
Pig X X X
Rock X X X
Tree X X
Candy X X X
Diamond X X
Milk X X
Pork X X
Wood X X X
Box X X X
Duck X X
Egg X X
Heart X X
Stone X X
Table 6: Set of subtasks. 'Train (Seen)' shows subtasks used to train the subtask controller. The other unchecked subtasks are used as the unseen set of subtasks for evaluation.

(Columns: Train (Seen): Visit / Pick up / Transform / Pick up all / Transform all; Validation: Visit / Pick up / Transform / Pick up all / Transform all)
Cow X X X X
Pig X X X X
Rock X X X X X
Tree X X X
Candy X X X X
Diamond X X X X
Milk X X X X
Pork X X X
Wood X X X X
Box X X X X X
Duck X X X X X
Egg X X X X
Heart X X X X
Stone X X X X
Table 7: Set of instructions. The 'Train' and 'Validation' columns show the sets of instructions used for training and validation. The unseen set of instructions is defined as the unchecked instructions in the 'Train' column.

                          Train   Validation   Test #1   Test #2   Test #3        Test #4
Set of instructions       Seen    Unseen       Seen      Unseen    Seen w/o all   Unseen w/o all
Max num of instructions   4       7            20        20        20             20
Max steps                 60      90           250       250       200            200
Table 8: Task split.

E DETAILS OF LEARNING OBJECTIVES

E.1 SUBTASK CONTROLLER

The subtask controller is first trained through policy distillation (Rusu et al., 2015; Parisotto et al., 2015) and fine-tuned using an actor-critic method (Konda and Tsitsiklis, 1999) with generalized advantage estimation (GAE) (Schulman et al., 2015). The subtask controller is also trained to predict whether the current state is terminal or not through a binary classification objective.

The idea of policy distillation is to first train separate teacher policies (π_T^g(a|s)) for each subtask g through reinforcement learning, and then train a single policy (π_g(a|s)) to mimic the teachers' behavior by minimizing the KL divergence between them as follows:

∇_θ L_RL = E_{g∼U}[ E_{s∼π_g}[ ∇_θ D_KL(π_T^g ‖ π_g) + α ∇_θ L_term ] ]    (8)

where D_KL(π_T^g ‖ π_g) = Σ_a π_T^g(a|s) log (π_T^g(a|s) / π_g(a|s)), and U ⊂ G is the training set of subtasks. L_term = −log β(s_t, g) = −log P(s_t ∈ T_g) is the cross-entropy loss for termination prediction. Intuitively, we sample a mini-batch of subtasks g, use the subtask controller to generate episodes, and train it to predict the teachers' actions. This method has been shown to be efficient for multi-task learning.
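A minimal sketch of the distillation objective in Eq. (8) is given below, written in PyTorch under our own naming (all tensor arguments and the 13-action example are hypothetical), not the authors' implementation.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, term_logit, is_terminal, alpha=0.1):
    # KL(teacher || student) over actions, as in Eq. (8), plus the
    # termination-prediction cross-entropy L_term weighted by alpha.
    teacher_probs = F.softmax(teacher_logits, dim=-1)
    log_ratio = F.log_softmax(teacher_logits, dim=-1) - F.log_softmax(student_logits, dim=-1)
    kl = (teacher_probs * log_ratio).sum(dim=-1).mean()
    term = F.binary_cross_entropy_with_logits(term_logit, is_terminal)
    return kl + alpha * term

# Example: a batch of 8 states with 13 discrete actions.
loss = distillation_loss(torch.randn(8, 13), torch.randn(8, 13),
                         torch.randn(8), torch.rand(8).round())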
After policy distillation, the subtask controller is fine-tuned through actor-critic with generalized advantage estimation (GAE) (Schulman et al., 2015) as follows:

∇_θ L_RL = E_{g∼U}[ E_{s∼π_g}[ ∇_θ log π(a_t | s_t, g) Â_t^(γ,λ) + α ∇_θ L_term ] ]    (9)

where Â_t^(γ,λ) = Σ_{l=0}^∞ (γλ)^l δ^V_{t+l} and δ^V_t = r_t + γ V(s_{t+1}; θ′) − V(s_t; θ′). θ′ is optimized to minimize E[(R_t − V(s_t; θ′))²]. γ, λ ∈ [0, 1] are a discount factor and a weight for balancing between bias and variance of the advantage estimation.

The final update rule for the subtask controller is:

Δθ ∝ −(∇_θ L_RL + ∇_θ L_AM)    (10)

where L_AM = L_sim + ρ_dis L_dis + ρ_diff L_diff is the analogy-making regularizer defined as the weighted sum of the three objectives described by Eq. (4)-(6). ρ_dis, ρ_diff, α are hyperparameters for each objective.

E.2 META CONTROLLER

The actor-critic method with GAE is used to update the parameters of the meta controller as follows:

∇_θ L_RL = E[ c_t (Σ_i ∇_θ log π(g_t^(i) | h_t, r_t) + ∇_θ log P(l_t | h_t)) Â_t^(γ,λ) + ∇_θ log P(c_t | h_t) Â_t^(γ,λ) + ξ ∇_θ ‖φ_update(h_t)‖_1 ]    (Hard)
∇_θ L_RL = E[ Σ_i ∇_θ log π(g_t^(i) | h_t, r_t) Â_t^(γ,λ) ]    (Soft)    (11)

where c_t ∼ P(c_t | h_t) ∝ φ_update(h_t), and P(l_t | h_t) ∝ Softmax(φ_shift(h_t)). ξ is a weight for the update penalty.

The final update rule for the meta controller is:

Δθ ∝ −(∇_θ L_RL + ∇_θ L_AM)    (12)

where L_AM is the analogy-making regularizer. ρ_dis, ρ_diff, ξ are hyperparameters for each objective.

F ARCHITECTURES AND HYPERPARAMETERS

Figure 9: Proposed neural network architectures. (a) Subtask controller: a CNN conditioned on the subtask arguments outputs the subtask embedding and the termination probability. (b) Meta controller: a CNN combining the context, subtask embedding, subtask termination, and the retrieved instruction outputs the subtask arguments and the update/no-update decision.

Parameter Prediction. Parameter prediction approaches construct a neural network with parameters predicted by condition variables (e.g., exemplar, class embedding). This approach has been shown to be effective for achieving zero-shot and one-shot generalization in image classification problems (Lei Ba et al., 2015; Bertinetto et al., 2016). More formally, given an input x, the output y of a convolution layer and of a fully-connected layer with parameters predicted by a condition variable g can be written as:

Convolution: y = φ(g) ∗ x + b        Fully-connected: y = W′ diag(φ(g)) W x + b

where φ is the embedding of the condition variable learned by a multi-layer perceptron (MLP). Note that we use matrix factorization (similar to Memisevic and Hinton (2010)) to reduce the number of parameters of the fully-connected layer. Intuitively, the condition variable is converted into the weights of the convolution or fully-connected layer through multiplicative interactions. We used this approach as a building block to condition the policy network on the subtask embedding in the subtask controller and the meta controller.

Subtask controller. The teacher architecture used for policy distillation is Conv1(32x3x3-1)-Pool(2)-Conv2(64x3x3-1)-FC1(256).¹ The network has two fully-connected output layers for actions and the baseline, respectively. The subtask controller architecture consists of Conv1(3x1x1-1)-Conv2(64x1x1-1)-Pool(2)-Conv3(128x3x3-1)-FC1(256), taking the two most recent observations as input. In addition, the subtask controller takes two subtask arguments (g^(1), g^(2)) and computes ReLU(W^(1) g^(1) ⊙ W^(2) g^(2)) as the subtask embedding. This embedding is further linearly transformed into the weight of Conv1 and the (factorized) weight of FC1. Finally, the network has three fully-connected output layers for actions (π), the termination probability (β), and the baseline. In the 'Concat' baseline architecture, the subtask embedding is linearly transformed and concatenated onto the observation as 18 channels and onto FC1 as a 256-dimensional vector.
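As one concrete reading of the factorized fully-connected case, the sketch below gates the intermediate features with the condition embedding, which is equivalent to y = W′ diag(φ(g)) W x + b. This is our own PyTorch illustration with made-up layer sizes, not the released model.

import torch
import torch.nn as nn

class ConditionedLinear(nn.Module):
    # Fully-connected layer whose weight is predicted from a condition g:
    # y = W2 diag(phi(g)) W1 x + b. The condition embedding multiplies the
    # intermediate features (a multiplicative interaction) instead of
    # predicting a full weight matrix. Sizes below are illustrative.
    def __init__(self, in_dim, hid_dim, out_dim, cond_dim):
        super().__init__()
        self.W1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.W2 = nn.Linear(hid_dim, out_dim)              # carries the bias b
        self.phi = nn.Sequential(nn.Linear(cond_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, hid_dim))

    def forward(self, x, g):
        return self.W2(self.phi(g) * self.W1(x))

layer = ConditionedLinear(in_dim=128, hid_dim=256, out_dim=64, cond_dim=32)
y = layer(torch.randn(8, 128), torch.randn(8, 32))         # shape (8, 64)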
We used the RMSProp optimizer with a smoothing parameter of 0.97 and an epsilon of 1e-6. When training the teacher policy through actor-critic, we used a learning rate of 1e-3. For training the subtask controller, we used learning rates of 1e-3 and 1e-4 for policy distillation and actor-critic fine-tuning, respectively. We used ρ_dis = ρ_diff = 3 and α = 0.1 for the analogy-making regularization and the termination prediction objective. γ = 0.99 and λ = 0.96 are used as the discount factor and the balancing weight for GAE. 16 threads with a batch size of 8 are used to run 16x8 episodes in parallel, and the parameters are updated after each run (1 iteration = 16x8 episodes). For better exploration, we applied entropy regularization with a weight of 0.01 and linearly decreased it to zero over the first 7500 iterations. The total number of iterations was 10K for both policy distillation and actor-critic fine-tuning.

Meta Controller. The meta controller consists of Conv1(3x1x1-1)-Pool(2)-FC1(256), taking the current observation as input. The embedding of the previously selected subtask (φ(g_{t-1})), the previously retrieved instruction (r_{t-1}), and the subtask termination (b_t) are concatenated and given as input to a one-layer MLP to compute the joint embedding. This is further linearly transformed into the weights of Conv1 and FC1. The output of FC1 is used as the context vector (h_t). We used the bag-of-words (BoW) representation as the sentence embedding, which computes the sum of all word embeddings in a sentence: φ_w(m_i) = Σ_{j=1}^{|m_i|} W_m w_j, where W_m is the word embedding matrix, each embedding being 256-dimensional. An MLP with one hidden layer of 256 units is used for φ_shift, and a linear layer is used for φ_update. φ_goal is an MLP with one hidden layer of 256 units that takes the concatenation of r_t and h_t as input and computes the probability over subtask arguments as the output. The baseline network takes the concatenation of the memory pointer p_t, a binary mask defined over memory locations indicating the presence of instructions, and the final hidden layer of φ_goal.

We used the same hyperparameters as in the subtask controller, except that the batch size was 32 (1 iteration = 16x32 episodes). We trained the soft-architecture with a learning rate of 2.5e-4 using curriculum learning for 150K iterations, and fine-tuned it with a learning rate of 1e-4 without curriculum learning for 25K iterations. Finally, we initialized the hard-architecture to the soft-architecture and fine-tuned it using a learning rate of 1e-4 for 25K iterations. ξ = 0.0001 is used to penalize the update decision.

Flat Controller. The flat controller architecture consists of Conv1(3x1x1-1)-Conv2(64x1x1-1)-Pool(2)-Conv3(128x3x3-1)-FC1(256), taking the two most recent observations as input. The previously retrieved instruction (r_{t-1}) is transformed through an MLP with two hidden layers to compute the weights of Conv1 and FC1. The rest of the architecture is identical to the meta controller, except that it does not learn temporal abstractions (φ_update) and has a softmax output over primitive actions.

¹For convolution layers, NxKxK-P represents N kernels with size KxK and padding P. The numbers in Pool and FC represent the pooling size and the number of hidden units.
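For reference, the generalized advantage estimator used in the fine-tuning objectives (Eqs. 9 and 11) can be computed with the standard backward recursion sketched below, using the γ and λ values quoted above. This is our own sketch, not the authors' code.

import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.96):
    # delta_t = r_t + gamma*V(s_{t+1}) - V(s_t), accumulated backwards with
    # decay gamma*lam. `values` carries one extra bootstrap entry for the
    # state after the last reward.
    T = len(rewards)
    advantages = np.zeros(T)
    gae = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    return advantages

print(gae_advantages([0.0, -0.1, 1.0], [0.2, 0.3, 0.5, 0.0]))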
Curriculum Learning. For training all architectures, we randomly sampled the size of the grid world from {7, 8, 9, 10}; the density of blocks and water cells was sampled from [0, 0.1]; and the density of objects was sampled from [0, 0.6] for subtask pre-training, [0, 0.15] for training on the easier environment, and [0, 0.3] for training on the original environment. We sampled the number of instructions from {1, 2, 3, 4} for training the meta controller on the easier environment, but it was fixed to 4 for fine-tuning. The sampling ranges were determined based on the success rate of the agent.
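The sampling procedure above is simple enough to state as code. Below is a minimal sketch under our own naming (sample_env_config and the stage labels are hypothetical), not the authors' implementation.

import random

def sample_env_config(stage):
    # Sample an environment according to the curriculum ranges above.
    # `stage` is one of 'pretrain', 'easy', 'original' (our own labels).
    max_object_density = {"pretrain": 0.6, "easy": 0.15, "original": 0.3}[stage]
    return {
        "grid_size": random.choice([7, 8, 9, 10]),
        "block_water_density": random.uniform(0.0, 0.1),
        "object_density": random.uniform(0.0, max_object_density),
        "num_instructions": random.choice([1, 2, 3, 4]) if stage == "easy" else 4,
    }

print(sample_env_config("easy"))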
SJNDWNOlg
Under review as a conference paper at ICLR 2017

WHAT IS THE BEST PRACTICE FOR CNNS APPLIED TO VISUAL INSTANCE RETRIEVAL?

Jiedong Hao, Jing Dong, Wei Wang, Tieniu Tan
Center for Research on Intelligent Perception and Computing
Institute of Automation, Chinese Academy of Sciences

ABSTRACT

Previous work has shown that feature maps of deep convolutional neural networks (CNNs) can be interpreted as feature representations of particular image regions. Features aggregated from these feature maps have been exploited for image retrieval tasks and have achieved state-of-the-art performance in recent years. The key to the success of such methods is the feature representation. However, the different factors that impact the effectiveness of the features have not been explored thoroughly, and there has been much less discussion of the best combination of these factors.

The main contribution of our paper is a thorough evaluation of the various factors that affect the discriminative ability of the features extracted from CNNs. Based on the evaluation results, we also identify the best choices for the different factors and propose a new multi-scale image feature representation method to encode the image effectively. Finally, we show that the proposed method generalises well and outperforms the state-of-the-art methods on four typical datasets used for visual instance retrieval.

1 INTRODUCTION

Image retrieval is an important problem both for academic research and for industrial applications. Although it has been studied for many years (Sivic & Zisserman, 2003; Philbin et al., 2007; Tolias et al., 2015), it remains a challenging task. Generally, image retrieval is divided into two groups. The first is category-level image retrieval (Sharma & Schiele, 2015), in which an image in the dataset is deemed similar to the query image if they share the same class or are similar in shape and local structure. The other is instance-level image retrieval (Tolias et al., 2015), in which an image is considered to match the query if they contain the same object or the same scene. Instance-level image retrieval is harder in that the retrieval method needs to encode local and detailed information in order to tell two images apart; e.g., the algorithm should be able to detect the differences between the Eiffel Tower and other steel towers although they have similar shapes. In this paper, we focus on instance-level image retrieval.

Traditionally, visual instance retrieval has mainly been addressed by BoF (bag of features) based methods using local feature descriptors such as SIFT (Lowe, 2004). In order to boost retrieval performance, post-processing techniques such as query expansion (Chum et al., 2007) and spatial verification (Philbin et al., 2007) are also employed.

With the decisive victory (Krizhevsky et al., 2012) over traditional models in the ImageNet (Russakovsky et al., 2015) image classification challenge, convolutional neural networks (Lecun et al., 1998) continue to achieve remarkable success in diverse fields such as object detection (Liu et al., 2015; Shaoqing Ren, 2015), semantic segmentation (Dai et al., 2016) and even image style transfer (Gatys et al., 2016). Networks trained on the ImageNet classification task generalize quite well to other tasks, being used either off-the-shelf (Razavian et al., 2014a) or fine-tuned on task-specific datasets (Azizpour et al., 2014; Long et al., 2015). Inspired by all this, researchers in the field of image retrieval have also shifted their interest to CNNs.
Their experiments have shownpromising and surprising results (Babenko et al., 2014; Razavian et al., 2014c; Tolias et al., 2015),which are on par with or surpass the performances of conventional methods like BoF and VLAD(vector of locally aggregated descriptors) (J ́egou et al., 2010; Arandjelovi ́c & Zisserman, 2013) .1Under review as a conference paper at ICLR 2017Despite all these previous advances (Babenko et al., 2014; Babenko & Lempitsky, 2015; Toliaset al., 2015) on using CNNs for image feature representation, the underlying factors that contributeto the success of off-the-shelf CNNs on the image retrieval tasks are still largely unclear and un-explored, e.g., which layer is the best choice for instance retrieval, the convolutional layer or thefully-connected layer? What is the best way to represent the multi-scale information of an image?Clarifying these questions will help us advance a further step towards building a more robust andaccurate retrieval system. Also in situations where a large numbers of training samples are not avail-able, instance retrieval using unsupervised method is still preferable and may be the only option.In this paper, we aim to answer these questions and make three novel contributions. Unlike pre-vious papers, we explicitly choose five factors to study the image representations based on CNNsand conduct extensive experiments to evaluate their impacts on the retrieval performances. We alsogive detailed analysis on these factors and give our recommendations for combining them. Dur-ing experiments, we borrow wisdoms from literatures and evaluate their usefulness, but find thatthey are not as effective as some of the simpler design choices. Second, by combining the insightsobtained during the individual experiments, we are able to propose a new multi-scale image rep-resentation, which is compact yet effective. Finally, we evaluate our method on four challengingdatasets, i.e., Oxford5k, Paris6k, Oxford105k and UKB. Experimental results show that our methodis generally applicable and outperforms all previous methods on compact image representations bya large margin.2 R ELATED WORKMulti-scale image representation . Lazebnik et al. (2006) propose the spatial pyramid matchingapproach to encode the spatial information using BoF based methods. They represent an image us-ing a pyramid of several levels or scales. Features from different scales are combined to form theimage representation in such a way that coarser levels get less weight while finer levels get moreweight. Their argument is that matches found in coarser levels may involve increasingly dissimilarimage features. In our paper, we also explore the multi-scale paradigm in the same spirit using theconvolutional feature maps as the local descriptors. We find that the deep features from the convolu-tional feature maps are distinct from the traditional descriptors: the weighted sum of different levelof features shows no superior performances than a simple summation of them. Kaiming et al. (2014)devise an approach called SPP (spatial pyramid pooling). In SPP, feature maps of the last convo-lutional layer are divided into a 3 or 4 scale pyramid. First the regional features in each scale areconcatenated, then the scale-level features are concatenated to a fixed length vector to be forwardedto the next fully-connected layers. 
We find that this strategy is ineffective for unsupervised instanceretrieval, leading to inferior performances compared to other simple combination methods (see thepart about multi-scale representation in section 5.2 for more details.).Image representation using off-the-shelf CNNs . Gong et al. (2014) propose the MOP (multi-scale orderless pooling) method to represent an image in which VLAD is used to encode the level2 and level 3 features. Then features from different scales are PCA-compressed and concatenatedto form the image features. This method is rather complicated and time-consuming. At the sametime, Babenko et al. (2014) use Alexnet (Krizhevsky et al., 2012) trained on the Imagenet 1000-classclassification task and retrain the network on task-related dataset. The retraining procedure gives aboost to the retrieval performances. Instead of using the output of the fully-connected layers as theimage feature representations, Babenko & Lempitsky (2015) use the output feature maps of last con-volutional layer to compute the image features. Recently, instead of sum-pooling the convolutionalfeatures, Tolias et al. (2015) use max-pooling to aggregate the deep descriptors. Their multi-scalemethod, called R-MAC (regional maximum activation of convolutions), further improves the pre-vious results on four common instance retrieval datasets. Our work differs from these papers inthat we explicitly explore the various factors that underpin the success of unsupervised instance re-trieval, which have not been fully explored and analysed. By carefully choosing the different settingfor each factor and combining them in a complementary way, we show that a large improvement canbe achieved without additional cost.2Under review as a conference paper at ICLR 20173 I MPACTING FACTORSWhen we employ off-the-shelf CNNs for the task of instance-level image retrieval, a natural questionis: what kind of design choices should we make in order to make full use of the representationalpower of existing models? In this section, we summarize the five factors that may greatly impactthe performance of the final image retrieval system. In section 5.2, we will show our experimentalresults on each key factor. Before we delve into the impacting factors, first we will give a briefintroduction about how to represent an image using the activation feature maps of a certain layer.3.1 CNN F EATURES FOR INSTANCE RETRIEVALIn this paper, we are mainly interested in extracting compact and discriminative image features usingthe off-the-shelf CNNs in an efficient way. For a given image I, we simply subtract the mean valueof the RGB channels from the original image and do not do other sophisticated preprocessing. Thenthe image is fed into the convolutional network and goes through a series of convolutions, non-linearactivations and pooling operations. The feature activation maps of a certain layer can be interpretedas the raw image features, based on which we build the final image features. These feature mapsform a tensor of size KHW, where Kis the number of feature channels, and HandWareheight and width of a feature map. Each feature map represents a specific pattern which encodesa small part of information about the original image. 
If we represent the set of feature maps as F = {F_i}, i = 1, 2, …, K, where F_i is the i-th activation feature map, then the simplest image feature is formulated as:

f = [f_1, f_2, …, f_i, …, f_K]^T.    (1)

In equation 1, f_i is obtained by applying the feature aggregation method (see section 3.2) over the i-th feature map F_i. Throughout this paper, we use feature maps after the non-linear activations (ReLU) so that the elements in each feature map are all non-negative. We also experimented with feature maps prior to ReLU, but found that they lead to inferior performance. After the image feature representation is obtained, post-processing techniques such as PCA and whitening can be further applied.

3.2 IMPACTING FACTORS ON PERFORMANCE

Feature aggregation and normalization. After the feature maps of a certain layer are obtained, it is still challenging to aggregate the 3-dimensional feature maps into compact vector representations for images. Previous papers use either sum-pooling (Babenko & Lempitsky, 2015) or max-pooling (Tolias et al., 2015) followed by l2-normalization. Sum-pooling over a particular feature map F_i is expressed as

f_i = Σ_{m=1}^{H} Σ_{n=1}^{W} F_i(m, n),  i ∈ {1, 2, …, K},    (2)

while max-pooling is given by

f_i = max_{m,n} F_i(m, n),    (3)

where m, n range over all possible values of the spatial coordinates of size H × W. In this paper, for the first time, different combinations of aggregation and normalization methods (l2 and l1 in the manner of RootSIFT (Arandjelović & Zisserman, 2012)) are evaluated and their results are reported.

Output layer selection. Zeiler & Fergus (2014) have shown that image features aggregated from the feature activation maps of certain layers have interpretable semantic meanings. Gong et al. (2014) and Babenko et al. (2014) use the output of the first fully-connected layer to obtain the image features, while Babenko & Lempitsky (2015) and Tolias et al. (2015) use the output feature maps of the last convolutional layer. But these choices are somewhat subjective. In this paper, we extract dataset image features from the output feature maps of different layers and compare their retrieval performance. Based on the findings of this experiment, we choose the best-performing layer and also come up with a layer ensemble approach which outperforms state-of-the-art methods (see section 5.3).

Image resizing. Famous models such as Alexnet (Krizhevsky et al., 2012) and VGGnet (Simonyan & Zisserman, 2014) all require that the input images have a fixed size. In order to meet this requirement, previous papers (Gong et al., 2014; Babenko & Lempitsky, 2015) usually resize the input images to the fixed size.

(a) level 1 (b) level 2 (c) level 3
Figure 1: An illustration of the multi-scale representation of an image. The whole image is divided into 3 levels from the coarsest (level 1) to the finest (level 3). At each level, the image is divided into a different number of equal-sized regions.

We postulate that the resizing operation may lead to the distortion of important information about the objects in natural images. Ultimately, this kind of operation may hurt the discriminative power of the image features extracted from the network, thus degrading retrieval performance. For the task of image retrieval, we think it is best to keep the images at their original sizes and feed them directly to the network whenever possible.
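Before turning to the resizing strategies, the aggregation and normalization choices of Eqs. (1)-(3) can be made concrete with a short sketch. This is our own NumPy illustration, not the paper's code; the array shapes are assumptions.

import numpy as np

def aggregate(feature_maps, pooling="max", norm="l2"):
    # Aggregate a (K, H, W) stack of feature maps into a K-dimensional
    # descriptor via Eq. (2) or Eq. (3), then l1- or l2-normalize it.
    if pooling == "sum":
        f = feature_maps.sum(axis=(1, 2))     # Eq. (2)
    else:
        f = feature_maps.max(axis=(1, 2))     # Eq. (3)
    order = 1 if norm == "l1" else 2
    return f / (np.linalg.norm(f, ord=order) + 1e-12)

maps = np.random.rand(512, 37, 50)            # e.g. conv5_4 maps of a free-size image
descriptor = aggregate(maps, pooling="max", norm="l2")   # the "max-l2" variant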
In this paper, three imageresizing strategies are explored:• Both the height and width of the dataset images are set to the same fixed value (denoted astwo-fixed ).• The minimum of each dataset image’s size is set to a fixed value. (The aspect ratio of theoriginal image is kept.) (denoted as one-fixed ).• The images are kept their original sizes. (denoted as free).Multi-scale feature representation. Unlike local feature descriptors such as SIFT (Lowe, 2004),the feature vector extracted from the deep convolutional networks for an image is a global descriptorwhich encodes the holistic information. When used for image retrieval, this kind of features stilllack the detailed and local information desired to accurately match two images. Inspired by spatialpyramid matching (Lazebnik et al., 2006) and SPP (Kaiming et al., 2014), we explore the feasibilityof applying this powerful method to obtain discriminative image features. An image is representedby aL-level pyramid, and at each level, the image is divided evenly into several overlapping ornon-overlapping regions. The vector representations of these small regions are computed, then theregional vectors are combined to form the image feature vectors. The single scale representation ofan image is just a special case of the multi-scale method in which the number of level Lequals 1.Figure 1 shows an example of 3level representations of an image. The time cost of re-feeding thosesmall regions into the network to compute the regional vectors would be huge, thus unacceptablefor instance retrieval tasks. Inspired by the work of Girshick (2015) and Tolias et al. (2015), weassume a linear projection between the original image regions and the regions in the feature mapsof a certain layer. Then the regional feature vectors can be efficiently computed without re-feedingthe corresponding image regions. In section 5.2, various settings for the multi-scale and scale-level feature combination methods are explored and their retrieval performances are reported andanalysed.PCA and whitening. Principal Component Analysis (PCA) is a simple yet efficient method forreducing the dimensionality of feature vectors and decorrelating the feature elements. Previouswork (Babenko et al., 2014; J ́egou et al., 2010) has shown evidences that PCA and whitened featurescan actually boost the performances of image retrieval. In this paper, we further investigate theusefulness of PCA and whitening within our pipeline and give some recommendations.4Under review as a conference paper at ICLR 20174 I MPLEMENTATIONWe use the open source deep learning framework Caffe (Jia et al., 2014) for our whole experiments.The aim of this research is to investigate the most effective ways to exploit the feature activations ofexisting deep convolutional models. Based on past practices for networks to go deeper (Krizhevskyet al., 2012; Simonyan & Zisserman, 2014; Szegedy et al., 2015; He et al., 2015), a consideration formoderate computational cost, and also the results from Tolias et al. (2015) that deeper networks workbetter than shallower ones, we decide to use the popular VGG-19 model (Simonyan & Zisserman,2014) trained on ImageNet as our model.Network transformation . The original VGG-19 network only accepts an image of fixed size ( 224224), which is not the optimal choice when extracting image features for retrieval tasks. 
In order forthe network to be able to process an image of arbitrary size (of course, the image size can not exceedthe GPU’s memory limit) and for us to experiment with different input image resizing strategies, weadapt the original VGG-19 network and change the fully-connected layers to convolutional (Longet al., 2015) layers. For more details about network transformations, see appendix A.5 E XPERIMENTSIn this section, we first introduce the datasets used and the evaluation metrics. Then we reportour experimental results for different impacting factors and give detailed analysis. In the last part,we show the performance of our method considering all these impacting factors and compare ourmethod with the state-of-the-art methods on four datasets.5.1 D ATASETS AND EVALUATION METRICSThe Oxford5k dataset (Philbin et al., 2007) contains 5062 images crawled from Flickr by using11 Oxford landmarks as queries. A total of 11 groups of queries — each having 5 queries withtheir ground truth relevant image list, are provided. For each query, a bounding box annotation isalso provided to denote the query region. During experiment, we report results using the full queryimages (denoted as full-query) and image regions within the bounding boxes of the query images(denoted as cropped-query). The performance on this dataset is measured by mAP (mean averageprecision) over all queries.The Paris6k dataset (Philbin et al., 2008) includes 6412 images1from Flickr which contains 11landmark buildings and the general scenes from Paris. Similar to the Oxford5k dataset, a total of 55queries belonging to 11 groups and the ground truth bounding boxes for each query are provided .The performance is reported as mAP over 55 queries.The Oxford105k2dataset contains the original Oxford5k dataset and additional 100,000 im-ages (Philbin et al., 2007) from Flickr. The 100,000 images are disjoint with the Oxford5k datasetand are used as distractors to test the retrieval performance when the dataset scales to larger size.We use the same evaluation protocol as the Oxford5k on this dataset.TheUKB dataset (Nist ́er & Stew ́enius, 2006) consists of 10200 photographs of 2550 objects, eachobject having exactly 4 images. The pictures of these objects are all taken indoor with large variationin orientation, scale, lighting and shooting angles. During experiment, each image is used to querythe whole dataset. The performance is measured by the average number of same-object images inthe top-4 results.5.2 R ESULTS AND DISCUSSIONIn this section, we report the results of experiments on the impact of different factors and analysetheir particular impact. The experiments in this section are conducted on the Oxford5k dataset.Feature aggregation and normalization. In this experiment, we compare the different combina-tions of feature aggregation (sum-pooling and max-pooling) and normalization methods ( l2andl1)1Following conventions, 20 corrupted images from this dataset are removed, leaving 6392 valid images.2The image named “portrait 000801.jpg” was corrupted and manually removed from this dataset.5Under review as a conference paper at ICLR 2017Table 1: Comparison between different combi-nations of feature aggregation and normaliza-tion methods.Method full-query cropped-querymax -l1 52.4 48.0sum -l2 58.0 52.6sum -l1 60.3 56.3max -l2 60.1 53.5Table 2: Comparison between different imageresizing strategies. 
The numbers in the parentheses denote the sizes at which the maximum mAPs are achieved.

Method       full-query    cropped-query
two-fixed    55.5 (864)    38.7 (896)
one-fixed    59.0 (800)    39.3 (737)
free         58.0          52.6

in terms of their retrieval performance. We use features from the layer conv5_4 with the free input image size. The results (%) are shown in Table 1. Sum-pooling followed by l1 normalization leads to slightly better results than the other combinations, especially for the cropped-query. However, after a preliminary experiment with multi-scale versions of sum-l1 and max-l2, we find that max-l2 is much better than sum-l1. For example, employing a 4-level representation of the images in the Oxford5k dataset, in the case of full-query, we find that the mAP for the max-l2 method is 65.1, while the mAP for sum-l1 is only 51.3 (even lower than the single-scale representation). Based on these results, we stick to max-l2 in computing the final image features.

Output layer selection. In order to verify their feasibility for instance retrieval, we extract from the network the output feature maps of different layers and aggregate them to get the image feature vectors. We evaluate the performance using features from layer conv3_3 up to the highest fc7-conv layer (except the pooling layers, i.e., pool3, pool4 and pool5). Single-scale representations of the dataset images are used in this experiment.

Figure 2 shows the retrieval performance of image features corresponding to different layers. The retrieval performance for both the full and cropped queries increases from the lower layer conv3_3 towards the higher layers, plateaus at layers conv5_4 and fc6-conv, and then begins to decrease towards fc7-conv. The result shows that features from lower layers such as conv3_3 and conv3_4 are too generic and lack the semantic meaning of the object in the image, thus rendering them unsuitable for instance retrieval. On the other hand, features from the highest layer (fc7-conv) contain the semantic meaning of objects but lack the detailed and local information needed to match two similar images. The best results are obtained at layers conv5_4 (0.601) and fc6-conv (0.618), where the feature vectors combine both the low-level detailed information and the high-level semantic meaning of the image. Based on these observations and the requirement of keeping the image features compact, we mainly focus on image features from layer conv5_4 (dimensionality = 512, compared to 4096 for layer fc6-conv).

Figure 2: Performance comparison between different layers (mAP of full-query and cropped-query, from conv3_3 up to fc7-conv). This experiment is conducted using the free input image size.

Image resizing. We experiment with the 3 image resizing strategies detailed in section 3.2. We use grid search to find the optimal size for the two-fixed and one-fixed strategies. As shown in Table 2, the free input strategy outperforms or is close to the other two strategies: it performs especially well in the cropped-query case. This experiment shows that changing the image aspect ratio (two-fixed) distorts the image information, thus reducing performance dramatically. The one-fixed way is better than the two-fixed method, but information loss still occurs due to the resizing operation. The free method is able to capture more natural and undistorted information from the images, which explains its superior performance over the other two methods. It is best to keep the images at their original sizes for instance retrieval tasks.
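The three strategies are straightforward to express in code. A minimal sketch using PIL follows; the fixed size of 800 is only an illustrative value (the paper grid-searches it), and the function name is ours.

from PIL import Image

def resize_for_retrieval(img, strategy="free", fixed=800):
    # The three input-size strategies compared in Table 2.
    if strategy == "two-fixed":        # same width and height: distorts the aspect ratio
        return img.resize((fixed, fixed))
    if strategy == "one-fixed":        # fix the smaller side, keep the aspect ratio
        scale = fixed / min(img.size)
        return img.resize((round(img.width * scale), round(img.height * scale)))
    return img                         # "free": feed the original size

query = resize_for_retrieval(Image.open("query.jpg"), strategy="free")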
The benefit of multi-scale representation. In our multi-scale approach, the regional vectors from each scale are simply added together and l2-normalized to form the scale-level feature vectors. Then features from different scales are combined and l2-normalized to form the image representations. In fact, we also experimented with two methods which concatenate features from different scales. The first method is in the same vein as spatial pyramid pooling (Kaiming et al., 2014), i.e., region-level as well as scale-level features are all concatenated to form a high-dimensional vector. In the second method, region-level features are added while scale-level features are concatenated. We find that these two methods both lead to inferior results. The performance drop for the first, in the case of cropped-query, can be as large as 41%. The high dimensionality of the concatenated features (larger than 1.5k) also leads to longer running times. Considering all this, we do not use concatenation of features in the following experiments.

Table 3: Multi-scale representation: comparison between different methods. "overlap" denotes whether the regions in each level (see Figure 1) have some overlapping areas. "s2", "s3" mean that overlap occurs in level 2 or 3. "weighing" denotes whether the features from each level are added using the same or different weights. "version" denotes the different choices of the number of regions in each scale.

       scale   overlap   weighing   version   full-query   cropped-query
(a1)   2       ×         ×          -         63.5         59.0
(a2)   2       ×         ✓          -         63.9         61.0
(b1)   3       ×         ×          -         64.2         60.9
(b2)   3       ×         ✓          -         62.6         61.0
(b3)   3       s2        ×          -         64.8         60.8
(c1)   4       s3        ×          v1        65.1         61.4
(c2)   4       s3        ✓          v1        64.8         60.7
(c3)   4       s2,s3     ×          v1        65.5         60.8
(c4)   4       s2,s3     ×          v2        65.9         61.5
(c5)   4       s2,s3     ✓          v2        65.4         61.2
(c6)   4       ×         ×          v3        64.5         61.3
(c7)   4       s3        ×          v3        65.8         62.2
(c8)   4       s2,s3     ×          v3        66.3         62.6

We conduct extensive experiments to decide the best configurations for the multi-scale approach and report our results in Table 3. First, we explore the impact of the number of scales on retrieval performance. For the 2 and 3 scale representations, the region numbers for each level are {1×1, 2×2} and {1×1, 2×2, 3×3}. For the 4 scale representation, 3 versions are used, which differ in the number of regions in each scale: for "v1", "v2", and "v3", the numbers of regions are {1×1, 2×2, 3×3, 4×4}, {1×1, 2×2, 3×3, 5×5} and {1×1, 2×2, 3×3, 6×6}. Table 3 (a1)(b1)(c6) shows the performance of using 2, 3, and 4 scales to represent the dataset images, respectively. Clearly, more scale levels improve the results and, in the case of cropped-query, increase the performance by an absolute 2%.

We also conduct experiments to find out whether the weighing of different scales leads to improved performance. The weighing method for features from different scales is similar to the manner of spatial pyramid matching (Lazebnik et al., 2006): features from the coarser levels are given less weight while features from the finer levels are given more weight. Suppose the features of the different scales for an L-scale representation are f_1, f_2, …, f_L; then the image representation f is expressed as:

f = (1 / 2^{L-1}) f_1 + Σ_{i=2}^{L} (1 / 2^{L-i+1}) f_i.    (4)

More details can be found in Lazebnik et al. (2006). Comparing the results of rows (a1) and (a2), it seems that weighing different scales leads to better performance. But after more experiments, we find that the weighing method generally leads to inferior results as the number of scales increases; e.g., compare the results of row pairs (b1)(b2) and (c1)(c2). These results suggest that deep features are different from traditional local feature descriptors such as SIFT. We should exercise caution when applying the traditional wisdom found in SIFT to deep convolutional descriptors, which is also suggested in Babenko & Lempitsky (2015). Based on the results of this experiment, no weighing methods are used in computing our final image feature representations.
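A minimal sketch of this multi-scale pipeline is given below. It is our own illustration under simplifying assumptions: the regions shown are non-overlapping (the paper's best variant adds overlap in levels 2 and 3, see appendix B), and the {1×1, 2×2, 3×3, 6×6} grids correspond to the "v3" version.

import numpy as np

def region_max(maps, r, c, n):
    # Max-pool (Eq. 3) inside cell (r, c) of an n x n grid on (K, H, W) maps.
    K, H, W = maps.shape
    return maps[:, r * H // n:(r + 1) * H // n, c * W // n:(c + 1) * W // n].max(axis=(1, 2))

def multi_scale_feature(maps, grids=(1, 2, 3, 6)):
    # Per scale: sum the regional max-pooled vectors and l2-normalize;
    # then sum the scale-level vectors and l2-normalize again.
    scale_vectors = []
    for n in grids:
        s = sum(region_max(maps, r, c, n) for r in range(n) for c in range(n))
        scale_vectors.append(s / (np.linalg.norm(s) + 1e-12))
    f = np.sum(scale_vectors, axis=0)
    return f / (np.linalg.norm(f) + 1e-12)

f = multi_scale_feature(np.random.rand(512, 30, 40))   # a 512-dim image descriptor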
But after more experiments, wefind that the weighing method generally leads to inferior results as the number of scales increase,7Under review as a conference paper at ICLR 201716 80 144 208 272 336 400 464 528number of principal component reserved.0.250.350.450.550.650.75mAPcrop-pariscrop-selffull-parisfull-selfFigure 3: The number of principal component reserved VS mAP. We show the results of full and croppedquery using the PCA and whitening matrix learned from the Oxford5k itself and Paris6k, denoted as “full-self”,“full-paris” and “crop-self”, “crop-paris”.e.g., compare the results of row pair(b1)(b2) and (c1)(c2). These results suggest that deep featuresare different from the traditional local feature descriptors such as SIFT. We should exercise withcaution when we apply the traditional wisdom found in SIFT to the deep convolutional descriptors,which is also suggested in Babenko & Lempitsky (2015). Based on the results of this experiment,no weighing methods are used in computing our final image feature representations.Next, we look into the issue of overlapping between different scales and try to verify its usefulness.For each scale and its different versions, we set some overlapping areas between the neighboringregions in either one or two scales of the pyramid (For the exact configurations of overlap in all casesin Table 3, see appendix B for the complete descriptions). From the row pair (b1)(b3) and (c1)(c3),we can see that overlap increase the performance for full-query but decrease a little the performancefor cropped-query. But for 4 scale v3 (note the pair(c7)(c8)), we see a consistent improvement forboth the full and cropped queries. So we decided to use overlap in level 2 and 3 in computing ourfinal features.Table 4: The impact of PCA and whitening. “PCA on self” and “PCA on Paris” mean that the correspondingfeatures are post-processed by the PCA and whitening matrices learned on the Oxford5k and Paris6k datasets,respectively. The numbers in the parentheses indicate the dimensionality of features used for obtaining thecorresponding results.Feature full-query cropped-query3scale overlap, original 64.8 60.83scale overlap, PCA on self 65.4(80) 60.9(112)3scale overlap, PCA on Paris 70.6(464) 67.3(480)4scale v3overlap(s3), original 65.1 61.44scale v3overlap(s3), PCA on self 66.9(80) 61.9(96)4scale v3overlap(s3), PCA on Paris 72.3(464) 70.8(496)4scale v3overlap(s2,s3),original 66.3 62.84scale v3overlap(s2,s3), PCA on self 69.0(80) 63.9(144)4scale v3overlap(s2,s3), PCA on Paris 73.2(496) 71.2(448)PCA and whitening . We perform PCA and whitening for the features extracted from the Oxford5kdataset using the PCA and whitening matrix learned from the Oxford5k or the Paris6k dataset andl2-normalize these features to get the final image representations.The retrieval results for 3 groups of features (from Table 3(b3)(c1)(c8)) are shown in Table 4.Clearly, PCA and whitening lead to better performances. For all 3 groups of features, PCA and8Under review as a conference paper at ICLR 2017Table 5: Comparison with state-of-the-art methods. “single” means multi-scale features from single layer(conv5 4) are used. “single, compression” uses the same features but compresses them to get the best perfor-mances. “layer ensemble” combines the similarity score from layer conv5 4 and fc6-conv. The dimensionalityof the combined feature is set to 1024 for compactness considerations. 
All our methods use PCA and whitening.method DOxford5k Paris6k Oxford105kUKBfull cropped full cropped full croppedJ ́egou & Zisserman (2014) 128 - 43.3 - - - 35.3 3.40Arandjelovi ́c & Zisserman (2012) 128 - 44.8 - - - 37.4 -J ́egou & Zisserman (2014) 1024 - 56.0 - - - 50.2 3.51Razavian et al. (2014b) 256 53.3 - 67.0 - 48.9 - 3.38Babenko et al. (2014) 512 55.7 - - - 52.2 - 3.56Babenko & Lempitsky (2015) 256 58.9 53.1 - - 57.8 50.1 3.65Arandjelovi ́c et al. (2016) 256 62.5 63.5 72.0 73.5 - - -Tolias et al. (2015) 512 - 66.8 - 83.0 - 61.6 -ours (single) 512 73.0 70.6 82.0 83.3 68.9 65.3 3.75ours (single, compression) - 73.2 71.2 83.0 84.0 68.9 65.8 3.76ours (layer ensemble) 1024 75.6 73.7 85.7 85.9 71.6 69.2 3.81whitening on the same dataset lead to insignificant improvement both in the case of full and croppedquery. But after doing PCA and whitening on the Paris6k dataset, the results for both the full andcropped queries improve greatly. In fact, the improvement for the case of cropped-query is evenmore surprising. For example, for the third feature group, the improvement are 10.4% and 13.4%for the full and cropped queries. It should also be noted that as the the number of principal compo-nent reserved increases, the performance for “PCA on self” and “PCA on Paris” differs greatly. As isshown in Figure 3, the performance for the former peaks at a relatively low dimension (around 100)and begins to decrease, while for the latter, the performance increases as the number of principalcomponent gets larger and then plateaus.Do the above results mean that we should always compute the PCA and whitening matrix from anydatasets other than the query dataset itself? The short answer is no. We find that for UKB, learningthe PCA and whitening matrix on the Oxford5k dataset shows inferior results compared to learningthe PCA and whitening matrix on UKB itself (about 2% drop in accuracy). This may be due to thelarge differences between the images of the two datasets as the Oxford5k dataset are mainly imagesof buildings while the images in UKB are mainly small indoor objects. We therefore recommendlearning the PCA and whitening matrix on a similar dataset to achieve good performances.5.3 C OMPARISON WITH OTHER METHODSBased on the previous experimental results and our analysis of different impacting factors on theretrieval performances, we propose a new multi-scale image feature representation. For a givenimage in the dataset, the whole process of image feature representation is divided into two steps.First, the input image is fed into the network without the resizing operation (the freeway) and a4-scale feature representation is built on top of the feature maps of layer conv5 4. During the multi-scale representation step, max-pooling of feature maps are used and regional vectors from the samescale are added together and l2-normalized. After that, features from different scales are summedandl2-normalized again. The second step involves applying the PCA and whitening operations onfeatures from the first step. The PCA and whitening matrix used are either learned from differentor same dataset: specifically, for the Oxford5k and Oxford105k, it is learned in the Paris6k, whilefor Paris6k and UKB, it is learned on Oxford5k and UKB respectively. The final PCA and whitenedimage features are used for reporting our method’s performances.Layer ensemble . 
Inspired by previous work on model ensemble to boost the classification perfor-mances (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014), we consider fusing the similarityscore from different layers to improve the retrieval performances. Specifically, for two images, theirsimilarity score is computed as the weighted sum of the scores from different layers (these weightssum to 1 so that overall similarity score between two images are still in the range [0;1].). We haveevaluated various combination of layers to see their performances and find that best performanceis achieved by combining the score from conv5 4 and fc6-conv. For the fc6-conv features of animage, we use a 3-scale representation as the size of output feature maps are already very small.9Under review as a conference paper at ICLR 2017The fc6-conv features are compressed to low dimensional vectors for faster computation. Our layerensemble achieves 75.6% and 73.7% on Oxford5k for the full and cropped queries respectively,showing a large improvement over previous methods. This suggests that features from the fc6-convand conv5 4 are complementary. See Table 5 for the complete results on all four datasets.Comparison . We compare the performance of our method with several state-of-the-art methodswhich use small footprint representations and do not employ the complicated post-processing tech-niques such as geometric re-ranking (Philbin et al., 2007) and query expansion (Arandjelovi ́c &Zisserman, 2012). The results are shown in Table 5. In all the datasets and different scenarios(full or cropped), our method achieves the best performance with comparable cost. For Oxford5k(cropped) and UKB dataset, the relative improvement of our best results over previous methods(from Tolias et al. (2015) and Babenko & Lempitsky (2015)) are 10.3% and 4.4%.6 C ONCLUSIONIn this paper, we focus on instance retrieval based on features extracted from CNNs. we have con-ducted extensive experiments to evaluate the impact of five factors on the performances of imageretrieval and analysed their particular impacts. Based on the insights gained from these experiments,we have proposed a new multi-scale image representation which shows superior performances overprevious methods on four datasets. When combined with the technique “layer ensemble”, ourmethod can achieve further improvements. Overall, we have provided a viable and efficient solutionto apply CNNs in an unsupervised way to datasets with a relatively small number of images.REFERENCESR. Arandjelovi ́c and A. Zisserman. Three things everyone should know to improve object retrieval. In ComputerVision and Pattern Recognition (CVPR), 2012 IEEE Conference on , pp. 2911–2918, June 2012. doi: 10.1109/CVPR.2012.6248018.R. Arandjelovi ́c and A. Zisserman. All about vlad. In Computer Vision and Pattern Recognition (CVPR), 2013IEEE Conference on , pp. 1578–1585, June 2013. doi: 10.1109/CVPR.2013.207.R. Arandjelovi ́c, P. Gronat, A. Torii, T. Pajdla, and J. Sivic. NetVLAD: CNN architecture for weakly supervisedplace recognition. In IEEE Conference on Computer Vision and Pattern Recognition , 2016.Hossein Azizpour, Ali Sharif Razavian, Josephine Sullivan, Atsuto Maki, and Stefan Carlsson. From generic tospecific deep representations for visual recognition. CoRR , abs/1406.5774, 2014. URL http://arxiv.org/abs/1406.5774 .Artem Babenko and Victor Lempitsky. Aggregating local deep features for image retrieval. 
In The IEEEInternational Conference on Computer Vision (ICCV) , December 2015.Artem Babenko, Anton Slesarev, Alexandr Chigorin, and Victor Lempitsky. Neural Codes for Image Retrieval ,pp. 584–599. Springer International Publishing, Cham, 2014. ISBN 978-3-319-10590-1. doi: 10.1007/978-3-319-10590-1 38. URL http://dx.doi.org/10.1007/978-3-31s9-10590-1_38 .Ondˇrej Chum, James Philbin, Josef Sivic, Michael Isard, and Andrew Zisserman. Total recall: Automatic queryexpansion with a generative feature model for object retrieval. In Computer Vision, 2007. ICCV 2007. IEEE11th International Conference on , pp. 1–8. IEEE, 2007.Jifeng Dai, Kaiming He, and Jian Sun. Instance-aware semantic segmentation via multi-task network cascades.InCVPR , 2016.Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Image style transfer using convolutional neuralnetworks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) , June 2016.Ross Girshick. Fast r-cnn. In International Conference on Computer Vision (ICCV) , 2015.Yunchao Gong, Liwei Wang, Ruiqi Guo, and Svetlana Lazebnik. Multi-scale Orderless Pooling of DeepConvolutional Activation Features , pp. 392–407. Springer International Publishing, Cham, 2014. ISBN978-3-319-10584-0. doi: 10.1007/978-3-319-10584-0 26. URL http://dx.doi.org/10.1007/978-3-319-10584-0_26 .Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXivpreprint arXiv:1512.03385 , 2015.10Under review as a conference paper at ICLR 2017H. J ́egou and A. Zisserman. Triangulation embedding and democratic aggregation for image search. In 2014IEEE Conference on Computer Vision and Pattern Recognition , pp. 3310–3317, June 2014. doi: 10.1109/CVPR.2014.417.H. J ́egou, M. Douze, C. Schmid, and P. P ́erez. Aggregating local descriptors into a compact image representa-tion. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on , pp. 3304–3311, June2010. doi: 10.1109/CVPR.2010.5540039.Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadar-rama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprintarXiv:1408.5093 , 2014.He Kaiming, Zhang Xiangyu, Ren Shaoqing, and Jian Sun. Spatial pyramid pooling in deep convolutionalnetworks for visual recognition. In European Conference on Computer Vision , 2014.Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neuralnetworks. In Advances in neural information processing systems , pp. 1097–1105, 2012.Svetlana Lazebnik, Cordelia Schmid, and Jean Ponce. Beyond bags of features: Spatial pyramid matchingfor recognizing natural scene categories. In Proceedings of the 2006 IEEE Computer Society Conferenceon Computer Vision and Pattern Recognition - Volume 2 , CVPR ’06, pp. 2169–2178, Washington, DC,USA, 2006. IEEE Computer Society. ISBN 0-7695-2597-0. doi: 10.1109/CVPR.2006.68. URL http://dx.doi.org/10.1109/CVPR.2006.68 .Y . Lecun, L. Bottou, Y . Bengio, and P. Haffner. Gradient-based learning applied to document recognition.Proceedings of the IEEE , 86(11):2278–2324, Nov 1998. ISSN 0018-9219. doi: 10.1109/5.726791.Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexan-der C. Berg. SSD: Single shot multibox detector. arXiv preprint arXiv:1512.02325 , 2015.Jonathan Long, Evan Shelhamer, and Trevor Darrell. 
Fully convolutional networks for semantic segmentation.InProceedings of the IEEE Conference on Computer Vision and Pattern Recognition , pp. 3431–3440, 2015.David G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of ComputerVision , 60(2):91–110, 2004. ISSN 1573-1405. doi: 10.1023/B:VISI.0000029664.99615.94. URL http://dx.doi.org/10.1023/B:VISI.0000029664.99615.94 .D. Nist ́er and H. Stew ́enius. Scalable recognition with a vocabulary tree. In IEEE Conference on ComputerVision and Pattern Recognition (CVPR) , volume 2, pp. 2161–2168, June 2006.J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman. Lost in quantization: Improving particular objectretrieval in large scale image databases. In Computer Vision and Pattern Recognition, 2008. CVPR 2008.IEEE Conference on , pp. 1–8, June 2008. doi: 10.1109/CVPR.2008.4587635.James Philbin, Ondrej Chum, Michael Isard, Josef Sivic, and Andrew Zisserman. Object retrieval with large vo-cabularies and fast spatial matching. In 2007 IEEE Conference on Computer Vision and Pattern Recognition ,pp. 1–8, June 2007. doi: 10.1109/CVPR.2007.383172.Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. Cnn features off-the-shelf: Anastounding baseline for recognition. In Proceedings of the 2014 IEEE Conference on Computer Vision andPattern Recognition Workshops , CVPRW ’14, pp. 512–519, Washington, DC, USA, 2014a. IEEE ComputerSociety. ISBN 978-1-4799-4308-1. doi: 10.1109/CVPRW.2014.131. URL http://dx.doi.org/10.1109/CVPRW.2014.131 .Ali Sharif Razavian, Josephine Sullivan, Atsuto Maki, and Stefan Carlsson. Visual instance retrieval with deepconvolutional networks. CoRR , abs/1412.6574, 2014b. URL http://arxiv.org/abs/1412.6574 .Ali Sharif Razavian, Josephine Sullivan, Atsuto Maki, and Stefan Carlsson. Visual instance retrieval with deepconvolutional networks. CoRR , abs/1412.6574, 2014c. URL http://arxiv.org/abs/1412.6574 .Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, AndrejKarpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large ScaleVisual Recognition Challenge. International Journal of Computer Vision (IJCV) , 115(3):211–252, 2015.doi: 10.1007/s11263-015-0816-y.Ross Girshick Jian Sun Shaoqing Ren, Kaiming He. Faster R-CNN: Towards real-time object detection withregion proposal networks. arXiv preprint arXiv:1506.01497 , 2015.11Under review as a conference paper at ICLR 2017Gaurav Sharma and Bernt Schiele. Scalable nonlinear embeddings for semantic category-based image retrieval.InICCV , 2015.K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR ,abs/1409.1556, 2014.Josef Sivic and Andrew Zisserman. Video google: A text retrieval approach to object matching in videos. InComputer Vision, 2003. Proceedings. Ninth IEEE International Conference on , pp. 1470–1477. IEEE, 2003.C. Szegedy, Wei Liu, Yangqing Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V . Vanhoucke, and A. Ra-binovich. Going deeper with convolutions. In 2015 IEEE Conference on Computer Vision and PatternRecognition (CVPR) , pp. 1–9, June 2015. doi: 10.1109/CVPR.2015.7298594.G. Tolias, R. Sicre, and H. J ́egou. Particular object retrieval with integral max-pooling of CNN activations.ArXiv e-prints , November 2015.Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Computervision–ECCV 2014 , pp. 818–833. 
APPENDIX A  THE NETWORK TRANSFORMATIONS

In order for the network to process images of varying sizes, we change the layers fc6, fc7 and fc8 of the original model to fc6-conv, fc7-conv and fc8-conv. It should be noted that there are certain constraints on the input image size due to the network's inherent design. The original network accepts an image of fixed size ($224 \times 224$), so the output feature map of the last convolutional layer conv5_4 is of size $512 \times 7 \times 7$. As a result, when we change the operation between layer conv5_4 and fc6 from inner product to convolution, each filter bank kernel between conv5_4 and fc6-conv has size $7 \times 7$. This in turn means that if we are to extract features from layer fc6-conv and above, the size of an input image must be equal to or greater than 224. For output feature maps of layer conv5_4 and below, there are no restrictions on the input image size. During the experiments, when we extract features from layer fc6-conv and above, the minimum size of an image is set to 224 if it is less than 224.

APPENDIX B  THE DETAILS OF OVERLAP IN EACH SCALE

In this paper, the overlaps between different regions occur in the 3- and 4-scale pyramids. A single region in each scale is specified as the combination of a slice from the width and a slice from the height of the feature map. If a scale has $N \times N$ regions, then the number of slices in both the width and the height of the feature map is $N$. We use the same set of slices for both the width and the height in this experiment.

In the 3-scale setting (see Table 3 (b3)), overlap occurs only in scale 2, with slices (expressed as proportions of the feature map width or height) $\{(0, \frac{2}{3}), (\frac{1}{3}, 1)\}$. In the 4-scale v1 setting (Table 3 (c1)–(c3)), the slices for scales 2 and 3 are $\{(0, \frac{3}{4}), (\frac{1}{4}, 1)\}$ and $\{(0, \frac{2}{4}), (\frac{1}{4}, \frac{3}{4}), (\frac{2}{4}, 1)\}$. In the 4-scale v2 setting (Table 3 (c4)–(c5)), the slices for scales 2 and 3 are $\{(0, \frac{3}{5}), (\frac{2}{5}, 1)\}$ and $\{(0, \frac{3}{5}), (\frac{1}{5}, \frac{4}{5}), (\frac{2}{5}, 1)\}$. In the 4-scale v3 setting (Table 3 (c6)–(c8)), the slices for scales 2 and 3 are $\{(0, \frac{4}{6}), (\frac{2}{6}, 1)\}$ and $\{(0, \frac{3}{6}), (\frac{1}{6}, \frac{4}{6}), (\frac{3}{6}, 1)\}$, respectively.
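To make the slice proportions above concrete, here is a minimal Python sketch (our illustration, not the paper's released code; the variant names and the rounding choice are our assumptions) that turns the listed fractions into pixel index ranges for a square feature map:

```python
# Minimal sketch: compute overlapping-region boundaries from the slice
# proportions listed in Appendix B. Variant names are hypothetical labels.
SLICES = {
    "3-scale":    {2: [(0, 2/3), (1/3, 1)]},
    "4-scale-v1": {2: [(0, 3/4), (1/4, 1)],
                   3: [(0, 2/4), (1/4, 3/4), (2/4, 1)]},
    "4-scale-v2": {2: [(0, 3/5), (2/5, 1)],
                   3: [(0, 3/5), (1/5, 4/5), (2/5, 1)]},
    "4-scale-v3": {2: [(0, 4/6), (2/6, 1)],
                   3: [(0, 3/6), (1/6, 4/6), (3/6, 1)]},
}

def regions(variant, scale, side):
    """Return (x0, x1, y0, y1) index ranges of every region at one scale,
    using the same slice set for width and height, as in the paper."""
    cuts = [(round(a * side), round(b * side)) for a, b in SLICES[variant][scale]]
    return [(x0, x1, y0, y1) for (x0, x1) in cuts for (y0, y1) in cuts]

# Example: scale-2 regions of the 3-scale pyramid on a 7x7 conv5_4 feature map
# yield four overlapping ~5x5 regions.
print(regions("3-scale", 2, 7))
```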
rk9eAFcxg
Published as a conference paper at ICLR 2017

VARIATIONAL RECURRENT ADVERSARIAL DEEP DOMAIN ADAPTATION

Sanjay Purushotham*, Wilka Carvalho*, Tanachat Nilanon, Yan Liu
Department of Computer Science
University of Southern California
Los Angeles, CA 90089, USA
{spurusho,wcarvalh,nilanon,yanliu.cs}@usc.edu
* Co-first authors

ABSTRACT

We study the problem of learning domain-invariant representations for time-series data while transferring the complex temporal latent dependencies between domains. Our model, termed Variational Recurrent Adversarial Deep Domain Adaptation (VRADA), is built atop a variational recurrent neural network (VRNN) and trained adversarially to capture complex temporal relationships that are domain-invariant. This is, as far as we know, the first model to capture and transfer temporal latent dependencies of multivariate time-series data. Through experiments on real-world multivariate healthcare time-series datasets, we empirically demonstrate that learning temporal dependencies helps our model create domain-invariant representations, allowing it to outperform current state-of-the-art deep domain adaptation approaches.

1 INTRODUCTION

Many real-world applications require effective machine learning algorithms that can learn invariant representations across related time-series datasets: for example, precision medicine for patients of various age groups, or mobile application recommendation for users based on location. In these examples, while the domains (i.e., age group and location) may vary, there exist common predictive patterns that can aid in inferring knowledge from one domain to another. More often than not, some domains have a significantly larger number of observations than others (e.g., respiratory failure in adults vs. children). Effective domain adaptation of time-series data is therefore in great demand.

The general approach to tackling domain adaptation has been explored under many facets, which include reducing the domain discrepancy between the source and target domains (Ben-David et al. (2007)), instance re-weighting (Jiang & Zhai (2007)), subspace alignment (Fernando et al. (2013)), and deep learning (Tzeng et al. (2015); Ganin & Lempitsky (2014)). Many of these approaches work very well for non-sequential data but are not suitable for multivariate time-series data, as they do not usually capture the temporal dependencies present in the data. For sequential data, earlier work has successfully used dynamic Bayesian networks (Huang & Yates (2009)) and recurrent neural networks (Socher et al. (2011)) to learn latent feature representations which were domain-invariant. Unfortunately, these works were either not flexible enough to model non-linear dynamics or did not explicitly capture and transfer the complex latent dependencies needed to perform domain adaptation of time-series data.

In this paper, we address this problem with a model that learns temporal latent dependencies (i.e., dependencies between the latent variables across timesteps) that can be transferred across domains that experience different distributions in their features. We draw inspiration from the Variational Recurrent Neural Network (Chung et al. (2016)) and use variational methods to produce a latent representation that captures underlying temporal latent dependencies. Motivated by the theory of domain adaptation (Ben-David et al.
(2010)), we perform adversarial training on this representation, similarly to the Domain Adversarial Neural Network (DANN) (Ganin et al. (2016)), to make the representations invariant across domains. We call our model the Variational Recurrent Adversarial Deep Domain Adaptation (VRADA) model. As far as we know, this is the first model capable of accomplishing unsupervised domain adaptation while transferring temporal latent dependencies for complex multivariate time-series data. Figure 1 shows an example of the domain-invariant representations learned by different deep learning models, including our VRADA model. From this figure, we can see that our model (VRADA) shows better mixing of the domain distributions than the competing models, indicating that it learns better domain-invariant representations.

Figure 1: A Story of Temporal Dependency and Domain Invariance. (a) DNN, (b) R-DANN, (c) VRADA. t-SNE projections of the latent representations of DNN, R-DANN, and our VRADA model, showing adaptation from Adult-AHRF to Child-AHRF data. Source data is represented with red circles and target data with blue circles. From left to right, one can see that domain adaptation results in mixing the source and target domain data distributions. One can also see how encoding more temporal dependency into the latent representation induces more domain-invariant representations: as models capture more underlying factors of variation, post-domain-adaptation representations gradually smoothen and become evenly dispersed, indicating that temporal dependency acts synergistically with domain adaptation.

In order to prove the efficacy of our model, we perform domain adaptation using real-world healthcare time-series data. We choose healthcare data for two primary reasons. (1) Currently, a standard protocol in healthcare is to build, evaluate, and deploy machine learning models for particular datasets; these models may perform poorly on unseen datasets with different distributions. For example, models built on patient data from particular age groups perform poorly on other age groups, because the features used to train the models have different distributions across the groups (Alemayehu & Warner (2004); Lao et al. (2004); Seshamani & Gray (2004)). Knowledge learned from one group is not transferable to the other group. Domain adaptation seems like a natural solution to this problem, as knowledge needs to be transferred across domains which share features that exhibit different distributions. (2) Healthcare data has multiple attributes recorded per patient visit, and it is longitudinal and episodic in nature. Thus, healthcare data is a suitable platform on which to study a model which seeks to capture complex temporal representations and transfer this knowledge across domains.

The rest of the paper is structured as follows. In the following section, we briefly discuss the current state-of-the-art deep domain adaptation approaches. Afterwards, we present our model mathematically, detailing how it simultaneously learns to capture temporal latent dependencies and create domain-invariant representations. In Section 4, we compare and contrast the performance of the proposed approach with other approaches on two real-world healthcare datasets, and provide analysis of our domain-invariant representations.

2 RELATED WORK

Domain adaptation is a specific instance of transfer learning in which the feature spaces are shared but their marginal distributions are different.
A good survey of the two has been given in several previous works (Pan & Yang (2009); Jiang (2008); Patel et al. (2015)). Domain adaptation has been thoroughly studied in computer vision (Saenko et al. (2010); Gong et al. (2012); Fernando et al. (2013)) and natural language processing (NLP) (Blitzer (2007); Foster et al. (2010)) applications. Recently, the deep learning paradigm has become popular in domain adaptation (Chen et al. (2012); Tzeng et al. (2015); Yang & Eisenstein; Long & Wang (2015)) due to its ability to learn rich, flexible, non-linear domain-invariant representations. Here, we briefly discuss two deep domain adaptation approaches which are closely related to our proposed model. The Domain Adversarial Neural Network (DANN) (Ganin et al. (2016)) is a deep domain adaptation model which uses two core components to create domain-invariant representations: a feature extractor that produces the data's latent representation, and an adversarial domain labeler that attempts to classify that data's domain, pushing the feature extractor to produce latent representations which are domain-invariant. In Louizos et al. (2015), the authors propose the Variational Fair Autoencoder, which uses a variational autoencoding architecture (Kingma & Welling (2013)) to learn latent representations where most of the information about certain known factors of variation is purged from the representation, while retaining as much information about the data as possible. While these deep learning approaches learn domain-invariant representations, they fail to capture and transfer the underlying complex temporal latent relationships from one domain to another, as they use convolutional or feed-forward neural networks which we claim are not suitable for multivariate time-series data.

Other works, such as Huang & Yates (2009) and Xiao & Guo (2013), have used distributed representations for domain adaptation in NLP sequence-labeling tasks. However, they either induce hidden states as latent features using dynamic Bayesian networks (DBNs) or learn generalizable distributed representations of words using recurrent neural networks (RNNs) (Socher et al. (2011)) to enable domain adaptation. These works either model the highly non-linear dynamics, as one can with an RNN, or capture the complex latent dependencies present in sequential data, as one can with DBNs, but not both. To overcome the limitations of DBNs and RNNs, the Variational Recurrent Neural Network (VRNN) (Chung et al. (2016)) was proposed recently to capture the complex relationship between the underlying hidden factors of variation and the output variables at different time-steps. The VRNN uses variational autoencoders (VAEs) (Kingma & Welling (2013); Goodfellow et al. (2016)) at each time-step to learn a complex relationship between the latent hidden factors across time-steps. Like the VAE, its latent variable is parametric. Combined, these properties make it well-suited for multimodal sequential data such as multivariate time-series.

Figure 2: Block diagram of VRADA. Blue lines show the inference process, $q_e(z_t \mid x_{\le t}, z_{<t})$. Brown lines show the generation process, $p_g(x_t \mid z_{\le t}, x_{<t})$. Red lines show the recurrence process, where $h_t$ is informed by $h_{t-1}$, which is informed by $z_{t-1}$ and $x_{t-1}$. Black lines indicate classification ($G_y$ and $G_d$).
In the following section, we discuss our approach, Variational Recurrent Adversarial Deep Domain Adaptation (VRADA), which uses a VRNN to model and transfer complex domain-invariant temporal latent relationships for unsupervised domain adaptation of multivariate time-series.

3 VARIATIONAL RECURRENT ADVERSARIAL DEEP DOMAIN ADAPTATION

In this section, we present our Variational Recurrent Adversarial Deep Domain Adaptation (VRADA) model for the purpose of capturing and transferring temporal latent dependencies across domains via domain-invariant representations. First, we introduce the notation used in this paper, and then discuss our VRADA model in detail.

3.1 NOTATIONS

Let us denote a multivariate variable-length time series with $N$ data samples as $\{x^i = (x^i_t)_{t=1}^{T_i}\}_{i=1}^{N}$, where $x^i_t \in \mathbb{R}^D$. (Note: in our experiments $T_i$ is the same for all data samples, but for generality we maintain $T_i$.) We denote $\{x^i_S\}_{i=1}^{n}$ as source domain data and $\{x^i_T\}_{i=n+1}^{N}$ as target domain data. We assume that each source domain data sample $x^i_S$ comes with $L$ labels $y^i \in \{0,1\}^L$ (for example, these labels may correspond to clinical outcomes such as mortality or ICD-9 diagnosis codes), while the target domain has no labeled data samples. We assign a domain label $d^i \in \{0,1\}$ to each data sample to indicate whether it comes from the source or target domain; $d^i$ will be used for adversarial training.

3.2 VRADA

The block diagram of our VRADA model is shown in Figure 2. To explicitly model the dependencies between the latent random variables across time steps, the VRADA model utilizes the Variational Recurrent Neural Network (VRNN) (Chung et al. (2016)). The VRNN effectively contains a variational autoencoder (Kingma & Welling (2013)) at every time step, all of which are conditioned on previous autoencoders via the hidden state $h_{t-1}$ of an RNN, such as an LSTM (Hochreiter & Schmidhuber (1997)). Therefore, for each time-step $x^i_t$, we infer a latent random variable $z^i_t$ via

$$z^i_t \mid x^i_t \sim \mathcal{N}(\mu_{z,t}, \mathrm{diag}(\sigma_{z,t})), \quad \text{where } [\mu_{z,t}, \sigma_{z,t}] = \varphi^{\mathrm{enc}}(\varphi^{x}(x^i_t), h_{t-1}),$$

with prior

$$z^i_t \sim \mathcal{N}(\mu_{0,t}, \mathrm{diag}(\sigma_{0,t})), \quad \text{where } [\mu_{0,t}, \sigma_{0,t}] = \varphi^{\mathrm{prior}}(h_{t-1}),$$

where $\mu_{\cdot,t}$ and $\sigma_{\cdot,t}$ denote parameters of the generating distribution, and $\varphi$ can be any highly flexible function such as a deep neural network. For each $z^i_t$, $x^i_t$ is generated via

$$x^i_t \mid z^i_t \sim \mathcal{N}(\mu_{x,t}, \mathrm{diag}(\sigma_{x,t})), \quad \text{where } [\mu_{x,t}, \sigma_{x,t}] = \varphi^{\mathrm{dec}}(\varphi^{z}(z^i_t), h_{t-1}),$$

and learned by optimizing the VRNN objective function:

$$\mathcal{L}_r(x^i; \theta_e, \theta_g) = \mathbb{E}_{q_e(z^i_{\le T_i} \mid x^i_{\le T_i})}\Big[\sum_{t=1}^{T_i}\Big(-D\big(q_e(z^i_t \mid x^i_{\le t}, z^i_{<t}) \,\|\, p(z^i_t \mid x^i_{<t}, z^i_{<t})\big) + \log p_g(x^i_t \mid z^i_{\le t}, x^i_{<t})\Big)\Big]$$

where $q_e(z^i_t \mid x^i_{\le t}, z^i_{<t})$ is the inference model, $p(z^i_t \mid x^i_{<t}, z^i_{<t})$ is the prior, $p_g(x^i_t \mid z^i_{\le t}, x^i_{<t})$ is the generative model, $\theta_e$ denotes the parameters of the VRNN's encoder, $\theta_g$ the parameters of the VRNN's decoder, and $D(\cdot\|\cdot)$ refers to the KL divergence. (Note: $z_{\le T}$ refers to the set of all $z_t$ such that $t \le T$, and likewise for $z_{<T}$.) For each $x^i$, we use $\tilde{z}^i \sim q_e(z^i_{T_i} \mid x^i_{\le T_i}, z^i_{<T_i})$ as our feature representation for the source domain classification task, since it captures temporal latent dependencies across the time-steps.

Training the VRNN for source domain classification involves solving the following optimization problem:

$$\min_{\theta_e, \theta_g, \theta_y}\; \frac{1}{n}\sum_{i=1}^{n}\frac{1}{T_i}\mathcal{L}_r(x^i; \theta_e, \theta_g) + \frac{1}{n}\sum_{i=1}^{n}\mathcal{L}_y(x^i; \theta_y, \theta_e) + \lambda R(\theta_e) \qquad (1)$$

where $R(\theta_e)$ is a regularizer for the parameters of the VRNN encoder (which is also the feature extractor of VRADA), with tuning hyperparameter $\lambda$.
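To make the per-time-step inference, prior, generation, and recurrence above concrete, here is a minimal PyTorch sketch of a single VRNN step (our simplification, not the authors' released code; module sizes, the GRU recurrence, and activation choices are illustrative assumptions):

```python
import torch
import torch.nn as nn

class VRNNCell(nn.Module):
    """One VRNN time-step: prior, inference (encoder), generation (decoder),
    and recurrence, following the equations above. Sizes are illustrative."""
    def __init__(self, x_dim=20, z_dim=32, h_dim=64):
        super().__init__()
        self.phi_x = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.phi_z = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU())
        self.prior = nn.Linear(h_dim, 2 * z_dim)        # -> [mu_0, log sigma_0]
        self.enc = nn.Linear(2 * h_dim, 2 * z_dim)      # -> [mu_z, log sigma_z]
        self.dec = nn.Linear(2 * h_dim, 2 * x_dim)      # -> [mu_x, log sigma_x]
        self.rnn = nn.GRUCell(2 * h_dim, h_dim)

    def forward(self, x_t, h_prev):
        x_feat = self.phi_x(x_t)
        mu_0, logsig_0 = self.prior(h_prev).chunk(2, dim=-1)
        mu_z, logsig_z = self.enc(torch.cat([x_feat, h_prev], -1)).chunk(2, -1)
        # Reparameterized sample z_t ~ N(mu_z, diag(sigma_z^2)).
        z_t = mu_z + logsig_z.exp() * torch.randn_like(mu_z)
        z_feat = self.phi_z(z_t)
        mu_x, logsig_x = self.dec(torch.cat([z_feat, h_prev], -1)).chunk(2, -1)
        # Analytic KL between the two diagonal Gaussians q_e and the prior.
        kl = (logsig_0 - logsig_z
              + (logsig_z.exp()**2 + (mu_z - mu_0)**2) / (2 * logsig_0.exp()**2)
              - 0.5).sum(-1)
        # Gaussian log-likelihood of x_t under the decoder distribution.
        log_px = (-0.5 * ((x_t - mu_x) / logsig_x.exp())**2
                  - logsig_x - 0.5 * torch.log(torch.tensor(2 * torch.pi))).sum(-1)
        h_t = self.rnn(torch.cat([x_feat, z_feat], -1), h_prev)
        return z_t, h_t, kl, log_px
```

Accumulating `kl - log_px` over the $T_i$ steps of a sequence yields the negative of the bracketed summand in $\mathcal{L}_r$, i.e., the per-sequence reconstruction/KL term that enters Equation (1).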
As we are interested in achieving domain adaptation via the latent representation $\tilde{z}^i$ (i.e., making $\tilde{z}^i$ domain-invariant), we can adversarially train the above objective function (Equation 1) by employing the domain adaptation idea proposed in Ganin et al. (2016). Let $G_y(\tilde{z}^i; \theta_y)$ and $G_d(\tilde{z}^i; \theta_d)$ represent the source label classifier (to predict source labels $y^i$) and the domain label classifier (to predict domain labels $d^i$), respectively, with parameters $\theta_y$ and $\theta_d$, for a given input $\tilde{z}^i$. Here, $G_y(\cdot)$ and $G_d(\cdot)$ can be deep neural networks. Let us denote their loss functions, respectively, as

$$\mathcal{L}_y(x^i; \theta_y, \theta_e) = \mathcal{L}_B\big(G_y(V_e(x^i; \theta_e); \theta_y), y^i\big), \qquad \mathcal{L}_d(x^i; \theta_d, \theta_e) = \mathcal{L}_B\big(G_d(V_e(x^i; \theta_e); \theta_d), d^i\big)$$

where $\mathcal{L}_B$ is a classification loss such as binary or categorical cross-entropy, and $V_e(x^i; \theta_e)$ is the VRNN encoder that maps input $x^i$ to $\tilde{z}^i$.

Now, for adversarial training, we consider the following domain adaptation term as the regularizer of Equation 1:

$$R(\theta_e) = \max_{\theta_d}\Big[-\frac{1}{n}\sum_{i=1}^{n}\mathcal{L}_d(x^i; \theta_d, \theta_e) - \frac{1}{n'}\sum_{i=n+1}^{N}\mathcal{L}_d(x^i; \theta_d, \theta_e)\Big] \qquad (2)$$

where $n'$ is the number of target domain samples. As shown in Ganin et al. (2016), $R$ is the domain regularizer, derived from the empirical $\mathcal{H}$-divergence between the source domain and target domain samples (Ben-David et al. (2010)).

Combining the joint optimization problems of Equations 1 and 2 leads to our VRADA model, where we minimize the source classification risk and at the same time achieve domain adaptation. Mathematically, we optimize the following complete objective function:

$$E(\theta_e, \theta_g, \theta_y, \theta_d) = \frac{1}{N}\sum_{i=1}^{N}\frac{1}{T_i}\mathcal{L}_r(x^i; \theta_e, \theta_g) + \frac{1}{n}\sum_{i=1}^{n}\mathcal{L}_y(x^i; \theta_y) - \lambda\Big(\frac{1}{n}\sum_{i=1}^{n}\mathcal{L}_d(x^i; \theta_d) + \frac{1}{n'}\sum_{i=n+1}^{N}\mathcal{L}_d(x^i; \theta_d)\Big) \qquad (3)$$

where $\lambda$ is a trade-off between optimizing for domain-invariant representations and optimizing source classification accuracy. Our optimization involves minimization with respect to some parameters and maximization with respect to the others, i.e., we iteratively solve:

$$(\hat{\theta}_g, \hat{\theta}_y, \hat{\theta}_e) = \arg\min_{\theta_g, \theta_y, \theta_e} E(\theta_e, \theta_g, \theta_y, \hat{\theta}_d), \qquad \hat{\theta}_d = \arg\max_{\theta_d} E(\hat{\theta}_e, \hat{\theta}_g, \hat{\theta}_y, \theta_d)$$

with the gradient updates calculated as:

$$\theta_e \leftarrow \theta_e - \alpha\Big(\frac{\partial \mathcal{L}_r}{\partial \theta_e} + \frac{\partial \mathcal{L}_y}{\partial \theta_e} - \lambda\frac{\partial \mathcal{L}_d}{\partial \theta_e}\Big) \qquad (4)$$
$$\theta_g \leftarrow \theta_g - \alpha\frac{\partial \mathcal{L}_r}{\partial \theta_g} \qquad (5)$$
$$\theta_d \leftarrow \theta_d - \alpha\frac{\partial \mathcal{L}_d}{\partial \theta_d} \qquad (6)$$
$$\theta_y \leftarrow \theta_y - \alpha\frac{\partial \mathcal{L}_y}{\partial \theta_y} \qquad (7)$$

where $\alpha$ is the learning rate. We can use stochastic gradient descent (SGD) to solve Equations (5)–(7). To solve Equation (4), we can use SGD together with the gradient reversal layer (GRL) (Ganin et al. (2016)); a minimal sketch of a GRL is given below. The role of the GRL is to reverse the gradient sign while performing backpropagation. This ensures that the domain classification loss is maximized, which makes the feature representations domain-invariant.

Thus, VRADA learns feature representations which are domain-invariant (due to the domain regularizer $R$) and which capture the temporal latent dependencies (due to optimizing the VRNN objective function $\mathcal{L}_r$). Together, these allow the VRADA's discriminative power on the source domain to transfer to the target domain.
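The following is a minimal PyTorch sketch of a gradient reversal layer of the kind described above (our illustration, not the authors' implementation):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the incoming gradient by
    -lambda in the backward pass, as in Ganin et al. (2016)."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing into the feature extractor.
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage: feed the shared representation z_tilde through the GRL before the
# domain classifier G_d, so that minimizing the domain loss w.r.t. theta_d
# simultaneously maximizes it w.r.t. the encoder parameters theta_e.
z_tilde = torch.randn(8, 100, requires_grad=True)   # illustrative batch
domain_head_input = grad_reverse(z_tilde, lam=0.5)
```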
4 EXPERIMENTS

We conduct experiments on two real-world healthcare datasets to answer the following questions: (a) How does our VRADA model perform compared to state-of-the-art domain adaptation and non-adaptation approaches? (b) How different are the domain-invariant representations learned by the various domain adaptation methods? (c) How do we show that the temporal latent dependencies are transferred between domains? In the remainder of this section, we describe the datasets, methods, and empirical results, and show visualizations that answer the above questions.

4.1 DATASET DESCRIPTION

We conduct experiments on two healthcare datasets: the MIMIC-III dataset and a Pediatric ICU (PICU) dataset from Children's Hospital Los Angeles.

MIMIC-III (Johnson et al. (2016)) is a public dataset with deidentified clinical care data collected at Beth Israel Deaconess Medical Center from 2001 to 2012. It contains over 58,000 hospital admission records of 38,645 adults and 7,875 neonates. For our experiments, we extracted the following two datasets:

Adult-AHRF dataset: To study domain adaptation for adult patients with acute hypoxemic respiratory failure (AHRF), we extracted 20 time-series features (such as base excess, blood pH value, mean air pressure, PaO2, etc.) from 5527 admission records, based on Khemani et al. (2009). We grouped the patients into four groups/cohorts based on their age[1]: Group 2, working-age adults (20 to 45 yrs, 508 patients); Group 3, old working-age adults (46 to 65 yrs, 1888 patients); Group 4, elderly (66 to 85 yrs, 2394 patients); and Group 5, old elderly (85 yrs and up, 437 patients). We treated each group as a separate domain with which we could perform domain adaptation. For each patient, we used the first 4 days after admission (with each day serving as a single time-step) as time-series data for training and testing our models.

ICD9 dataset: For this dataset we extracted 99 time-series features from 19714 admission records from four modalities, including input-events (fluids into the patient, e.g., insulin), output-events (fluids out of the patient, e.g., urine), lab-events (lab test results, e.g., blood pH values, platelet count, etc.), and prescription-events (drugs prescribed by doctors, e.g., aspirin, potassium chloride, etc.). These modalities are known to be extremely useful for monitoring ICU patients. All the time series have a duration of more than 48 hours, and only the first 24 hours (after admission) of 2-hourly sampled time-series data are used for training and testing our models. We use this dataset to predict the ICD-9 diagnosis code categories for each patient's admission record.

Child-AHRF dataset: This is a PICU dataset which contains the health records of 398 child patients with acute hypoxemic respiratory failure in the intensive care unit at Children's Hospital Los Angeles (CHLA) (Khemani et al. (2009)). Similar to Adult-AHRF, this dataset has 20 time-series features collected for 4 days after ICU admission. This dataset is treated as one group (Group 1: children, age 0 to 19 yrs) and represents one domain.

4.1.1 PREDICTION AND DOMAIN ADAPTATION TASKS

Mortality Prediction: For the Adult-AHRF and Child-AHRF datasets, we are interested in predicting mortality, i.e., whether a patient dies from AHRF during their hospital stay. 20.10% of all patients in Child-AHRF and 13.84% of all patients in Adult-AHRF have a positive mortality label (i.e., the patients who die in hospital).

ICD9 Code Prediction: Each admission record in the MIMIC-III dataset has multiple ICD-9 diagnosis codes. We group all occurrences of the ICD-9 codes into 20 diagnosis groups[2]. For the ICD9 dataset, we are interested in predicting these 20 ICD-9 diagnosis categories for each admission record. We treat this as a multi-task prediction problem.

Domain Adaptation Tasks: We study the unsupervised domain adaptation task (i.e., target domain labels are unavailable during training and validation) within the age groups of the Adult-AHRF dataset and the ICD9 dataset, and across the Adult- and Child-AHRF datasets. For the Adult-AHRF and ICD9 datasets, we created 12 source-target domain pairs using the age groups, pairing each domain $D_i$ with another domain $D_{j \ne i}$; for example, the source-target pair 2-5 was used for adapting from Group 2 (working-age adults) to Group 5 (old elderly).
We also created 4 source-target pairs for performing domain adaptation from the four adult age groups to the one child age group.

4.2 METHODS AND IMPLEMENTATION DETAILS

We categorize the methods used in our main experiments into the following groups:

- Non-adaptive baseline methods: logistic regression (LR), Adaboost with decision regressors (Adaboost), and feed-forward deep neural networks (DNN)
- Deep domain adaptation methods: Domain Adversarial Neural Networks (DANN) (Ganin et al. (2016)); DANN with an RNN (LSTM) as feature extractor (R-DANN); Variational Fair Autoencoder (VFAE) (Louizos et al. (2015))
- Our method: Variational Recurrent Adversarial Deep Domain Adaptation (VRADA)[3]

[1]: https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/
[2]: http://tdrdata.com/ipd/ipd_SearchForICD9CodesAndDescriptions.aspx. "Conditions Originating in the Perinatal Period" is not present in the preprocessed dataset.
[3]: Code will be publicly released soon.

In all our experiments, we conducted unsupervised domain adaptation, where target domain labels are unavailable during training and validation. For R-DANN, we used an LSTM (Hochreiter & Schmidhuber (1997)) as the feature extractor network instead of the feed-forward neural networks used in DANN. For VFAE, DANN, and all the non-domain-adaptive approaches, we flattened the time series along the time axis and treated it as the input to the model. For fairness, the classifiers and feature extractors of the VRADA and R-DANN were equivalent in depth and had the same model capacity. We also ensured that the sizes of the latent feature representations $\tilde{z}^i$ are similar for the VRADA and DANN models, and the model capacity of VFAE was chosen to be similar to VRADA's. All the deep domain adaptation models, including ours, had a depth of 8 (including output classifier layers). We used the Adam optimizer (Kingma & Ba (2014)) and ran all models for 500 epochs with a learning rate of $3 \times 10^{-4}$. We set an early stopping criterion: training stops if the model does not experience a decrease in validation loss for 20 epochs. Source domain data was split into train/validation subsets with a 70/30 ratio, and target domain data into train/validation/test subsets with a 70/15/15 ratio. In order to compare all the methods, we report AUC scores on the entire target domain set and on the test subset of the target domain data for each source-target pair.
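As a rough illustration of how the pieces described in Section 3 fit together at training time, the following sketch (our own simplification, reusing the hypothetical VRNNCell-based encoder and the grad_reverse helper sketched earlier; loss choices and shapes are assumptions) performs one VRADA update:

```python
# One illustrative VRADA training step (a sketch, not the authors' code).
# Assumes `vrnn` is an encoder that returns (last-step latent, L_r term),
# `G_y` a label head, `G_d` a domain head fed through `grad_reverse`, and
# batches of source data (x_src, y_src) and target data x_tgt.
import torch
import torch.nn.functional as F

def vrada_step(vrnn, G_y, G_d, opt, x_src, y_src, x_tgt, lam=0.5):
    z_src, loss_r_src = vrnn(x_src)
    z_tgt, loss_r_tgt = vrnn(x_tgt)
    loss_r = loss_r_src + loss_r_tgt       # reconstruction/KL on all samples
    loss_y = F.binary_cross_entropy_with_logits(G_y(z_src), y_src)
    d_src = G_d(grad_reverse(z_src, lam))  # domain labels: source=0, target=1
    d_tgt = G_d(grad_reverse(z_tgt, lam))
    loss_d = (F.binary_cross_entropy_with_logits(d_src, torch.zeros_like(d_src))
              + F.binary_cross_entropy_with_logits(d_tgt, torch.ones_like(d_tgt)))
    opt.zero_grad()
    (loss_r + loss_y + loss_d).backward()  # the GRL flips loss_d's gradient
    opt.step()                             # sign for the encoder parameters
```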
4.3 QUANTITATIVE RESULTS

In Table 1, we compare the performance of non-domain-adaptation and domain adaptation models on the target domain test subset for the AHRF mortality prediction task. It is immediately clear that domain adaptation methods consistently outperform non-domain-adaptation methods. We see that the VRADA generally outperforms both variants of the DANN, consistently achieving scores about 4% higher. While the standard deviation for the VRADA was about 1%, it was about 2% for the R-DANN, further demonstrating our model's efficacy, as it converges to more stable local optima. Our VRADA model beats the state-of-the-art DANN (Ganin et al. (2016)) and VFAE (Louizos et al. (2015)) on all source-target domain adaptation tasks for the Adult-AHRF dataset. For domain adaptation from the Adult-AHRF to the Child-AHRF dataset, we observe that VRADA mostly outperforms all competing models. This shows that our model can perform well even for smaller target domain datasets.

Table 1: AUC comparison for the AHRF mortality prediction task with and without domain adaptation

Source-Target | LR    | Adaboost | DNN   | DANN  | VFAE  | R-DANN | VRADA
3-2           | 0.555 | 0.562    | 0.569 | 0.572 | 0.615 | 0.603  | 0.654
4-2           | 0.624 | 0.645    | 0.569 | 0.589 | 0.635 | 0.584  | 0.656
5-2           | 0.527 | 0.554    | 0.551 | 0.540 | 0.588 | 0.611  | 0.616
2-3           | 0.627 | 0.621    | 0.550 | 0.563 | 0.585 | 0.708  | 0.724
4-3           | 0.681 | 0.636    | 0.542 | 0.527 | 0.722 | 0.821  | 0.770
5-3           | 0.655 | 0.706    | 0.503 | 0.518 | 0.608 | 0.769  | 0.782
2-4           | 0.585 | 0.591    | 0.530 | 0.560 | 0.582 | 0.716  | 0.777
3-4           | 0.652 | 0.629    | 0.531 | 0.527 | 0.697 | 0.769  | 0.764
5-4           | 0.689 | 0.699    | 0.538 | 0.532 | 0.614 | 0.728  | 0.738
2-5           | 0.565 | 0.543    | 0.549 | 0.526 | 0.555 | 0.659  | 0.719
3-5           | 0.576 | 0.587    | 0.510 | 0.526 | 0.533 | 0.630  | 0.721
4-5           | 0.682 | 0.587    | 0.575 | 0.548 | 0.712 | 0.747  | 0.775
5-1           | 0.502 | 0.573    | 0.557 | 0.563 | 0.618 | 0.563  | 0.639
4-1           | 0.565 | 0.533    | 0.572 | 0.542 | 0.668 | 0.577  | 0.636
3-1           | 0.500 | 0.500    | 0.542 | 0.535 | 0.570 | 0.591  | 0.631
2-1           | 0.520 | 0.500    | 0.534 | 0.559 | 0.578 | 0.630  | 0.637

In the above table, we test classification without adaptation using logistic regression (LR), Adaboost with decision-tree classifiers, and feed-forward deep neural networks (DNN), and with adaptation using Domain Adversarial Neural Networks (DANN), a DANN with an LSTM in its feature extractor (R-DANN), the Variational Fair Autoencoder (VFAE), and our Variational Recurrent Adversarial Domain Adaptation model (VRADA). All results are reported on the target domain test subset.

As the AHRF mortality prediction task made it clear that domain adaptation is necessary for inter-group adaptation, for the ICD9 multi-task prediction task, which involves data with time-steps of length 12, we focused strictly on domain-adaptive models (i.e., the DANN, R-DANN, and VRADA). Table 2 shows the aggregated AUC scores on the entire target domain dataset and on the test data of the target domain for the 20 tasks of the ICD9 code prediction task.

Table 2: AUC comparison for the ICD9 diagnosis code prediction task

Model  | Split         | 2-3   | 2-4   | 2-5   | 3-2   | 3-4   | 3-5   | 4-2   | 4-3   | 4-5   | 5-2   | 5-3   | 5-4
DANN   | entire target | 0.513 | 0.508 | 0.509 | 0.511 | 0.508 | 0.514 | 0.511 | 0.507 | 0.512 | 0.505 | 0.508 | 0.506
       | target test   | 0.509 | 0.513 | 0.531 | 0.527 | 0.515 | 0.531 | 0.515 | 0.521 | 0.521 | 0.518 | 0.514 | 0.519
R-DANN | entire target | 0.608 | 0.581 | 0.562 | 0.618 | 0.610 | 0.586 | 0.604 | 0.607 | 0.575 | 0.573 | 0.558 | 0.566
       | target test   | 0.605 | 0.579 | 0.570 | 0.628 | 0.609 | 0.589 | 0.614 | 0.616 | 0.586 | 0.573 | 0.563 | 0.564
VRADA  | entire target | 0.620 | 0.564 | 0.557 | 0.611 | 0.617 | 0.580 | 0.598 | 0.615 | 0.588 | 0.571 | 0.582 | 0.576
       | target test   | 0.609 | 0.563 | 0.560 | 0.620 | 0.617 | 0.580 | 0.606 | 0.623 | 0.594 | 0.576 | 0.581 | 0.576

Here, we compare results for the ICD9 diagnosis code prediction task on the ICD9 dataset. For each model, the top row corresponds to performance on the entire target domain dataset and the bottom row to performance on the test subset (15%) of the target domain dataset.

Here, we clearly see that the VRADA and
Figure 4 shows the unrolled memory cell states (in the formExamples(TimeNeurons )) for all the source and target domain data points. We see a consistentactivation firing patterns across all these data points for VRADA but not for R-DANN. Together withthe stronger performance on 3-4for AHRF and 2-5for ICD9, this potentially indicates that VRADAis better learning the temporal dependencies.Second, nuanced values are consistent across time-steps for the VRADA, exhibiting a gradualtransition towards stronger activation with time, whereas the temporal activation pattern of the R-DANN seems somewhat sporadic. While activation gradients across time are consistent for boththe R-DANN and VRADA, more consistent inhibitory and excitatory neuron firing patterns indicatethat the VRADA better transfers knowledge. Another indication of domain adaptation was shownin Figure 1c. Looking at the t-SNE projections of feature representations of DNN, R-DANN, andVRADA we can see that the addition of temporal latent dependencies might help in better mixingof the domain distributions since we observe that the data is more evenly spread out. Figure 1c andFigure 3 together indicate that the VRADA’s temporal latent dependency capturing power and abilityto create domain-invariant representations act synergistically. For plots of activation patterns withoutdomain adaptation, please see appendix section 6.2.3.5 S UMMARYBecause of its diverse range of patients and its episodic and longitudal nature, healthcare data providesa good platform to test domain adaptation techniques for temporal data. With it as our example, weshowcase the Variational Recurrent Adversarial Domain Adaptation (VRADA) model’s ability tolearn temporal latent representations that are domain-invariant. By comparing our model’s latentrepresentations to others’, we show its ability to use variational methods to capture hidden factors ofvariation and produce more robust domain-invariant representations. We hope this work serves as abedrock for future work capturing and adapting temporal latent representations across domains.ACKNOWLEDGMENTSThis material is based upon work supported by the NSF research grants IIS-1134990, IIS-1254206,Samsung GRO Grant and the NSF Graduate Research Fellowship Program under Grant No. DGE-1418060. Any opinions, findings, and conclusions or recommendations expressed in this materialare those of the author(s) and do not necessarily reflect the views of the funding agencies. We alsoacknowledge Thailand’s Development and Promotion of Science and Technology Talents Project forfinancial support. We thank Dr. Robinder Khemani for sharing the Child-AHRF dataset.8Published as a conference paper at ICLR 2017R-DANN0.511.522.533.544.52468101214161820-2-1.5-1-0.500.511.520.511.522.533.544.52468101214161820-3-2-10123VRADA0.511.522.533.544.52468101214161820-3-2-101230.511.522.533.544.52468101214161820-3-2-10123Source Target2 4 6 8 10 12510152025303540-4-3-2-1012342 4 6 8 10 12510152025303540 -3-2.5-2-1.5-1-0.500.511.522 4 6 8 10 12510152025303540-10-8-6-4-202468102 4 6 8 10 12510152025303540-10-8-6-4-20246810 Source TargetAHRF, 3-4 ICD9, 2-5Figure 3: Cell states of memory cell for R-DANN and VRADA showing temporal latent dependencies capturedby neurons of the R-DANN and VRADA for the source domain and transferred to the target domain. Each stepalong the y-axis refers to the activation of a single neuron with blue for strong inhibition and yellow for strongexcitation. Step along the x-axis refers to activation per time-step. 
The left shows a single example in adapting3-4 and the right for adapting 2-5.5010015020025030035040045050100150200250300350400450-8-6-4-202468105010015020025030035040045050100150200250300350400-10-8-6-4-202468105010015020025030035040045050100150200250300350400450-10-8-6-4-202468105010015020025030035040045050100150200250300350400 -10-8-6-4-20246810R-DANN VRADAFigure 4: Cell states of memory cell for R-DANN and VRADA showing activation for all ICD9 2-5 adaptationexamples. Here, we show temporal dependencies learned across time, feature pairs for examples in a domain.The y-axis values refer to values per data point and the x-axis shows activation at time, feature pairs with thetime and feature dimensions being flattened.REFERENCESBerhanu Alemayehu and Kenneth E Warner. The lifetime distribution of health care costs. Healthservices research , 39(3):627–642, 2004.S Ben-David, J Blitzer, and K Crammer. Analysis of representations for domain adaptation. Advancesin Neural . . . , pp. 137–144, 2007.Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer WortmanVaughan. A theory of learning from different domains. Machine learning , 79(1-2):151–175, 2010.John Blitzer. Domain adaptation of natural language processing systems . PhD thesis, University ofPennsylvania, 2007.Minmin Chen, Zhixiang Xu, Kilian Weinberger, and Fei Sha. Marginalized denoising autoencodersfor domain adaptation. arXiv preprint arXiv:1206.4683 , 2012.Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron Courville, and Yoshua Bengio.A Recurrent Latent Variable Model for Sequential Data. arXiv.org , May 2016.Basura Fernando, Amaury Habrard, Marc Sebban, and Tinne Tuytelaars. Unsupervised visual domainadaptation using subspace alignment. In Proceedings of the IEEE International Conference onComputer Vision , pp. 2960–2967, 2013.9Published as a conference paper at ICLR 2017George Foster, Cyril Goutte, and Roland Kuhn. Discriminative instance weighting for domainadaptation in statistical machine translation. In Proceedings of the 2010 Conference on EmpiricalMethods in Natural Language Processing , pp. 451–459. Association for Computational Linguistics,2010.Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. arXivpreprint arXiv:1409.7495 , 2014.Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Fran c ̧oisLaviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks.The Journal of Machine Learning Research , 17(1), 2016.Boqing Gong, Yuan Shi, Fei Sha, and Kristen Grauman. Geodesic flow kernel for unsuperviseddomain adaptation. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conferenceon, pp. 2066–2073. IEEE, 2012.Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning . Mit Press, December 2016.Sepp Hochreiter and J ̈urgen Schmidhuber. Long short-term memory. Neural computation , 9(8):1735–1780, 1997.Fei Huang and Alexander Yates. Distributional representations for handling sparsity in supervisedsequence-labeling. In Proceedings of the Joint Conference of the 47th Annual Meeting of theACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP:Volume 1-Volume 1 , pp. 495–503. Association for Computational Linguistics, 2009.Jing Jiang. A literature survey on domain adaptation of statistical classifiers. URL: http://sifaka. cs.uiuc. edu/jiang4/domainadaptation/survey , 2008.Jing Jiang and ChengXiang Zhai. 
Instance weighting for domain adaptation in nlp. In ACL, volume 7,pp. 264–271, 2007.AEW Johnson, TJ Pollard, L Shen, L Lehman, M Feng, M Ghassemi, B Moody, P Szolovits, LA Celi,and RG Mark. Mimic-iii, a freely accessible critical care database. Scientific Data , 2016.Robinder G Khemani, David Conti, Todd A Alonzo, Robert D Bart III, and Christopher JL Newth.Effect of tidal volume in children with acute hypoxemic respiratory failure. Intensive care medicine ,35(8):1428–1437, 2009.Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR ,abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980 .Diederik P Kingma and Max Welling. Auto-Encoding Variational Bayes. arXiv.org , December 2013.Zhiqiang Lao, Dinggang Shen, Zhong Xue, Bilge Karacali, Susan M Resnick, and Christos Da-vatzikos. Morphological classification of brains via high-dimensional shape transformations andmachine learning methods. Neuroimage , 21(1):46–57, 2004.Mingsheng Long and Jianmin Wang. Learning transferable features with deep adaptation networks.CoRR, abs/1502.02791 , 1:2, 2015.Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard Zemel. The variational fairauto encoder. arXiv preprint arXiv:1511.00830 , 2015.Sinno Jialin Pan and Qiang Yang. A Survey on Transfer Learning. IEEE Transactions on Knowledgeand Data Engineering , 22(10):1345–1359, 2009.Vishal M Patel, Raghuraman Gopalan, Ruonan Li, and Rama Chellappa. Visual domain adaptation:A survey of recent advances. IEEE signal processing magazine , 32(3):53–69, 2015.Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to newdomains. In European conference on computer vision , pp. 213–226. Springer, 2010.Meena Seshamani and Alastair M Gray. A longitudinal study of the effects of age and time to deathon hospital costs. Journal of health economics , 23(2):217–235, 2004.10Published as a conference paper at ICLR 2017Table 3: AUC Comparison for AHRF Mortality Prediction task for different types of VRADA trainingTraining 23 24 25 32 34 35 42 43 45 52 53 54I 0.704 0.777 0.682 0.540 0.764 0.721 0.603 0.727 0.710 0.616 0.782 0.738II 0.724 0.656 0.719 0.627 0.748 0.683 0.656 0.770 0.755 0.595 0.736 0.732III 0.721 0.688 0.656 0.654 0.757 0.691 0.609 0.766 0.775 0.602 0.709 0.714Richard Socher, Cliff C Lin, Chris Manning, and Andrew Y Ng. Parsing natural scenes and naturallanguage with recursive neural networks. In Proceedings of the 28th international conference onmachine learning (ICML-11) , pp. 129–136, 2011.Eric Tzeng, Judy Hoffman, Trevor Darrell, and Kate Saenko. Simultaneous deep transfer acrossdomains and tasks. In Proceedings of the IEEE International Conference on Computer Vision , pp.4068–4076, 2015.Min Xiao and Yuhong Guo. Domain adaptation for sequence labeling tasks with a probabilisticlanguage adaptation model. In ICML (1) , pp. 293–301, 2013.Yi Yang and Jacob Eisenstein. Unsupervised multi-domain adaptation with feature embeddings.6 A PPENDIX6.1 T RAINING VARIATIONSWe tested 3 variations of training VRADA: (a) training VRADA regularly as discussed in Section 3(denoted by I), (b) loading a pretrained VRNN encoder and optimizing strictly off the classificationerrors, i.e.E(e;y;d) =1nnXi=1Ly(xi;y)(1nnXi=1Ld(xi;d) +1n0NXi=n+1Ld(xi;d))) (8)and (c) loading a pretrained VRNN encoder and using the objective as presented in equation 3(denoted by III). Key to note is that in method II, we do not apply variational methods towardslearning the shared latent representation. 
This was done to test whether they were helpful or harmfultowards the learned latent representation used for classification. In method III, we train VRADA asnormal but load a pretrained encoder. We pretrain the encoder by training the VRNN on all sourceand target domain samples for a desired source-target adaptation pair. In order to choose how manysamples would be used for training, we looked at which domain had more examples and chose thelarger of the two. For example, if the source domain was group 2 with 508 patients and the targetdomain was group 5 with 437 patients, the VRNN would see 508 samples of each domain, with group5 being sampled with replacement after seeing all its samples. As the encoder was used for learninglatent representations, we thought it worth investigating whether if pretrained it better captured thelatent representations that were being used by the domain classifier for adversarial training. Wethought beginning domain classification at a better initialization point might help VRADA avoidlocal minima. For each method, we fed one source domain sample to Gyand either a source or targetdomain sample to Gd. (For this training and all training samples, order was randomized.) We onlycalculated the loss Lronce for the Gdsamples so as to not bias the optimization of the VRNN.Table 3 shows the results of AHRF Mortality Prediction task for different types of VRADA training.From these experiments, we found that jointly training VRADA (i.e method I) usually performedbetter than the other pretrained training approaches.6.2 M ODEL VARIATIONS6.2.1 A DVERSARIAL TRAINING AT EVERY TIME -STEPA natural question is whether adversarial training at every time-step is more effective than adversarialtraining at the last time-step of a latent representation. If done at every time-step, the network learns11Published as a conference paper at ICLR 2017to create domain-invariant representations of subsets of your input xT. Do these domain-invariantrepresentations help the network find more optimal domain-invariant representations of x? Weempirically tested this scenario (Table 4) and found the results to be sub-optimal when compared toonly performing adversarial training at the last time-step (Table 1). Below are results for the R-DANNand VRADA models for adversarial training at every time-step.Table 4: AUC Comparison for AHRF Mortality Prediction task with adversarial training done at every time-stepModel 23 24 25 32 34 35 42 43 45 52 53 54R-DANN .651 .599 .598 .557 .679 .534 .563 .768 .588 .528 .696 .669VRADA .681 .691 .643 .594 .733 .641 .733 .794 .675 .583 .755 .7266.2.2 E FFECT OF RECONSTRUCTION LOSSTable 5 shows the effect of reconstruction loss for our VRADA model. We observe that reconstructingthe original data (i.e. using the decoder for reconstructing the data) helps in the overall performanceimprovement of our VRADA model.Table 5: AUC Comparison of VRADA model for AHRF Mortality Prediction task with and without reconstruc-tion lossModel 23 24 25 32 34 35 42 43 45 52 53 54Without reconstruction 0.703 0.623 0.570 0.647 0.622 0.564 0.577 0.608 0.552 0.599 0.640 0.676With reconstruction 0.724 0.777 0.719 0.654 0.764 0.721 0.656 0.770 0.775 0.616 0.782 0.7386.2.3 I MPACT OF ADVERSARIAL TRAININGIn figures 5 and 6 we show the cell state activations for the VRADA and R-DANN without domainadaptation (i.e. no adversarial training). 
From these figures, we see that the dependencies betweensource and target domains are not transferred correctly since we do not perform adversarial training.On the otherhand, as discussed in section 4.4, figure 3 shows that adversarial training helps intransferring the dependencies between source and target domains efficiently.6.3 R-DANN MODEL INFORMATIONHere we provide more details on the network architectures of the R-DANN and DANN. Please referto Figure 7 for a diagram of the R-DANN model showing the dimensions of each layer and theconnections between layers. The R-DANN and DANN were essentially identical except that, for theDANN, the first layer used a fully-connected layer instead of an RNN and took input flattened overthe time-dimension. Thus the input dimensions corresponded to fandtffor the R-DANN andDANN, respectively, where fis the number of features and tis the length of the time-dimension.12Published as a conference paper at ICLR 2017R-DANN0.5 11.5 22.5 33.5 44.5510152025300.5 11.5 22.5 33.5 44.551015202530VRADA0.5 11.5 22.5 33.5 44.5510152025300.5 11.5 22.5 33.5 44.551015202530Source TargetFigure 5: Cell states of memory cell for R-DANN and VRADA showing temporal latent dependencies capturedby neurons of the R-DANN and VRADA for the source domain and the target domain. Each step along they-axis refers to the activation of a single neuron with blue for strong inhibition and yellow for strong excitation.Step along the x-axis refers to activation per time-step. The figure shows a single example in adapting 3-4 forAHRF dataset.13Published as a conference paper at ICLR 2017R-DANN2 4 6 8 10 1251015202530354045502 4 6 8 10 125101520253035404550VRADA2 4 6 8 10 1251015202530354045502 4 6 8 10 125101520253035404550Source TargetFigure 6: Cell states of memory cell for R-DANN and VRADA showing temporal latent dependencies capturedby neurons of the R-DANN and VRADA for the source domain and the target domain. Each step along they-axis refers to the activation of a single neuron with blue for strong inhibition and yellow for strong excitation.Step along the x-axis refers to activation per time-step. The figure shows a single example in adapting 2-5 forICD9 dataset.14Published as a conference paper at ICLR 2017RNNoutput:100input:FFullyconnectedoutput:100input:100Fullyconnectedoutput:100input:100Fullyconnectedoutput:100input:100Fullyconnectedoutput:50input:100Fullyconnectedoutput:50input:50Fullyconnectedoutput:50input:50Fullyconnectedoutput:1input:50Fullyconnectedoutput:50input:100Fullyconnectedoutput:50input:50Fullyconnectedoutput:50input:50Fullyconnectedoutput:1input:50Figure 7: Block diagram of the R-DANN showing the number of neurons used in each layer and how the layerswere connected. This model had a capacity of about 46;000parameters.15
SJ3rcZcxl
Published as a conference paper at ICLR 2017

Q-PROP: SAMPLE-EFFICIENT POLICY GRADIENT WITH AN OFF-POLICY CRITIC

Shixiang Gu^{1,2,3}, Timothy Lillicrap^{4}, Zoubin Ghahramani^{1,6}, Richard E. Turner^{1}, Sergey Levine^{3,5}
sg717@cam.ac.uk, countzero@google.com, zoubin@eng.cam.ac.uk, ret26@cam.ac.uk, svlevine@eecs.berkeley.edu
1 University of Cambridge, UK; 2 Max Planck Institute for Intelligent Systems, Tübingen, Germany; 3 Google Brain, USA; 4 DeepMind, UK; 5 UC Berkeley, USA; 6 Uber AI Labs, USA

ABSTRACT

Model-free deep reinforcement learning (RL) methods have been successful in a wide variety of simulated domains. However, a major obstacle facing deep RL in the real world is its high sample complexity. Batch policy gradient methods offer stable learning, but at the cost of high variance, which often requires large batches. TD-style methods, such as off-policy actor-critic and Q-learning, are more sample-efficient but biased, and often require costly hyperparameter sweeps to stabilize. In this work, we aim to develop methods that combine the stability of policy gradients with the efficiency of off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor expansion of the off-policy critic as a control variate. Q-Prop is both sample-efficient and stable, and effectively combines the benefits of on-policy and off-policy methods. We analyze the connection between Q-Prop and existing model-free algorithms, and use control variate theory to derive two variants of Q-Prop with conservative and aggressive adaptation. We show that conservative Q-Prop provides substantial gains in sample efficiency over trust region policy optimization (TRPO) with generalized advantage estimation (GAE), and improves stability over deep deterministic policy gradient (DDPG), the state-of-the-art on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control environments.

1 INTRODUCTION

Model-free reinforcement learning is a promising approach for solving arbitrary goal-directed sequential decision-making problems with only high-level reward signals and no supervision. It has recently been extended to utilize large neural network policies and value functions, and has been shown to be successful in solving a range of difficult problems (Mnih et al., 2015; Schulman et al., 2015; Lillicrap et al., 2016; Silver et al., 2016; Gu et al., 2016b; Mnih et al., 2016). Deep neural network parametrization minimizes the need for manual feature and policy engineering, and allows learning end-to-end policies mapping from high-dimensional inputs, such as images, directly to actions. However, such expressive parametrization also introduces a number of practical problems. Deep reinforcement learning algorithms tend to be sensitive to hyperparameter settings, often requiring extensive hyperparameter sweeps to find good values. Poor hyperparameter settings tend to produce unstable or non-convergent learning. Deep RL algorithms also tend to exhibit high sample complexity, often to the point of being impractical to run on real physical systems. Although a number of recent techniques have sought to alleviate some of these issues (Hasselt, 2010; Mnih et al., 2015; Schulman et al., 2015; 2016), these advances still provide only a partial solution to the instability and sample complexity challenges.

Model-free reinforcement learning consists of on- and off-policy methods.
Monte Carlo policy gradient methods (Peters & Schaal, 2006; Schulman et al., 2015) are popular on-policy methods that directly maximize the cumulative future returns with respect to the policy. While these algorithms can offer unbiased (or nearly unbiased, as discussed in Section 2.1) estimates of the gradient, they rely on Monte Carlo estimation and often suffer from high variance. To cope with high-variance gradient estimates and difficult optimization landscapes, a number of techniques have been proposed, including constraining the change in the policy at each gradient step (Kakade, 2001; Peters et al., 2010) and mixing value-based back-ups to trade off bias and variance in Monte Carlo return estimates (Schulman et al., 2015). However, these methods all tend to require very large numbers of samples to deal with the high variance when estimating gradients of high-dimensional neural network policies. The crux of the problem with policy gradient methods is that they can only effectively use on-policy samples, which means that they require collecting large amounts of on-policy experience after each parameter update to the policy. This makes them very sample-intensive. Off-policy methods, such as Q-learning (Watkins & Dayan, 1992; Sutton et al., 1999; Mnih et al., 2015; Gu et al., 2016b) and off-policy actor-critic methods (Lever, 2014; Lillicrap et al., 2016), can instead use all samples, including off-policy samples, by adopting temporal difference learning with experience replay. Such methods are much more sample-efficient. However, convergence of these algorithms is in general not guaranteed with non-linear function approximators, and practical convergence and instability issues typically mean that extensive hyperparameter tuning is required to attain good results.

In order to make deep reinforcement learning practical as a tool for tackling real-world tasks, we must develop methods that are both data-efficient and stable. In this paper, we propose Q-Prop, a step in this direction that combines the advantages of on-policy policy gradient methods with the efficiency of off-policy learning. Unlike prior approaches for off-policy learning, which either introduce bias (Sutton et al., 1999; Silver et al., 2014) or increase variance (Precup, 2000; Levine & Koltun, 2013; Munos et al., 2016), Q-Prop can reduce the variance of the gradient estimator without adding bias; unlike prior approaches for critic-based variance reduction (Schulman et al., 2016), which fit the value function on-policy, Q-Prop learns the action-value function off-policy. The core idea is to use the first-order Taylor expansion of the critic as a control variate, resulting in an analytical gradient term through the critic and a Monte Carlo policy gradient term consisting of the residuals in advantage approximations. The method helps unify policy gradient and actor-critic methods: it can be seen as using the off-policy critic to reduce variance in policy gradient, or as using on-policy Monte Carlo returns to correct for bias in the critic gradient. We further provide theoretical analysis of the control variate, and derive two additional variants of Q-Prop. The method can be easily incorporated into any policy gradient algorithm.
We show that Q-Prop provides substantial gains in sample efficiency over trust region policy optimization (TRPO) with generalized advantage estimation (GAE) (Schulman et al., 2015; 2016), and improved stability over deep deterministic policy gradient (DDPG) (Lillicrap et al., 2016) across a repertoire of continuous control tasks.

2 BACKGROUND

Reinforcement learning (RL) aims to learn a policy for an agent such that it behaves optimally according to a reward function. At a time step $t$ and state $s_t$, the agent chooses an action $a_t$ according to its policy $\pi(a_t|s_t)$, the state of the agent and the environment changes to a new state $s_{t+1}$ according to the dynamics $p(s_{t+1}|s_t, a_t)$, the agent receives a reward $r(s_t, a_t)$, and the process continues. Let $R_t$ denote the $\gamma$-discounted cumulative return from $t$ for an infinite-horizon problem, i.e., $R_t = \sum_{t'=t}^{\infty} \gamma^{t'-t} r(s_{t'}, a_{t'})$. The goal of reinforcement learning is to maximize the expected return $J(\theta) = \mathbb{E}_{\pi_\theta}[R_0]$ with respect to the policy parameters $\theta$. In this section, we review several standard techniques for performing this optimization, and in the next section, we discuss our proposed Q-Prop algorithm, which combines the strengths of these approaches to achieve efficient, stable RL.

Monte Carlo policy gradient refers to policy gradient methods that use full Monte Carlo returns, e.g., REINFORCE (Williams, 1992) and TRPO (Schulman et al., 2015); policy gradient with function approximation refers to actor-critic methods (Sutton et al., 1999), which optimize the policy against a critic, e.g., deterministic policy gradient (Silver et al., 2014; Lillicrap et al., 2016).
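As a small worked example of the return definition above, here is a numpy sketch (ours, not from the paper) computing $\gamma$-discounted returns $R_t$ for a finite trajectory, i.e., a truncation of the infinite-horizon sum:

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """Compute R_t = sum_{t'>=t} gamma^(t'-t) * r_{t'} by a backward sweep."""
    R = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        R[t] = running
    return R

print(discounted_returns([1.0, 1.0, 1.0], gamma=0.9))  # [2.71, 1.9, 1.0]
```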
Using Ap(st;at)centers the learning signal andreduces variance significantly.Besides high variance, another problem with the policy gradient is that it requires on-policy samples.This makes policy gradient optimization very sample intensive. To achieve similar sample efficiencyas off-policy methods, we can attempt to include off-policy data. Prior attempts use importancesampling to include off-policy trajectories; however, these are known to be difficult scale to high-dimensional action spaces because of rapidly degenerating importance weights (Precup, 2000).2.2 P OLICY GRADIENT WITH FUNCTION APPROXIMATIONPolicy gradient methods with function approximation (Sutton et al., 1999), or actor-critic methods,include a policy evaluation step, which often uses temporal difference (TD) learning to fit a criticQwfor the current policy p(q), and a policy improvement step which greedily optimizes the policypagainst the critic estimate Qw. Significant gains in sample efficiency may be achievable using off-policy TD learning for the critic, as in Q-learning and deterministic policy gradient (Sutton, 1990;Silver et al., 2014), typically by means of experience replay for training deep Q networks (Mnihet al., 2015; Lillicrap et al., 2016; Gu et al., 2016b).One particularly relevant example of such a method is the deep deterministic policy gradient(DDPG) (Silver et al., 2014; Lillicrap et al., 2016). The updates for this method are given below,where pq(atjst) =d(at=q(st))is a deterministic policy, bis arbitrary exploration distribution,andrbcorresponds to sampling from a replay buffer. Q(;)is the target network that slowly tracksQw(Lillicrap et al., 2016).w=argminwEstrb();atb(jst)[(r(st;at)+gQ(st+1;q(st+1))Qw(st;at))2]q=argmaxqEstrb()[Qw(st;q(st))](4)When the critic and policy are parametrized with neural networks, full optimization is expensive,and instead stochastic gradient optimization is used. The gradient in the policy improvement phaseis given below, which is generally a biased gradient of J(q).ÑqJ(q)Estrb()[ÑaQw(st;a)ja=q(st)Ñqq(st)] (5)3Published as a conference paper at ICLR 2017The crucial benefits of DDPG are that it does not rely on high variance REINFORCE gradients and istrainable on off-policy data. These properties make DDPG and other analogous off-policy methodssignificantly more sample-efficient than policy gradient methods (Lillicrap et al., 2016; Gu et al.,2016b; Duan et al., 2016). However, the use of a biased policy gradient estimator makes analyzingits convergence and stability properties difficult.3 Q-P ROPIn this section, we derive the Q-Prop estimator for policy gradient. The key idea from this estimatorcomes from observing Equations 2 and 5 and noting that the former provides an almost unbiased(see Section 2.1), but high variance gradient, while the latter provides a deterministic, but biasedgradient. By using the deterministic biased estimator as a particular form of control variate (Ross,2006; Paisley et al., 2012) for the Monte Carlo policy gradient estimator, we can effectively use bothtypes of gradient information to construct a new estimator that in practice exhibits improved sampleefficiency through the inclusion of off-policy samples while preserving the stability of on-policyMonte Carlo policy gradient.3.1 Q-P ROP ESTIMATORTo derive the Q-Prop gradient estimator, we start by using the first-order Taylor expansion of anarbitrary function f(st;at), ̄f(st;at) =f(st; ̄at)+Ñaf(st;a)ja= ̄at(at ̄at)as the control vari-ate for the policy gradient estimator. 
We use $\hat{Q}(s_t,a_t)=\sum_{t'=t}^{\infty}\gamma^{t'-t}r(s_{t'},a_{t'})$ to denote the Monte Carlo return from state $s_t$ and action $a_t$, i.e. $\mathbb{E}_\pi[\hat{Q}(s_t,a_t)]=r(s_t,a_t)+\gamma\mathbb{E}_\pi[V_\pi(s_{t+1})]$, and $\mu_\theta(s_t)=\mathbb{E}_{\pi_\theta(a_t|s_t)}[a_t]$ to denote the expected action of a stochastic policy $\pi_\theta$. The full derivation is in Appendix A.

$\nabla_\theta J(\theta)=\mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta\log\pi_\theta(a_t|s_t)(\hat{Q}(s_t,a_t)-\bar{f}(s_t,a_t))]+\mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta\log\pi_\theta(a_t|s_t)\bar{f}(s_t,a_t)]$
$\phantom{\nabla_\theta J(\theta)}=\mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta\log\pi_\theta(a_t|s_t)(\hat{Q}(s_t,a_t)-\bar{f}(s_t,a_t))]+\mathbb{E}_{\rho_\pi}[\nabla_a f(s_t,a)|_{a=\bar{a}_t}\nabla_\theta\mu_\theta(s_t)]$   (6)

Eq. 6 holds for an arbitrary function $f(s_t,a_t)$ that is differentiable with respect to $a_t$ at an arbitrary value $\bar{a}_t$; however, a sensible choice is to use the critic $Q_w$ for $f$ and $\mu_\theta(s_t)$ for $\bar{a}_t$ to get

$\nabla_\theta J(\theta)=\mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta\log\pi_\theta(a_t|s_t)(\hat{Q}(s_t,a_t)-\bar{Q}_w(s_t,a_t))]+\mathbb{E}_{\rho_\pi}[\nabla_a Q_w(s_t,a)|_{a=\mu_\theta(s_t)}\nabla_\theta\mu_\theta(s_t)]$.   (7)

Finally, since in practice we estimate advantages $\hat{A}(s_t,a_t)$, we write the Q-Prop estimator in terms of advantages to complete the basic derivation:

$\nabla_\theta J(\theta)=\mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta\log\pi_\theta(a_t|s_t)(\hat{A}(s_t,a_t)-\bar{A}(s_t,a_t))]+\mathbb{E}_{\rho_\pi}[\nabla_a Q_w(s_t,a)|_{a=\mu_\theta(s_t)}\nabla_\theta\mu_\theta(s_t)]$
$\bar{A}(s_t,a_t)=\bar{Q}(s_t,a_t)-\mathbb{E}_{\pi_\theta}[\bar{Q}(s_t,a_t)]=\nabla_a Q_w(s_t,a)|_{a=\mu_\theta(s_t)}(a_t-\mu_\theta(s_t))$.   (8)

Eq. 8 is composed of an analytic gradient through the critic as in Eq. 5 and a residual REINFORCE gradient in Eq. 2. From the above derivation, Q-Prop is simply a Monte Carlo policy gradient estimator with a special form of control variate. The important insight comes from the fact that $Q_w$ can be trained using off-policy data as in Eq. 4. Under this setting, Q-Prop is no longer just a Monte Carlo policy gradient method, but more closely resembles an actor-critic method, where the critic can be updated off-policy but the actor is always updated on-policy with an additional REINFORCE correction term so that it remains a Monte Carlo policy gradient method regardless of the parametrization, training method, and performance of the critic. Therefore, Q-Prop can be directly combined with a number of prior techniques from both on-policy methods such as natural policy gradient (Kakade, 2001), trust-region policy optimization (TRPO) (Schulman et al., 2015) and generalized advantage estimation (GAE) (Schulman et al., 2016), and off-policy methods such as DDPG (Lillicrap et al., 2016) and Retrace($\lambda$) (Munos et al., 2016).

Intuitively, if the critic $Q_w$ approximates $Q_\pi$ well, it provides a reliable gradient, reduces the estimator variance, and improves the convergence rate. Interestingly, control variate analysis in the next section shows that this is not the only circumstance where Q-Prop helps reduce variance.

3.2 CONTROL VARIATE ANALYSIS AND ADAPTIVE Q-PROP

For Q-Prop to be applied reliably, it is crucial to analyze how the variance of the estimator changes before and after the application of the control variate. Following prior work on control variates (Ross, 2006; Paisley et al., 2012), we first introduce $\eta(s_t)$ to Eq. 8, a weighing variable that modulates the strength of the control variate. This additional variable $\eta(s_t)$ does not introduce bias to the estimator:

$\nabla_\theta J(\theta)=\mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta\log\pi_\theta(a_t|s_t)(\hat{A}(s_t,a_t)-\eta(s_t)\bar{A}_w(s_t,a_t))]+\mathbb{E}_{\rho_\pi}[\eta(s_t)\nabla_a Q_w(s_t,a)|_{a=\mu_\theta(s_t)}\nabla_\theta\mu_\theta(s_t)]$   (9)

The variance of this estimator is given below, where $m=1,\dots,M$ indexes the dimension of $\theta$:

$\mathrm{Var}^*=\mathbb{E}_{\rho_\pi}\big[\sum_m \mathrm{Var}_{a_t}\big(\nabla_{\theta_m}\log\pi_\theta(a_t|s_t)(\hat{A}(s_t,a_t)-\eta(s_t)\bar{A}(s_t,a_t))\big)\big]$.   (10)

If we choose $\eta(s_t)$ such that $\mathrm{Var}^*<\mathrm{Var}$, where $\mathrm{Var}=\mathbb{E}_{\rho_\pi}[\sum_m \mathrm{Var}_{a_t}(\nabla_{\theta_m}\log\pi_\theta(a_t|s_t)\hat{A}(s_t,a_t))]$ is the original estimator variance measure, then we have managed to reduce the variance. Directly analyzing the above variance measure is nontrivial, for the same reason that computing the optimal baseline is difficult (Weaver & Tao, 2001).
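The following is a minimal sketch of the Q-Prop estimator in Eq. 8 with the weighting variable $\eta(s_t)$ of Eq. 9; the callables `score_fn` and `dQda_fn` are assumptions standing in for a policy score function (e.g. the `gaussian_score` helper from the earlier sketch) and for $\nabla_a Q_w$ from a critic, and the conservative $\eta$ rule anticipates Section 3.2.

```python
# Minimal sketch of the Q-Prop gradient estimator (Eqs. 8-9).
import numpy as np

def qprop_gradient(theta, log_std, score_fn, dQda_fn, batch, eta_fn):
    """batch: iterable of (s, a, A_hat) with A_hat the estimated advantage."""
    grad, n = np.zeros_like(theta), 0
    for s, a, A_hat in batch:
        mu = theta.T @ s
        gq = dQda_fn(s, mu)                      # grad_a Q_w(s, a)|_{a = mu(s)}
        A_bar = gq @ (a - mu)                    # first-order Taylor advantage
        eta = eta_fn(s, A_hat, A_bar)            # weighting variable eta(s_t)
        # Residual REINFORCE term on the signal A_hat - eta * A_bar ...
        grad += score_fn(theta, log_std, s, a) * (A_hat - eta * A_bar)
        # ... plus the analytic gradient through the critic (as in Eq. 5).
        grad += eta * np.outer(s, gq)            # grad_theta mu(s) = s here
        n += 1
    return grad / n

def conservative_eta(s, A_hat, A_bar):
    # Section 3.2: enable the control variate only when the single-sample
    # covariance estimate A_hat * A_bar is positive; sign(A_hat * A_bar)
    # would give the aggressive variant instead.
    return 1.0 if A_hat * A_bar > 0 else 0.0
```

Algorithm 1 additionally centers the learning signals $\hat{A}-\eta\bar{A}$ across the batch before the policy update; that step is omitted here for brevity.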
In addition, it is often impractical to get multiple action samples from the same state, which prohibits using naive Monte Carlo to estimate the expectations. Instead, we propose a surrogate variance measure, $\mathrm{Var}=\mathbb{E}_{\rho_\pi}[\mathrm{Var}_{a_t}(\hat{A}(s_t,a_t))]$. A similar surrogate is also used by prior work on learning a state-dependent baseline (Mnih & Gregor, 2014), and the benefit is that the measure becomes more tractable:

$\mathrm{Var}^*=\mathbb{E}_{\rho_\pi}[\mathrm{Var}_{a_t}(\hat{A}(s_t,a_t)-\eta(s_t)\bar{A}(s_t,a_t))]$
$\phantom{\mathrm{Var}^*}=\mathrm{Var}+\mathbb{E}_{\rho_\pi}[-2\eta(s_t)\,\mathrm{Cov}_{a_t}(\hat{A}(s_t,a_t),\bar{A}(s_t,a_t))+\eta(s_t)^2\,\mathrm{Var}_{a_t}(\bar{A}(s_t,a_t))]$.   (11)

Since $\mathbb{E}_\pi[\hat{A}(s_t,a_t)]=\mathbb{E}_\pi[\bar{A}(s_t,a_t)]=0$, the terms can be simplified as below:

$\mathrm{Cov}_{a_t}(\hat{A},\bar{A})=\mathbb{E}_\pi[\hat{A}(s_t,a_t)\bar{A}(s_t,a_t)]$
$\mathrm{Var}_{a_t}(\bar{A})=\mathbb{E}_\pi[\bar{A}(s_t,a_t)^2]=\nabla_a Q_w(s_t,a)|^T_{a=\mu_\theta(s_t)}\,\Sigma_\theta(s_t)\,\nabla_a Q_w(s_t,a)|_{a=\mu_\theta(s_t)}$,   (12)

where $\Sigma_\theta(s_t)$ is the covariance matrix of the stochastic policy $\pi_\theta$. The nice property of Eq. 11 is that $\mathrm{Var}_{a_t}(\bar{A})$ is analytical and $\mathrm{Cov}_{a_t}(\hat{A},\bar{A})$ can be estimated with a single action sample. Using this estimate, we propose adaptive variants of Q-Prop that regulate the variance of the gradient estimate.

Adaptive Q-Prop. The optimal state-dependent factor $\eta(s_t)$ can be computed per state, according to $\eta^*(s_t)=\mathrm{Cov}_{a_t}(\hat{A},\bar{A})/\mathrm{Var}_{a_t}(\bar{A})$. This provides maximum reduction in variance according to Eq. 11. Substituting $\eta^*(s_t)$ into Eq. 11, we get $\mathrm{Var}^*=\mathbb{E}_{\rho_\pi}[(1-\rho_{\mathrm{corr}}(\hat{A},\bar{A})^2)\,\mathrm{Var}_{a_t}(\hat{A})]$, where $\rho_{\mathrm{corr}}$ is the correlation coefficient, which achieves guaranteed variance reduction if at any state $\bar{A}$ is correlated with $\hat{A}$. We call this the fully adaptive Q-Prop method. An important conclusion from this analysis is that, in adaptive Q-Prop, the critic $Q_w$ does not necessarily need to approximate $Q_\pi$ well to produce good results. Its Taylor expansion merely needs to be correlated with $\hat{A}$, positively or even negatively. This is in contrast with actor-critic methods, where performance is greatly dependent on the absolute accuracy of the critic's approximation.

Conservative and Aggressive Q-Prop. In practice, the single-sample estimate of $\mathrm{Cov}_{a_t}(\hat{A},\bar{A})$ has high variance itself, and we propose the following two practical implementations of adaptive Q-Prop: (1) $\eta(s_t)=1$ if $\hat{\mathrm{Cov}}_{a_t}(\hat{A},\bar{A})>0$ and $\eta(s_t)=0$ otherwise, and (2) $\eta(s_t)=\mathrm{sign}(\hat{\mathrm{Cov}}_{a_t}(\hat{A},\bar{A}))$. The first implementation, which we call conservative Q-Prop, can be thought of as a more conservative version of Q-Prop, which effectively disables the control variate for some samples of the states. This is sensible, as if $\hat{A}$ and $\bar{A}$ are negatively correlated, it is likely that the critic is very poor. The second variant can correspondingly be termed aggressive Q-Prop, since it makes more liberal use of the control variate.

3.3 Q-PROP ALGORITHM

Pseudo-code for the adaptive Q-Prop algorithm is provided in Algorithm 1. It is a mixture of policy gradient and actor-critic. At each iteration, it first rolls out the stochastic policy to collect on-policy samples, adds the batch to a replay buffer, takes a few gradient steps on the critic, computes $\hat{A}$ and $\bar{A}$, and finally applies a gradient step on the policy $\pi_\theta$.

Algorithm 1 Adaptive Q-Prop
1: Initialize $w$ for critic $Q_w$, $\theta$ for stochastic policy $\pi_\theta$, and replay buffer $\mathcal{R}\leftarrow\emptyset$.
2: repeat
3:   for $e=1,\dots,E$ do    (collect $E$ episodes of on-policy experience using $\pi_\theta$)
4:     $s_{0,e}\sim p(s_0)$
5:     for $t=0,\dots,T-1$ do
6:       $a_{t,e}\sim\pi_\theta(\cdot|s_{t,e})$,  $s_{t+1,e}\sim p(\cdot|s_{t,e},a_{t,e})$,  $r_{t,e}=r(s_{t,e},a_{t,e})$
7:   Add batch data $B=\{s_{0:T,1:E},a_{0:T-1,1:E},r_{0:T-1,1:E}\}$ to replay buffer $\mathcal{R}$
8:   Take $E\cdot T$ gradient steps on $Q_w$ using $\mathcal{R}$ and $\pi_\theta$
9:   Fit $V_\phi(s_t)$ using $B$
10:  Compute $\hat{A}_{t,e}$ using GAE($\lambda$) and $\bar{A}_{t,e}$ using Eq. 7
11:  Set $\eta_{t,e}$ based on Section 3.2
12:  Compute and center the learning signals $l_{t,e}=\hat{A}_{t,e}-\eta_{t,e}\bar{A}_{t,e}$
13:  Compute $\nabla_\theta J(\theta)\approx\frac{1}{ET}\sum_e\sum_t\nabla_\theta\log\pi_\theta(a_{t,e}|s_{t,e})\,l_{t,e}+\eta_{t,e}\nabla_a Q_w(s_{t,e},a)|_{a=\mu_\theta(s_{t,e})}\nabla_\theta\mu_\theta(s_{t,e})$
14:  Take a gradient step on $\pi_\theta$ using $\nabla_\theta J(\theta)$, optionally with a trust-region constraint using $B$
15: until $\pi_\theta$ converges.

In our implementation, the critic $Q_w$ is fitted with off-policy TD learning using the same techniques as in DDPG (Lillicrap et al., 2016):

$w=\arg\min_w \mathbb{E}_{s_t\sim\rho_\beta(\cdot),\,a_t\sim\beta(\cdot|s_t)}[(r(s_t,a_t)+\gamma\mathbb{E}_\pi[Q'(s_{t+1},a_{t+1})]-Q_w(s_t,a_t))^2]$.   (13)

$V_\phi$ is fitted with the same technique as in Schulman et al. (2016). Generalized advantage estimation (GAE) (Schulman et al., 2016) is used to estimate $\hat{A}$. The policy update can be done by any method that utilizes the first-order gradient and possibly the on-policy batch data, which includes trust region policy optimization (TRPO) (Schulman et al., 2015). Importantly, this is just one possible implementation of Q-Prop, and in Appendix C we show a more general form that can interpolate between pure policy gradient and off-policy actor-critic.

3.4 LIMITATIONS

A limitation of Q-Prop is that if data collection is very fast, e.g. using fast simulators, the compute time per episode is bound by the critic training at each iteration, similar to that of DDPG and usually much more than that of TRPO. However, in applications where data collection speed is the bottleneck, there is sufficient time between policy updates to fit $Q_w$ well, which can be done asynchronously from the data collection, and the compute time of Q-Prop will be about the same as that of TRPO.

Another limitation is robustness to bad critics. We empirically show that our conservative Q-Prop is more robust than standard Q-Prop and much more robust than pure off-policy actor-critic methods such as DDPG; however, estimating when an off-policy critic is reliable or not is still a fundamental problem that should be investigated further. We can also alleviate this limitation by adopting more stable off-policy critic learning techniques such as Retrace($\lambda$) (Munos et al., 2016).

4 RELATED WORK

Variance reduction in policy gradient methods is a long-standing problem with a large body of prior work (Weaver & Tao, 2001; Greensmith et al., 2004; Schulman et al., 2016). However, exploration of action-dependent control variates is relatively recent, with most work focusing instead on simpler baselining techniques (Ross, 2006). A subtle exception is compatible feature approximation (Sutton et al., 1999), which can be viewed as a control variate as explained in Appendix B. Another exception is the doubly robust estimator in contextual bandits (Dudík et al., 2011), which uses a different control variate whose bias cannot be tractably corrected. Control variates were explored recently not in RL but for approximate inference in stochastic models (Paisley et al., 2012), and the closest related work in that domain is the MuProp algorithm (Gu et al., 2016a), which uses a mean-field network as a surrogate for backpropagating a deterministic gradient through stochastic discrete variables. MuProp is not directly applicable to model-free RL because the dynamics are unknown; however, it can be if the dynamics are learned, as in model-based RL (Atkeson & Santamaria, 1997; Deisenroth & Rasmussen, 2011).
This model-based Q-Prop is itself an interesting direction of research, as it effectively corrects bias in model-based learning.

Part of the benefit of Q-Prop is the ability to use off-policy data to improve on-policy policy gradient methods. Prior methods that combine off-policy data with policy gradients either introduce bias (Sutton et al., 1999; Silver et al., 2014) or use importance weighting, which is known to result in degenerate importance weights in high dimensions, resulting in very high variance (Precup, 2000; Levine & Koltun, 2013). Q-Prop provides a new approach for using off-policy data to reduce variance without introducing further bias.

Lastly, since Q-Prop uses both on-policy policy updates and off-policy critic learning, it can take advantage of prior work along both lines of research. We chose to implement Q-Prop on top of TRPO-GAE primarily for the purpose of enabling a fair comparison in the experiments, but combining Q-Prop with other on-policy update schemes and off-policy critic training methods is an interesting direction for future work. For example, Q-Prop can also be used with other on-policy policy gradient methods such as A3C (Mnih et al., 2016) and off-policy advantage estimation methods such as Retrace($\lambda$) (Munos et al., 2016), GTD2 (Sutton et al., 2009), emphatic TD (Sutton et al., 2015), and WIS-LSTD (Mahmood et al., 2014).

5 EXPERIMENTS

Figure 1: Illustrations of OpenAI Gym MuJoCo domains (Brockman et al., 2016; Duan et al., 2016): (a) Ant, (b) HalfCheetah, (c) Hopper, (d) Humanoid, (e) Reacher, (f) Swimmer, (g) Walker.

We evaluated Q-Prop and its variants on continuous control environments from the OpenAI Gym benchmark (Brockman et al., 2016) using the MuJoCo physics simulator (Todorov et al., 2012), as shown in Figure 1. Algorithms are identified by acronyms, followed by a number indicating batch size, except for DDPG, which is a prior online actor-critic algorithm (Lillicrap et al., 2016). "c-" and "a-" denote conservative and aggressive Q-Prop variants as described in Section 3.2. "TR-" denotes trust-region policy optimization (Schulman et al., 2015), while "V-" denotes vanilla policy gradient. For example, "TR-c-Q-Prop-5000" means conservative Q-Prop with the trust-region policy update, and a batch size of 5000. "VPG" and "TRPO" are vanilla policy gradient and trust-region policy optimization respectively (Schulman et al., 2016; Duan et al., 2016). Unless otherwise stated, all policy gradient methods are implemented with GAE($\lambda=0.97$) (Schulman et al., 2016). Note that TRPO-GAE is currently the state-of-the-art method on most of the OpenAI Gym benchmark tasks, though our experiments show that a well-tuned DDPG implementation sometimes achieves better results. Our algorithm implementations are built on top of the rllab TRPO and DDPG codes from Duan et al. (2016) and are available at https://github.com/shaneshixiang/rllabplusplus. Policy and value function architectures and other training details, including hyperparameter values, are provided in Appendix D.

5.1 ADAPTIVE Q-PROP

First, it is useful to identify how reliable each variant of Q-Prop is. In this section, we analyze standard Q-Prop and two adaptive variants, c-Q-Prop and a-Q-Prop, and demonstrate the stability of the method across different batch sizes. Figure 2a shows a comparison of Q-Prop variants with trust-region updates on the HalfCheetah-v1 domain, along with the best performing TRPO hyperparameters.
The results are consistent with the theory: conservative Q-Prop achieves much more stable performance than the standard and aggressive variants, and all Q-Prop variants significantly outperform TRPO in terms of sample efficiency; e.g. conservative Q-Prop reaches an average reward of 4000 using about 10 times fewer samples than TRPO.

Figure 2: Average return over episodes in HalfCheetah-v1 during learning, exploring adaptive Q-Prop methods and different batch sizes. (a) Standard Q-Prop vs adaptive variants. (b) Conservative Q-Prop vs TRPO across batch sizes. All variants of Q-Prop substantially outperform TRPO in terms of sample efficiency. TR-c-QP, conservative Q-Prop with trust-region update, performs most stably across different batch sizes.

Figure 2b shows the performance of conservative Q-Prop against TRPO across different batch sizes. Due to high variance in gradient estimates, TRPO typically requires very large batch sizes, e.g. 25000 steps or 25 episodes per update, to perform well. We show that our Q-Prop methods can learn even with just 1 episode per update, and achieve better sample efficiency with small batch sizes. This shows that Q-Prop significantly reduces the variance compared to the prior methods.

As we discussed in Section 1, stability is a significant challenge with state-of-the-art deep RL methods, and is very important for being able to reliably use deep RL for real-world tasks. In the rest of the experiments, we will use conservative Q-Prop as the main Q-Prop implementation.

5.2 EVALUATION ACROSS ALGORITHMS

Figure 3: Average return over episodes in HalfCheetah-v1 and Humanoid-v1 during learning, comparing Q-Prop against other model-free algorithms. (a) Comparing algorithms on HalfCheetah-v1. (b) Comparing algorithms on Humanoid-v1. Q-Prop with vanilla policy gradient outperforms TRPO on HalfCheetah. Q-Prop significantly outperforms TRPO in convergence time on Humanoid.

In this section, we evaluate two versions of conservative Q-Prop, v-c-Q-Prop using vanilla policy gradient and TR-c-Q-Prop using trust-region updates, against other model-free algorithms on the HalfCheetah-v1 domain. Figure 3a shows that c-Q-Prop methods significantly outperform the best TRPO and VPG methods. Even Q-Prop with vanilla policy gradient is comparable to TRPO, confirming the significant benefits from variance reduction. DDPG, on the other hand, exhibits inconsistent performance. With proper reward scaling, i.e. "DDPG-r0.1", it outperforms other methods as well as the DDPG results reported in prior work (Duan et al., 2016; Amos et al., 2016). This illustrates the sensitivity of DDPG to hyperparameter settings, while Q-Prop exhibits more stable, monotonic learning behavior when compared to DDPG. In the next section we show this improved stability allows Q-Prop to outperform DDPG in more complex domains.

5.3 EVALUATION ACROSS DOMAINS

Lastly, we evaluate Q-Prop against TRPO and DDPG across multiple domains. While the gym environments are biased toward locomotion, we expect that we can achieve similar performance on manipulation tasks such as those in Lillicrap et al. (2016). Table 1 summarizes the results, including the best attained average rewards and the steps to convergence. Q-Prop consistently outperforms TRPO in terms of sample complexity and sometimes achieves higher rewards than DDPG in more complex domains.
A particularly notable case is shown in Figure 3b, where Q-Prop substantially improves sample efficiency over TRPO on the Humanoid-v1 domain, while DDPG cannot find a good solution. The better performance on the more complex domains highlights the importance of stable deep RL algorithms: while costly hyperparameter sweeps may allow even less stable algorithms to perform well on simpler problems, more complex tasks might have such narrow regions of stable hyperparameters that discovering them becomes impractical.

             |           | TR-c-Q-Prop         | TRPO                | DDPG
Domain       | Threshold | MaxReturn  Episodes | MaxReturn  Episodes | MaxReturn  Episodes
Ant          | 3500      | 3534       4975     | 4239       13825    | 957        N/A
HalfCheetah  | 4700      | 4811       20785    | 4734       26370    | 7490       600
Hopper       | 2000      | 2957       5945     | 2486       5715     | 2604       965
Humanoid     | 2500      | >3492      14750    | 918        >30000   | 552        N/A
Reacher      | -7        | -6.0       2060     | -6.7       2840     | -6.6       1800
Swimmer      | 90        | 103        2045     | 110        3025     | 150        500
Walker       | 3000      | 4030       3685     | 3567       18875    | 3626       2125

Table 1: Q-Prop, TRPO and DDPG results showing the max average rewards attained in the first 30k episodes and the episodes to cross specific reward thresholds. Q-Prop often learns more sample efficiently than TRPO and can solve difficult domains such as Humanoid better than DDPG.

6 DISCUSSION AND CONCLUSION

We presented Q-Prop, a policy gradient algorithm that combines reliable, consistent, and potentially unbiased on-policy gradient estimation with a sample-efficient off-policy critic that acts as a control variate. The method provides a large improvement in sample efficiency compared to state-of-the-art policy gradient methods such as TRPO, while outperforming state-of-the-art actor-critic methods on more challenging tasks such as humanoid locomotion. We hope that techniques like these, which combine on-policy Monte Carlo gradient estimation with sample-efficient variance reduction through off-policy critics, will eventually lead to deep reinforcement learning algorithms that are more stable and efficient, and therefore better suited for application to complex real-world learning tasks.

ACKNOWLEDGMENTS

We thank Rocky Duan for sharing and answering questions about rllab code, and Yutian Chen and Laurent Dinh for discussion on control variates. SG and RT were funded by NSERC, Google, and EPSRC grants EP/L000776/1 and EP/M026957/1. ZG was funded by EPSRC grant EP/J012300/1 and the Alan Turing Institute (EP/N510129/1).

REFERENCES

Brandon Amos, Lei Xu, and J Zico Kolter. Input convex neural networks. arXiv preprint arXiv:1609.07152, 2016.

Christopher G Atkeson and Juan Carlos Santamaria. A comparison of direct and model-based reinforcement learning. In International Conference on Robotics and Automation. Citeseer, 1997.

Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.

Marc Deisenroth and Carl E Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 465-472, 2011.

Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. International Conference on Machine Learning (ICML), 2016.

Miroslav Dudík, John Langford, and Lihong Li. Doubly robust policy evaluation and learning. arXiv preprint arXiv:1103.4601, 2011.

Evan Greensmith, Peter L Bartlett, and Jonathan Baxter. Variance reduction techniques for gradient estimates in reinforcement learning.
Journal of Machine Learning Research, 5(Nov):1471-1530, 2004.

Shixiang Gu, Sergey Levine, Ilya Sutskever, and Andriy Mnih. MuProp: Unbiased backpropagation for stochastic neural networks. International Conference on Learning Representations (ICLR), 2016a.

Shixiang Gu, Tim Lillicrap, Ilya Sutskever, and Sergey Levine. Continuous deep Q-learning with model-based acceleration. In International Conference on Machine Learning (ICML), 2016b.

Hado V Hasselt. Double Q-learning. In Advances in Neural Information Processing Systems, pp. 2613-2621, 2010.

Sham Kakade. A natural policy gradient. In NIPS, volume 14, pp. 1531-1538, 2001.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Guy Lever. Deterministic policy gradient algorithms. 2014.

Sergey Levine and Vladlen Koltun. Guided policy search. In International Conference on Machine Learning (ICML), pp. 1-9, 2013.

Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. International Conference on Learning Representations (ICLR), 2016.

A Rupam Mahmood, Hado P van Hasselt, and Richard S Sutton. Weighted importance sampling for off-policy learning with linear function approximation. In Advances in Neural Information Processing Systems, pp. 3014-3022, 2014.

Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. International Conference on Machine Learning (ICML), 2014.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning (ICML), 2016.

Rémi Munos, Tom Stepleton, Anna Harutyunyan, and Marc G Bellemare. Safe and efficient off-policy reinforcement learning. arXiv preprint arXiv:1606.02647, 2016.

John Paisley, David Blei, and Michael Jordan. Variational Bayesian inference with stochastic search. International Conference on Machine Learning (ICML), 2012.

Jan Peters and Stefan Schaal. Policy gradient methods for robotics. In International Conference on Intelligent Robots and Systems (IROS), pp. 2219-2225. IEEE, 2006.

Jan Peters, Katharina Mülling, and Yasemin Altun. Relative entropy policy search. In AAAI. Atlanta, 2010.

Doina Precup. Eligibility traces for off-policy policy evaluation. Computer Science Department Faculty Publication Series, pp. 80, 2000.

Sheldon M Ross. Simulation. Burlington, MA: Elsevier, 2006.

John Schulman, Sergey Levine, Pieter Abbeel, Michael I. Jordan, and Philipp Moritz. Trust region policy optimization. In International Conference on Machine Learning (ICML), pp. 1889-1897, 2015.

John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. International Conference on Learning Representations (ICLR), 2016.

David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms.
In International Conference on Machine Learning (ICML), 2014.

David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.

Richard S Sutton. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In International Conference on Machine Learning (ICML), pp. 216-224, 1990.

Richard S Sutton, David A McAllester, Satinder P Singh, Yishay Mansour, et al. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems (NIPS), volume 99, pp. 1057-1063, 1999.

Richard S Sutton, Hamid Reza Maei, Doina Precup, Shalabh Bhatnagar, David Silver, Csaba Szepesvári, and Eric Wiewiora. Fast gradient-descent methods for temporal-difference learning with linear function approximation. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 993-1000. ACM, 2009.

Richard S Sutton, A Rupam Mahmood, and Martha White. An emphatic approach to the problem of off-policy temporal-difference learning. The Journal of Machine Learning Research, 2015.

Philip Thomas. Bias in natural actor-critic algorithms. In ICML, pp. 441-448, 2014.

Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026-5033. IEEE, 2012.

Christopher JCH Watkins and Peter Dayan. Q-learning. Machine Learning, 8(3-4):279-292, 1992.

Lex Weaver and Nigel Tao. The optimal reward baseline for gradient-based reinforcement learning. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, pp. 538-545. Morgan Kaufmann Publishers Inc., 2001.

Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.

A Q-PROP ESTIMATOR DERIVATION

The full derivation of the Q-Prop estimator is shown in Eq. 14. We make use of the following property that is commonly used in baseline derivations:

$\mathbb{E}_{\pi_\theta(x)}[\nabla_\theta\log\pi_\theta(x)]=\int_x\nabla_\theta\pi_\theta(x)=\nabla_\theta\int_x\pi_\theta(x)=0$

This holds true when $f(s_t,a_t)$ is an arbitrary function differentiable with respect to $a_t$ and $\bar{f}$ is its first-order Taylor expansion around $a_t=\bar{a}_t$, i.e. $\bar{f}(s_t,a_t)=f(s_t,\bar{a}_t)+\nabla_a f(s_t,a)|_{a=\bar{a}_t}(a_t-\bar{a}_t)$. Here, $\mu_\theta(s_t)=\mathbb{E}_\pi[a_t]$ is the mean of the stochastic policy $\pi_\theta$. The derivation appears below; the terms of $\bar{f}$ that are constant in $a_t$ vanish by the property above:

$\nabla_\theta J(\theta)=\mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta\log\pi_\theta(a_t|s_t)(\hat{Q}(s_t,a_t)-\bar{f}(s_t,a_t))]+\mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta\log\pi_\theta(a_t|s_t)\bar{f}(s_t,a_t)]$

$g(\theta)=\mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta\log\pi_\theta(a_t|s_t)\bar{f}(s_t,a_t)]$
$\phantom{g(\theta)}=\mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta\log\pi_\theta(a_t|s_t)(f(s_t,\bar{a}_t)+\nabla_a f(s_t,a)|_{a=\bar{a}_t}(a_t-\bar{a}_t))]$
$\phantom{g(\theta)}=\mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta\log\pi_\theta(a_t|s_t)\,\nabla_a f(s_t,a)|_{a=\bar{a}_t}\,a_t]$
$\phantom{g(\theta)}=\mathbb{E}_{\rho_\pi}\big[\int_{a_t}\nabla_\theta\pi_\theta(a_t|s_t)\,\nabla_a f(s_t,a)|_{a=\bar{a}_t}\,a_t\big]$
$\phantom{g(\theta)}=\mathbb{E}_{\rho_\pi}\big[\nabla_a f(s_t,a)|_{a=\bar{a}_t}\int_{a_t}\nabla_\theta\pi_\theta(a_t|s_t)\,a_t\big]$
$\phantom{g(\theta)}=\mathbb{E}_{\rho_\pi}[\nabla_a f(s_t,a)|_{a=\bar{a}_t}\,\nabla_\theta\mathbb{E}_\pi[a_t]]$
$\phantom{g(\theta)}=\mathbb{E}_{\rho_\pi}[\nabla_a f(s_t,a)|_{a=\bar{a}_t}\,\nabla_\theta\mu_\theta(s_t)]$

$\nabla_\theta J(\theta)=\mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta\log\pi_\theta(a_t|s_t)(\hat{Q}(s_t,a_t)-\bar{f}(s_t,a_t))]+g(\theta)$
$\phantom{\nabla_\theta J(\theta)}=\mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta\log\pi_\theta(a_t|s_t)(\hat{Q}(s_t,a_t)-\bar{f}(s_t,a_t))]+\mathbb{E}_{\rho_\pi}[\nabla_a f(s_t,a)|_{a=\bar{a}_t}\,\nabla_\theta\mu_\theta(s_t)]$   (14)

B CONNECTION BETWEEN Q-PROP AND COMPATIBLE FEATURE APPROXIMATION

In this section we show that actor-critic with compatible feature approximation is a form of control variate. A critic $Q_w$ is compatible (Sutton et al., 1999) if it satisfies (1) $Q_w(s_t,a_t)=w^T\nabla_\theta\log\pi_\theta(a_t|s_t)$, i.e. $\nabla_w Q_w(s_t,a_t)=\nabla_\theta\log\pi_\theta(a_t|s_t)$, and (2) $w$ is fit with the objective $w=\arg\min_w L(w)=\arg\min_w \mathbb{E}_{\rho_\pi,\pi}[(\hat{Q}(s_t,a_t)-Q_w(s_t,a_t))^2]$, that is, fitting $Q_w$ on on-policy Monte Carlo returns.
Condition (2) implies the following identity:

$\nabla_w L=-2\,\mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta\log\pi_\theta(a_t|s_t)(\hat{Q}(s_t,a_t)-Q_w(s_t,a_t))]=0$.   (15)

In compatible feature approximation, $Q_w$ is used directly as the control variate, rather than its Taylor expansion $\bar{Q}_w$ as in Q-Prop. Using Eq. 15, the Monte Carlo policy gradient is

$\nabla_\theta J(\theta)=\mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta\log\pi_\theta(a_t|s_t)\,Q_w(s_t,a_t)]$
$\phantom{\nabla_\theta J(\theta)}=\mathbb{E}_{\rho_\pi,\pi}[(\nabla_\theta\log\pi_\theta(a_t|s_t)\nabla_\theta\log\pi_\theta(a_t|s_t)^T)\,w]$
$\phantom{\nabla_\theta J(\theta)}=\mathbb{E}_{\rho_\pi}[I(\theta;s_t)\,w]$,   (16)

where $I(\theta;s_t)=\mathbb{E}_{\pi_\theta}[\nabla_\theta\log\pi_\theta(a_t|s_t)\nabla_\theta\log\pi_\theta(a_t|s_t)^T]$ is the Fisher information matrix. Thus, variance reduction depends on the ability to compute or estimate $I(\theta;s_t)$ and $w$ effectively.

C UNIFYING POLICY GRADIENT AND ACTOR-CRITIC

Q-Prop closely ties together policy gradient and actor-critic algorithms. To analyze this point, we write a generalization of Eq. 9 below, introducing two additional variables $\alpha$ and $\rho_{CR}$:

$\nabla_\theta J(\theta)\propto\alpha\,\mathbb{E}_{\rho_\pi,\pi}[\nabla_\theta\log\pi_\theta(a_t|s_t)(\hat{A}(s_t,a_t)-\eta\bar{A}_w(s_t,a_t))]+\eta\,\mathbb{E}_{\rho_{CR}}[\nabla_a Q_w(s_t,a)|_{a=\mu_\theta(s_t)}\nabla_\theta\mu_\theta(s_t)]$   (17)

Eq. 17 enables more analysis: bias generally is introduced only when $\alpha\neq 1$ or $\rho_{CR}\neq\rho_\pi$. Importantly, Eq. 17 covers both the policy gradient and the deterministic actor-critic algorithm as special cases. Standard policy gradient is recovered by $\eta=0$, and deterministic actor-critic is recovered by $\alpha=0$ and $\rho_{CR}=\rho_\beta$. This allows heuristic or automatic methods for dynamically changing these variables through the learning process for optimizing different metrics, e.g. sample efficiency, convergence speed, stability.

Parameter   | Implementation options                                            | Introduce bias?
$Q_w$       | off-policy TD; on-policy TD($\lambda$); model-based; etc.         | No
$V_\phi$    | on-policy Monte Carlo fitting; $\mathbb{E}_{\pi_\theta}[Q_w(s_t,a_t)]$; etc. | No
$\lambda$   | $0\le\lambda\le 1$                                                | Yes, except $\lambda=1$
$\alpha$    | $\alpha\ge 0$                                                     | Yes, except $\alpha=1$
$\eta$      | any $\eta$                                                        | No
$\rho_{CR}$ | $\rho$ of any policy                                              | Yes, except $\rho_{CR}=\rho_\pi$

Table 2: Implementation options and edge cases of the generalized Q-Prop estimator in Eq. 17.

Table 2 summarizes the various edge cases of Eq. 17. For example, since we derive our method from a control variates standpoint, $Q_w$ can be any function and the gradient remains almost unbiased (see Section 2.1). A natural choice is to use off-policy temporal difference learning to learn the critic $Q_w$ corresponding to policy $\pi$. This enables effectively utilizing off-policy samples without introducing further bias. An interesting alternative is to utilize model-based roll-outs to estimate the critic, which resembles MuProp in stochastic neural networks (Gu et al., 2016a). Unlike prior work on using a fitted dynamics model to accelerate model-free learning (Gu et al., 2016b), this approach does not introduce bias to the gradient of the original objective.

D EXPERIMENT DETAILS

Policy and value function architectures. The network architectures are largely based on the benchmark paper by Duan et al. (2016). For policy gradient methods, the stochastic policy $\pi_\theta(a_t|s_t)=\mathcal{N}(\mu_\theta(s_t),\Sigma_\theta)$ is a Gaussian policy with a local state-dependent mean and a global covariance matrix. $\mu_\theta(s_t)$ is a neural network with 3 hidden layers of sizes 100-50-25 and tanh nonlinearities at the first 2 layers, and $\Sigma_\theta$ is diagonal. For DDPG, the policy is deterministic and has the same architecture as $\mu_\theta$, except that it has an additional tanh layer at the output. $V_\phi(s_t)$ for baselines and GAE is fit with the same technique by Schulman et al. (2016), a variant of linear regression on Monte Carlo returns with a soft-update constraint. For Q-Prop and DDPG, $Q_w(s,a)$ is parametrized with a neural network with 2 hidden layers of size 100 and ReLU nonlinearity, where $a$ is included after the first hidden layer.
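The following is a minimal numpy sketch of the stochastic-policy architecture just described: a 3-hidden-layer MLP (100-50-25) with tanh at the first two layers, plus a global diagonal covariance. The weight initialization and helper names are illustrative assumptions; the rllab implementation differs in detail.

```python
# Minimal sketch of the Gaussian policy network from Appendix D.
import numpy as np

def init_policy(obs_dim, act_dim, rng):
    sizes = [obs_dim, 100, 50, 25, act_dim]
    Ws = [rng.normal(scale=1 / np.sqrt(m), size=(m, n))
          for m, n in zip(sizes[:-1], sizes[1:])]
    bs = [np.zeros(n) for n in sizes[1:]]
    log_std = np.zeros(act_dim)                  # global diagonal covariance
    return Ws, bs, log_std

def policy_mean(Ws, bs, s):
    h = np.tanh(s @ Ws[0] + bs[0])               # hidden layer 1 (tanh)
    h = np.tanh(h @ Ws[1] + bs[1])               # hidden layer 2 (tanh)
    h = h @ Ws[2] + bs[2]                        # hidden layer 3 (linear)
    return h @ Ws[3] + bs[3]                     # mean action mu_theta(s)

rng = np.random.default_rng(0)
Ws, bs, log_std = init_policy(obs_dim=17, act_dim=6, rng=rng)
a = policy_mean(Ws, bs, rng.normal(size=17)) + np.exp(log_std) * rng.normal(size=6)
```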
Training details. This section describes the parameters of the training algorithms and their hyperparameter search values in braces. The optimal performing hyperparameter results are reported. Policy gradient methods (VPG, TRPO, Q-Prop) used batch sizes of {1000, 5000, 25000} time steps, step sizes of {0.1, 0.01, 0.001} for the trust-region method, and base learning rates of {0.001, 0.0001} with Adam (Kingma & Ba, 2014) for vanilla policy gradient methods. For Q-Prop and DDPG, $Q_w$ is learned with the same technique as in DDPG (Lillicrap et al., 2016), using soft target networks with $\tau=0.999$, a replay buffer of size $10^6$ steps, a mini-batch size of 64, and a base learning rate of {0.001, 0.0001} with Adam (Kingma & Ba, 2014). For Q-Prop we also tuned the relative ratio of gradient steps on the critic $Q_w$ against the number of steps on the policy, in the range {0.1, 0.5, 1.0}, where 0.1 corresponds to 100 critic updates for every policy update if the batch size is 1000. For DDPG, we swept the reward scaling using {0.01, 0.1, 1.0}, as it is sensitive to this parameter.
S1VaB4cex
Published as a conference paper at ICLR 2017

FRACTALNET: ULTRA-DEEP NEURAL NETWORKS WITHOUT RESIDUALS

Gustav Larsson, University of Chicago, larsson@cs.uchicago.edu
Michael Maire, TTI Chicago, mmaire@ttic.edu
Gregory Shakhnarovich, TTI Chicago, greg@ttic.edu

ABSTRACT

We introduce a design strategy for neural network macro-architecture based on self-similarity. Repeated application of a simple expansion rule generates deep networks whose structural layouts are precisely truncated fractals. These networks contain interacting subpaths of different lengths, but do not include any pass-through or residual connections; every internal signal is transformed by a filter and nonlinearity before being seen by subsequent layers. In experiments, fractal networks match the excellent performance of standard residual networks on both CIFAR and ImageNet classification tasks, thereby demonstrating that residual representations may not be fundamental to the success of extremely deep convolutional neural networks. Rather, the key may be the ability to transition, during training, from effectively shallow to deep. We note similarities with student-teacher behavior and develop drop-path, a natural extension of dropout, to regularize co-adaptation of subpaths in fractal architectures. Such regularization allows extraction of high-performance fixed-depth subnetworks. Additionally, fractal networks exhibit an anytime property: shallow subnetworks provide a quick answer, while deeper subnetworks, with higher latency, provide a more accurate answer.

1 INTRODUCTION

Residual networks (He et al., 2016a), or ResNets, lead a recent and dramatic increase in both depth and accuracy of convolutional neural networks, facilitated by constraining the network to learn residuals. ResNet variants (He et al., 2016a;b; Huang et al., 2016b) and related architectures (Srivastava et al., 2015) employ the common technique of initializing and anchoring, via a pass-through channel, a network to the identity function. Training now differs in two respects. First, the objective changes to learning residual outputs, rather than unreferenced absolute mappings. Second, these networks exhibit a type of deep supervision (Lee et al., 2014), as near-identity layers effectively reduce distance to the loss. He et al. (2016a) speculate that the former, the residual formulation itself, is crucial.

We show otherwise, by constructing a competitive extremely deep architecture that does not rely on residuals. Our design principle is pure enough to communicate in a single word, fractal, and a simple diagram (Figure 1). Yet, fractal networks implicitly recapitulate many properties hard-wired into previous successful architectures. Deep supervision not only arises automatically, but also drives a type of student-teacher learning (Ba & Caruana, 2014; Urban et al., 2017) internal to the network. Modular building blocks of other designs (Szegedy et al., 2015; Liao & Carneiro, 2015) resemble special cases of a fractal network's nested substructure.

For fractal networks, simplicity of training mirrors simplicity of design. A single loss, attached to the final layer, suffices to drive internal behavior mimicking deep supervision. Parameters are randomly initialized. As they contain subnetworks of many depths, fractal networks are robust to choice of overall depth; make them deep enough and training will carve out a useful assembly of subnetworks.

The entirety of emergent behavior resulting from a fractal design may erode the need for recent engineering tricks intended to achieve similar effects.
These tricks include residual functional forms with identity initialization, manual deep supervision, hand-crafted architectural modules, and student-teacher training regimes. Section 2 reviews this large body of related techniques. Hybrid designs could certainly integrate any of them with a fractal architecture; we leave open the question of the degree to which such hybrids are synergistic.

[Figure 1 diagram: the fractal expansion rule from $f_C$ to $f_{C+1}$ (left) and a network of 5 stacked blocks with pooling (right); layer key: convolution, join, pool, prediction.]

Figure 1: Fractal architecture. Left: A simple expansion rule generates a fractal architecture with $C$ intertwined columns. The base case, $f_1(z)$, has a single layer of the chosen type (e.g. convolutional) between input and output. Join layers compute element-wise mean. Right: Deep convolutional networks periodically reduce spatial resolution via pooling. A fractal version uses $f_C$ as a building block between pooling layers. Stacking $B$ such blocks yields a network whose total depth, measured in terms of convolution layers, is $B\cdot 2^{C-1}$. This example has depth 40 ($B=5$, $C=4$).

Our main contribution is twofold:

- We introduce FractalNet, the first simple alternative to ResNet. FractalNet shows that explicit residual learning is not a requirement for building ultra-deep neural networks.
- Through analysis and experiments, we elucidate connections between FractalNet and an array of phenomena engineered into previous deep network designs.

As an additional contribution, we develop drop-path, a novel regularization protocol for ultra-deep fractal networks. Without data augmentation, fractal networks, trained with drop-path and dropout (Hinton et al., 2012), exceed the performance of residual networks regularized via stochastic depth (Huang et al., 2016b). Though, like stochastic depth, it randomly removes macro-scale components, drop-path further exploits our fractal structure in choosing which components to disable.

Drop-path constitutes not only a regularization strategy, but also provides means of optionally imparting fractal networks with anytime behavior. A particular schedule of dropped paths during learning prevents subnetworks of different depths from co-adapting. As a consequence, both shallow and deep subnetworks must individually produce correct output. Querying a shallow subnetwork thus yields a quick and moderately accurate result in advance of completion of the full network.

Section 3 elaborates the technical details of fractal networks and drop-path. Section 4 provides experimental comparisons to residual networks across the CIFAR-10, CIFAR-100 (Krizhevsky, 2009), SVHN (Netzer et al., 2011), and ImageNet (Deng et al., 2009) datasets. We also evaluate regularization and data augmentation strategies, investigate subnetwork student-teacher behavior during training, and benchmark anytime networks obtained using drop-path. Section 5 provides synthesis. By virtue of encapsulating many known, yet seemingly distinct, design principles, self-similar structure may materialize as a fundamental component of neural architectures.

2 RELATED WORK

Deepening feed-forward neural networks has generally returned dividends in performance.
A striking example within the computer vision community is the improvement on the ImageNet (Deng et al., 2009) classification task when transitioning from AlexNet (Krizhevsky et al., 2012) to VGG (Simonyan & Zisserman, 2015) to GoogLeNet (Szegedy et al., 2015) to ResNet (He et al., 2016a). Unfortunately, greater depth also makes training more challenging, at least when employing a first-order optimization method with randomly initialized layers. As the network grows deeper and more non-linear, the linear approximation of a gradient step becomes increasingly inappropriate. Desire to overcome these difficulties drives research on both optimization techniques and network architectures.

On the optimization side, much recent work yields improvements. To prevent vanishing gradients, ReLU activation functions now widely replace sigmoid and tanh units (Nair & Hinton, 2010). This subject remains an area of active inquiry, with various tweaks on ReLUs, e.g. PReLUs (He et al., 2015) and ELUs (Clevert et al., 2016). Even with ReLUs, employing batch normalization (Ioffe & Szegedy, 2015) speeds training by reducing internal covariate shift. Good initialization can also ameliorate this problem (Glorot & Bengio, 2010; Mishkin & Matas, 2016). Path-SGD (Neyshabur et al., 2015) offers an alternative normalization scheme. Progress in optimization is somewhat orthogonal to our architectural focus, with the expectation that advances in either are ripe for combination.

Notable ideas in architecture reach back to skip connections, the earliest example of a nontrivial routing pattern within a neural network. Recent work further elaborates upon them (Maire et al., 2014; Hariharan et al., 2015). Highway networks (Srivastava et al., 2015) and ResNet (He et al., 2016a;b) offer additional twists in the form of parameterized pass-through and gating. In work subsequent to our own, Huang et al. (2016a) investigate a ResNet variant with explicit skip connections. These methods share distinction as the only other designs demonstrated to scale to hundreds of layers and beyond. ResNet's building block uses the identity map as an anchor point and explicitly parameterizes an additive correction term (the residual). Identity initialization also appears in the context of recurrent networks (Le et al., 2015). A tendency of ResNet and highway networks to fall back to the identity map may make their effective depth much smaller than their nominal depth.

Some prior results hint at what we experimentally demonstrate in Section 4. Namely, reduction of effective depth is key to training extremely deep networks; residuals are incidental. Huang et al. (2016b) provide one clue in their work on stochastic depth: randomly dropping layers from ResNet during training, thereby shrinking network depth by a constant factor, provides additional performance benefit. We build upon this intuition through drop-path, which shrinks depth much more drastically.

The success of deep supervision (Lee et al., 2014) provides another clue that effective depth is crucial. Here, an auxiliary loss, forked off mid-level layers, introduces a shorter path during backpropagation. The layer at the fork receives two gradients, originating from the main loss and the auxiliary loss, that are added together. Deep supervision is now common, being adopted, for example, by GoogLeNet (Szegedy et al., 2015).
However, irrelevance of the auxiliary loss at test time introduces the drawback of having a discrepancy between the actual objective and that used for training.

Exploration of the student-teacher paradigm (Ba & Caruana, 2014) illuminates the potential for interplay between networks of different depth. In the model compression scenario, a deeper network (previously trained) guides and improves the learning of a shallower and faster student network (Ba & Caruana, 2014; Urban et al., 2017). This is accomplished by feeding unlabeled data through the teacher and having the student mimic the teacher's soft output predictions. FitNets (Romero et al., 2015) explicitly couple students and teachers, forcing mimic behavior across several intermediate points in the network. Our fractal networks capture yet another alternative, in the form of implicit coupling, with the potential for bidirectional information flow between shallow and deep subnetworks.

Widening networks, by using larger modules in place of individual layers, has also produced performance gains. For example, an Inception module (Szegedy et al., 2015) concatenates results of convolutional layers of different receptive field size. Stacking these modules forms the GoogLeNet architecture. Liao & Carneiro (2015) employ a variant with maxout in place of concatenation. Figure 1 makes apparent our connection with such work. As a fractal network deepens, it also widens. Moreover, note that stacking two 2D convolutional layers with the same spatial receptive field (e.g. 3x3) achieves a larger (5x5) receptive field. A horizontal cross-section of a fractal network is reminiscent of an Inception module, except with additional joins due to recursive structure.

3 FRACTAL NETWORKS

We begin with a more formal presentation of the ideas sketched in Figure 1. Convolutional neural networks serve as our running example and, in the subsequent section, our experimental platform. However, it is worth emphasizing that our framework is more general. In principle, convolutional layers in Figure 1 could be replaced by a different layer type, or even a custom-designed module or subnetwork, in order to generate other fractal architectures.

Let $C$ denote the index of the truncated fractal $f_C(\cdot)$. Our network's structure, connections and layer types, is defined by $f_C(\cdot)$. A network consisting of a single convolutional layer is the base case:

$f_1(z)=\mathrm{conv}(z)$   (1)

We define successive fractals recursively:

$f_{C+1}(z)=[(f_C\circ f_C)(z)]\,\oplus\,[\mathrm{conv}(z)]$   (2)

where $\circ$ denotes composition and $\oplus$ a join operation. When drawn in the style of Figure 1, $C$ corresponds to the number of columns, or width, of network $f_C(\cdot)$. Depth, defined to be the number of conv layers on the longest path between input and output, scales as $2^{C-1}$. Convolutional networks for classification typically intersperse pooling layers. We achieve the same by using $f_C(\cdot)$ as a building block and stacking it with subsequent pooling layers $B$ times, yielding total depth $B\cdot 2^{C-1}$.

The join operation $\oplus$ merges two feature blobs into one. Here, a blob is the result of a conv layer: a tensor holding activations for a fixed number of channels over a spatial domain. The channel count corresponds to the size of the filter set in the preceding conv layer. As the fractal is expanded, we collapse neighboring joins into a single join layer which spans multiple columns, as shown on the right side of Figure 1; a minimal structural sketch of the expansion rule appears below.
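The following sketch instantiates Eqs. 1-2 as a nested symbolic structure rather than a runnable network; layer implementations are deliberately left abstract, and the tuple encoding is an illustrative assumption.

```python
# Minimal sketch of the fractal expansion rule (Eqs. 1-2).
def fractal(C):
    """f_1 = conv;  f_{C+1}(z) = join(f_C(f_C(z)), conv(z))."""
    if C == 1:
        return ("conv",)
    f = fractal(C - 1)
    return ("join", ("compose", f, f), ("conv",))

def depth(f):
    """Longest chain of conv layers from input to output (2^{C-1} for f_C)."""
    kind = f[0]
    if kind == "conv":
        return 1
    if kind == "compose":
        return depth(f[1]) + depth(f[2])
    return max(depth(f[1]), depth(f[2]))        # join: parallel paths

assert depth(fractal(4)) == 2 ** 3              # C = 4 columns -> depth 8
```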
The join layer merges all of its input feature blobs into a single output blob. Several choices seem reasonable for the action of a join layer, including concatenation and addition. We instantiate each join to compute the element-wise mean of its inputs. This is appropriate for convolutional networks in which channel count is set the same for all conv layers within a fractal block. Averaging might appear similar to ResNet's addition operation, but there are critical differences:

- ResNet makes a clear distinction between pass-through and residual signals. In FractalNet, no signal is privileged. Every input to a join layer is the output of an immediately preceding conv layer. The network structure alone cannot identify any as being primary.
- Drop-path regularization, as described next in Section 3.1, forces each input to a join to be individually reliable. This reduces the reward for even implicitly learning to allocate part of one signal to act as a residual for another.
- Experiments show that we can extract high-performance subnetworks consisting of a single column (Section 4.2). Such a subnetwork is effectively devoid of joins, as only a single path is active throughout. They produce no signal to which a residual could be added.

Together, these properties ensure that join layers are not an alternative method of residual learning.

3.1 REGULARIZATION VIA DROP-PATH

Dropout (Hinton et al., 2012) and drop-connect (Wan et al., 2013) modify interactions between sequential network layers in order to discourage co-adaptation. Since fractal networks contain additional macro-scale structure, we propose to complement these techniques with an analogous coarse-scale regularization scheme.

Figure 2 illustrates drop-path. Just as dropout prevents co-adaptation of activations, drop-path prevents co-adaptation of parallel paths by randomly dropping operands of the join layers. This discourages the network from using one input path as an anchor and another as a corrective term (a configuration that, if not prevented, is prone to overfitting). We consider two sampling strategies, sketched in code after this list:

- Local: a join drops each input with fixed probability, but we make sure at least one survives.
- Global: a single path is selected for the entire network. We restrict this path to be a single column, thereby promoting individual columns as independently strong predictors.

[Figure 2 panels: four training iterations alternating local and global drop-path sampling.]

Figure 2: Drop-path. A fractal network block functions with some connections between layers disabled, provided some path from input to output is still available. Drop-path guarantees at least one such path, while sampling a subnetwork with many other paths disabled. During training, presenting a different active subnetwork to each mini-batch prevents co-adaptation of parallel paths. A global sampling strategy returns a single column as a subnetwork. Alternating it with local sampling encourages the development of individual columns as performant stand-alone subnetworks.

As with dropout, signals may need appropriate rescaling. With element-wise means, this is trivial; each join computes the mean of only its active inputs.

In experiments, we train with dropout and a mixture model of 50% local and 50% global sampling for drop-path. We sample a new subnetwork each mini-batch.
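The following is a minimal sketch of local and global drop-path sampling for one join layer with $C$ candidate input paths. Rescaling is implicit because the join averages only the active inputs; the probabilities and the 50/50 mixture follow the text, while the helper names are illustrative assumptions.

```python
# Minimal sketch of drop-path sampling and the mean join (Section 3.1).
import numpy as np

def sample_join_mask(C, rng, p_local_drop=0.15, global_column=None):
    if global_column is not None:                # global: keep a single column
        mask = np.zeros(C, dtype=bool)
        mask[global_column] = True
        return mask
    mask = rng.random(C) > p_local_drop          # local: drop inputs i.i.d. ...
    if not mask.any():                           # ... but keep at least one
        mask[rng.integers(C)] = True
    return mask

def join(inputs, mask):
    active = [x for x, keep in zip(inputs, mask) if keep]
    return sum(active) / len(active)             # mean of active inputs only

rng = np.random.default_rng(0)
use_global = rng.random() < 0.5                  # 50% local / 50% global batches
col = rng.integers(4) if use_global else None
mask = sample_join_mask(C=4, rng=rng, global_column=col)
print(join([np.full(3, float(i)) for i in range(4)], mask))
```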
With sufficient memory, we can simultaneously evaluate one local sample and all global samples for each mini-batch by keeping separate networks and tying them together via weight sharing.

While fractal connectivity permits the use of paths of any length, global drop-path forces the use of many paths whose lengths differ by orders of magnitude (powers of 2). The subnetworks sampled by drop-path thus exhibit large structural diversity. This property stands in contrast to stochastic depth regularization of ResNet, which, by virtue of using a fixed drop probability for each layer in a chain, samples subnetworks with a concentrated depth distribution (Huang et al., 2016b).

Global drop-path serves not only as a regularizer, but also as a diagnostic tool. Monitoring performance of individual columns provides insight into both the network and training mechanisms, as Section 4.3 discusses in more detail. Individually strong columns of various depths also give users choices in the trade-off between speed (shallow) and accuracy (deep).

3.2 DATA AUGMENTATION

Data augmentation can reduce the need for regularization. ResNet demonstrates this, achieving 27.22% error rate on CIFAR-100 with augmentation compared to 44.76% without (Huang et al., 2016b). While augmentation benefits fractal networks, we show that drop-path provides highly effective regularization, allowing them to achieve competitive results even without data augmentation.

3.3 IMPLEMENTATION DETAILS

We implement FractalNet using Caffe (Jia et al., 2014). Purely for convenience, we flip the order of pool and join layers at the end of a block in Figure 1. We pool individual columns immediately before the joins spanning all columns, rather than pooling once immediately after them.

We train fractal networks using stochastic gradient descent with momentum. As now standard, we employ batch normalization together with each conv layer (convolution, batch norm, then ReLU).

Method                                    | C100  | C100+ | C100++ | C10   | C10+ | C10++ | SVHN
Network in Network (Lin et al., 2013)     | 35.68 | -     | -      | 10.41 | 8.81 | -     | 2.35
Generalized Pooling (Lee et al., 2016)    | 32.37 | -     | -      | 7.62  | 6.05 | -     | 1.69
Recurrent CNN (Liang & Hu, 2015)          | 31.75 | -     | -      | 8.69  | 7.09 | -     | 1.77
Multi-scale (Liao & Carneiro, 2015)       | 27.56 | -     | -      | 6.87  | -    | -     | 1.76
FitNet (Romero et al., 2015)              | -     | 35.04 | -      | -     | 8.39 | -     | 2.42
Deeply Supervised (Lee et al., 2014)      | -     | 34.57 | -      | 9.69  | 7.97 | -     | 1.92
All-CNN (Springenberg et al., 2014)       | -     | 33.71 | -      | 9.08  | 7.25 | 4.41  | -
Highway Net (Srivastava et al., 2015)     | -     | 32.39 | -      | -     | 7.72 | -     | -
ELU (Clevert et al., 2016)                | -     | 24.28 | -      | -     | 6.55 | -     | -
Scalable BO (Snoek et al., 2015)          | -     | -     | 27.04  | -     | -    | 6.37  | 1.77
Fractional Max-Pool (Graham, 2014)        | -     | -     | 26.32  | -     | -    | 3.47  | -
FitResNet (Mishkin & Matas, 2016)         | -     | 27.66 | -      | -     | 5.84 | -     | -
ResNet (He et al., 2016a)                 | -     | -     | -      | -     | 6.61 | -     | -
ResNet by (Huang et al., 2016b)           | 44.76 | 27.22 | -      | 13.63 | 6.41 | -     | 2.01
Stochastic Depth (Huang et al., 2016b)    | 37.80 | 24.58 | -      | 11.66 | 5.23 | -     | 1.75
Identity Mapping (He et al., 2016b)       | -     | 22.68 | -      | -     | 4.69 | -     | -
ResNet in ResNet (Targ et al., 2016)      | -     | 22.90 | -      | -     | 5.01 | -     | -
Wide (Zagoruyko & Komodakis, 2016)        | -     | 20.50 | -      | -     | 4.17 | -     | -
DenseNet-BC (Huang et al., 2016a)^1       | 19.64 | 17.60 | -      | 5.19  | 3.62 | -     | 1.74
FractalNet (20 layers, 38.6M params)      | 35.34 | 23.30 | 22.85  | 10.18 | 5.22 | 5.11  | 2.01
  + drop-path + dropout                   | 28.20 | 23.73 | 23.36  | 7.33  | 4.60 | 4.59  | 1.87
    deepest column alone                  | 29.05 | 24.32 | 23.60  | 7.27  | 4.68 | 4.63  | 1.89
FractalNet (40 layers, 22.9M params)^2    | -     | 22.49 | 21.49  | -     | 5.24 | 5.21  | -

Table 1: CIFAR-100/CIFAR-10/SVHN.
We compare test error (%) with other leading methods, trained with either no data augmentation, translation/mirroring (+), or more substantial augmentation (++). Our main point of comparison is ResNet. We closely match its benchmark results using data augmentation, and outperform it by large margins without data augmentation. Training with drop-path, we can extract from FractalNet single-column (plain) networks that are highly competitive.

4 EXPERIMENTS

The CIFAR, SVHN, and ImageNet datasets serve as testbeds for comparison to prior work and analysis of FractalNet's internal behavior. We evaluate performance on the standard classification task associated with each dataset. For CIFAR and SVHN, which consist of 32x32 images, we set our fractal network to have 5 blocks ($B=5$) with 2x2 non-overlapping max-pooling and subsampling applied after each. This reduces the input 32x32 spatial resolution to 1x1 over the course of the entire network. A softmax prediction layer attaches at the end of the network. Unless otherwise noted, we set the number of filter channels within blocks 1 through 5 as (64, 128, 256, 512, 512), mostly matching the convention of doubling the number of channels after halving spatial resolution.

For ImageNet, we choose a fractal architecture to facilitate direct comparison with the 34-layer ResNet of He et al. (2016a). We use the same first and last layer as ResNet-34, but change the middle of the network to consist of 4 blocks ($B=4$), each of 8 layers ($C=4$ columns). We use a filter channel progression of (128, 256, 512, 1024) in blocks 1 through 4.

4.1 TRAINING

For experiments using dropout, we fix the drop rate per block at (0%, 10%, 20%, 30%, 40%), similar to Clevert et al. (2016). Local drop-path uses a 15% drop rate across the entire network.
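As a structural illustration of the block stacking described earlier in this section, the following is a minimal sketch that only records the CIFAR layout (B = 5 fractal blocks with the (64, 128, 256, 512, 512) channel schedule, each followed by 2x2 max pooling); the tuple encoding is an illustrative assumption, not a runnable Caffe model.

```python
# Minimal structural sketch of the CIFAR/SVHN FractalNet layout (Section 4).
def fractal_net(C=3, channels=(64, 128, 256, 512, 512)):
    net = []
    for ch in channels:                      # B = len(channels) blocks
        net.append(("fractal_block", C, ch))
        net.append(("maxpool", 2))           # halve spatial resolution
    net.append(("softmax",))                 # prediction layer
    return net

# Total depth in conv layers is B * 2^{C-1}: 20 for C = 3, 40 for C = 4.
blocks = [x for x in fractal_net() if x[0] == "fractal_block"]
print(len(blocks) * 2 ** (3 - 1))            # -> 20
```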
How-ever, as a column trained within, and then extractedfrom, a fractal network with mixed drop-path, werecover a plain network that overcomes such depthlimitation (possibly due to a student-teacher effect).We run for 400epochs on CIFAR, 20epochs on SVHN, and 70epochs on ImageNet. Our learningrate starts at 0:02(for ImageNet, 0:001) and we train using stochastic gradient descent with batchsize100(for ImageNet, 32) and momentum 0:9. For CIFAR/SVHN, we drop the learning rate by afactor of 10whenever the number of remaining epochs halves. For ImageNet, we drop by a factor of10at epochs 50and65. We use Xavier initialization (Glorot & Bengio, 2010).A widely employed (Lin et al., 2013; Clevert et al., 2016; Srivastava et al., 2015; He et al., 2016a;b;Huang et al., 2016b; Targ et al., 2016) scheme for data augmentation on CIFAR consists of onlyhorizontal mirroring and translation (uniform offsets in r4;4s), with images zero-padded whereneeded after mean subtraction. We denote results achieved using no more than this degree ofaugmentation by appending a “+” to the dataset name ( e.g.CIFAR-100+). A “++” marks resultsreliant on more data augmentation; here exact schemes may vary. Our entry in this category is modestand simply changes the zero-padding to reflect-padding.4.2 R ESULTSTable 1 compares performance of FractalNet on CIFAR and SVHN with competing methods. Frac-talNet (depth 20) outperforms the original ResNet across the board. With data augmentation, ourCIFAR-100 accuracy is close to that of the best ResNet variants. With neither augmentation nor regu-larization, FractalNet’s performance on CIFAR is superior to both ResNet and ResNet with stochasticdepth, suggesting that FractalNet may be less prone to overfitting. Most methods perform similarlyon SVHN. Increasing depth to 40, while borrowing some parameter reduction tricks (Iandola et al.,2016), reveals FractalNet’s performance to be consistent across a range of configuration choices.Experiments without data augmentation highlight the power of drop-path regularization. On CIFAR-100, drop-path reduces FractalNet’s error rate from 35:34% to28:20%. Unregularized ResNet is farbehind ( 44:76%) and ResNet with stochastic depth ( 37:80%) does not catch up to our unregularizedstarting point of 35:34%. CIFAR-10 mirrors this story. With data augmentation, drop-path provides aboost (CIFAR-10), or does not significantly influence FractalNet’s performance (CIFAR-100).Note that the performance of the deepest column of the fractal network is close to that of the fullnetwork (statistically equivalent on CIFAR-10). This suggests that the fractal structure may be moreimportant as a learning framework than as a final model architecture.Table 2 shows that FractalNet scales to ImageNet, matching ResNet (He et al., 2016a) at equal depth.Note that, concurrent with our work, refinements to the residual network paradigm further improve thestate-of-the-art on ImageNet. Wide residual networks (Zagoruyko & Komodakis, 2016) of 34-layersreduce single-crop Top-1 and Top-5 validation error by approximately 2%and1%, respectively, over7Published as a conference paper at ICLR 20170 50 100 150 200 250 300 350 400Epochs10-1100101Training LossPlain Networks5 layers10 layers20 layers40 layers0 50 100 150 200 250 300 350 400Epochs10-1100101Training LossFractalNetCol #1: 5 layersCol #2: 10 layersCol #3: 20 layersCol #4: 40 layersFractalNetFigure 3: Implicit deep supervision. Left: Evolution of loss for plain networks of depth 5,10,20and40trained on CIFAR-100. 
Figure 3: Implicit deep supervision. Left: Evolution of loss for plain networks of depth 5, 10, 20, and 40 trained on CIFAR-100. Training becomes increasingly difficult for deeper networks. At 40 layers, we are unable to train the network satisfactorily. Right: We train a 4-column fractal network with mixed drop-path, monitoring its loss as well as the losses of its four subnetworks corresponding to individual columns of the same depth as the plain networks. As the 20-layer subnetwork starts to stabilize, drop-path puts pressure on the 40-layer column to adapt, with the rest of the network as its teacher. This explains the elbow-shaped learning curve for Col #4 that occurs around 25 epochs.

Table 3 demonstrates that FractalNet resists performance degradation as we increase C to obtain extremely deep networks (160 layers for C = 6). Scores in this table are not comparable to those in Table 1. For time and memory efficiency, we reduced block-wise feature channels to (16, 32, 64, 128, 128) and the batch size to 50 for the supporting experiments in Tables 3 and 4.

Table 4 provides a baseline showing that training of plain deep networks begins to degrade by the time their depth reaches 40 layers. In our experience, a plain 160-layer network completely fails to converge. This table also highlights the ability to use FractalNet and drop-path as an engine for extracting trained networks (columns) with the same topology as plain networks, but much higher test performance.

4.3 INTROSPECTION

With Figure 3, we examine the evolution of a 40-layer FractalNet during training. Tracking columns individually (recording their losses when run as stand-alone networks), we observe that the 40-layer column initially improves slowly, but picks up once the loss of the rest of the network begins to stabilize. Contrast with a plain 40-layer network trained alone (dashed blue line), which never makes fast progress. The column has the same initial plateau, but subsequently improves after 25 epochs, producing a loss curve uncharacteristic of plain networks.

We hypothesize that the fractal structure triggers effects akin to deep supervision and lateral student-teacher information flow. Column #4 joins with column #3 every other layer, and in every fourth layer this join involves no other columns. Once the fractal network partially relies on the signal going through column #3, drop-path puts pressure on column #4 to produce a replacement signal when column #3 is dropped. This task has constrained scope. A particular drop only requires two consecutive layers in column #4 to substitute for one in column #3 (a mini student-teacher problem).

This explanation of FractalNet dynamics parallels what, in concurrent work, Greff et al. (2017) claim for ResNet. Specifically, Greff et al. (2017) suggest residual networks learn unrolled iterative estimation, with each layer performing a gradual refinement on its input representation. The deepest FractalNet column could behave in the same manner, with the remainder of the network acting as a scaffold for building smaller refinement steps by doubling layers from one column to the next.

These interpretations appear not to mesh with the conclusions of Veit et al. (2016), who claim that ensemble-like behavior underlies the success of ResNet.
This is certainly untrue of some very deepnetworks, as FractalNet provides a counterexample: we can extract a single column (plain networktopology) and it alone (no ensembling) performs nearly as well as the entire network. Moreover, thegradual refinement view may offer an alternative explanation for the experiments of Veit et al. (2016).If each layer makes only a small modification, removing one may look, to the subsequent portionof the network, like injecting a small amount of input noise. Perhaps noise tolerance explains thegradual performance degradation that Veit et al. (2016) observe when removing ResNet layers.5 C ONCLUSIONOur experiments with fractal networks provide strong evidence that path length is fundamentalfor training ultra-deep neural networks; residuals are incidental. Key is the shared characteristicof FractalNet and ResNet: large nominal network depth, but effectively shorter paths for gradientpropagation during training. Fractal architectures are arguably the simplest means of satisfyingthis requirement, and match residual networks in experimental performance. Fractal networks areresistant to being too deep; extra depth may slow training, but does not impair accuracy.With drop-path, regularization of extremely deep fractal networks is intuitive and effective. Drop-pathdoubles as a method of enforcing speed (latency) vs. accuracy tradeoffs. For applications where fastresponses have utility, we can obtain fractal networks whose partial evaluation yields good answers.Our analysis connects the internal behavior of fractal networks with phenomena engineered into othernetworks. Their substructure resembles hand-crafted modules used as components in prior work.Their training evolution may emulate deep supervision and student-teacher learning.ACKNOWLEDGMENTSWe gratefully acknowledge the support of NVIDIA Corporation with the donation of GPUs used forthis research.REFERENCESJimmy Ba and Rich Caruana. Do deep nets really need to be deep? NIPS , 2014.Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning byexponential linear units (ELUs). ICLR , 2016.Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchicalimage database. CVPR , 2009.Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks.AISTATS , 2010.Benjamin Graham. Fractional max-pooling. arXiv:1412.6071 , 2014.Klaus Greff, Rupesh Kumar Srivastava, and Jürgen Schmidhuber. Highway and residual networks learn unrollediterative estimation. ICLR , 2017.Bharath Hariharan, Pablo Arbelaez, Ross Girshick, and Jitendra Malik. Hypercolumns for object segmentationand fine-grained localization. CVPR , 2015.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-levelperformance on ImageNet classification. ICCV , 2015.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CVPR ,2016a.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. ECCV ,2016b.Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Improvingneural networks by preventing co-adaptation of feature detectors. arXiv:1207.0580 , 2012.9Published as a conference paper at ICLR 2017Gao Huang, Zhuang Liu, and Kilian Q. Weinberger. 
Densely connected convolutional networks.arXiv:1608.06993 , 2016a.Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth.ECCV , 2016b.Forrest N. Iandola, Matthew W. Moskewicz, Khalid Ashraf, Song Han, William J. Dally, and Kurt Keutzer.SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and 1MB model size. arXiv:1602.07360 ,2016.Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducinginternal covariate shift. ICML , 2015.Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadar-rama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv:1408.5093 ,2014.Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutionalneural networks. NIPS , 2012.Quoc V Le, Navdeep Jaitly, and Geoffrey E Hinton. A simple way to initialize recurrent networks of rectifiedlinear units. arXiv:1504.00941 , 2015.Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets.NIPS Workshop on Deep Learning and Representation Learning , 2014.Chen-Yu Lee, Patrick W Gallagher, and Zhuowen Tu. Generalizing pooling functions in convolutional neuralnetworks: Mixed, gated, and tree. AISTATS , 2016.Ming Liang and Xiaolin Hu. Recurrent convolutional neural network for object recognition. CVPR , 2015.Zhibin Liao and Gustavo Carneiro. Competitive multi-scale convolution. arXiv:1511.05635 , 2015.Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. ICLR , 2013.Michael Maire, Stella X. Yu, and Pietro Perona. Reconstructive sparse code transfer for contour detection andsemantic labeling. ACCV , 2014.Dmytro Mishkin and Jiri Matas. All you need is a good init. ICLR , 2016.Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. ICML , 2010.Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y . Ng. Reading digits innatural images with unsupervised feature learning. NIPS Workshop on Deep Learning and UnsupervisedFeature Learning , 2011.Behnam Neyshabur, Ruslan Salakhutdinov, and Nathan Srebro. Path-SGD: Path-normalized optimization indeep neural networks. NIPS , 2015.Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio.Fitnets: Hints for thin deep nets. ICLR , 2015.Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition.ICLR , 2015.Jasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, Nadathur Satish, Narayanan Sundaram, Md Patwary,Mostofa Ali, Ryan P Adams, et al. Scalable bayesian optimization using deep neural networks. ICML , 2015.Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity:The all convolutional net. ICLR (workshop track) , 2014.Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. ICML , 2015.Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan,Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. CVPR , 2015.Sasha Targ, Diogo Almeida, and Kevin Lyman. Resnet in resnet: Generalizing residual architectures.arXiv:1603.08029 , 2016.10Published as a conference paper at ICLR 2017Gregor Urban, Krzysztof J. 
Geras, Samira Ebrahimi Kahou, Ozlem Aslan, Shengjie Wang, AbdelrahmanMohamed, Matthai Philipose, Matt Richardson, and Rich Caruana. Do deep convolutional nets really need tobe deep and convolutional? ICLR , 2017.Andreas Veit, Michael Wilber, and Serge Belongie. Residual networks behave like ensembles of relativelyshallow networks. NIPS , 2016.Li Wan, Matthew Zeiler, Sixin Zhang, Yann L Cun, and Rob Fergus. Regularization of neural networks usingdropconnect. ICML , 2013.Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. BMVC , 2016.11
Published as a conference paper at ICLR 2017

TEMPORAL ENSEMBLING FOR SEMI-SUPERVISED LEARNING

Samuli Laine, NVIDIA, slaine@nvidia.com
Timo Aila, NVIDIA, taila@nvidia.com

ABSTRACT

In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44% to 7.05% in SVHN with 500 labels and from 18.63% to 16.55% in CIFAR-10 with 4000 labels, and further to 5.12% and 12.16% by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels.

1 INTRODUCTION

It has long been known that an ensemble of multiple neural networks generally yields better predictions than a single network in the ensemble. This effect has also been indirectly exploited when training a single network through dropout (Srivastava et al., 2014), dropconnect (Wan et al., 2013), or stochastic depth (Huang et al., 2016) regularization methods, and in swapout networks (Singh et al., 2016), where training always focuses on a particular subset of the network, and thus the complete network can be seen as an implicit ensemble of such trained sub-networks. We extend this idea by forming ensemble predictions during training, using the outputs of a single network on different training epochs and under different regularization and input augmentation conditions. Our training still operates on a single network, but the predictions made on different epochs correspond to an ensemble prediction of a large number of individual sub-networks because of dropout regularization.

This ensemble prediction can be exploited for semi-supervised learning where only a small portion of training data is labeled. If we compare the ensemble prediction to the current output of the network being trained, the ensemble prediction is likely to be closer to the correct, unknown labels of the unlabeled inputs. Therefore the labels inferred this way can be used as training targets for the unlabeled inputs. Our method relies heavily on dropout regularization and versatile input augmentation. Indeed, without either of these, there would be much less reason to place confidence in whatever labels are inferred for the unlabeled training data.

We describe two ways to implement self-ensembling, the Π-model and temporal ensembling. Both approaches surpass prior state-of-the-art results in semi-supervised learning by a considerable margin. We furthermore observe that self-ensembling improves the classification accuracy in fully labeled cases as well, and provides tolerance against incorrect labels.

The recently introduced transform/stability loss of Sajjadi et al. (2016b) is based on the same principle as our work, and the Π-model can be seen as a special case of it.
The Π-model can also be seen as a simplification of the Γ-model of the ladder network by Rasmus et al. (2015), a previously presented network architecture for semi-supervised learning. Our temporal ensembling method has connections to the bootstrapping method of Reed et al. (2014) targeted for training with noisy labels.

Figure 1: Structure of the training pass in our methods. Top: Π-model. Bottom: temporal ensembling. Labels y_i are available only for the labeled inputs, and the associated cross-entropy loss component is evaluated only for those.

Algorithm 1: Π-model pseudocode.
Require: x_i = training stimuli
Require: L = set of training input indices with known labels
Require: y_i = labels for labeled inputs i ∈ L
Require: w(t) = unsupervised weight ramp-up function
Require: f_θ(x) = stochastic neural network with trainable parameters θ
Require: g(x) = stochastic input augmentation function
for t in [1, num_epochs] do
  for each minibatch B do
    z_{i∈B} ← f_θ(g(x_{i∈B}))     ▷ evaluate network outputs for augmented inputs
    ~z_{i∈B} ← f_θ(g(x_{i∈B}))    ▷ again, with different dropout and augmentation
    loss ← −(1/|B|) Σ_{i∈(B∩L)} log z_i[y_i]              ▷ supervised loss component
           + w(t) · (1/(C|B|)) Σ_{i∈B} ||z_i − ~z_i||²    ▷ unsupervised loss component
    update θ using, e.g., ADAM    ▷ update network parameters
  end for
end for
return θ

2 SELF-ENSEMBLING DURING TRAINING

We present two implementations of self-ensembling during training. The first one, the Π-model, encourages consistent network output between two realizations of the same input stimulus, under two different dropout conditions. The second method, temporal ensembling, simplifies and extends this by taking into account the network predictions over multiple previous training epochs.

We shall describe our methods in the context of traditional image classification networks. Let the training data consist of a total of N inputs, out of which M are labeled. The input stimuli, available for all training data, are denoted x_i, where i ∈ {1 ... N}. Let set L contain the indices of the labeled inputs, |L| = M. For every i ∈ L, we have a known correct label y_i ∈ {1 ... C}, where C is the number of different classes.

2.1 Π-MODEL

The structure of the Π-model is shown in Figure 1 (top), and the pseudocode in Algorithm 1. During training, we evaluate the network for each training input x_i twice, resulting in prediction vectors z_i and ~z_i. Our loss function consists of two components. The first component is the standard cross-entropy loss, evaluated for labeled inputs only. The second component, evaluated for all inputs, penalizes different predictions for the same training input x_i by taking the mean square difference between the prediction vectors z_i and ~z_i.1 To combine the supervised and unsupervised loss terms, we scale the latter by time-dependent weighting function w(t). By comparing the entire output vectors z_i and ~z_i, we effectively ask the "dark knowledge" (Hinton et al., 2015) between the two evaluations to be close, which is a much stronger requirement compared to asking that only the final classification remains the same, which is what happens in traditional training.

It is important to notice that, because of dropout regularization, the network output during training is a stochastic variable. Thus two evaluations of the same input x_i under the same network weights θ yield different results.
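The two stochastic evaluations can be made concrete with a short PyTorch-style sketch of one Π-model minibatch step following Algorithm 1. The names (pi_model_step, augment, labeled_mask) are ours and the framework choice is illustrative; the paper's actual implementation is in Theano/Lasagne.

    import torch
    import torch.nn.functional as F

    def pi_model_step(model, augment, x, y, labeled_mask, w_t):
        # Two evaluations of the same inputs; dropout (model in train mode)
        # and the stochastic augmentation g(x) differ between the passes.
        z1 = F.softmax(model(augment(x)), dim=1)
        z2 = F.softmax(model(augment(x)), dim=1)
        # Supervised component: cross-entropy over the labeled subset only
        # (assumes the minibatch contains at least one labeled example).
        sup = F.nll_loss(torch.log(z1[labeled_mask] + 1e-8), y[labeled_mask])
        # Unsupervised component: mean squared difference between the two
        # prediction vectors, averaged over batch and classes.
        unsup = F.mse_loss(z1, z2)
        return sup + w_t * unsup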
In addition, Gaussian noise and augmentations such as random translationare evaluated twice, resulting in additional variation. The combination of these effects explainsthe difference between the prediction vectors ziand~zi. This difference can be seen as an error inclassification, given that the original input xiwas the same, and thus minimizing it is a reasonablegoal.In our implementation, the unsupervised loss weighting function w(t)ramps up, starting from zero,along a Gaussian curve during the first 80 training epochs. See Appendix A for further details aboutthis and other training parameters. In the beginning the total loss and the learning gradients are thusdominated by the supervised loss component, i.e., the labeled data only. We have found it to bevery important that the ramp-up of the unsupervised loss component is slow enough—otherwise,the network gets easily stuck in a degenerate solution where no meaningful classification of the datais obtained.Our approach is somewhat similar to the -model of the ladder network by Rasmus et al. (2015), butconceptually simpler. In the -model, the comparison is done directly on network outputs, i.e., aftersoftmax activation, and there is no auxiliary mapping between the two branches such as the learneddenoising functions in the ladder network architecture. Furthermore, instead of having one “clean”and one “corrupted” branch as in -model, we apply equal augmentation and noise to the inputs forboth branches.As shown in Section 3, the -model combined with a good convolutional network architectureprovides a significant improvement over prior art in classification accuracy.2.2 T EMPORAL ENSEMBLINGAnalyzing how the -model works, we could equally well split the evaluation of the two branches intwo separate phases: first classifying the training set once without updating the weights , and thentraining the network on the same inputs under different augmentations and dropout, using the justobtained predictions as targets for the unsupervised loss component. As the training targets obtainedthis way are based on a single evaluation of the network, they can be expected to be noisy. Temporalensembling alleviates this by aggregating the predictions of multiple previous network evaluationsinto an ensemble prediction. It also lets us evaluate the network only once during training, gainingan approximate 2x speedup over the -model.The structure of our temporal ensembling method is shown in Figure 1 (bottom), and the pseudocodein Algorithm 2. The main difference to the -model is that the network and augmentations areevaluated only once per input per epoch, and the target vectors ~zfor the unsupervised loss componentare based on prior network evaluations instead of a second evaluation of the network.After every training epoch, the network outputs ziare accumulated into ensemble outputs ZibyupdatingZi Zi+ (1)zi, whereis a momentum term that controls how far the ensemblereaches into training history. Because of dropout regularization and stochastic augmentation, Zthuscontains a weighted average of the outputs of an ensemble of networks ffrom previous trainingepochs, with recent epochs having larger weight than distant epochs. For generating the trainingtargets ~z, we need to correct for the startup bias in Zby dividing by factor (1t). A similarbias correction has been used in, e.g., Adam (Kingma & Ba, 2014) and mean-only batch normal-ization (Salimans & Kingma, 2016). On the first training epoch, Zand~zare zero as no data fromprevious epochs is available. 
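To see why the correction works, consider the first accumulation after epoch $t = 1$, with $Z$ initialized to zero:

\[
Z \leftarrow \alpha \cdot 0 + (1-\alpha)\,z = (1-\alpha)\,z, \qquad \tilde{z} = \frac{Z}{1-\alpha^{1}} = z,
\]

so the very first targets equal that epoch's predictions exactly, and older epochs are blended in only as $t$ grows.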
For this reason, we specify the unsupervised weight ramp-up function w(t) to also be zero on the first training epoch.

1 Squared difference gave slightly but consistently better results than cross-entropy loss in our tests.

Algorithm 2: Temporal ensembling pseudocode. Note that the updates of Z and ~z could equally well be done inside the minibatch loop; in this pseudocode they occur between epochs for clarity.
Require: x_i = training stimuli
Require: L = set of training input indices with known labels
Require: y_i = labels for labeled inputs i ∈ L
Require: α = ensembling momentum, 0 ≤ α < 1
Require: w(t) = unsupervised weight ramp-up function
Require: f_θ(x) = stochastic neural network with trainable parameters θ
Require: g(x) = stochastic input augmentation function
Z ← 0_[N×C]     ▷ initialize ensemble predictions
~z ← 0_[N×C]    ▷ initialize target vectors
for t in [1, num_epochs] do
  for each minibatch B do
    z_{i∈B} ← f_θ(g(x_{i∈B}, t))    ▷ evaluate network outputs for augmented inputs
    loss ← −(1/|B|) Σ_{i∈(B∩L)} log z_i[y_i]              ▷ supervised loss component
           + w(t) · (1/(C|B|)) Σ_{i∈B} ||z_i − ~z_i||²    ▷ unsupervised loss component
    update θ using, e.g., ADAM    ▷ update network parameters
  end for
  Z ← αZ + (1 − α)z     ▷ accumulate ensemble predictions
  ~z ← Z / (1 − α^t)    ▷ construct target vectors by bias correction
end for
return θ

The benefits of temporal ensembling compared to the Π-model are twofold. First, the training is faster because the network is evaluated only once per input on each epoch. Second, the training targets ~z can be expected to be less noisy than with the Π-model. As shown in Section 3, we indeed obtain somewhat better results with temporal ensembling than with the Π-model in the same number of training epochs. The downside compared to the Π-model is the need to store auxiliary data across epochs, and the new hyperparameter α. While the matrix Z can be fairly large when the dataset contains a large number of items and categories, its elements are accessed relatively infrequently. Thus it can be stored, e.g., in a memory-mapped file.

An intriguing additional possibility of temporal ensembling is collecting other statistics from the network predictions z_i besides the mean. For example, by tracking the second raw moment of the network outputs, we can estimate the variance of each output component z_{i,j}. This makes it possible to reason about the uncertainty of network outputs in a principled way (Gal & Ghahramani, 2016). Based on this information, we could, e.g., place more weight on more certain predictions vs. uncertain ones in the unsupervised loss term. However, we leave the exploration of these avenues as future work.

3 RESULTS

Our network structure is given in Table 5, and the test setup and all training parameters are detailed in Appendix A. We test the Π-model and temporal ensembling in two image classification tasks, CIFAR-10 and SVHN, and report the mean and standard deviation of 10 runs using different random seeds.

Although it is rarely stated explicitly, we believe that our comparison methods do not use input augmentation, i.e., are limited to dropout and other forms of permutation-invariant noise. Therefore we report the error rates without augmentation, unless explicitly stated otherwise. Given that the ability of an algorithm to extract benefit from augmentation is also an important property, we report the classification accuracy using a standard set of augmentations as well. In purely supervised training the de facto standard way of augmenting the CIFAR-10 dataset includes horizontal flips and random translations, while SVHN is limited to random translations.
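Restated in NumPy, Algorithm 2's between-epoch bookkeeping amounts to a two-line update (a sketch; the function name and the (N, C) array layout are our assumptions):

    import numpy as np

    def update_ensemble(Z, z_epoch, t, alpha=0.6):
        # Z and z_epoch are (N, C) arrays holding the running ensemble and the
        # current epoch's predictions; t is the 1-based epoch index.
        Z = alpha * Z + (1.0 - alpha) * z_epoch  # accumulate ensemble predictions
        z_target = Z / (1.0 - alpha ** t)        # bias-corrected training targets
        return Z, z_target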
By using these same augmentations we can compare against the best fully supervised results as well. After all, the fully supervised results should indicate the upper bound of obtainable accuracy.

Table 1: CIFAR-10 results with 4000 labels, averages of 10 runs (4 runs for all labels).

                                                  Error rate (%) with # labels
                                                  4000            All (50000)
Supervised-only                                   35.56 ± 1.59    7.33 ± 0.04
  with augmentation                               34.85 ± 1.65    6.05 ± 0.15
Conv-Large, Γ-model (Rasmus et al., 2015)         20.40 ± 0.47
CatGAN (Springenberg, 2016)                       19.58 ± 0.58
GAN of Salimans et al. (2016)                     18.63 ± 2.32
Π-model                                           16.55 ± 0.29    6.90 ± 0.07
Π-model with augmentation                         12.36 ± 0.31    5.56 ± 0.10
Temporal ensembling with augmentation             12.16 ± 0.24    5.60 ± 0.10

Table 2: SVHN results for 500 and 1000 labels, averages of 10 runs (4 runs for all labels).

                                            Error rate (%) with # labels
Model                                       500            1000           All (73257)
Supervised-only                             35.18 ± 5.61   20.47 ± 2.64   3.05 ± 0.07
  with augmentation                         31.59 ± 3.60   19.30 ± 3.89   2.88 ± 0.03
DGN (Kingma et al., 2014)                                  36.02 ± 0.10
Virtual Adversarial (Miyato et al., 2016)                  24.63
ADGM (Maaløe et al., 2016)                                 22.86
SDGM (Maaløe et al., 2016)                                 16.61 ± 0.24
GAN of Salimans et al. (2016)               18.44 ± 4.8    8.11 ± 1.3
Π-model                                     7.05 ± 0.30    5.43 ± 0.25    2.78 ± 0.03
Π-model with augmentation                   6.65 ± 0.53    4.82 ± 0.17    2.54 ± 0.04
Temporal ensembling with augmentation       5.12 ± 0.13    4.42 ± 0.16    2.74 ± 0.06

3.1 CIFAR-10

CIFAR-10 is a dataset consisting of 32×32 pixel RGB images from ten classes. Table 1 shows a 2.1 percentage point reduction in classification error rate with 4000 labels (400 per class) compared to earlier methods for the non-augmented Π-model.

Enabling the standard set of augmentations further reduces the error rate by 4.2 percentage points to 12.36%. Temporal ensembling is slightly better still at 12.16%, while being twice as fast to train. This small improvement conceals the subtle fact that random horizontal flips need to be done independently for each epoch in temporal ensembling, while the Π-model can randomize once per pair of evaluations, which according to our measurements is 0.5 percentage points better than independent flips.

A principled comparison with Sajjadi et al. (2016b) is difficult for several reasons. They provide results only for a fairly extreme set of augmentations (translations, flipping, rotations, stretching, and shearing) on top of fractional max pooling (Graham, 2014), which introduces random, local stretching inside the network, and is known to improve classification results substantially. They quote an error rate of only 13.60% for supervised-only training with 4000 labels, while our corresponding baseline is 34.85%. This gap indicates a huge benefit from versatile augmentations and fractional max pooling; in fact, their baseline result is already better than any previous semi-supervised results. By enabling semi-supervised learning they achieve a 17% drop in classification error rate (from 13.60% to 11.29%), while we see a much larger relative drop of 65% (from 34.85% to 12.16%).

3.2 SVHN

The street view house numbers (SVHN) dataset consists of 32×32 pixel RGB images of real-world house numbers, and the task is to classify the centermost digit.
In SVHN we chose to use only the5Published as a conference paper at ICLR 2017Table 3: CIFAR-100 results with 10000 labels, averages of 10 runs (4 runs for all labels).Error rate (%) with # labels10000 All (50000)Supervised-only 51:210:33 29:140:25with augmentation 44:560:30 26:420:17-model 43:430:54 29:060:21-model with augmentation 39:190:36 26:320:04Temporal ensembling with augmentation 38.650.51 26.300.15Table 4: CIFAR-100 + Tiny Images results, averages of 10 runs.Error rate (%) with # unlabeledauxiliary inputs from Tiny ImagesRandom 500k Restricted 237k-model with augmentation 25:790:17 25:430:32Temporal ensembling with augmentation 23.620.23 23.790.24official 73257 training examples following Salimans et al. (2016). Even with this choice our errorrate with all labels is only 3:05% without augmentation.Table 2 compares our method to the previous state-of-the-art. With the most commonly used 1000labels we observe an improvement of 2:7percentage points, from 8:11% to5:43% without augmen-tation, and further to 4:42% with standard augmentations.We also investigated the behavior with 500 labels, where we obtained an error rate less than halfof Salimans et al. (2016) without augmentations, with a significantly lower standard deviation aswell. When augmentations were enabled, temporal ensembling further reduced the error rate to5:12%. In this test the difference between -model and temporal ensembling was quite significantat1:5percentage points.In SVHN Sajjadi et al. (2016b) provide results without augmentation, with the caveat that theyuse fractional max pooling, which is a very augmentation-like technique due to the random, localstretching it introduces inside the network. It leads to a superb error rate of 2.28% in supervised-only training, while our corresponding baseline is 3.05% (or 2.88% with translations). Given thatin a separate experiment our network matched the best published result for non-augmented SVHNwhen extra data is used (1.69% from Lee et al. (2015)), this gap is quite surprising, and leads us toconclude that fractional max pooling leads to a powerful augmentation of the dataset, well beyondwhat simple translations can achieve. Our temporal ensembling technique obtains better error ratesfor both 500 and 1000 labels (5.12% and 4.42%, respectively) compared to the 6.03% reported bySajjadi et al. for 732 labels.3.3 CIFAR-100 AND TINYIMAGESThe CIFAR-100 dataset consists of 3232pixel RGB images from a hundred classes. We arenot aware of previous semi-supervised results in this dataset, and chose 10000 labels for our ex-periments. Table 3 shows error rates of 43:43% and38:65% without and with augmentation, re-spectively. These correspond to 7.8 and 5.9 percentage point improvements compared to supervisedlearning with labeled inputs only.We ran two additional tests using unlabeled extra data from Tiny Images dataset (Torralba et al.,2008): one with randomly selected 500k extra images, most not corresponding to any of the CIFAR-100 categories, and another with a restricted set of 237k images from the categories that correspondto those found in the CIFAR-100 dataset (see appendix A for details). The results are shown inTable 4. The addition of randomly selected, unlabeled extra images improved the error rate by 2:7percentage points (from 26:30% to23:63%), indicating a desirable ability to learn from randomnatural images. 
Temporal ensembling benefited much more from the extra data than the -model.Interestingly, restricting the extra data to categories that are present in CIFAR-100 did not improve6Published as a conference paper at ICLR 201701020304050607080901000%20%50%80%90%01020304050607080901000%20%50%80%90%Standard supervisedTemporal ensembling1300Classification accuracy (%)epoch1300epochFigure 2: Percentage of correct SVHN classifications as a function of training epoch when a part ofthe labels is randomized. With standard supervised training (left) the classification accuracy sufferswhen even a small portion of the labels give disinformation, and the situation worsens quickly asthe portion of randomized labels increases to 50% or more. On the other hand, temporal ensembling(right) shows almost perfect resistance to disinformation when half of the labels are random, andretains over ninety percent classification accuracy even when 80% of the labels are random.the classification accuracy further. This indicates that in order to train a better classifier by addingextra data as unlabeled inputs, it is enough to have the extra data roughly in the same space as theactual inputs—in our case, natural images. We hypothesize that it may even be possible to useproperly crafted synthetic data as unlabeled inputs to obtain improved classifiers.In order to keep the training times tolerable, we limited the number of unlabeled inputs to 50k perepoch in these tests, i.e., on every epoch we trained using all 50k labeled inputs from CIFAR-100 and50k additional unlabeled inputs from Tiny Images. The 50k unlabeled inputs were chosen randomlyon each epoch from the 500k or 237k extra inputs. In temporal ensembling, after each epoch weupdated only the rows of Zthat corresponded to inputs used on that epoch.3.4 S UPERVISED LEARNINGWhen all labels are used for traditional supervised training, our network approximately matchesthe state-of-the-art error rate for a single model in CIFAR-10 with augmentation (Lee et al., 2015;Mishkin & Matas, 2016) at 6:05%, and without augmentation (Salimans & Kingma, 2016) at 7:33%.The same is probably true for SVHN as well, but there the best published results rely on extra datathat we chose not to use.Given this premise, it is perhaps somewhat surprising that our methods reduce the error rate alsowhen all labels are used (Tables 1 and 2). We believe that this is an indication that the consis-tency requirement adds a degree of resistance to ambiguous labels that are fairly common in manyclassification tasks, and that it encourages features to be more invariant to stochastic sampling.3.5 T OLERANCE TO INCORRECT LABELSIn a further test we studied the hypothesis that our methods add tolerance to incorrect labels byassigning a random label to a certain percentage of the training set before starting to train. Figure 2shows the classification error graphs for standard supervised training and temporal ensembling.Clearly our methods provide considerable resistance to wrong labels, and we believe this is becausethe unsupervised loss term encourages the mapping function implemented by the network to beflat in the vicinity of all input data points, whereas the supervised loss term enforces the mappingfunction to have a specific value in the vicinity of the labeled input data points. 
This means thateven the wrongly labeled inputs play a role in shaping the mapping function—the unsupervisedloss term smooths the mapping function and thus also the decision boundaries, effectively fusingthe inputs into coherent clusters, whereas the excess of correct labels in each class is sufficient forlocking the clusters to the right output vectors through the supervised loss term. The difference toclassical regularizers is that we induce smoothness only on the manifold of likely inputs instead7Published as a conference paper at ICLR 2017of over the entire input domain. For further analysis about the importance of the gradient of themapping function, see Simard et al. (1998).4 R ELATED WORKThere is a large body of previous work on semi-supervised learning (Zhu, 2005). In here we willconcentrate on the ones that are most directly connected to our work.-model is a subset of a ladder network (Rasmus et al., 2015) that introduces lateral connections intoan encoder-decoder type network architecture, targeted at semi-supervised learning. In -model, allbut the highest lateral connections in the ladder network are removed, and after pruning the un-necessary stages, the remaining network consists of two parallel, identical branches. One of thebranches takes the original training inputs, whereas the other branch is given the same input cor-rupted with noise. The unsupervised loss term is computed as the squared difference between the(pre-activation) output of the clean branch and a denoised (pre-activation) output of the corruptedbranch. The denoised estimate is computed from the output of the corrupted branch using a para-metric nonlinearity that has 10 auxiliary trainable parameters per unit. Our -model differs fromthe-model in removing the parametric nonlinearity and denoising, having two corrupted paths,and comparing the outputs of the network instead of pre-activation data of the final layer.Sajjadi et al. (2016b) recently introduced a new loss function for semi-supervised learning, so calledtransform/stability loss, which is founded on the same principle as our work. During training, theyrun augmentation and network evaluation ntimes for each minibatch, and then compute an unsu-pervised loss term as the sum of all pairwise squared distances between the obtained nnetworkoutputs. As such, their technique follows the general pseudo-ensemble agreement (PEA) regular-ization framework of Bachman et al. (2014). In addition, they employ a mutual exclusivity lossterm (Sajjadi et al., 2016a) that we do not use. Our -model can be seen as a special case of thetransform/stability loss obtained by setting n= 2. The computational cost of training with trans-form/stability loss increases linearly as a function of n, whereas the efficiency of our temporalensembling technique remains constant regardless of how large effective ensemble we obtain via theaveraging of previous epochs’ predictions.In bootstrap aggregating, or bagging , multiple networks are trained independently based on subsetsof training data (Breiman, 1996). This results in an ensemble that is more stable and accuratethan the individual networks. Our approach can be seen as pulling the predictions from an implicitensemble that is based on a single network, and the variability is a result of evaluating it underdifferent dropout and augmentation conditions instead of training on different subsets of data. Inwork parallel to ours, Huang et al. 
(2017) store multiple snapshots of the network during training,hopefully corresponding to different local minima, and use them as an explicit ensemble.The general technique of inferring new labels from partially labeled data is often referred to as boot-strapping orself-training , and it was first proposed by Yarowsky (1995) in the context of linguisticanalysis. Whitney & Sarkar (2012) analyze Yarowsky’s algorithm and propose a novel graph-basedlabel propagation approach. Similarly, label propagation methods (Zhu & Ghahramani, 2002) inferlabels for unlabeled training data by comparing the associated inputs to labeled training inputs usinga suitable distance metric. Our approach differs from this in two important ways. Firstly, we nevercompare training inputs against each other, but instead only rely on the unknown labels remainingconstant, and secondly, we let the network produce the likely classifications for the unlabeled inputsinstead of providing them through an outside process.In addition to partially labeled data, considerable amount of effort has been put into dealing withdensely but inaccurately labeled data. This can be seen as a semi-supervised learning task where partof the training process is to identify the labels that are not to be trusted. For recent work in this area,see, e.g., Sukhbaatar et al. (2014) and Patrini et al. (2016). In this context of noisy labels, Reed et al.(2014) presented a simple bootstrapping method that trains a classifier with the target composed ofa convex combination of the previous epoch output and the known but potentially noisy labels. Ourtemporal ensembling differs from this by taking into account the evaluations over multiple epochs.Generative Adversarial Networks (GAN) have been recently used for semi-supervised learning withpromising results (Maaløe et al., 2016; Springenberg, 2016; Odena, 2016; Salimans et al., 2016). It8Published as a conference paper at ICLR 2017Table 5: The network architecture used in all of our tests.NAME DESCRIPTIONinput 3232RGB imagenoise Additive Gaussian noise = 0:15conv1a 128filters, 33, pad = ’same’, LReLU ( = 0:1)conv1b 128filters, 33, pad = ’same’, LReLU ( = 0:1)conv1c 128filters, 33, pad = ’same’, LReLU ( = 0:1)pool1 Maxpool 22pixelsdrop1 Dropout,p= 0:5conv2a 256filters, 33, pad = ’same’, LReLU ( = 0:1)conv2b 256filters, 33, pad = ’same’, LReLU ( = 0:1)conv2c 256filters, 33, pad = ’same’, LReLU ( = 0:1)pool2 Maxpool 22pixelsdrop2 Dropout,p= 0:5conv3a 512filters, 33, pad = ’valid’, LReLU ( = 0:1)conv3b 256filters, 11, LReLU (= 0:1)conv3c 128filters, 11, LReLU (= 0:1)pool3 Global average pool ( 66!11 pixels)dense Fully connected 128!10output Softmaxcould be an interesting avenue for future work to incorporate a generative component to our solution.We also envision that our methods could be applied to regression-type learning tasks.5 A CKNOWLEDGEMENTSWe thank the anonymous reviewers, Tero Karras, Pekka J ̈anis, Tim Salimans, Ian Goodfellow, aswell as Harri Valpola and his colleagues at Curious AI for valuable suggestions that helped to im-prove this article.REFERENCESPhilip Bachman, Ouais Alsharif, and Doina Precup. Learning with pseudo-ensembles. In Advancesin Neural Information Processing Systems 27 (NIPS) . 2014.Leo Breiman. Bagging predictors. Machine Learning , 24(2), 1996.Sander Dieleman, Jan Schl ̈uter, Colin Raffel, Eben Olson, Søren Kaae Sønderby, et al. Lasagne:First release., 2015.Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing modeluncertainty in deep learning. 
CoRR , abs/1506.02142, 2016.Benjamin Graham. Fractional max-pooling. CoRR , abs/1412.6071, 2014.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassinghuman-level performance on imagenet classification. CoRR , abs/1502.01852, 2015.G. E. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. CoRR ,abs/1503.02531, 2015.Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q. Weinberger. Deep networks withstochastic depth. CoRR , abs/1603.09382, 2016.Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, and Kilian Q. Weinberger.Snapshot Ensembles: Train 1, get M for free. In Proc. International Conference on LearningRepresentations (ICLR) , 2017.9Published as a conference paper at ICLR 2017Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR ,abs/1412.6980, 2014.Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervisedlearning with deep generative models. In Advances in Neural Information Processing Systems 27(NIPS) . 2014.Chen-Yu Lee, Patrick W. Gallagher, and Zhuowen Tu. Generalizing pooling functions in convolu-tional neural networks: Mixed, gated, and tree. CoRR , abs/1509.08985, 2015.Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep gen-erative models. CoRR , abs/1602.05473, 2016.Andrew L Maas, Awni Y Hannun, and Andrew Ng. Rectifier nonlinearities improve neural networkacoustic models. In Proc. International Conference on Machine Learning (ICML) , volume 30,2013.Dmytro Mishkin and Jiri Matas. All you need is a good init. In Proc. International Conference onLearning Representations (ICLR) , 2016.Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. Distributionalsmoothing with virtual adversarial training. In Proc. International Conference on Learning Rep-resentations (ICLR) , 2016.Augustus Odena. Semi-supervised learning with generative adversarial networks. Data EfficientMachine Learning workshop at ICML 2016 , 2016.Giorgio Patrini, Alessandro Rozza, Aditya Menon, Richard Nock, and Lizhen Qu. Making neuralnetworks robust to label noise: a loss correction approach. CoRR , abs/1609.03683, 2016.Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems28 (NIPS) . 2015.Scott E. Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, and An-drew Rabinovich. Training deep neural networks on noisy labels with bootstrapping. CoRR ,abs/1412.6596, 2014.Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. Mutual exclusivity loss for semi-superviseddeep learning. In 2016 IEEE International Conference on Image Processing, ICIP 2016 , pp.1908–1912, 2016a.Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. Regularization with stochastic transfor-mations and perturbations for deep semi-supervised learning. In Advances in Neural InformationProcessing Systems 29 (NIPS) . 2016b.Tim Salimans and Diederik P. Kingma. Weight normalization: A simple reparameterization toaccelerate training of deep neural networks. CoRR , abs/1602.07868, 2016.Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.Improved techniques for training GANs. CoRR , abs/1606.03498, 2016.Patrice Y . Simard, Yann A. LeCun, John S. Denker, and Bernard Victorri. Transformation Invariancein Pattern Recognition — Tangent Distance and Tangent Propagation , pp. 239–274. 
1998.Saurabh Singh, Derek Hoiem, and David A. Forsyth. Swapout: Learning an ensemble of deeparchitectures. CoRR , abs/1605.06465, 2016.Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generativeadversarial networks. In Proc. International Conference on Learning Representations (ICLR) ,2016.Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin A. Riedmiller. Strivingfor simplicity: The all convolutional net. CoRR , abs/1412.6806, 2014.10Published as a conference paper at ICLR 2017Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov.Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine LearningResearch , 15:1929–1958, 2014.Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus. Trainingconvolutional networks with noisy labels. CoRR , abs/1406.2080, 2014.Theano Development Team. Theano: A Python framework for fast computation of mathematicalexpressions. CoRR , abs/1605.02688, May 2016.A. Torralba, R. Fergus, and W. T. Freeman. 80 million tiny images: A large data set for nonpara-metric object and scene recognition. IEEE TPAMI , 30(11):1958–1970, 2008.Li Wan, Matthew Zeiler, Sixin Zhang, Yann L. Cun, and Rob Fergus. Regularization of neuralnetworks using dropconnect. Proc. International Conference on Machine Learning (ICML) , 28(3):1058–1066, 2013.Max Whitney and Anoop Sarkar. Bootstrapping via graph propagation. In Proceedings of the 50thAnnual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1 , ACL’12, 2012.David Yarowsky. Unsupervised word sense disambiguation rivaling supervised methods. In Pro-ceedings of the 33rd Annual Meeting on Association for Computational Linguistics , ACL ’95,1995.Xiaojin Zhu. Semi-supervised learning literature survey. Technical Report 1530, Computer Sci-ences, University of Wisconsin-Madison, 2005.Xiaojin Zhu and Zoubin Ghahramani. Learning from labeled and unlabeled data with label propa-gation. Technical Report CMU-CALD-02-107, Carnegie Mellon University, 2002.A N ETWORK ARCHITECTURE ,TEST SETUP ,AND TRAINING PARAMETERSTable 5 details the network architecture used in all of our tests. It is heavily inspired by ConvPool-CNN-C (Springenberg et al., 2014) and the improvements made by Salimans & Kingma (2016). Alldata layers were initialized following He et al. (2015), and we applied weight normalization andmean-only batch normalization (Salimans & Kingma, 2016) with momentum 0:999to all of them.We used leaky ReLU (Maas et al., 2013) with = 0:1as the non-linearity, and chose to use maxpooling instead of strided convolutions because it gave consistently better results in our experiments.All networks were trained using Adam (Kingma & Ba, 2014) with a maximum learning rate ofmax= 0:003, except for temporal ensembling in the SVHN case where a maximum learning rateofmax= 0:001worked better. Adam momentum parameters were set to 1= 0:9and2= 0:999as suggested in the paper. The maximum value for the unsupervised loss component was set towmaxM=N , whereMis the number of labeled inputs and Nis the total number of training inputs.For-model runs, we used wmax= 100 in all runs except for CIFAR-100 with Tiny Images wherewe setwmax= 300 . For temporal ensembling we used wmax= 30 in most runs. For the corruptedlabel test in Section 3.5 we used wmax= 300 for 0% and 20% corruption, and wmax= 3000 forcorruption of 50% and higher. 
For basic CIFAR-100 runs we used wmax= 100 , and for CIFAR-100with Tiny Images we used wmax= 1000 . The accumulation decay constant of temporal ensemblingwas set to= 0:6in all runs.In all runs we ramped up both the learning rate and unsupervised loss component weight wduringthe first 80 epochs using a Gaussian ramp-up curve exp[5(1T)2], whereTadvances linearlyfrom zero to one during the ramp-up period. In addition to ramp-up, we annealed the learning rateto zero and Adam 1to0:5during the last 50 epochs, but otherwise we did not decay themduring training. The ramp-down curve was similar to the ramp-up curve but time-reversed and witha scaling constant of 12:5instead of 5. All networks were trained for 300 epochs with minibatchsize of 100.11Published as a conference paper at ICLR 2017CIFAR-10 Following previous work in fully supervised learning, we pre-processed the images us-ing ZCA and augmented the dataset using horizontal flips and random translations. The translationswere drawn from [2;2]pixels, and were independently applied to both branches in the -model.SVHN We pre-processed the input images by biasing and scaling each input image to zero meanand unit variance. We used only the 73257 items in the official training set, i.e., did not use theprovided 531131 extra items. The training setups were otherwise similar to CIFAR-10 except thathorizontal flips were not used.Implementation Our implementation is written in Python using Theano (TheanoDevelopment Team, 2016) and Lasagne (Dieleman et al., 2015), and is available athttps://github.com/smlaine2/tempens .Model convergence As discussed in Section 2.1, a slow ramp-up of the unsupervised cost is veryimportant for getting the models to converge. Furthermore, in our very preliminary tests with 250labels in SVHN we noticed that optimization tended to explode during the ramp-up period, and weeventually found that using a lower value for Adam 2parameter (e.g., 0:99instead of 0:999) seemsto help in this regard.We do not attempt to guarantee that the occurrence of labeled inputs during training would be some-how stratified; with bad luck there might be several consecutive minibatches without any labeledinputs when the label density is very low. Some previous work has identified this as a weakness, andhave solved the issue by shuffling the input sequences in such a way that stratification is guaranteed,e.g. Rasmus et al. (2015) (confirmed from the authors). This kind of stratification might furtherimprove the convergence of our methods as well.Tiny Images, extra data from restricted categories The restricted extra data in Section 3.3 wasextracted from Tiny Images by picking all images with labels corresponding to the 100 categoriesused in CIFAR-100. As the Tiny Images dataset does not contain CIFAR-100 categories aquar-iumfishandmaple tree, we used images with labels fishandmaple instead. The result was a totalof 237 203 images that were used as unlabeled extra data. Table 6 shows the composition of thisextra data set.It is worth noting that the CIFAR-100 dataset itself is a subset of Tiny Images, and we did notexplicitly prevent overlap between this extra set and CIFAR-100. This led to approximately a thirdof the CIFAR-100 training and test images being present as unlabeled inputs in the extra set. 
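Returning to the schedules described at the start of this appendix, the Gaussian ramp-up and ramp-down curves can be written down directly. This is a sketch under our own epoch-indexing convention; the defaults reflect the 80-epoch ramp-up, the final 50-epoch ramp-down, and the 300-epoch training runs reported above.

    import numpy as np

    def rampup(epoch, ramp_len=80):
        # exp(-5 (1 - T)^2) with T advancing linearly 0 -> 1 over ramp_len epochs;
        # the unsupervised weight is then w(t) = w_max * (M / N) * rampup(t).
        if epoch >= ramp_len:
            return 1.0
        T = epoch / ramp_len
        return float(np.exp(-5.0 * (1.0 - T) ** 2))

    def rampdown(epoch, num_epochs=300, ramp_len=50):
        # Time-reversed ramp with scaling constant 12.5, applied over the
        # last ramp_len epochs to the learning rate and Adam beta_1.
        if epoch < num_epochs - ramp_len:
            return 1.0
        T = (epoch - (num_epochs - ramp_len)) / ramp_len
        return float(np.exp(-12.5 * T ** 2))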
Theother test with 500k extra entries picked randomly out of all 79 million images had a negligibleoverlap with CIFAR-100.12Published as a conference paper at ICLR 2017Table 6: The Tiny Images (Torralba et al., 2008) labels and image counts used in the CIFAR-100plus restricted extra data tests (rightmost column of Table 4). Note that the extra input images weresupplied as unlabeled data for our networks, and the labels were used only for narrowing down thefull set of 79 million images.Label # Label # Label # Label #apple 2242 baby 2771 bear 2242 beaver 2116bed 2767 bee 2193 beetle 2173 bicycle 2599bottle 2212 bowl 2707 boy 2234 bridge 2274bus 3068 butterfly 3036 camel 2121 can 2461castle 3094 caterpillar 2382 cattle 2089 chair 2552chimpanzee 1706 clock 2375 cloud 2390 cockroach 2318couch 2171 crab 2735 crocodile 2712 cup 2287dinosaur 2045 dolphin 2504 elephant 2794 fish3082flatfish 1504 forest 2244 fox 2684 girl 2204hamster 2294 house 2320 kangaroo 2563 keyboard 1948lamp 2242 lawn mower 1929 leopard 2139 lion 3045lizard 2130 lobster 2136 man 2248 maple2149motorcycle 2168 mountain 2249 mouse 2128 mushroom 2390oaktree 1995 orange 2650 orchid 1902 otter 2073palm tree 2107 pear 2120 pickup truck 2478 pine tree 2341plain 2198 plate 3109 poppy 2730 porcupine 1900possum 2008 rabbit 2408 raccoon 2587 ray 2564road 2862 rocket 2180 rose 2237 sea 2122seal 2159 shark 2157 shrew 1826 skunk 2450skyscraper 2298 snail 2369 snake 2989 spider 3024squirrel 2374 streetcar 1905 sunflower 2761 sweet pepper 1983table 3137 tank 1897 telephone 1889 television 2973tiger 2603 tractor 1848 train 3020 trout 2726tulip 2160 turtle 2438 wardrobe 2029 whale 2597willow tree 2040 wolf 2423 woman 2446 worm 294513
Published as a conference paper at ICLR 2017GENERATIVE MULTI -ADVERSARIAL NETWORKSIshan Durugkar, Ian Gemp, Sridhar MahadevanCollege of Information and Computer SciencesUniversity of Massachusetts, AmherstAmherst, MA 01060, USAfidurugkar,imgemp,mahadeva g@cs.umass.eduABSTRACTGenerative adversarial networks (GANs) are a framework for producing a gen-erative model by way of a two-player minimax game. In this paper, we proposetheGenerative Multi-Adversarial Network (GMAN), a framework that extendsGANs to multiple discriminators. In previous work, the successful training ofGANs requires modifying the minimax objective to accelerate training early on.In contrast, GMAN can be reliably trained with the original, untampered objec-tive. We explore a number of design perspectives with the discriminator role rang-ing from formidable adversary to forgiving teacher. Image generation tasks com-paring the proposed framework to standard GANs demonstrate GMAN produceshigher quality samples in a fraction of the iterations when measured by a pairwiseGAM-type metric.1 I NTRODUCTIONGenerative adversarial networks (Goodfellow et al. (2014)) (GANs) are a framework for producinga generative model by way of a two-player minimax game. One player, the generator, attempts togenerate realistic data samples by transforming noisy samples, z, drawn from a simple distribution(e.g.,zN (0;1)) using a transformation function G(z)with learned weights, . The generatorreceives feedback as to how realistic its synthetic sample is from another player, the discriminator,which attempts to discern between synthetic data samples produced by the generator and samplesdrawn from an actual dataset using a function D!(x)with learned weights, !.The GAN framework is one of the more recent successes in a line of research on adversarial train-ing in machine learning (Schmidhuber (1992); Bagnell (2005); Ajakan et al. (2014)) where gamesbetween learners are carefully crafted so that Nash equilibria coincide with some set of desired op-timality criteria. Preliminary work on GANs focused on generating images (e.g., MNIST (LeCunet al. (1998)), CIFAR (Krizhevsky (2009))), however, GANs have proven useful in a variety of appli-cation domains including learning censored representations (Edwards & Storkey (2015)), imitatingexpert policies (Ho & Ermon (2016)), and domain transfer (Yoo et al. (2016)). Work extendingGANs to semi-supervised learning (Chen et al. (2016); Mirza & Osindero (2014); Gauthier (2014);Springenberg (2015)), inference (Makhzani et al. (2015); Dumoulin et al. (2016)), feature learning(Donahue et al. (2016)), and improved image generation (Im et al. (2016); Denton et al. (2015);Radford et al. (2015)) have shown promise as well.Despite these successes, GANs are reputably difficult to train. While research is still underway toimprove training techniques and heuristics (Salimans et al. (2016)), most approaches have focusedon understanding and generalizing GANs theoretically with the aim of exploring more tractableformulations (Zhao et al. (2016); Li et al. (2015); Uehara et al. (2016); Nowozin et al. (2016)).In this paper, we theoretically and empirically justify generalizing the GAN framework to multiplediscriminators. We review GANs and summarize our extension in Section 2. In Sections 3 and 4,we present our N-discriminator extension to the GAN framework ( Generative Multi-AdversarialNetworks ) with several variants which range the role of the discriminator from formidable adversaryto forgiving teacher. 
Section 4.2 explains how this extension makes training with the untampered minimax objective tractable. In Section 5, we define an intuitive metric (GMAM) to quantify GMAN performance and evaluate our framework on a variety of image generation tasks. Section 6 concludes with a summary of our contributions and directions for future research.

Contributions — To summarize, our main contributions are: i) a multi-discriminator GAN framework, GMAN, that allows training with the original, untampered minimax objective; ii) a generative multi-adversarial metric (GMAM) to perform pairwise evaluation of separately trained frameworks; iii) a particular instance of GMAN, GMAN*, that allows the generator to automatically regulate training and reach higher performance (as measured by GMAM) in a fraction of the training time required for the standard GAN model.

2 GENERATIVE ADVERSARIAL NETWORKS TO GMAN

The original formulation of a GAN is a minimax game between a generator, G_\theta(z): z -> x, and a discriminator, D_\omega(x): x -> [0, 1],

\min_G \max_{D \in \mathcal{D}} V(D,G) = \mathbb{E}_{x \sim p_{data}(x)}\big[\log(D(x))\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log(1 - D(G(z)))\big],   (1)

where p_{data}(x) is the true data distribution and p_z(z) is a simple (usually fixed) distribution that is easy to draw samples from (e.g., N(0, 1)). We differentiate between the function space of discriminators, \mathcal{D}, and elements of this space, D. Let p_G(x) be the distribution induced by the generator, G(z). We assume D, G to be deep neural networks as is typically the case.

In their original work, Goodfellow et al. (2014) proved that given sufficient network capacities and an oracle providing the optimal discriminator, D* = argmax_D V(D,G), gradient descent on p_G(x) will recover the desired globally optimal solution, p_G(x) = p_{data}(x), so that the generator distribution exactly matches the data distribution. In practice, they replaced the second term, log(1 - D(G(z))), with -log(D(G(z))) to enhance gradient signals at the start of the game; note this is no longer a zero-sum game. Part of their convergence and optimality proof involves using the oracle, D*, to reduce the minimax game to a minimization over G only:

\min_G V(D^*, G) = \min_G \big\{ C(G) = -\log(4) + 2\,JSD(p_{data} \,\|\, p_G) \big\}   (2)

where JSD denotes Jensen-Shannon divergence. Minimizing C(G) necessarily minimizes JSD; however, we rarely know D* and so we instead minimize V(D,G), which is only a lower bound.

This perspective of minimizing the distance between the distributions, p_{data} and p_G, motivated Li et al. (2015) to develop a generative model that matches all moments of p_G(x) with p_{data}(x) (at optimality) by minimizing maximum mean discrepancy (MMD). Another approach, EBGAN (Zhao et al. (2016)), explores a larger class of games (non-zero-sum games) which generalize the generator and discriminator objectives to take real-valued "energies" as input instead of probabilities. Nowozin et al. (2016) and then Uehara et al. (2016) extended the JSD perspective on GANs to more general divergences, specifically f-divergences and then Bregman divergences, respectively.

In general, these approaches focus on exploring fundamental reformulations of V(D,G). Similarly, our work focuses on a fundamental reformulation; however, our aim is to provide a framework that accelerates training of the generator to a more robust state irrespective of the choice of V.

2.1 GMAN: A MULTI-ADVERSARIAL EXTENSION

We propose introducing multiple discriminators, which brings with it a number of design possibilities (a schematic sketch of the aggregated generator objective follows below).
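As a concrete illustration, the following is a minimal NumPy sketch (ours, not the authors' released code) of how the generator's loss can be assembled from N discriminator outputs; F := max recovers the harshest critic considered next in Section 3, while F := mean treats the discriminators as an ensemble.

```python
import numpy as np

def generator_objective(d_fake, mode="mean"):
    """Aggregate per-discriminator values for the generator's objective.

    d_fake: array of shape (N, batch) holding D_i(G(z)) in (0, 1).
    mode="max" trains G against the harshest critic; mode="mean"
    trains it against an ensemble. A schematic sketch only.
    """
    # Component of V(D_i, G) relevant to G under the original
    # (untampered) minimax objective: E[log(1 - D_i(G(z)))].
    v = np.log(1.0 - d_fake).mean(axis=1)  # shape (N,)
    return v.max() if mode == "max" else v.mean()
```

Note that only the generator sees the aggregated value; each D_i is still trained to maximize its own V(D_i, G) independently.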
We explore approaches ranging between two extremes: 1) a more discriminating D (better approximating max_D V(D,G)) and 2) a D better matched to the generator's capabilities. Mathematically, we reformulate G's objective as min_G max F(V(D_1,G), ..., V(D_N,G)) for different choices of F (see Figure 1). Each D_i is still expected to independently maximize its own V(D_i,G) (i.e., no cooperation). We sometimes abbreviate V(D_i,G) with V_i and F(V_1, ..., V_N) with F_G(V_i).

3 A FORMIDABLE ADVERSARY

Here, we consider multi-discriminator variants that attempt to better approximate max_D V(D,G), providing a harsher critic to the generator.

Figure 1: (GMAN) The generator trains using feedback aggregated over multiple discriminators. If F := max, G trains against the best discriminator. If F := mean, G trains against an ensemble. We explore other alternatives to F in Sections 4.1 & 4.4 that improve on both these options.

3.1 MAXIMIZING V(D,G)

For a fixed G, maximizing F_G(V_i) with F := max and N randomly instantiated copies of our discriminator is functionally equivalent to optimizing V (e.g., by stochastic gradient ascent) with random restarts in parallel and then presenting max_{i in {1,...,N}} V(D_i,G) as the loss to the generator — a very pragmatic approach to the difficulties presented by the non-convexity of V caused by the deep net. Requiring the generator to minimize the max forces G to generate high-fidelity samples that must hold up under the scrutiny of all N discriminators, each potentially representing a distinct max.

In practice, max_{D_i in D} V(D_i,G) is not performed to convergence (or global optimality), so the above problem is oversimplified. Furthermore, introducing N discriminators affects the dynamics of the game, which affects the trajectories of the discriminators. This prevents us from claiming max{V_1(t), ..., V_N(t)} > max{V_1'(t)} for all t even if we initialize D_1(0) = D_1'(0), as it is unlikely that D_1(t) = D_1'(t) at some time t after the start of the game.

3.2 BOOSTING

We can also consider taking the max over N discriminators as a form of boosting for the discriminator's online classification problem (online because G can produce an infinite data stream). The boosted discriminator is given a sample x_t and must predict whether it came from the generator or the dataset. The booster then makes its prediction using the predictions of the N weaker D_i.

There are a few differences between taking the max (case 1) and online boosting (case 2). In case 1, our booster is limited to selecting a single weak discriminator (i.e., a pure strategy), while in case 2, many boosting algorithms more generally use linear combinations of the discriminators. Moreover, in case 2, a booster must make a prediction before receiving a loss function. In case 1, we assume access to the loss function at prediction time, which allows us to compute the max.

It is possible to train the weak discriminators using boosting and then ignore the booster's prediction by instead presenting max{V_i}. We explore both variants in our experiments, using the adaptive algorithm proposed in Beygelzimer et al. (2015). Unfortunately, boosting failed to produce promising results on the image generation tasks. It is possible that boosting produces too strong an adversary for learning, which motivates the next section. Boosting results appear in Appendix A.7.

4 A FORGIVING TEACHER

The previous perspectives focus on improving the discriminator with the goal of presenting a better approximation of max_D V(D,G) to the generator.
Our next perspective asks the question, "Is max_D V(D,G) too harsh a critic?"

4.1 Soft-DISCRIMINATOR

In practice, training against a far superior discriminator can impede the generator's learning. This is because the generator is unlikely to generate any samples considered "realistic" by the discriminator's standards, and so the generator will receive uniformly negative feedback. This is problematic because the information contained in the gradient derived from negative feedback only dictates where to drive down p_G(x), not specifically where to increase p_G(x). Furthermore, driving down p_G(x) necessarily increases p_G(x) in other regions of X (to maintain \int_X p_G(x) = 1), which may or may not contain samples from the true dataset (the whack-a-mole dilemma). In contrast, a generator is more likely to see positive feedback against a more lenient discriminator, which may better guide a generator towards amassing p_G(x) in approximately correct regions of X.

For this reason, we explore a variety of functions that allow us to soften the max operator. We choose to focus on soft versions of the three classical Pythagorean means parameterized by \lambda, where \lambda = 0 corresponds to the mean and the max is recovered as \lambda \to \infty:

AM_{soft}(V, \lambda) = \sum_i^N w_i V_i   (3)

GM_{soft}(V, \lambda) = -\exp\Big(\sum_i^N w_i \log(-V_i)\Big)   (4)

HM_{soft}(V, \lambda) = \Big(\sum_i^N w_i V_i^{-1}\Big)^{-1}   (5)

where w_i = e^{\lambda V_i} / \sum_j e^{\lambda V_j} with \lambda \geq 0, V_i < 0. Using a softmax also has the well-known advantage of being differentiable (as opposed to subdifferentiable for max). Note that we only require continuity to guarantee that computing the softmax is actually equivalent to computing V(\tilde{D}, G) where \tilde{D} is some convex combination of the D_i (see Appendix A.5).

4.2 USING THE ORIGINAL MINIMAX OBJECTIVE

To illustrate the effect the softmax has on training, observe that the component of AM_{soft}(V, 0) relevant to generator training can be rewritten as

\frac{1}{N}\sum_i^N \mathbb{E}_{x \sim p_G(x)}\big[\log(1 - D_i(x))\big] = \frac{1}{N}\,\mathbb{E}_{x \sim p_G(x)}\big[\log(z)\big],   (6)

where z = \prod_i^N (1 - D_i(x)). Note that the generator gradient, |\partial \log(z) / \partial z|, is minimized at z = 1 over z \in (0, 1] (footnote 1). From this form, it is clear that z = 1 if and only if D_i = 0 for all i, so G only receives a vanishing gradient if all D_i agree that the sample is fake; this is especially unlikely for large N. In other words, G only needs to fool a single D_i to receive constructive feedback. This result allows the generator to successfully minimize the original generator objective, log(1 - D). This is in contrast to the more popular -log(D) introduced to artificially enhance gradients at the start of training.

At the beginning of training, when max_{D_i} V(D_i,G) is likely too harsh a critic for the generator, we can set \lambda closer to zero to use the mean, increasing the odds of providing constructive feedback to the generator. In addition, the discriminators have the added benefit of functioning as an ensemble, reducing the variance of the feedback presented to the generator, which is especially important when the discriminators are far from optimal and are still learning a reasonable decision boundary. As training progresses and the discriminators improve, we can increase \lambda to become more critical of the generator for more refined training.

4.3 MAINTAINING MULTIPLE HYPOTHESES

We argue for this ensemble approach on a more fundamental level as well. Here, we draw on the density ratio estimation perspective of GANs (Uehara et al. (2016)). The original GAN proof assumes we have access to p_{data}(x), if only implicitly. In most cases of interest, the discriminator only has access to a finite dataset sampled from p_{data}(x); therefore, when computing expectations of V(D,G), we only draw samples from our finite dataset.
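Before continuing, here is a small numerical sketch (ours, with hypothetical names) of the soft Pythagorean means in Eqs. (3)-(5); it assumes the paper's convention V_i < 0 and lambda >= 0.

```python
import numpy as np

def soft_means(v, lam):
    """Soft arithmetic/geometric/harmonic means of Eqs. (3)-(5).

    v: length-N array of values V_i (all negative, per the paper).
    lam: lambda >= 0; lam = 0 gives the classical means, and lam -> inf
    recovers max_i V_i, since the weights concentrate on the largest V_i.
    """
    w = np.exp(lam * v)
    w /= w.sum()                          # w_i = e^{lam V_i} / sum_j e^{lam V_j}
    am = np.dot(w, v)                     # Eq. (3)
    gm = -np.exp(np.dot(w, np.log(-v)))   # Eq. (4)
    hm = 1.0 / np.dot(w, 1.0 / v)         # Eq. (5)
    return am, gm, hm

v = np.array([-2.0, -0.5, -0.1])
print(soft_means(v, 0.0))    # plain AM/GM/HM
print(soft_means(v, 50.0))   # all three approach max(v) = -0.1
```

With lambda = 0 all three reduce to their classical means; raising lambda moves each toward the max, matching the curriculum described in Section 4.2.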
Training on such a finite dataset is equivalent to training a GAN with p_{data}(x) = \tilde{p}_{data}, a distribution consisting of point masses on all the data points in the dataset. For the sake of argument, let us assume we are training a discriminator and generator, each with infinite capacity. In this case, the global optimum (p_G(x) = \tilde{p}_{data}(x)) fails to capture any of the interesting structure from p_{data}(x), the true distribution we are trying to learn. Therefore, it is actually critical that we avoid this global optimum.

(Footnote 1: \nabla_G V = -\sum_i \frac{1}{z}\frac{\partial D_i}{\partial G}\prod_{j \neq i}(1 - D_j) = -\frac{1}{z}\frac{\partial D_k}{\partial G} for D_k = 1, D_{\neq k} = 0. Our argument ignores \partial D_k / \partial G.)

Figure 2: Consider a dataset consisting of the nine 1-dimensional samples in black. Their corresponding probability mass function is given in light gray. After training GMAN, three discriminators converge to distinct local optima which implicitly define distributions over the data (red, blue, yellow). Each discriminator may specialize in discriminating a region of the data space (placing more diffuse mass in other regions). Averaging over the three discriminators results in the distribution in black, which we expect has higher likelihood under reasonable assumptions on the structure of the true distribution.

In practice, this degenerate result is avoided by employing learners with limited capacity and corrupting data samples with noise (i.e., dropout), but we might better accomplish this by simultaneously training a variety of limited-capacity discriminators. With this approach, we might obtain a diverse set of seemingly tenable hypotheses for the true p_{data}(x). Averaging over these multiple locally optimal discriminators increases the entropy of \tilde{p}_{data}(x) by diffusing the probability mass over the data space (see Figure 2 for an example).

4.4 AUTOMATING REGULATION

The problem of keeping the discriminator and generator in balance has been widely recognized in previous work with GANs. Issues with unstable dynamics, oscillatory behavior, and generator collapse are not uncommon. In addition, the discriminator is oftentimes able to achieve a high degree of classification accuracy (producing a single scalar) before the generator has made sufficient progress on the arguably more difficult generative task (producing a high-dimensional sample). Salimans et al. (2016) suggested label smoothing to reduce the vulnerability of the generator to a relatively superior discriminator. Here, we explore an approach that enables the generator to automatically temper the performance of the discriminator when necessary, but still encourages the generator to challenge itself against more accurate adversaries. Specifically, we augment the generator objective:

\min_{G,\, \lambda > 0} F_G(V_i) - f(\lambda)   (7)

where f(\lambda) is monotonically increasing in \lambda, which appears in the softmax equations (3)-(5). In experiments, we simply set f(\lambda) = c\lambda with c a constant (e.g., 0.001). The generator is incentivized to increase \lambda to reduce its objective at the expense of competing against the best available adversary D* (see Appendix A.6).

5 EVALUATION

Evaluating GANs is still an open problem. In their original work, Goodfellow et al. (2014) report log-likelihood estimates from Gaussian Parzen windows, which, they admit, have high variance and are known not to perform well in high dimensions. Theis et al. (2016) recommend avoiding Parzen windows and argue that generative models should be evaluated with respect to their intended application. Salimans et al. (2016) suggest an Inception score; however, it assumes labels exist for the dataset. Recently, Im et al. (2016) introduced the Generative Adversarial Metric (GAM) for making pairwise comparisons between independently trained GAN models. The core idea behind their approach is that, given two generator-discriminator pairs (G_1, D_1) and (G_2, D_2), we should be able to learn their relative performance by judging each generator under the opponent's discriminator.

5.1 METRIC

In GMAN, the opponent may have multiple discriminators, which makes it unclear how to perform the swaps needed for GAM. We introduce a variant of GAM, the generative multi-adversarial metric (GMAM), that is amenable to training with multiple discriminators:

GMAM = \log\Big(\frac{F^a_{G_b}(V^a_i)}{F^a_{G_a}(V^a_i)} \Big/ \frac{F^b_{G_a}(V^b_i)}{F^b_{G_b}(V^b_i)}\Big),   (8)

where a and b refer to the two GMAN variants (see Section 3 for the notation F_G(V_i)). The idea here is similar. If G_2 performs better than G_1 with respect to both D_1 and D_2, then GMAM > 0 (remember V <= 0 always). If G_1 performs better in both cases, GMAM < 0; otherwise, the result is indeterminate.

5.2 EXPERIMENTS

We evaluate the aforementioned variations of GMAN on a variety of image generation tasks: MNIST (LeCun et al. (1998)), CIFAR-10 (Krizhevsky (2009)) and CelebA (Liu et al. (2015)). We focus on rates of convergence to steady state along with quality of the steady-state generator according to the GMAM metric. To summarize, loosely in order of increasing discriminator leniency, we compare

- F-boost: A single AdaBoost.OL-boosted discriminator (see Appendix A.7).
- P-boost: D_i is trained according to AdaBoost.OL. A max over the weak learner losses is presented to the generator instead of the boosted prediction (see Appendix A.7).
- GMAN-max: max{V_i} is presented to the generator.
- GAN: Standard GAN with a single discriminator (see Appendix A.2).
- mod-GAN: GAN with modified objective (generator minimizes -log(D(G(z)))).
- GMAN-\lambda: GMAN with F := arithmetic softmax with parameter \lambda.
- GMAN*: The arithmetic softmax is controlled by the generator through \lambda.

All generator and discriminator models are deep (de)convolutional networks (Radford et al. (2015)), and aside from the boosted variants, all are trained with Adam (Kingma & Ba (2014)) and batch normalization (Ioffe & Szegedy (2015)). Discriminators convert the real-valued outputs of their networks to probabilities with squashed sigmoids, \delta + \frac{1-2\delta}{1+e^{-z}}, to prevent saturating logarithms in the minimax objective. See Appendix A.8 for further details. We test GMAN systems with N = {2, 5} discriminators. We maintain discriminator diversity by varying dropout and network depth.

5.2.1 MNIST

Figure 3 reveals that increasing the number of discriminators reduces the number of iterations to steady state by 2x on MNIST; increasing N (the size of the discriminator ensemble) also has the added benefit of reducing the variance of the minimax objective over runs. Figure 4 displays the variance of the same objective over a sliding time window, reaffirming GMAN's acceleration to steady state. Figure 5 corroborates this conclusion, with recognizable digits appearing approximately an epoch before the single-discriminator run; digits at steady state appear slightly sharper as well.

Our GMAM metric (see Table 1) agrees with the relative quality of images in Figure 5, with GMAN* achieving the best overall performance.
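A minimal sketch of Eq. (8) follows; the argument names are our own labeling of the four swapped evaluations, not notation from the paper.

```python
import numpy as np

def gmam(F_a_Gb, F_a_Ga, F_b_Ga, F_b_Gb):
    """Generative multi-adversarial metric of Eq. (8).

    F_x_Gy: variant x's aggregate F, evaluated with x's discriminators
    judging samples from variant y's generator. All values are negative
    since V <= 0; the sign of the result orders the two generators as
    described in Section 5.1.
    """
    return np.log((F_a_Gb / F_a_Ga) / (F_b_Ga / F_b_Gb))
```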
Figure 6 reveals GMAN*'s attempt to regulate the difficulty of the game to accelerate learning. Figure 7 displays the GMAM scores comparing fixed \lambda's to the variable \lambda controlled by GMAN*.

Table 1: Pairwise GMAM metric means with stdev for select models on MNIST. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each variant's column.

Score   | Variant   | GMAN*         | GMAN-0        | GMAN-max      | mod-GAN
0.127   | GMAN*     | -             | -0.020±0.009  | -0.028±0.019  | -0.089±0.036
0.007   | GMAN-0    | 0.020±0.009   | -             | -0.013±0.015  | -0.018±0.027
-0.034  | GMAN-max  | 0.028±0.019   | 0.013±0.015   | -             | -0.011±0.024
-0.122  | mod-GAN   | 0.089±0.036   | 0.018±0.027   | 0.011±0.024   | -

Figure 3: Generator objective, F, averaged over 5 training runs on MNIST. Increasing the number of discriminators accelerates convergence of F to steady state (solid line) and reduces its variance, σ² (filled shadow ±1σ). Figure 4 provides alternative evidence of GMAN*'s accelerated convergence.

Figure 4: Stdev, σ, of the generator objective over a sliding window of 500 iterations. Lower values indicate a more steady state. GMAN* with N = 5 achieves steady state at roughly 2x the speed of GAN (N = 1). Note that Figure 3's filled shadows reveal the stdev of F over runs, while this plot shows stdev over time.

Figure 5: Comparison of image quality across epochs for N = {1, 2, 5} using GMAN-0 on MNIST.

Figure 6: GMAN* regulates the difficulty of the game by adjusting \lambda. Initially, G reduces \lambda to ease learning and then gradually increases \lambda for a more challenging learning environment.

Figure 7: Pairwise GMAM ± stdev(GMAM) for GMAN-\lambda and GMAN* (\lambda*) over 5 runs on MNIST (N = 5).

Score   | Variant      | \lambda*      | \lambda = 1   | \lambda = 0
0.028   | \lambda*     | -             | -0.008±0.009  | -0.019±0.010
0.001   | \lambda = 1  | 0.008±0.009   | -             | -0.008±0.010
-0.025  | \lambda = 0  | 0.019±0.010   | 0.008±0.010   | -

5.2.2 CELEBA & CIFAR-10

We see similar accelerated convergence behavior for the CelebA dataset in Figure 8.

Figure 8: Image quality improvement across number of generators at the same number of iterations for GMAN-0 on CelebA.

Figure 9 displays images generated by GMAN-0 on CIFAR-10. See Appendix A.3 for more results.

Figure 9: Images generated by GMAN-0 on the CIFAR-10 dataset.

We also found that GMAN* is robust to mode collapse. We believe this is because the generator must appease a diverse set of discriminators in each minibatch. Emitting a single sample will score well for one discriminator at the expense of the rest of the discriminators. Current solutions (e.g., minibatch discrimination) are quadratic in batch size. GMAN*, however, is linear in batch size.

6 CONCLUSION

We introduced multiple discriminators into the GAN framework and explored discriminator roles ranging from a formidable adversary to a forgiving teacher. Allowing the generator to automatically tune its learning schedule (GMAN*) outperformed GANs with a single discriminator on MNIST. In general, GMAN variants achieved faster convergence to a higher quality steady state on a variety of tasks as measured by a GAM-type metric (GMAM). In addition, GMAN makes using the original GAN objective possible by increasing the odds of the generator receiving constructive feedback.

In future work, we will look at more sophisticated mechanisms for letting the generator control the game as well as other ways to ensure diversity among the discriminators. Introducing multiple generators is conceptually an obvious next step; however, we expect difficulties to arise from more complex game dynamics.
For this reason, game theory and game design will likely be important.ACKNOWLEDGMENTSWe acknowledge helpful conversations with Stefan Dernbach, Archan Ray, Luke Vilnis, Ben Turtel,Stephen Giguere, Rajarshi Das, and Subhransu Maji. We also thank NVIDIA for donating a K40GPU. This material is based upon work supported by the National Science Foundation under GrantNos. IIS-1564032. Any opinions, findings, and conclusions or recommendations expressed in thismaterial are those of the authors and do not necessarily reflect the views of the NSF.8Published as a conference paper at ICLR 2017BIBLIOGRAPHYMartın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg SCorrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machinelearning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467 , 2016.Hana Ajakan, Pascal Germain, Hugo Larochelle, Franc ̧ois Laviolette, and Mario Marchand.Domain-adversarial neural networks. arXiv preprint arXiv:1412.4446 , 2014.J Andrew Bagnell. Robust supervised learning. In Proceedings Of The National Conference OnArtificial Intelligence , volume 20, pp. 714. Menlo Park, CA; Cambridge, MA; London; AAAIPress; MIT Press; 1999, 2005.Alina Beygelzimer, Satyen Kale, and Haipeng Luo. Optimal and adaptive algorithms for onlineboosting. arXiv preprint arXiv:1502.02651 , 2015.Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Info-gan: Interpretable representation learning by information maximizing generative adversarial nets.arXiv preprint arXiv:1606.03657 , 2016.Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using alaplacian pyramid of adversarial networks. In Advances in neural information processing systems ,pp. 1486–1494, 2015.Jeff Donahue, Philipp Kr ̈ahenb ̈uhl, and Trevor Darrell. Adversarial feature learning. arXiv preprintarXiv:1605.09782 , 2016.Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropi-etro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704 ,2016.Harrison Edwards and Amos Storkey. Censoring representations with an adversary. arXiv preprintarXiv:1511.05897 , 2015.Jon Gauthier. Conditional generative adversarial nets for convolutional face generation. ClassProject for Stanford CS231N: Convolutional Neural Networks for Visual Recognition, Wintersemester , 2014, 2014.Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair,Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Infor-mation Processing Systems , pp. 2672–2680, 2014.Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. arXiv preprintarXiv:1606.03476 , 2016.Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating imageswith recurrent adversarial networks. arXiv preprint arXiv:1602.05110 , 2016.Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training byreducing internal covariate shift. arXiv preprint arXiv:1502.03167 , 2015.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprintarXiv:1412.6980 , 2014.Alex Krizhevsky. Learning multiple layers of features from tiny images. Master’s Thesis , 2009.Yann LeCun, Corinna Cortes, and Christopher JC Burges. The mnist database of handwritten digits,1998.Yujia Li, Kevin Swersky, and Richard Zemel. Generative moment matching networks. 
In Interna-tional Conference on Machine Learning , pp. 1718–1727, 2015.Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild.InProceedings of International Conference on Computer Vision (ICCV) , December 2015.9Published as a conference paper at ICLR 2017Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. Adversarial autoencoders.arXiv preprint arXiv:1511.05644 , 2015.Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprintarXiv:1411.1784 , 2014.Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural samplersusing variational divergence minimization. arXiv preprint arXiv:1606.00709 , 2016.Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deepconvolutional generative adversarial networks. arXiv preprint arXiv:1511.06434 , 2015.Siamak Ravanbakhsh, Francois Lanusse, Rachel Mandelbaum, Jeff Schneider, and Barnabas Poczos.Enabling dark energy science with deep generative models of galaxy images. arXiv preprintarXiv:1609.05796 , 2016.Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.Improved techniques for training gans. arXiv preprint arXiv:1606.03498 , 2016.J ̈urgen Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation ,4(6):863–879, 1992.Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generativeadversarial networks. arXiv preprint arXiv:1511.06390 , 2015.Lucas Theis, A ̈aron van den Oord, and Matthias Bethge. A note on the evaluation of generativemodels. arXiv preprint arXiv:1511.01844v3 , 2016.Masatoshi Uehara, Issei Sato, Masahiro Suzuki, Kotaro Nakayama, and Yutaka Matsuo. Generativeadversarial nets from a density ratio estimation perspective. arXiv preprint arXiv:1610.02920 ,2016.Donggeun Yoo, Namil Kim, Sunggyun Park, Anthony S Paek, and In So Kweon. Pixel-level domaintransfer. arXiv preprint arXiv:1603.07442 , 2016.Matthew D Zeiler, Dilip Krishnan, Graham W Taylor, and Rob Fergus. Deconvolutional networks.InComputer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on , pp. 2528–2535.IEEE, 2010.Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network.arXiv preprint arXiv:1609.03126 , 2016.10Published as a conference paper at ICLR 2017A A PPENDIXA.1 A CCELERATED CONVERGENCE & R EDUCED VARIANCESee Figures 10, 11, 12, and 13.Figure 10: Generator objective, F, averagedover 5 training runs on CelebA. IncreasingN(# ofD) accelerates convergence of Ftosteady state (solid line) and reduces its vari-ance,2(filled shadow1). Figure 11 pro-vides alternative evidence of GMAN-0’s ac-celerated convergence.Figure 11: Stdev ,, of the generator objec-tive over a sliding window of 500 iterations.Lower values indicate a more steady-state.GMAN-0 with N= 5 achieves steady-stateat2x speed of GAN ( N= 1). Note Fig-ure 10’s filled shadows reveal stdev ofFoverruns, while this plot shows stdev over time.Figure 12: Generator objective, F, averagedover 5 training runs on CIFAR-10. Increas-ingN(# ofD) accelerates convergence ofFto steady state (solid line) and reduces itsvariance,2(filled shadow1). Figure 13provides alternative evidence of GMAN-0’saccelerated convergence.Figure 13: Stdev ,, of the generator objec-tive over a sliding window of 500 iterations.Lower values indicate a more steady-state.GMAN-0 with N= 5 achieves steady-stateat2x speed of GAN ( N= 1). 
Note Fig-ure 12’s filled shadows reveal stdev ofFoverruns, while this plot shows stdev over time.A.2 A DDITIONAL GMAM T ABLESSee Tables 2, 3, 4, 5, 6. Increasing the number of discriminators from 2 to 5 on CIFAR-10 signif-icantly improves scores over the standard GAN both in terms of the GMAM metric and Inceptionscores.A.3 G ENERATED IMAGESSee Figures 14 and 15.11Published as a conference paper at ICLR 2017Score Variant GMANGMAN-1 GAN GMAN-0 GMAN- max mod-GANBetter!0:184 GMAN-0:0070:0400:0200:0280:0890:067 GMAN-1 0:007 -0:0080:0080:0210:0370:030 GAN 0:040 0:008 - 0:0020:0180:0580:005 GMAN-0 0:020 0:008 0:002 -0:0130:0180:091 GMAN- max 0:028 0:021 0:018 0:013 -0:0110:213 mod-GAN 0:089 0:037 0:058 0:018 0:011 -Table 2: Pairwise GMAM metric means for select models on MNIST. For each column, a positiveGMAM indicates better performance relative to the row opponent; negative implies worse. Scoresare obtained by summing each column.Score Variant GMAN-0 GMAN-1 GMANmod-GANBetter!0:172 GMAN-0 -0:0220:0620:0880:050 GMAN-1 0:022 - 0:0060:0780:055 GMAN0:0620:006 -0:0010:167 mod-GAN 0:088 0:078 0:001 -Table 3: Pairwise GMAM metric means for select models on CIFAR-10. For each column, a positiveGMAM indicates better performance relative to the row opponent; negative implies worse. Scoresare obtained by summing each column. GMAN variants were trained with twodiscriminators.GMAN-0 GMAN-1 mod-GAN GMANScore 5:8780:193 5:7650:168 5:7380:176 5:5390:099Table 4: Inception score means with standard deviations for select models on CIFAR-10. Higherscores are better. GMAN variants were trained with twodiscriminators.Score Variant GMAN-0 GMANGMAN-1 mod-GANBetter!0:180 GMAN-0 -0:0080:0410:1320:122 GMAN0:008 -0:0380:0920:010 GMAN-1 0:041 0:038 -0:0890:313 mod-GAN 0:132 0:092 0:089 -Table 5: Pairwise GMAM metric means for select models on CIFAR-10. For each column, a positiveGMAM indicates better performance relative to the row opponent; negative implies worse. Scoresare obtained by summing each column. GMAN variants were trained with fivediscriminators.GMAN-1 GMAN-0 GMANmod-GANScore 6:0010:194 5:9570:135 5:9550:153 5:7380:176Table 6: Inception score means with standard deviations for select models on CIFAR-10. Higherscores are better. GMAN variants were trained with fivediscriminators.Figure 14: Sample of pictures generated on CelebA cropped dataset.12Published as a conference paper at ICLR 2017Figure 15: Sample of pictures generated by GMAN-0 on CIFAR dataset.A.4 S OMEWHAT RELATED WORKA GAN framework with two discriminators appeared in Yoo et al. (2016), however, it is applica-ble only in a semi-supervised case where a label can be assigned to subsets of the dataset (e.g.,X=fX1=Domain 1;X2=Domain 2;:::g). In contrast, our framework applies to an unsu-pervised scenario where an obvious partition of the dataset is unknown. Furthermore, extendingGMAN to the semi-supervised domain-adaptation scenario would suggest multiple discriminatorsper domain, therefore our line of research is strictly orthogonal to that of their multi-domain dis-criminator approach. Also, note that assigning a discriminator to each domain is akin to prescribinga new discriminator to each value of a conditional variable in conditional GANs (Mirza & Osindero(2014)). 
In this case, we interpret GMAN as introducing multiple conditional discriminators, not a discriminator for each of the possibly exponentially many conditional labels.

In Section 4.4, we describe an approach to customize adversarial training to better suit the development of the generator. An approach with similar conceptual underpinnings was described in Ravanbakhsh et al. (2016); however, similar to the above, it is only admissible in a semi-supervised scenario, whereas ours applies to the unsupervised case.

A.5 Softmax REPRESENTABILITY

Let softmax(V_i) = \hat{V} \in [\min V_i, \max V_i]. Also let a = \arg\min_i V_i, b = \arg\max_i V_i, and V(t) = V((1-t)D_a + tD_b), so that V(0) = V_a and V(1) = V_b. The softmax and the minimax objective V(D_i, G) are both continuous in their inputs, so by the intermediate value theorem, we have that there exists \hat{t} \in [0, 1] s.t. V(\hat{t}) = \hat{V}, which implies there exists \hat{D} \in \mathcal{D} s.t. V(\hat{D}, G) = \hat{V}. This result implies that the softmax (and any other continuous substitute) can be interpreted as returning V(\hat{D}, G) for some \hat{D} selected by computing another, unknown function over the space of the discriminators. This result holds even if \hat{D} is not representable by the architecture chosen for D's neural network.

A.6 UNCONSTRAINED OPTIMIZATION

To convert GMAN*'s minimax formulation to an unconstrained minimax formulation, we introduce an auxiliary variable \Lambda, define \lambda(\Lambda) = \log(1 + e^\Lambda), and let the generator minimize over \Lambda \in \mathbb{R}.

A.7 BOOSTING WITH AdaBoost.OL

AdaBoost.OL (Beygelzimer et al. (2015)) does not require knowledge of the weak learner's slight edge over random guessing (P(correct label) = 0.5 + \gamma \in (0, 0.5]), and in fact allows \gamma < 0. This is crucial because our weak learners are deep nets with unknown, possibly negative, \gamma's.

Figure 16: Sample of pictures generated across 4 independent runs on MNIST with F-boost (similar results with P-boost).

A.8 EXPERIMENTAL SETUP

All experiments were conducted using an architecture similar to DCGAN (Radford et al. (2015)). We use convolutional transpose layers (Zeiler et al. (2010)) for G and strided convolutions for D, except for the input of G and the last layer of D. We use the single-step gradient method as in Nowozin et al. (2016), and batch normalization (Ioffe & Szegedy (2015)) was used in each of the generator layers. The different discriminators were trained with varying dropout rates from [0.3, 0.7]. Variations in the discriminators were effected in two ways. We varied the architecture by varying the number of filters in the discriminator layers (reduced by factors of 2, 4 and so on), as well as varying dropout rates. Secondly, we also decorrelated the samples that the discriminators were training on by splitting the minibatch across the discriminators. The code was written in TensorFlow (Abadi et al. (2016)) and run on Nvidia GTX 980 GPUs. Code to reproduce experiments and plots is at https://github.com/iDurugkar/GMAN. Specifics for the MNIST architecture and training are:

- Generator latent variables z ~ U(-1, 1)^100
- Generator convolution transpose layers: (4, 4, 128), (8, 8, 64), (16, 16, 32), (32, 32, 1)
- Base discriminator architecture: (32, 32, 1), (16, 16, 32), (8, 8, 64), (4, 4, 128). Variants have either convolution 3, (4, 4, 128), removed or all the filter sizes divided by 2 or 4. That is, (32, 32, 1), (16, 16, 16), (8, 8, 32), (4, 4, 64) or (32, 32, 1), (16, 16, 8), (8, 8, 16), (4, 4, 32).
- ReLU activations for all the hidden units. Tanh activation at the output units of the generator. Sigmoid at the output of the discriminator.
- Training was performed with Adam (Kingma & Ba (2014)) (lr = 2×10⁻⁴, β₁ = 0.5).
- MNIST was trained for 20 epochs with a minibatch of size 100.
- CelebA and CIFAR were trained over 24000 iterations with a minibatch of size 100.
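To connect the setup above to code, here is a hypothetical sketch of how the discriminator variants could be enumerated; the released TensorFlow code at the GitHub link above is the authoritative version, and all names here are ours.

```python
import random

BASE_FILTERS = [32, 64, 128]   # base D filter counts from the list above

def make_variant_configs(n, seed=0):
    """Enumerate n discriminator configs with varied capacity and dropout,
    mirroring the two diversity mechanisms described in A.8."""
    rng = random.Random(seed)
    configs = []
    for _ in range(n):
        divisor = rng.choice([1, 2, 4])        # shrink filter banks by 2 or 4
        configs.append({
            "filters": [f // divisor for f in BASE_FILTERS],
            "dropout": rng.uniform(0.3, 0.7),  # per-discriminator dropout rate
        })
    return configs

print(make_variant_configs(5))  # N = 5 discriminators
```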
HyecJGP5ge
Under review as a conference paper at ICLR 2017NEUROGENESIS -INSPIRED DICTIONARY LEARNING :ONLINE MODEL ADAPTION IN A CHANGING WORLDSahil GargThe Department of Computer Science, University of Southern California, Los Angeles, CA USAsahilgar@usc.eduIrina Rish, Guillermo Cecchi, Aurelie LozanoIBM Thomas J. Watson Research Center, Yorktown Heights, NY USAfrish, gcecchi, aclozano g@us.ibm.comABSTRACTIn this paper, we focus on online representation learning in non-stationary envi-ronments which may require continuous adaptation of model’s architecture. Wepropose a novel online dictionary-learning (sparse-coding) framework which in-corporates the addition and deletion of hidden units (dictionary elements), and isinspired by the adult neurogenesis phenomenon in the dentate gyrus of the hip-pocampus, known to be associated with improved cognitive function and adapta-tion to new environments. In the online learning setting, where new input instancesarrive sequentially in batches, the “neuronal birth” is implemented by adding newunits with random initial weights (random dictionary elements); the number ofnew units is determined by the current performance (representation error) of thedictionary, higher error causing an increase in the birth rate. “Neuronal death” isimplemented by imposing l1=l2-regularization (group sparsity) on the dictionarywithin the block-coordinate descent optimization at each iteration of our onlinealternating minimization scheme, which iterates between the code and dictionaryupdates. Finally, hidden unit connectivity adaptation is facilitated by introduc-ing sparsity in dictionary elements. Our empirical evaluation on several real-lifedatasets (images and language) as well as on synthetic data demonstrates that theproposed approach can considerably outperform the state-of-art fixed-size (non-adaptive) online sparse coding of Mairal et al. (2009) in the presence of non-stationary data. Moreover, we identify certain properties of the data (e.g., sparseinputs with nearly non-overlapping supports) and of the model (e.g., dictionarysparsity) associated with such improvements.1 I NTRODUCTIONThe ability to adapt to a changing environment is essential for successful functioning in both naturaland artificial intelligent systems. In human brains, adaptation is achieved via neuroplasticity, whichtakes different forms, including synaptic plasticity, i.e. changing connectivity strength among neu-rons, and neurogenesis, i.e. the birth and maturation of new neurons (accompanied with the death ofsome new or old neurons). Particularly, adult neurogenesis (Kempermann, 2006) (i.e., neurogenesisin the adult brain) in the dentate gyrus of the hippocampus is associated with improved cognitivefunctions such as pattern separation (Sahay et al., 2011), and is often implicated as a “candidatemechanism for the specific dynamic and flexible aspects of learning” (Stuchlik, 2014).In the machine-learning context, synaptic plasticity is analogous to parameter tuning (e.g., learningneural net weights), while neurogenesis can be viewed as an online model selection via addition(and deletion) of hidden units in specific hidden-variable models used for representation learning(where hidden variables represent extracted features), from linear and nonlinear component anal-ysis methods such as PCA, ICA, sparse coding (dictionary learning), nonlinear autoencoders, todeep neural nets and general hidden-factor probabilistic models. 
However, optimal model selectionin large-scale hidden-variable models (e.g., adjusting the number of layers, hidden units, and their1Under review as a conference paper at ICLR 2017connectivity), is intractable due to enormous search space size. Growing a model gradually can be amore feasible alternative; after all, every real brain’s “architecture” development process starts witha single cell. Furthermore, the process of adapting the model’s architecture to dynamically changingenvironments is necessary for achieving a lifelong, continual learning. Finally, an online approachto dynamically expanding and contracting model’s architecture can serve as a potentially more ef-fective alternative to the standard off-line model selection (e.g., MDL-based off-line sparse coding(Ramirez & Sapiro, 2012)), as well as to the currently popular network compression (distillation)approaches (Hinton et al., 2015; Srivastava et al., 2014; Ba & Caruana, 2014; Bucilu et al., 2006),where a very large-scale architecture, such as a deep neural network with millions of parameters,must be first selected in ad-hoc ways and trained on large amounts of data, only to be compressedlater to a more compact and simpler model with similarly good performance; we hypothesize thatadaptive growth and reduction of the network architecture is a viable alternative to the distillationapproach, although developing such an alternative remains the topic of further research.In this paper, we focus on dictionary learning, a.k.a. sparse coding (Olshausen & Field, 1997; Kreutz-Delgado et al., 2003; Aharon et al., 2006; Lee et al., 2006) – a representation learning approachwhich finds a set of basis vectors (atoms, or dictionary elements) and representations (encodings)of the input samples as sparse linear combinations of those elements1. More specifically, our ap-proach builds upon the computationally efficient online dictionary-learning method of Mairal et al.(2009), where the data samples are processed sequentially, one at a time (or in small batches). Onlineapproaches are particularly important in large-scale applications with millions of potential trainingsamples, where off-line learning can be infeasible; furthermore, online approaches are a naturalchoice for building systems capable of continual, lifelong learning.Herein, we propose a novel online dictionary learning approach inspired by adult neurogenesis,which extends the state-of-art method of Mairal et al. (2009) to nonstationary environments by in-corporating online model adaption, i.e. the addition and deletion of dictionary elements (i.e., hiddenunits) in response to the dynamically changing properties of the input data2. More specifically, ateach iteration of online learning (i.e., for every batch of data samples), we add a group of randomdictionary elements (modeling neuronal birth), where the group size depends on the current repre-sentation error, i.e. the mismatch between the new input samples and their approximation based onthe current dictionary: higher error triggers more neurogenesis. 
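To preview the mechanism detailed in Section 3 (Alg. 1, steps 5-8), here is a minimal NumPy sketch of the conditional "neuronal birth"; the single-sample setting and the function names are our simplifications.

```python
import numpy as np

def maybe_add_elements(x, D, code, gamma, c_k, rng):
    """Add up to c_k random dictionary elements when the current
    dictionary represents x poorly (low Pearson correlation)."""
    pc = np.corrcoef(x, D @ code)[0, 1]      # r(x, D * alpha)
    if pc >= gamma:
        return D                              # representation still good
    k_new = int(np.ceil((1.0 - pc) * c_k))    # worse fit -> more births
    D_new = rng.standard_normal((D.shape[0], k_new))
    D_new /= np.linalg.norm(D_new, axis=0)    # keep columns in the unit ball
    return np.hstack([D, D_new])

# Usage: D = maybe_add_elements(x, D, code, 0.9, 10, np.random.default_rng(0))
```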
The neuronal death, which involvesremoving “useless” dictionary elements, is implemented as an l1=l2group-sparsity regularization;this step is essential in neurogenesis-inspired learning, since it reduces a potentially uncontrolledgrowth of the dictionary, and helps to avoid overfitting (note that neuronal death is also a naturalpart of the adult neurogensis process, where neuronal survival depends on multiple factors, includ-ing the complexity of a learning environment (Kempermann, 2006)). Moreover, we introduce spar-sity in dictionary elements, which reflects sparse connectivity between hidden units/neurons andtheir inputs; this is a more biologically plausible assumption than the fully-connected architectureof standard dictionary learning, and it also works better in our experiments. Thus, adaptation in ourmodel involves not only the addition/deletion of the elements, but adapting their connectivity aswell.We demonstrate on both simulated data and on two real-life datasets (natural images and languageprocessing) that, in presence of a non-stationary input, our approach can significantly outperformnon-adaptive, fixed-dictionary-size online method of Mairal et al. (2009). Moreover, we identify cer-tain data properties and parameter settings associated with such improvements. Finally, we demon-strate that the novel approach not only improves the representation accuracy, but also can boost theclassification accuracy based on the extracted features.Note that, although the group-sparsity constraint enforcing deletion of some dictionary elementswas introduced earlier in the group-sparse coding method of Bengio et al. (2009), it was only im-plemented and tested in the off-line rather than online setting, and, most importantly, it was not ac-1Note that the corresponding neural network interpretation of sparse coding framework is a (single-hidden-layer) linear autoencoder with sparsity constraints: the hidden units are associated with dictionary elements,each element represented by a weight vector associated with unit’s outgoing links in the output layer, and thesparse vector of hidden unit activations corresponding to the encoding of an input.2An early version of our neurogenetic online dictionary learning approach was presented as a poster at the2011 Society for Neuroscience meeting (Rish et al., 2011), although it did not appear before as a peer-reviewedpublication.2Under review as a conference paper at ICLR 2017companied by the neurogenesis. On the other hand, while some prior work considered online nodeaddition in hidden-variable models, and specifically, in neural networks, from cascade correlations(Fahlman & Lebiere, 1989) to the recent work by Draelos et al. (2016a;b), no model pruning wasincorporated in those approaches in order to balance the model expansion. 
Overall, we are not awareof any prior work which would propose and systematically evaluate, empirically and theoretically, adynamic process involving both addition and deletion of hidden units in the online model selectionsetting, either in sparse coding or in a neural network setting.To summarize, the main contributions of this paper are as follows:we propose a novel online model-selection approach to dictionary learning3, inspired bytheadult neurogenesis phenomenon; our method significantly outperforms the state-of-artbaseline , especially in non-stationary settings;we perform an extensive empirical evaluation, on both synthetic and real data , in orderto identify the conditions when the proposed adaptive approach is most beneficial, bothfor data reconstruction and for classification based on extracted features; we conclude thatthese conditions include a combination of sparse dictionary elements (and thus a morebiologically plausible sparse network connectivity as opposed to fully connected units),accompanied by sufficiently dense codes ;furthermore, we provide an intuitive discussion, as well as theoretical analysis of certaincombinations of the input data properties and the algorithm’s parameters when the pro-posed approach is most beneficial;from the neuroscientific perspective, we propose a computational model which supportsearlier empirical observations indicating that adult neurogenesis is particularly beneficialin changing environments, and that certain amount of neuronal death, which accompaniesthe neuronal birth, is an important component of an efficient neurogenesis process;overall, to the best of our knowledge, we are the first to perform an in-depth evaluationof the interplay between the birth and death of hidden units in the context of online modelselection in representation learning, and, more specifically, in online dictionary learning.This paper is organized as follows. In Sec. 2, we summarize the state-of-art non-adaptive (fixed-size) online dictionary learning method of Mairal et al. (2009). Thereafter, in Sec. 3, we describeour adaptive online dictionary learning algorithm. In Sec. 4, we present our empirical results on bothsynthetic and real datasets, including images and language data. Next, in Sec. 5, we provide sometheoretical, as well as an intuitive analysis of settings which can benefit most from our approach.Finally, we conclude with a summary of our contributions in Sec. 6. The implementation details ofthe algorithms and additional experimental results are described in the Appendix.2 B ACKGROUND ON DICTIONARY LEARNINGTraditional off-line dictionary learning (Olshausen & Field, 1997; Aharon et al., 2006; Lee et al.,2006) aims at finding a dictionaryD2Rmk, which allows for an accurate representation of atraining data set X=fx1;;xn2Rmg, where each sample xiis approximated by a linearcombinationxiD iof the columns of D, called dictionary elements fd1;;dk2Rmg.Hereiis the encoding (code vector , or simply code ) ofxiin the dictionary. Dictionary learningis also referred to as sparse coding , since it is assumed that the code vectors are sparse , i.e. have arelatively small number of nonzeros; the problem is formulated as minimizing the objectivefn(D) =1nnXi=112jjxiD ijj22+cjjijj1 (1)where the first term is the mean square error loss incurred due to approximating the input samplesby their representations in the dictionary, and the second term is the l1-regularization which enforcesthe codes to be sparse. 
The joint minimization of fn(D)with respect to the dictionary and codes isnon-convex; thus, a common approach is alternating minimization involving convex subproblems offinding optimal codes while fixing a dictionary, and vice versa.3The Matlab code is available at https://github.com/sgarg87/neurogenesis_inspired_dictionary_learning .3Under review as a conference paper at ICLR 2017However, the classical dictionary learning does not scale to very large datasets; moreover, it is notimmediately applicable to online learning from a continuous stream of data. The online dictionarylearning (ODL) method proposed by Mairal et al. (2009) overcomes both of these limitations, andserves as a basis for our proposed approach, presented in Alg. 1 in the next section. While the high-lighted lines in Alg. 1 represent our extension of ODL , the non-highlighted ones are common to bothapproaches, and are discussed first. The algorithms start with some dictionary D0, e.g. a randomlyinitialized one (other approaches include using some of the inputs as dictionary elements (Mairalet al., 2010; Bengio et al., 2009)). At each iteration t, both online approaches consider the next inputsamplext(more generally, a batch of samples) as in the step 3 of Alg. 1 and compute its sparsecodetby solving the LASSO (Tibshirani, 1996) problem (the step 4 in Alg. 1), with respect to thecurrent dictionary. In Alg. 1, we simply use Dinstead ofD(t)to simplify the notation. Next, thestandard ODL algorithm computes the dictionary update, D(t), by optimizing the surrogate objec-tive function ^ft(D)which is defined just as the original objective in eq. (1), for n=t, but with oneimportant difference: unlike the original objective, where each code ifor samplexiis computedwith respect to the same dictionaryD, the surrogate function includes the codes 1;2;;tcomputed at the previous iterations, using the dictionaries D(0);:::;D(t1), respectively; in otherwords, it does not recompute the codes for previously seen samples after each dictionary update.This speeds up the learning without worsening the (asymptotic) performance, since the surrogateobjective converges to the original one in (1), under certain assumptions, including data stationarity(Mairal et al., 2009). Note that, in order to prevent the dictionary entries from growing arbitrarilylarge, Mairal et al. (2009; 2010) impose the norm constraint, i.e. keep the columns of Dwithin theconvex setC=fD2Rmks:t:8jdTjuj1g. Then the dictionary update step computesD(t)= arg min D2C^ft(D), ignoringl1-regularizer over the code which is fixed at this step, asarg minD2C1ttXi=112jjxiD ijj22= arg minD2C12Tr(DTDA)Tr(DTB); (2)whereA=Pti=1iTiandB=Pti=1xiTiare the “bookkeeping” matrices (we also call them“memories” of the model), compactly representing the input samples and encoding history. At eachiteration, once the new input sample xiis encoded, the matrices are updated as A A+tTtandB B+xtTt(see the step 11 of Alg. 1). In (Mairal et al., 2009; 2010), a block coordinatedescent is used to optimize the convex objective in eq. 2; it iterates over the dictionary elements in afixed sequence, optimizing each while keeping the others fixed as shown in eq. (3) (essentially, thesteps 14 and 17 in Alg. 
1; the only difference is that our approach will transform ujintowjin orderto impose additional regularizer before computing step 17), until convergence.uj bjPk6=jdkajkajj;dj ujmax(1;jjujjj2)(3)Herein, when the off-diagonal entries ajkinAare as large as the diagonal ajj, the dictionary ele-ments get “tied” to each other, playing complementary roles in the dictionary, thereby constrainingthe updates of each other.It is important to note that, for the experiment settings where we consider dictionary elements tobe sparse in our algorithm NODL (discussed next in Sec. 3), we will actually use as a baselinealgorithm a modified version of the fixed-size ODL, which allows for sparse dictionary elements, i.e.includes the sparsification step 15 in Alg. 1, thus optimizing the following objective in dictionaryupdate step instead of the one in eq. (2):arg minD2C1ttXi=112jjxiD ijj22+Xjjjjdjjj1: (4)From now on, ODL will refer to the above extended version of the fixed-size method of Mairalet al. (2009) wherever we have sparsity in dictionary elements (otherwise, the standard method ofMairal et al. (2009) is the baseline); in our experiments, dictionary sparsity of both the baselineand the proposed method (discussed in the next section) will be matched. Note that Mairal et al.(2010) mention that the convergence guaranties for ODL hold even with the sparsity constraints ondictionary elements.4Under review as a conference paper at ICLR 20173 O URAPPROACH : NEUROGENIC ONLINE DICTIONARY LEARNING (NODL)Our objective is to extend the state-of-art online dictionary learning, designed for stationary inputdistributions, to a more adaptive framework capable of handling nonstationary data effectively, andlearning to represent new types of data without forgetting how to represent the old ones. Towards thisend, we propose a novel algorithm, called Neurogenetic Online Dictionary Learning (see Alg. 1),which can flexibly extend and reduce a dictionary in response to the changes in an input distribution,and possibly to the inherent representation complexity of the data. The main changes, as compared tothe non-adaptive, fixed-dictionary-size algorithm of Mairal et al. (2009), are highlighted in Alg. 1;the two parts involve (1) neurogenesis, i.e. the addition of dictionary elements (hidden units, or“neurons”) and (2) the death of old and/or new elements which are “less useful” than other elementsfor the task of data reconstruction.At each iteration in Alg. 1, the next batch of samples is received and the corresponding codes, inthe dictionary, are computed; next, we add knnew dictionary elements sampled at random fromRm(i.e.,knrandom linear projections of the input sample). The choice of the parameter knisimportant; one approach is to tune it (e.g., by cross-validation), while another is to adjust it dynam-ically, based on the dictionary performance: e.g., if the environment is changing, the old dictionarymay not be able to represent the new input well, leading to decline in the representation accuracy,which triggers neurogenesis. Herein, we use as the performance measure the Pearson correlationbetween a new sample and its representation in the current dictionary r(xt;D(t1)t), i.e. denotedaspc(xt;D(t1);t)(for a batch of data, the average over pc(:)is taken). If it drops below a certainpre-specified threshold (where 01), the neurogenesis is triggered (the step 5 in Alg. 
1).The number knof new dictionary elements is proportional to the error 1pc(), so that worse per-formance will trigger more neurogenesis, and vice versa; the maximum number of new elements isbounded by ck(the step 6 in Alg. 1). We refer to this approach as conditional neurogenesis as itinvolves the conditional birth of new elements. Next, knrandom elements are generated and addedto the current dictionary (the step 7), and the memory matrices A;Bare updated, respectively, toaccount for larger dictionary (the step 8). Finally, the sparse code is recomputed for xt(or, all thesamples in the current batch) with respect to the extended dictionary (the step 9).The next step is the dictionary update, which uses, similarly to the standard online dictionary learn-ing, the block-coordinate descent approach. However, the objective function includes additionalregularization terms, as compared to (2):D(t)=arg minD2C1ttXi=112jjxiD ijj22+gXjjjdjjj2+Xjjjjdjjj1: (5)The first term is the standard reconstruction error, as before. The second term, l1=l2-regularization,promotes group sparsity over the dictionary entries, where each group corresponds to a column, i.e.a dictionary element. The group-sparsity (Yuan & Lin, 2006) regularizer causes some columns inDto be set to zero (i.e. the columns less useful for accurate data representation), thus effectivelyeliminating the corresponding dictionary elements from the dictionary (“killing” the correspondinghidden units). As it was mentioned previously, Bengio et al. (2009) used the l1=l2-regularizer indictionary learning, though not in online setting, and without neurogenesis.Finally, the third term imposes l1-regularization on dictionary elements thus promoting sparse dic-tionary, besides the sparse coding. Introducing sparsity in dictionary elements, corresponding to thesparse connectivity of hidden units in the neural net representation of a dictionary, is motivated byboth their biological plausibility (neuronal connectivity tends to be rather sparse in multiple brainnetworks), and by the computational advantages this extra regularization can provide, as we observelater in experiments section (Sec. 4).As in the original algorithm of Mairal et al. (2009), the above objective is optimized by the block-coordinate descent, where each block of variables corresponds to a dictionary element, i.e., a columninD; the loop in steps 12-19 of the Alg. 1 iterates until convergence, defined by the magnitude ofchange between the two successive versions of the dictionary falling below some threshold. Foreach column update, the first and the last steps (the steps 14 and 17) are the same as in the originalmethod of Mairal et al. (2009), while the two intermediate steps (the steps 15 and 16) are implement-ing additional regularization. 
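The following sketch (ours) spells out one block-coordinate pass over the dictionary columns, combining the ODL step (14) with the two proximal steps (15, 16) and the renormalization (17):

```python
import numpy as np

def update_columns(D, A, B, lam1, lam_g):
    """One pass of steps 13-18 of Alg. 1 on the memory matrices A, B."""
    for j in range(D.shape[1]):
        if A[j, j] == 0:
            continue                                                # unused element
        u = (B[:, j] - D @ A[:, j] + D[:, j] * A[j, j]) / A[j, j]   # step 14
        v = np.sign(u) * np.maximum(np.abs(u) - lam1, 0.0)          # step 15: l1 prox
        nv = np.linalg.norm(v)
        w = v * max(0.0, 1.0 - lam_g / nv) if nv > 0 else v         # step 16: group prox
        D[:, j] = w / max(1.0, np.linalg.norm(w))                   # step 17
    return D
```

Note how step 16 sets an entire column to zero whenever its norm falls below the group-sparsity weight, which is exactly the "neuronal death" mechanism.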
Algorithm 1 Neurogenetic Online Dictionary Learning (NODL)
Require: Data stream x_1, x_2, ..., x_n \in R^m; initial dictionary D \in R^{m \times k}; conditional neurogenesis threshold, \gamma; max number of new elements added per data batch, c_k; group sparsity regularization parameter, \lambda_g; number of non-zeros in a dictionary element, \beta_d; number of non-zeros in a code, \beta_c.
1:  Initialize: A <- 0, B <- 0   % reset the "memory" (assuming a single sample per batch, for simpler exposition)
2:  for t = 1 to n do
3:    Input x_t   % representing the t-th batch of data
      % Sparse coding of data:
4:    \alpha_t = \arg\min_{\alpha \in R^k} \frac{1}{2}\|x_t - D\alpha\|_2^2 + \lambda_c \|\alpha\|_1   % \lambda_c tuned to have \beta_c non-zeros in \alpha_t
      % Conditional neurogenesis: if accuracy is below threshold, add more elements
      % (should not be more than the number of data in a batch)
5:    if pc(x_t, D, \alpha_t) <= \gamma then
6:      k_n = (1 - pc(x_t, D, \alpha_t)) c_k   % the count of the births of neurons
7:      D_n <- initializeRand(k_n), D <- [D D_n]
8:      A <- [A 0; 0 0], B <- [B 0], k <- k + k_n
        % Repeat sparse coding, now including the new dictionary elements
9:      \alpha_t = \arg\min_{\alpha \in R^k} \frac{1}{2}\|x_t - D\alpha\|_2^2 + \lambda_c \|\alpha\|_1
10:   end if   % End of neurogenesis
      % "Memory" update:
11:   A <- A + \alpha_t \alpha_t^T, B <- B + x_t \alpha_t^T
      % Dictionary update by block-coordinate descent with l1/l2 group sparsity
12:   repeat
13:     for j = 1 to k do
14:       u_j <- (b_j - \sum_{k \neq j} d_k a_{jk}) / a_{jj}
          % Sparsifying elements (optional):
15:       v_j <- Prox_{\lambda_j\|.\|_1}(u_j) = sgn(u_j)(|u_j| - \lambda_j)_+   % \lambda_j tuned to get \beta_d non-zeros in v_j
          % Killing useless elements with l1/l2 group sparsity
16:       w_j <- v_j (1 - \lambda_g / \|v_j\|_2)_+
17:       d_j <- w_j / \max(1, \|w_j\|_2)
18:     end for
19:   until convergence
20: end for
21: return D

Both steps 15 and 16 (sparsity and group sparsity regularization) are implemented using the standard proximal operators, as described in Jenatton et al. (2011). Note that we actually use as input the desired number of non-zeros, and determine the corresponding sparsity parameters \lambda_c and \lambda_j using a binary search procedure (see Appendix).

Overall, the key feature of our algorithm is the interplay of both the (conditional) birth and the (group-sparsity) death of dictionary elements in an online setting.

3.1 DISCUSSION OF IMPORTANT ALGORITHMIC DETAILS

A rationale behind sparsity of dictionary elements. We focus here on sparse dictionary elements, which, in network terms, correspond to sparse connectivity between hidden units and their inputs; one reason for this choice was that sparse connectivity appears to be a more biologically plausible assumption than a fully-connected architecture implied by a dense dictionary, in many brain areas, and specifically between the dentate gyrus and CA3. The other reason relates to computational advantages.

Note that Mairal et al. (2009) state that convergence guarantees for the original ODL algorithm would also hold for the case of sparse dictionary elements. However, no empirical evaluation is provided for this case; furthermore, we are not aware of any previous work on sparse coding which would involve an extensive empirical evaluation for such a setting. The prior focus on dense rather than sparse dictionary elements is perhaps more natural when the input consists of a large number of relatively small image patches, and thus each element also represents a small patch. In our work, however, the dictionary is being learned on full images, and thus a nonzero pattern in a sparse dictionary element corresponds to a small patch within a larger image, with multiple sparse elements (patches) covering the image. Thus, rather than explicitly representing an image as a set of patches and then
Thus, rather than explicitly representing an image as a set of patches and then learning a dictionary of dense elements for accurate representation of such patches, a dictionary of full-image-size but sparse dictionary elements can be used to implicitly represent an image as a linear combination of those elements, with possible overlap of non-zero pixels between elements; the non-zero pixels in a sparse element of the dictionary are learned automatically. Computational advantages of using sparse dictionaries are demonstrated in our experimental results (Sec. 4), where classifiers learned on top of representations extracted with sparse dictionaries yield smaller errors.
The memory matrix A and its properties. The matrix A keeps, in a sense, the "memory" of the encodings α_t of the previous data samples, as it accumulates the sum of the α_t α_t^T matrices from each iteration t. It turns out that the matrix A can have a significant effect on dictionary learning in both the ODL and NODL algorithms. As pointed out in Mairal et al. (2009), the quadratic surrogate function in (2) is strictly convex with a lower-bounded Hessian A, ensuring convergence to a solution. From a practical standpoint, however, when the matrix A has a high condition number (the ratio of the largest to the smallest singular value in its singular value decomposition), despite its lower-bounded eigenvalues, the adaptation of dictionary elements using the standard ODL algorithm can be difficult, as we see in our experiments. Specifically, when the dictionary elements are sparse, this effect is more pronounced, since the condition number of A becomes high due to the complementary roles of the sparse dictionary elements in the reconstruction process (compare A obtained with dense versus sparse elements in Fig. 6(a) and 6(b), respectively). In such scenarios, the submatrix of A corresponding to the new elements added by our NODL algorithm can have a better condition number, leading to an improved adaptation of the dictionary.
Code sparsity. Code sparsity is controlled by the parameter β_c, the number of non-zeros, which determines the corresponding regularization weight λ_c in step 4 of Alg. 1; note that λ_c is determined via binary search for each input sample separately, as shown in Algorithm 2, and may thus vary slightly across instances for a fixed β_c.
Selecting an appropriate level of code sparsity depends on the choice of the other parameters, such as the input batch size, the sparsity of the dictionary elements, the extent of non-stationarity and the complexity of the data, and so on. When the dictionary elements are themselves sparse, denser codes may be more appropriate, since each sparse dictionary element represents only a relatively small subset of the image pixels, and thus a large number of those subsets, covering the whole image, may be needed for an accurate input representation.
Interestingly, using very sparse codes in combination with non-sparse dictionary elements in the standard ODL approach can sometimes lead to the creation of "dead" (zero l2-norm) elements in the dictionary, especially if the input batch size is small. This is avoided by our NODL algorithm, since such dead elements are implicitly removed via group sparsity at the dictionary update step, along with the "weak" (very small l2-norm) elements. Also, very high code sparsity in combination with dense dictionary elements can lead to a significant decrease in reconstruction accuracy, for both ODL and our NODL, when the online data stream is non-stationary.
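Since the argument above rests on the conditioning of A, a short diagnostic suffices to check it in practice. A hedged NumPy sketch (the variable names and the comparison shown in the comment are our own illustration):

import numpy as np

def condition_number(A, eps=1e-12):
    # Ratio of the largest to the smallest singular value of A.
    s = np.linalg.svd(A, compute_uv=False)
    return s.max() / max(s.min(), eps)

# Illustrative comparison: the full memory matrix versus the submatrix of
# the k_n most recently added (neurogenesis) elements, e.g.
#   condition_number(A)  versus  condition_number(A[-k_n:, -k_n:])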
Such shortcomings were not encountered in Mairal et al. (2009; 2010), where only stationary data streams were studied, both in the theoretical and in the empirical results. On the other hand, high sparsity in the dictionary elements does not seem to cause a degradation in reconstruction accuracy, as long as the codes are not too sparse.
The choice and tuning of the metric for conditional neuronal birth. In the "conditional birth" approach described above, the number of new elements k_n is determined based on the performance of the current dictionary, using the Pearson correlation between the actual and the reconstructed data for the current batch. This is, of course, just one particular approach to measuring data non-stationarity and the need for adaptation, but we consider it a reasonable heuristic. A low reconstruction error indicates that the old dictionary is still capable of representing the new data, so less adaptation may be needed, while a high error indicates that the data distribution may have changed, triggering neurogenesis in order to better adapt to the new environment. We choose the Pearson correlation as the measure of reconstruction accuracy since its value is easily interpretable and always lies in the range [0, 1] (unlike, for example, the mean-square error), which simplifies tuning the threshold parameter γ. Clearly, one can also try other interpretable metrics, such as, for example, the Spearman correlation.
Tuning parameters: group sparsity λ_g and others. The group-sparsity regularization parameter λ_g controls the amount of removal ("death") of elements in NODL: in step 16 of Alg. 1, all elements with l2-norm below λ_g (i.e., "weak" elements) are set to zero ("killed"). Since the dictionary elements are normalized to have l2-norm at most one, we only need to consider λ_g ∈ [0, 1]. (Note that the step of killing dictionary elements precedes the normalization step in the algorithm; thus, the tuning of λ_g is affected by the normalization of the elements from the previous iteration.) Note also that increasing the sparsity of the dictionary elements, i.e., decreasing β_d (the number of non-zeros in a dictionary element), may require a corresponding reduction of λ_g, while an increase in the input dimensionality m may require an increase in λ_g. Tuning the rest of the parameters is relatively easy. Clearly, the batch size should be kept relatively small and, ideally, should not exceed the "window of stationarity" in the data (however, the frequency of change in the input distribution may itself need to be estimated from the data, so the batch size may need to be tuned adaptively, which is outside the scope of this paper). Mairal et al. (2009) suggest using a batch size of 256 in their experiments, obtaining similar performance with the values 128 and 512. As for the maximum number of new elements c_k added at each iteration, it is reasonable to keep it smaller than the batch size.

4 EXPERIMENTS

We now evaluate the proposed approach, NODL, empirically against ODL, the standard (non-adaptive) online dictionary learning of Mairal et al. (2009). Moreover, in order to separately evaluate the effects of only adding or only deleting dictionary elements, we also evaluate two restricted versions of our method: NODL+, which involves only addition but no deletion (equivalent to NODL with no group sparsity, i.e., λ_g = 0), and NODL-, which, vice versa, involves only deletion but no addition (equivalent to NODL with the number of new elements c_k = 0); the four variants are summarized in the sketch below.
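For reference, the four methods compared in Sec. 4 differ only in two hyper-parameters. The small Python configuration below records this; the specific values are the ones reported later for the 32x32 images, and the dictionary keys are our own naming.

VARIANTS = {
    "NODL":  {"lambda_g": 0.03, "c_k": 50},   # birth and death
    "NODL+": {"lambda_g": 0.0,  "c_k": 50},   # addition only
    "NODL-": {"lambda_g": 0.03, "c_k": 0},    # deletion only
    "ODL":   {"lambda_g": 0.0,  "c_k": 0},    # fixed-size baseline
}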
The above algorithms are evaluated in a non-stationary setting, where a sequence of training samples from one environment (the first domain) is followed by another sequence from a different environment (the second domain), in order to test their ability to adapt to new environments without "forgetting" the previous ones.

4.1 REAL-LIFE IMAGES

Our first domain includes the images of Oxford buildings4 (an urban environment), while the second uses a combination of images from the Flowers5 and Animals6 image databases (a natural environment); examples of both types of images are shown in Fig. 1(a) and 1(b). We converted the original color images to black-and-white and compressed them to smaller sizes, 32x32 and 100x100. Note that, unlike Mairal et al. (2009), we used full images rather than image patches as our inputs.

[Figure 1: The image data sets for the evaluation of the online dictionary learning algorithms. Panels: (a) Urban: Oxford Buildings; (b) Nature: Flowers and Animals.]

We selected 5700 images for training and another 5700 for testing; each subset contained 1900 images of each type (i.e., Oxford, Flowers, Animals). In the training phase, as mentioned above, each online dictionary learning algorithm receives a sequence of 1900 samples from the first, urban domain (Oxford), and then a sequence of 3800 samples from the second, natural domain (1900 Flowers and 1900 Animals, permuted randomly). At each iteration, a batch of 200 images is received as input. (For comparison, Mairal et al. (2009) used a batch size of 256, though with image patches rather than full images.) The following parameters are used by our algorithm: Pearson correlation threshold γ = 0.9, and group-sparsity parameter λ_g = 0.03 and λ_g = 0.07 for the 32x32 and 100x100 images, respectively. The upper bound on the number of new dictionary elements at each iteration is c_k = 50. (We observed that the results are only mildly sensitive to the specified parameter values.)

4 http://www.robots.ox.ac.uk/~vgg/data/oxbuildings/index.html
5 http://www.robots.ox.ac.uk/~vgg/data/flowers/102/
6 http://www.robots.ox.ac.uk/~vgg/data/pets/

[Figure 2: Reconstruction accuracy of NODL and ODL on 32x32 images (sparse dictionary). Panels: (a) learned dictionary size; (b) 1st domain (Oxford); (c) 2nd domain (Flowers).]
[Figure 3: Reconstruction accuracy of NODL and ODL on 100x100 images with sparse dictionary elements (50 non-zeros) and non-sparse codes. Panels: (a) 1st domain (Oxford); (b) 2nd domain (Flowers); (c) classification error.]

Once the training phase is completed, the resulting dictionary is evaluated on test images from both the first (urban) and the second (natural) domains; for the second domain, separate evaluations are performed for flowers and animals. First, we evaluate the reconstruction ability of the resulting dictionary D, comparing the actual inputs x with their approximations x̂ = Dα, using the mean-square error (MSE), the Pearson correlation, and the Spearman correlation. We present the results as Pearson correlations between the actual and reconstructed inputs, since all three metrics show consistent patterns (for completeness, MSE results are shown in the Appendix); a sketch of this metric is given below. Moreover, we evaluate the dictionaries in a binary classification setting (e.g., flowers vs. animals), using as features the codes of the test samples in a given dictionary.
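The reconstruction metric just described reduces to a few lines of NumPy. A minimal sketch, with array shapes and names that are our own assumptions:

import numpy as np

def reconstruction_pearson(D, codes, X_test):
    # Mean Pearson correlation between n test images X_test (n, m) and their
    # reconstructions from the sparse codes (n, k) in the dictionary D (m, k).
    X_hat = codes @ D.T
    return float(np.mean([np.corrcoef(x, xh)[0, 1] for x, xh in zip(X_test, X_hat)]))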
Finally, we explored a wide range of sparsity parameters for both the codes and the dictionary elements.
Our key observations are that: (1) the proposed method frequently outperforms (or is at least as good as) its competitors, on both the new data (adaptation) and the old data (memory); (2) it is most beneficial when the dictionary elements are sparse; (3) vice versa, when the dictionary elements are dense, the neurogenetic approach matches the baseline, fixed-size dictionary learning. We now discuss the results in detail.
Sparse Dictionary Elements
In Fig. 2, we present the results for sparse dictionaries, where each column (an element of the dictionary) has 5 non-zeros out of the 1024 dimensions; the codes are relatively dense, with at most 200 non-zeros out of k (the number of dictionary elements), and k ranging from 5 to 1000 (i.e., the codes are not sparse for k ≤ 200). Due to space limitations, our results for a wider range of dictionary and code sparsity values are given in the Appendix (Sec. B.2, Fig. 12). In Fig. 2(a), we compare the dictionary sizes for the different methods: the final dictionary size after completing the training phase (y-axis) is plotted against the initial dictionary size (x-axis). Obviously, the baseline (fixed-size) ODL method (magenta plot) keeps the size constant, the deletion-only NODL- approach reduces the initial size (red plot), and the addition-only NODL+ increases the size (light-blue plot). However, the interplay between addition and deletion in our NODL method (dark blue) produces a more interesting behavior: it tends to adjust the representation complexity towards a certain balanced range, i.e., very small initial dictionaries are expanded, while very large ones are, vice versa, reduced.
Our main results demonstrating the advantages of the proposed NODL method are shown next in Fig. 2(b) and Fig. 2(c), for the "old" (Oxford) and the "new" (Flowers) domain, respectively. (Very similar results are obtained for Animals; see the Appendix.) The x-axis shows the final dictionary size, and the y-axis shows the reconstruction accuracy achieved by the trained dictionary on the test samples, measured by the Pearson correlation between the actual and reconstructed data. NODL clearly outperforms the fixed-size ODL, especially for smaller dictionary sizes; remarkably, this happens on both domains, i.e., besides improved adaptation to the new data, NODL is also better at preserving the "memories" of the old data, without increasing the representation complexity, i.e., for the same dictionary size.
Interestingly, deletion alone would not suffice: the deletion-only version, NODL-, is inferior to our full NODL method. On the other hand, the addition-only method, NODL+, is as accurate as NODL, but tends to increase the dictionary size too much. The interplay between the addition and deletion processes in our NODL seems to achieve the best of both worlds, attaining superior performance while keeping the dictionary size under control, in a narrower range (400 to 650 elements), expanding small dictionaries and compressing large ones as necessary7.
We will now focus on comparing the two main methods, the baseline ODL and the proposed NODL. The advantages of our approach become even more pronounced for larger input sizes, e.g., 100x100 images, in similar sparse-dictionary, dense-code settings. (We keep the dictionary elements at the same sparsity rate, 50 non-zeros out of 10,000 dimensions, and simply use completely non-sparse codes.)
In Fig. 3(a) and Fig. 3(b), we see that NODL considerably outperforms ODL on both the first domain (Oxford) and the second domain (Flowers); the results for Animals are very similar and are given in the Appendix, Fig. 10. In Appendix Sec. B.6, Fig. 17 depicts examples of actual animal images and the corresponding reconstructions by the fixed-size ODL and our NODL methods (not included here due to space restrictions). A better reconstruction quality of our method can be observed (e.g., a more visible dog shape and more details, such as the dog's legs, as opposed to the collection of clusters produced by the ODL method; note, however, that printer resolution may reduce the visible difference, so viewing the images in the online version of this paper is recommended).
Moreover, NODL can also be beneficial in classification settings. Given a dictionary, i.e., a sparse linear autoencoder trained in an unsupervised setting, we use the codes (i.e., feature vectors) computed on the test data from the second domain (Animals and Flowers) and evaluate multiple classifiers learned on those features to discriminate between the two classes. In Fig. 3(c), we show the logistic regression results using 10-fold cross-validation; similar results for several other classifiers are presented in the Appendix, Fig. 10. Note that we also perform filter-based feature subset selection, using each feature's statistical significance, as measured by its p-value, as the ranking function, and selecting subsets of the top k features, increasing k from 1 to the total number of features (the code length, i.e., the number of dictionary elements). The x-axis in Fig. 3(c) shows the value of k, while the y-axis plots the classification error rate for the features derived by each method. We can see that our NODL method (blue) yields lower errors than the baseline ODL (magenta) for relatively small subsets of features, although the difference is negligible for the full feature set. Overall, this suggests that our NODL approach achieves better reconstruction of the input data without extra overfitting in the classification setting, since it generalizes at least as well as, and often better than, the baseline ODL method.
Non-sparse dictionary elements
When exploring a wide range of sparsity settings (see Appendix), we observed quite different results for non-sparse dictionaries, as opposed to those presented above. Fig. 8(b) (in the Appendix, due to space constraints) summarizes the results for a particular setting of fully dense dictionaries (no zero entries) but sparse codes (50 non-zeros out of up to 600 dictionary elements; the codes are still dense when the dictionary size is below 50). In this setting, unlike the previous one, we do not observe any significant improvement in accuracy due to the neurogenetic approach, in either reconstruction or classification; both methods perform practically the same.

7 In our experiments, we also track which dictionary elements are deleted by our method; generally, both old and newly added elements get deleted, depending on the specific settings.
(Note also a somewhat surprising phenomenon: after a certain point, i.e., about 50 elements, the reconstruction accuracy of both methods actually declines rather than improves with an increasing dictionary size.)
It is interesting to note, however, that the overall classification errors, for both methods, are much higher in this setting (from 0.4 to 0.52) than in the sparse-dictionary setting (from 0.22 to 0.36). Even using non-sparse codes in the non-sparse dictionary setting still yields inferior results compared to sparse dictionaries (see the results in the Appendix).
In summary, on the real-life image datasets considered here, our NODL approach is often superior (and never inferior) to the standard ODL method; moreover, there is consistent evidence that our approach is most beneficial in sparse dictionary settings.

4.2 SPARSE ORTHOGONAL INPUTS: NLP AND SYNTHETIC DATA

So far, we have explored some conditions on the method's properties (e.g., sparse versus dense dictionaries, as well as code sparsity/density) that can be beneficial for the neurogenetic approach. Our further question is: what kind of specific data properties would best justify neurogenetic over traditional, fixed-size dictionary learning? As it turns out, the fixed-size ODL approach has difficulty adapting to a new domain in non-stationary settings when the data in both domains are sparse and, across the domains, the supports (i.e., the sets of non-zero coordinates) are almost non-overlapping (i.e., the datasets are nearly orthogonal). This type of data is related to the natural language processing problem considered below. Furthermore, pushing this type of structure to the extreme, we used simulations to better understand the behavior of our method. Here we focus, again, on sparse dictionary elements, as a well-suited basis for representing sparse data; moreover, our empirical results confirm that, as expected, dense dictionary elements do not yield good reconstructions of sparse data.
Sparse Natural Language Processing Problem
We consider a very sparse word co-occurrence matrix (on average, about 14 non-zeros in a column of size 12,883), built from text in two different domains, biology and mathematics, with a total vocabulary size of approximately 12,883 words. For illustration purposes, the full matrix was split in two, shown in Fig. 4(c) and 4(d), where the math terms correspond to the first block of columns and the biology terms to the second one (though it may be somewhat hard to see in the picture, the average number of non-zeros per row/column is indeed about 14).
We use the sparse columns (or rows) of the matrix, indexed by the vocabulary words, as our input data for learning a dictionary of sparse elements (25 non-zeros) with sparse codes (38 non-zeros); one way of constructing such a matrix is sketched below. The resulting word codes in the learned dictionary can later be used as word embeddings, or word vectors, in various NLP tasks such as information extraction, semantic parsing, and others (Yogatama et al., 2015; Faruqui et al., 2015; Sun et al., 2016). (Note that many non-domain-specific words were removed from the vocabulary to obtain the final size of 12,883.) Here we evaluate our NODL method (NODL (sparse) in the plots) against the baseline ODL approach (ODL (sparse)) in a setting where the biology domain is processed first, and one then has to switch to the mathematics domain. We use 2750 samples from each of the domains for training and the same number for testing. The evaluation results are shown in Fig. 4.
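For concreteness, one simple way to build such a co-occurrence matrix is sketched below in Python. The paper does not specify its extraction pipeline, so the windowing scheme and the helper names here are assumptions.

import numpy as np
from collections import Counter

def cooccurrence_matrix(documents, vocab, window=10):
    # documents: lists of tokens already restricted to `vocab`; returns a
    # symmetric count matrix whose sparse columns serve as the inputs x_t.
    index = {w: i for i, w in enumerate(vocab)}
    counts = Counter()
    for doc in documents:
        for pos, w in enumerate(doc):
            for u in doc[pos + 1 : pos + window]:
                counts[(index[w], index[u])] += 1
    C = np.zeros((len(vocab), len(vocab)))
    for (i, j), c in counts.items():
        C[i, j] += c
        C[j, i] += c
    return C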
For the first domain (biology), both methods perform very similarly (i.e., they remember the old data equally well), while for the second, more recent domain, our NODL algorithm clearly outperforms its competitor. Moreover, as mentioned above, non-sparse (dense) dictionaries are not suited for modeling highly sparse data such as our NLP data: in Fig. 4, both random dense dictionaries (random-D) and the dense dictionaries learned with ODL (ODL (dense)) do poorly in the biology and mathematics domains.
However, the reconstruction accuracy, as measured by the Pearson correlation, was not very high overall, i.e., the problem turned out to be more challenging than encoding image data. It gave us an intuition about the structure of sparse data that may be contributing to the improvements due to neurogenesis. Note that a word co-occurrence matrix spanning different domains, such as biology and mathematics, tends to have an approximately block-diagonal structure, where words from the same domain co-occur more frequently with each other than with words from the other domain. Pushing this type of structure to the extreme, we next studied a simulated sparse dataset where the samples from the two domains are not only sparse, but have completely non-overlapping supports, i.e., the data matrix is block-diagonal (see Fig. 7(c) in the Appendix).

[Figure 4: Reconstruction accuracy for the sparse NLP data. Panels: (a) 1st domain (Biology); (b) 2nd domain (Mathematics); (c) Biology co-occurrences; (d) Math co-occurrences.]
[Figure 5: Reconstruction accuracy for the sparse synthetic data. Panels: (a) Pearson, first domain; (b) Pearson, second domain; (c) D learned by ODL; (d) D learned by NODL (ours).]

Synthetic Sparse Data
We generated a synthetic sparse dataset with 1024 dimensions and only 50 non-zeros in each sample. Moreover, we ensured that the data in the two domains had non-overlapping supports (i.e., non-intersecting sets of non-zero coordinates), by always selecting the non-zeros in the first domain from the first 512 dimensions, while only using the last 512 dimensions for the second domain (Fig. 7(c) in the Appendix); a generator sketch is given below. For the evaluation on the synthetic data, we use a total of 200 samples each for training and testing (100 samples for each of the two domains), and smaller batches for online training, containing 20 samples each (instead of the 200 samples used earlier for the image and language data).
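The generator for this dataset is essentially fully specified by the text above, so it can be reproduced directly; only the distribution of the non-zero values (standard normal here) is our assumption.

import numpy as np

def synthetic_domain(n_samples, active, dim=1024, nnz=50, seed=0):
    # Sparse samples whose support lies inside the index set `active`.
    rng = np.random.default_rng(seed)
    X = np.zeros((n_samples, dim))
    for i in range(n_samples):
        support = rng.choice(active, size=nnz, replace=False)
        X[i, support] = rng.standard_normal(nnz)
    return X

X1 = synthetic_domain(100, np.arange(512), seed=1)        # first domain
X2 = synthetic_domain(100, np.arange(512, 1024), seed=2)  # second domain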
Since the data are sparse, we adjust the sparsity of the dictionary elements accordingly (50 non-zeros per element; for code sparsity, we also present results with 50 non-zeros). In Fig. 5, we show the reconstruction accuracy for the first- and second-domain data. For the first domain, the baseline ODL method (ODL (sparse) in the plots) and our NODL (NODL (sparse)) perform equally well. For the second domain, on the other hand, the ODL algorithm's performance degrades significantly compared to the first domain, because the data from the second domain have non-overlapping support w.r.t. the data from the first domain. Our method performs very well on the second domain (almost as well as on the first). It is also interesting to analyze the case of a random non-sparse dictionary (random-D), which actually performs better than the baseline ODL on the second domain: random dictionary elements remain non-sparse in all dimensions, thereby doing an average job in both domains. Along the same lines, ODL (dense) performs better than ODL (sparse) on the second domain. The performance of non-sparse dictionaries should, however, degrade significantly as the sparsity of the data increases, as we saw above for the NLP data. Clearly, our NODL (sparse) gives consistently better reconstruction accuracy than the other methods, across the two domains.
In Fig. 5(c) and Fig. 5(d), we show the sparsity structure of the dictionary elements learned using the baseline ODL method and our NODL method, respectively. These plots give better insight into why the baseline method does not work: it keeps the same sparsity structure that it used for the data from the first domain. Our NODL adapts to the second-domain data thanks to its ability to add new dictionary elements, which are randomly initialized with non-zero support in all dimensions.
Next, in Sec. 5, we discuss our intuitions on why NODL performs better than the ODL algorithm under certain conditions.

5 WHEN NEUROGENESIS CAN HELP, AND WHY

In Sec. 4, we observed that our NODL method outperforms the ODL algorithm in two general settings, both involving sparse dictionary elements: (i) non-sparse data, such as real-life images, and (ii) sparse data with (almost) non-overlapping supports. In this section, we attempt to analyze what contributes to the success of our approach in these settings, starting with the latter.
Sparse data with non-overlapping supports, sparse dictionary
As discussed above, in this scenario the data from both the first and the second domain are sparse, and their supports (non-zero dimensions) are non-overlapping, as shown in Fig. 7(c). Note that, when training a dictionary using the fixed-size, sparse-dictionary ODL method, we observe only a minor adaptation to the second domain after training on the first domain, as shown in Fig. 5(c). Our empirical observations are supported by the theoretical result summarized in Lemma 1 below. Namely, we prove that when the ODL algorithm is used in the above scenario, a dictionary trained on the first domain cannot adapt to the second domain. (The minor adaptation, i.e., the few non-zeros observed in our results in Fig. 5(c), occurs only due to an implementation detail involving the normalization of sparse dictionary elements when computing the codes; the normalization introduces non-zeros of small magnitude in all dimensions. See the Appendix for experimental results without normalization of the elements, which conform to Lemma 1.)

Lemma 1. Let x_1, x_2, ..., x_{t-1} ∈ R^m be a set of samples from the first domain, with non-zeros (support) in a set of dimensions P ⊆ M = {1, ..., m}, and let x_t, x_{t+1}, ..., x_n ∈ R^m be a set of samples from the second domain, with non-zeros (support) in dimensions Q ⊆ M, such that P ∩ Q = ∅ and |P| = |Q| = l. Let d_1, d_2, ..., d_k ∈ R^m denote the dictionary elements learned by the ODL algorithm, with a sparsity constraint of at most l non-zeros in each element8, on the data from the first domain, x_1, ..., x_{t-1}. Then (1) those elements have non-zero support in P only, and (2) after learning from the second-domain data, the support (non-zero dimensions) of the correspondingly updated dictionary elements will remain in P.

8 l corresponds to β_d in Alg. 1.

Proof Sketch. Consider processing the data from the first domain. At the first iteration, a sample x_1 is received, its code α_1 is computed, and the matrices A and B are updated, as shown in Alg. 1 (the non-highlighted part); next, the dictionary update step is performed, which optimizes

D^{(1)} = \arg\min_{D \in \mathcal{C}} \frac{1}{2}\mathrm{Tr}(D^\top D A) - \mathrm{Tr}(D^\top B) + \lambda \sum_j \|d_j\|_1.   (6)

Since the support of x_1 is limited to P, we can show that the optimal dictionary D must also have all of its columns/elements supported in P. Indeed, assume the contrary: let d_j(i) ≠ 0 for some dictionary element/column j and some i ∉ P. Then it is easy to see that setting d_j(i) to zero reduces both the sum-squared error and the l1-norm in (6), yielding another dictionary that achieves a lower overall objective; this contradicts the assumption that D was optimal. Thus, the dictionary update step must produce a dictionary in which all columns have their support in P. By induction, the statement also holds for the dictionary obtained after processing all of the samples from the first domain. Next, the samples from the second domain start arriving; note that those samples belong to a different subspace, spanning the dimensions in the support set Q, which does not intersect P. Thus, using the current dictionary, the encoding α_t of the first sample x_t from the second domain (i.e., the solution of the LASSO problem in step 4 of Alg. 1) will be the zero vector: since Dα is supported in P while x_t is supported in the disjoint set Q, any non-zero code can only increase both the squared reconstruction error and the l1 penalty. Therefore, the matrices A and B remain unchanged during the update in step 11, and thus the support of each b_j and, consequently, of u_j and of the updated dictionary elements d_j will remain in P. By induction, every dictionary update in response to a new sample from the second domain preserves the support of the dictionary elements, and thus the final dictionary elements will also have their support only in P.
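The zero-code step of the proof is easy to verify numerically. In the toy check below (an illustration using scikit-learn's LASSO, not the paper's code), a dictionary supported on P receives a sample supported on a disjoint Q, and the resulting code is exactly zero, so A and B would remain unchanged:

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
m = 16
D = np.zeros((m, 4))
D[:8] = rng.standard_normal((8, 4))   # elements supported on P = {0,...,7}
D /= np.linalg.norm(D, axis=0)
x = np.zeros(m)
x[8:] = rng.standard_normal(8)        # sample supported on Q = {8,...,15}

code = Lasso(alpha=0.1, fit_intercept=False).fit(D, x).coef_
print(code)                            # all zeros, as Lemma 1 predicts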
Non-sparse data, sparse dictionary
We now discuss an intuitive explanation for the success of the neurogenetic approach in this scenario, leaving a formal theoretical analysis as a direction for future work. When learning sparse dictionaries on non-sparse data, such as natural images, we observed that many dictionary elements have non-overlapping supports with respect to each other; see, for example, Fig. 6(d), where each column corresponds to a 10,000-dimensional dictionary element, with its non-zero dimensions shown in black. Apparently, the non-zero dimensions of an element tend to cluster spatially, i.e., to form a patch within an image. The non-overlapping supports of the dictionary elements result in a specific structure of the matrix A. As shown in Fig. 6(b), for the ODL approach the resulting matrix A includes many off-diagonal non-zero entries of large absolute value (along with high values on the diagonal). Note that, by definition, A is an empirical covariance of the code vectors, and it is easy to see that a non-zero value of a_jk implies that the j-th and the k-th dictionary elements were used jointly to explain the same data sample(s). Thus, the dense matrix structure with many non-zero off-diagonal entries, shown in Fig. 6(b), implies that, when the dictionary elements are sparse, they are often used jointly to reconstruct the data. In the case of non-sparse dictionary elements, on the other hand, the matrix A has an almost diagonally dominant structure, i.e., only a few dictionary elements are used effectively in the reconstruction of each data sample, even with non-sparse codes (see the Appendix for details).

[Figure 6: Visualization of the sparse dictionary and the matrix A learned on the first imaging domain (Oxford images), using the baseline ODL method and our method. Panels: (a) A with ODL (dense elements); (b) A with ODL (sparse elements); (c) A with our method (sparse elements); (d) D with ODL (sparse elements).]
Note that in the dictionary update expression u_j ← (b_j − Σ_{k≠j} d_k a_{jk}) / a_{jj} in (3), when the values a_{jk}/a_{jj} are large for multiple k, the j-th dictionary element becomes tightly coupled with the other dictionary elements, which reduces its adaptability to new, non-stationary data. In our algorithm, the values a_{jk}/a_{jj} remain high if both elements j and k have a similar "age"; however, those values are much lower if one of the elements was introduced by neurogenesis much more recently than the other. In Fig. 6(c), the upper-left block on the diagonal, representing the oldest elements (added during initialization), is not diagonally dominant (see the sub-matrices of A with NODL in Fig. 14 in the Appendix). The lower-right block, corresponding to the most recently added new elements, may also have a similar structure (though it is not visible due to the relatively low magnitudes of the new elements; see the Appendix). Overall, our interpretation is that the old elements are tied to each other, whereas the new elements may also be tied to each other, but less strongly, and are not tied to the old elements, yielding a block-diagonal structure of A in the neurogenetic approach, where the blocks correspond to dictionary elements adapted to particular domains. In other words, neurogenesis allows for adaptation to a new domain without forgetting the old one.

6 CONCLUSIONS

In this work, we proposed a novel algorithm, Neurogenetic Online Dictionary Learning (NODL), for the problem of learning representations in non-stationary environments. Our algorithm builds a dictionary of elements by learning from an online stream of data, while also adapting the dictionary structure (the number of elements/hidden units and their connectivity) via a continuous birth (addition) and death (deletion) of dictionary elements, inspired by the adult neurogenesis process in the hippocampus, which is known to be associated with better adaptation of an adult brain to changing environments. Moreover, introducing sparsity in the dictionary elements allows for adaptation of the hidden-unit connectivity and further performance improvements.
Our extensive empirical evaluation on both real-world and synthetic data demonstrated that the interplay between the birth and death of dictionary elements allows for more adaptive dictionary learning, better suited for non-stationary environments than both of its counterparts: the fixed-size online method of Mairal et al. (2009) (no addition and no deletion) and the online version of the group-sparse coding method of Bengio et al. (2009) (deletion only). Furthermore, we evaluated, both empirically and theoretically, several specific conditions on both the method's and the data's properties (involving the sparsity of elements, codes, and data) under which our method has a significant advantage over the standard, fixed-size online dictionary learning. Overall, we conclude that neurogenetic dictionary learning typically performs as well as, and often much better than, its competitors. In future work, we plan to explore a non-linear extension of the dictionary model, as well as a stacked autoencoder consisting of multiple layers.

REFERENCES

Michal Aharon, Michael Elad, and Alfred Bruckstein. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing, 2006.
Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? In Advances in Neural Information Processing Systems, 2014.
Samy Bengio, Fernando Pereira, Yoram Singer, and Dennis Strelow. Group sparse coding. In Advances in Neural Information Processing Systems 22, 2009.
Cristian Bucilu, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2006.
Timothy J. Draelos, Nadine E. Miner, Jonathan A. Cox, Christopher C. Lamb, Conrad D. James, and James B. Aimone. Neurogenic deep learning. In ICLR 2016 Workshop Track, 2016a.
Timothy J. Draelos, Nadine E. Miner, Christopher C. Lamb, Craig M. Vineyard, Kristofor D. Carlson, Conrad D. James, and James B. Aimone. Neurogenesis deep learning. arXiv preprint arXiv:1612.03770, 2016b.
Scott E. Fahlman and Christian Lebiere. The cascade-correlation learning architecture. 1989.
Manaal Faruqui, Yulia Tsvetkov, Dani Yogatama, Chris Dyer, and Noah Smith. Sparse overcomplete word vector representations. arXiv preprint arXiv:1506.02004, 2015.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
Rodolphe Jenatton, Julien Mairal, Guillaume Obozinski, and Francis Bach. Proximal methods for hierarchical sparse coding. Journal of Machine Learning Research, 2011.
Gerd Kempermann. Adult Neurogenesis: Stem Cells and Neuronal Development in the Adult Brain. 2006.
Kenneth Kreutz-Delgado, Joseph F. Murray, Bhaskar D. Rao, Kjersti Engan, Te-Won Lee, and Terrence J. Sejnowski. Dictionary learning algorithms for sparse representation. Neural Computation, 2003.
Honglak Lee, Alexis Battle, Rajat Raina, and Andrew Y. Ng. Efficient sparse coding algorithms. In Advances in Neural Information Processing Systems, 2006.
Julien Mairal, Francis Bach, Jean Ponce, and Guillermo Sapiro. Online dictionary learning for sparse coding. In Proceedings of the 26th Annual International Conference on Machine Learning, 2009.
Julien Mairal, Francis Bach, Jean Ponce, and Guillermo Sapiro. Online learning for matrix factorization and sparse coding. Journal of Machine Learning Research, 2010.
Bruno A. Olshausen and David J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 1997.
Ignacio Ramirez and Guillermo Sapiro. An MDL framework for sparse coding and dictionary learning. IEEE Transactions on Signal Processing, 60(6):2913-2927, 2012.
Irina Rish, Guillermo A. Cecchi, Aurelie Lozano, and Ravi Rao. Adult neurogenesis as efficient sparsification. In Society for Neuroscience Meeting (poster presentation), November 12-16, 2011.
Amar Sahay, Kimberly N. Scobie, Alexis S. Hill, Colin M. O'Carroll, Mazen A. Kheirbek, Nesha S. Burghardt, André A. Fenton, Alex Dranovsky, and René Hen. Increasing adult hippocampal neurogenesis is sufficient to improve pattern separation. Nature, 2011.
Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 2014.
Ales Stuchlik. Dynamic learning and memory, synaptic plasticity and neurogenesis: an update. Frontiers in Behavioral Neuroscience, 2014.
Fei Sun, Jiafeng Guo, Yanyan Lan, Jun Xu, and Xueqi Cheng. Sparse word embeddings using l1 regularized online learning. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, 2016.
Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), 1996.
Dani Yogatama, Manaal Faruqui, Chris Dyer, and Noah A. Smith. Learning word representations with hierarchical sparse coding. In Proceedings of ICML, 2015.
Ming Yuan and Yi Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2006.

A IMPLEMENTATION DETAILS

In our implementation of a sparsity constraint with a given number of non-zeros, we perform a binary search for the value of the corresponding regularization parameter λ, as shown in Alg. 2. This approach costs much less than other techniques, such as LARS, while the quality of the solutions is very similar.

Algorithm 2 Binary search of λ for the proximal-method-based sparsity
Require: u (vector to be sparsified), β (number of non-zeros), ε_β (acceptable error in β), ε_λ (acceptable error in λ).
1: u⁺ = abs(u)
2: λ_min = 0 (no sparsity)
3: λ_max = max(u⁺)
4: while true do
5:   λ_mean = (λ_min + λ_max)/2
6:   β̂ = nnz((u⁺ − λ_mean)_+)  % non-zeros remaining after the proximal operator
7:   if abs(λ_max − λ_min)/λ_max < ε_λ or abs(β̂ − β) ≤ ε_β then
8:     λ = λ_mean
9:     return λ
10:  else if β̂ > β then
11:    λ_min = λ_mean
12:  else if β̂ < β then
13:    λ_max = λ_mean
14:  else
15:    error: this condition is not possible.
16:  end if
17: end while

B EXPERIMENTAL RESULTS

B.1 ADDITIONAL PLOTS FOR THE EXPERIMENT RESULTS DISCUSSED IN SEC. 4

[Figure 7: The data sets for the evaluation of the online dictionary learning algorithms. Panels: (a) Urban: Oxford Buildings; (b) Nature: Flowers and Animals; (c) synthetic data.]
[Figure 8: Reconstruction accuracy for 100x100 images with a non-sparse dictionary but sparse codes (50 non-zeros). Panels: (a) 1st domain (Oxford); (b) 2nd domain (Flowers); (c) classification error.]

Fig. 9 extends the original Fig. 2 in the main paper, for the same experiments on the images compressed to 32x32, in the setting of sparse dictionary elements and relatively less sparse codes. This figure is included to show that: (i) our analysis of the results presented in Fig. 2 extends to the other metric, the mean-square error (MSE); (ii) the results for the second-domain data, flowers and animals, are also highly similar to each other. Along the same lines, Fig. 10 and Fig. 11 extend Fig. 3 and Fig. 8, respectively. In these extended figures, we see similar generalizations across the evaluation metrics, the data sets, and the classifiers.

B.2 TRADE-OFF BETWEEN THE SPARSITY OF DICTIONARY ELEMENTS AND CODES

In this section, we present results from the experiments on the 32x32 compressed images, in which we vary the sparsity of the codes as well as the sparsity of the dictionary elements, for a further analysis of the trade-off between the two. From left to right in Fig. 12, we slowly decrease the sparsity of the dictionary elements while increasing the sparsity of the codes. Here, the numbers of non-zeros in a dictionary element (dnnz) and in a code (cnnz) are chosen so that they would produce an overall number of non-zeros approximately equal to the size of an image (i.e., 32x32) if there were no overlaps of non-sparse patches between the elements. We observe the following from the figure: (i) the overall reconstruction gets worse when we trade off the sparsity of dictionary elements for the sparsity of codes;
(ii) the performance of our NODL method is better than that of the baseline ODL method, especially when there is higher sparsity in the dictionary elements.

B.3 NON-SPARSE DICTIONARY ELEMENTS AND NON-SPARSE CODES

We also performed experiments for the settings where the dictionary elements and the codes are both non-sparse; see Fig. 13. In this scenario, while we get very high reconstruction accuracy, the overall classification error remains much higher (ranging between 0.32 and 0.48) compared to the sparse-dictionary-elements setting in Fig. 10 (0.22 to 0.36), though lower than the setting of a non-sparse dictionary with sparse codes in Fig. 11 (0.40 to 0.52).

B.4 ADDITIONAL PLOTS FOR THE ANALYSIS OF SPARSE DICTIONARY ELEMENTS

Fig. 14 extends Fig. 6. For the case of non-sparse dictionary elements, the structure of the matrix A with the ODL algorithm after processing the first-domain image data (Oxford images) is shown in Fig. 14(a) for the non-sparse-codes setting (the structure is similar for sparse codes). In both cases of non-sparse elements, with sparse as well as non-sparse codes, the matrix is diagonally dominant, in contrast to the scenario of sparse dictionary elements in Fig. 6(b). For our NODL algorithm in the sparse-dictionary-elements setting, we show the matrix A in Fig. 6(c) and its sub-matrices in Fig. 14(b), 14(c) and 14(d). Fig. 14(b) demonstrates that the old dictionary elements are tied to each other (i.e., high values of a_{jk}/a_{jj} for all k ≠ j). A similar argument applies to the recently added new dictionary elements, as in Fig. 14(d), though the overall magnitude range is smaller compared to the old elements in Fig. 14(b). Also, we see that the new elements are not as strongly tied to each other as the old elements, but more so than in the case of non-sparse dictionary elements. In Fig. 14(c), we can see more clearly that the new elements are not tied to the old elements. Overall, from the above plots, our analysis is that the new elements are more adaptive to new, non-stationary environments, since they are not tied to the old elements and are only weakly tied to each other.

B.5 SYNTHETIC SPARSE DATA SETTINGS

For the case of modeling the synthetic sparse data with sparse dictionary elements, Fig. 15 extends Fig. 5 with plots for the other metric, the mean-square error (MSE). In this figure, the ODL algorithm adapts to the second-domain data, though not as well as our NODL algorithm. Even this adaptation of ODL is due only to the normalization of the dictionary elements when computing the codes, as we mention in the main text. Without normalization of the dictionary elements, the ODL algorithm does not adapt to the second-domain data at all; for this setting, the results are shown in Fig. 16.

B.6 RECONSTRUCTED IMAGES

In Fig. 17 and Fig. 18, we show the reconstructions of some randomly picked images from the animals data set, with sparse and non-sparse dictionary elements, respectively (500 elements). We suggest viewing these reconstructed images in the digital version to appreciate the subtle comparisons. For the case of non-sparse elements in Fig. 18, the reconstructions are equally good for both ODL and our NODL algorithm. On the other hand, in the sparse-elements setting, our NODL algorithm gives much better reconstructions than the baseline ODL, as we see visually in Fig. 17.
These comparisons of the reconstructed images conform to the evaluation results presented above. It is interesting to see that, with sparse dictionary elements, the background is smoothed out and the animal is kept in focus, with a good reconstruction of the body parts (especially the ones which distinguish between the different species of animals). The non-sparse dictionary, in contrast, does not seem to distinguish between the background and the animal in an image; in some of the reconstructed images, it is hard to tell the animal from the background. Clearly, the background in an image should contribute noise to the features in tasks such as the binary classification considered above (discussed in Sec. 4). This should also explain why we get much better classification accuracy with sparse dictionary elements than with non-sparse elements. For the scenario of sparse codes with non-sparse dictionary elements, the reconstructed images are even worse (not shown here due to space constraints).

B.7 RE-INITIALIZATION OF "DEAD" DICTIONARY ELEMENTS

In Mairal et al. (2009), it was also noted that, during dictionary updates, some elements may turn into zero columns (i.e., columns of zero l2-norm); those elements were referred to as "dead" elements, since they do not contribute to the data reconstruction. The fraction of such dead elements was typically very small in our experiments with the original ODL method (i.e., without the explicit "killing" of elements via the group-sparsity regularization). Mairal et al. (2009) propose to reinitialize such dead elements, using, for example, the existing batch of data (random values are another option). Here, we refer to this extension of the baseline ODL method as ODL*. Specifically, in ODL*, we reinitialize the "dead" elements with random values and then continue updating them, along with the other dictionary elements, on the current batch of data; a small sketch of this step is given below. Fig. 19 extends Fig. 2 with additional plots that include the ODL* extension of the baseline ODL algorithm, keeping all experimental settings the same. We can see that the difference in performance between ODL and its extension ODL* is negligible, perhaps due to the fact that the number of dead elements, without an explicit group-sparsity regularization, is typically very small, as already mentioned above. We observe that our method outperforms the ODL* version, as well as the original ODL baseline.
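A sketch of the ODL* reinitialization step, as we read the description above (the function name and the tolerance are our assumptions):

import numpy as np

def reinitialize_dead_elements(D, rng, tol=1e-12):
    # Replace zero-norm ("dead") dictionary columns with random unit-norm ones.
    norms = np.linalg.norm(D, axis=0)
    dead = norms < tol
    if dead.any():
        fresh = rng.standard_normal((D.shape[0], int(dead.sum())))
        D[:, dead] = fresh / np.linalg.norm(fresh, axis=0)
    return D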
B.8 EVALUATING POSSIBLE EFFECTS OF VARYING THE ORDER OF THE TRAINING DATASETS

In our original experiments, presented in the main paper (Sec. 4), the Oxford buildings images are processed as the first-domain data set, followed by the mixture of flower and animal images as the second-domain data set. One can ask whether this particular sequence of input datasets had a strong influence on our results; in this section, we evaluate different permutations of the input data sets. Specifically, we pick any two of the three available data sets and use them as the first- and second-domain data, respectively. In Fig. 20, we present test results on the second-domain data for the baseline ODL and our NODL methods, with each subfigure corresponding to one of the six processing orders of the data sets used for training. All experimental settings are exactly the same as those used to produce the plots in Fig. 2. Overall, we observe that, for all possible orders of the input datasets, our NODL approach is either superior or comparable to ODL, but never inferior. We see a significant advantage of NODL over ODL when using the Oxford or Flowers data sets as the first domain. However, this advantage is less pronounced when using the Animals data set as the first domain. One possible speculation is that animal images may be somewhat more complex to reconstruct than the other two types of data, so that learning their representation first is sufficient for the subsequent representation of the other two types of datasets. Investigating this hypothesis and, more generally, the effects of changes in the training-data complexity, from simpler to more complex or vice versa (where complexity can be measured, for example, as image compressibility), remains an interesting direction for further research.

B.9 ROBUSTNESS OF OUR NODL ALGORITHM W.R.T. THE TUNING PARAMETERS

To demonstrate the robustness of our NODL algorithm w.r.t. the tuning parameters, we performed additional experiments, varying each of the tuning parameters over a wide range of values while keeping the others the same as those used to produce Fig. 2. In Fig. 21, 22, 23, 24, 25 and 26, we vary the batch size and the parameters c_k, λ_g, β_c, β_d and γ, respectively, and show the corresponding test results on the flowers dataset of the second domain (see Alg. 1 in Sec. 3 for the roles of the tuning parameters in our NODL algorithm). In these plots, we see that our NODL algorithm outperforms the baseline ODL algorithm consistently across all the parameter settings.

[Figure 9: Reconstruction error for 32x32 images in the sparse-dictionary setting. Panels: (a) Pearson, Animals; (b) MSE, Oxford; (c) MSE, Flowers; (d) MSE, Animals.]
[Figure 10: Reconstruction error for 100x100 images with a sparse dictionary (50 non-zeros) and non-sparse codes (2000 non-zeros). Panels: (a) Pearson, Animals; (b) MSE, Flowers; (c) MSE, Animals; (d) random forest; (e) nearest neighbor; (f) naive Bayes.]
[Figure 11: Reconstruction error for 100x100 images with a non-sparse dictionary but sparse codes (50 non-zeros). Panels as in Fig. 10.]
[Figure 12: Reconstruction error for 32x32 images, on the animals data, with varying sparsity of dictionary elements and codes. Panels: (dnnz, cnnz) = (10, 100), (30, 33), (100, 10), each evaluated with the Pearson correlation and MSE.]
[Figure 13: Reconstruction error for 100x100 images with a non-sparse dictionary and non-sparse codes (500 non-zeros). Panels: Pearson and MSE for Oxford, Flowers and Animals, and classification error for random forest, nearest neighbor, logistic regression and naive Bayes.]
[Figure 14: The structure of A learned from the first-domain image data (Oxford images), using the baseline ODL method and our method. Panels: (a) A with ODL (non-sparse codes, non-sparse dictionary); (b) A with our method, the old 50 elements (non-sparse codes, sparse dictionary); (c) A with our method, all the new elements; (d) A with our method, the recently added new elements.]
[Figure 15: Reconstruction error (MSE) for the synthetic data from subspaces with non-overlapping supports of non-zeros. Panels: (a) MSE, first domain; (b) MSE, second domain.]
[Figure 16: Reconstruction error for the synthetic data from subspaces with non-overlapping supports of non-zeros, without normalization of the dictionary elements when computing the codes. Panels: (a) Pearson, first domain; (b) Pearson, second domain; (c) MSE, second domain.]
[Figure 17: Reconstructed 100x100 animal images (test data), with 500 sparse dictionary elements (non-sparse codes). In each row, the original image is on the left, and the reconstructions computed with ODL and NODL (our algorithm) are in the center and on the right, respectively.]
[Figure 18: Reconstructed 100x100 animal images (test data), with 500 non-sparse dictionary elements (non-sparse codes); layout as in Figure 17.]
[Figure 19: Extension of Fig. 2, with results for the ODL* version of ODL, in which occasional "dead" elements are reinitialized with random values. Panels: (a) 2nd domain (Flowers); (b) 2nd domain (Animals).]
[Figure 20: Evaluating the effects of the input data order; the experimental setup coincides with the one used to produce Fig. 2 (32x32 images). Different processing orders of the available datasets are used during training, and performance on the test subset from the second domain is shown. Panels (a)-(f) cover the six training orders over Oxford, Flowers and Animals.]
[Figure 21: Effects of the batch size (125, 200, 350, 500), with the other experimental settings as in Fig. 2 (32x32 images).]
[Figure 22: Effects of the tuning parameter c_k (10, 50, 100, 250), the upper bound on the number of new elements added per batch, with the other settings as in Fig. 2 (32x32 images).]
[Figure 23: Effects of the tuning parameter λ_g (3e-3, 1e-2, 3e-2, 5e-2), the regularization parameter for killing "weak" elements, with the other settings as in Fig. 2 (32x32 images).]
[Figure 24: Effects of the tuning parameter β_c (100, 200, 500, 1000), the number of non-zeros in a code, with the other settings as in Fig. 2 (32x32 images).]
[Figure 25: Effects of the tuning parameter β_d (5, 10, 20, 50), the number of non-zeros in a dictionary element, with the other settings as in Fig. 2 (32x32 images).]
[Figure 26: Effects of the threshold parameter γ for conditional neurogenesis (0.7, 0.8, 0.9, 1.0), with the other experimental settings as in Fig. 2 (32x32 images).]
SJk01vogl
Under review as a conference paper at ICLR 2017

ADVERSARIAL EXAMPLES FOR GENERATIVE MODELS

Jernej Kos, National University of Singapore; Ian Fischer, Google Research; Dawn Song, University of California, Berkeley

ABSTRACT

We explore methods of producing adversarial examples on deep generative models such as the variational autoencoder (VAE) and the VAE-GAN. Deep learning architectures are known to be vulnerable to adversarial examples, but previous work has focused on the application of adversarial examples to classification tasks. Deep generative models have recently become popular due to their ability to model input data distributions and generate realistic examples from those distributions. We present three classes of attacks on the VAE and VAE-GAN architectures and demonstrate them against networks trained on MNIST, SVHN and CelebA. Our first attack leverages classification-based adversaries by attaching a classifier to the trained encoder of the target generative model, which can then be used to indirectly manipulate the latent representation. Our second attack directly uses the VAE loss function to generate a target reconstruction image from the adversarial example. Our third attack moves beyond relying on classification or the standard loss for the gradient and directly optimizes against differences in the source and target latent representations. We also motivate why an attacker might be interested in deploying such techniques against a target generative network.

1 INTRODUCTION

Adversarial examples have been shown to exist for a variety of deep learning architectures.1 They are small perturbations of the original inputs, often barely visible to a human observer, but carefully crafted to misguide the network into producing incorrect outputs. Seminal work by Szegedy et al. (2013) and Goodfellow et al. (2014), as well as much recent work, has shown that adversarial examples are abundant and that finding them is easy.
Most previous work focuses on the application of adversarial examples to the task of classification, where the deep network assigns classes to input images. The attack adds small adversarial perturbations to the original input image. These perturbations cause the network to change its classification of the input, from the correct class to some other, incorrect class (possibly chosen by the attacker). Critically, the perturbed input must still be recognizable to a human observer as belonging to the original input class.2
Deep generative models, such as Kingma & Welling (2013), learn to generate a variety of outputs, ranging from handwritten digits to faces (Kulkarni et al., 2015), realistic scenes (Oord et al., 2016), videos (Kalchbrenner et al., 2016), 3D objects (Dosovitskiy et al., 2016), and audio (van den Oord et al., 2016). These models learn an approximation of the input data distribution in different ways, and then sample from this distribution to generate previously unseen but plausible outputs.
To the best of our knowledge, no prior work has explored using adversarial inputs to attack generative models. There are two main requirements for such work: describing a plausible scenario in which an attacker might want to attack a generative model, and designing and demonstrating an attack that succeeds against generative models. We address both of these requirements in this work.
One of the most basic applications of generative models is input reconstruction.

1 Adversarial examples are even easier to produce against most other machine learning architectures, as shown in Papernot et al. (2016), but we are focused on deep networks.
2 Random noise images and "fooling" images (Nguyen et al., 2014) do not belong to this strict definition of an adversarial input, although they do highlight other limitations of current classifiers.
Given an input im-age, the model first encodes it into a lower-dimensional latent representation, and then uses that rep-resentation to generate a reconstruction of the original input image. Since the latent representation1Adversarial examples are even easier to produce against most other machine learning architectures, asshown in Papernot et al. (2016), but we are focused on deep networks.2Random noise images and “fooling” images (Nguyen et al., 2014) do not belong to this strict definition ofan adversarial input, although they do highlight other limitations of current classifiers.1Under review as a conference paper at ICLR 2017usually has much fewer dimensions than the original input, it can be used as a form of compression.The latent representation can also be used to remove some types of noise from inputs, even when thenetwork has not been explicitly trained for denoising, due to the lower dimensionality of the latentrepresentation restricting what information the trained network is able to represent. Many genera-tive models also allow manipulation of the generated output by sampling different latent values ormodifying individual dimensions of the latent vectors without needing to pass through the encodingstep.These properties of input reconstruction generative networks suggest a variety of different attacksthat would be enabled by effective adversaries against generative networks. Any attack that targetsthe compression bottleneck of the latent representation can exploit natural security vulnerabilities inapplications built to use that latent representation. Specifically, if the person doing the encoding stepis separated from the person doing the decoding step, the attacker may be able to cause the encodingparty to believe they have encoded a particular message for the decoding party, but in reality theyhave encoded a different message of the attacker’s choosing. We explore this idea in more detail asit applies to the application of compressing images using a V AE or V AE-GAN architecture.2 R ELATED WORK AND BACKGROUNDThis work focuses on adversaries for variational autoencoders (V AEs, proposed in Kingma &Welling (2013)) and V AE-GANs (V AEs composed with a generative adversarial network, proposedin Larsen et al. (2015)).2.1 R ELATED WORK ON ADVERSARIESMany adversarial attacks on classification models have been described in existing literature (Good-fellow et al., 2014; Szegedy et al., 2013). These attacks can be untargeted, where the adversary’sgoal is to cause any misclassification, or the least likely misclassification (Goodfellow et al., 2014;Kurakin et al., 2016); or they can be targeted, where the attacker desires a specific misclassification.Moosavi-Dezfooli et al. (2016) gives a recent example of a strong targeted adversarial attack. Someadversarial attacks allow for a threat model where the adversary does not have access to the targetmodel (Szegedy et al., 2013; Papernot et al., 2016), but commonly it is assumed that the attackerdoes have that access, in an online or offline setting (Goodfellow et al., 2014; Kurakin et al., 2016).3Given a classifier f(x) : x2 X !y2 Y and original inputs x2 X , the problemof generating untargeted adversarial examples can be expressed as the following optimization:argminxL(x;x)s:t: f (x)6=f(x), whereL()is a chosen distance measure between exam-ples from the input space (e.g., the L2norm). 
Similarly, generating a targeted adversarial attack ona classifier can be expressed as argminxL(x;x)s:t:f (x) =yt, whereyt2Y is some targetlabel chosen by the attacker.These optimization problems can often be solved with optimizers like L-BFGS or Adam (Kingma& Ba, 2015), as done in Szegedy et al. (2013) and Carlini & Wagner (2016). They can also beapproximated with single-step gradient-based techniques like fast gradient sign (Goodfellow et al.,2014), fast gradient L2(Huang et al., 2015), or fast least likely class (Kurakin et al., 2016); or theycan be approximated with iterative variants of those and other gradient-based techniques (Kurakinet al., 2016; Moosavi-Dezfooli et al., 2016).An interesting variation of this type of attack can be found in Sabour et al. (2015). In that work,they attack the hidden state of the target network directly by taking an input image xand a targetimage xtand searching for a perturbed variant of xthat generates similar hidden state at layer lofthe target network to the hidden state at the same layer generated by xt. This approach can also beapplied directly to attacking the latent vector of a generative model.A variant of this attack has also been applied to V AE models in the concurrent work of Tabacofet al. (2016)4, which uses the KL divergence between the latent representation of the source andtarget images to generate the adversarial example. However in their paper, the authors mention thatthey tried attacking the output directly and that this only managed to make the reconstructions more3See Papernot et al. (2015) for an overview of different adversarial threat models.4This work was made public shortly after we published our early drafts.2Under review as a conference paper at ICLR 2017ReceiverzSender Attackerfenc fdecFigure 1: Depiction of the attack scenario. The V AE is used as a compression scheme to transmita latent representation of the image from the sender (left) to the receiver (right). The attacker con-vinces the sender to compress a particular image into its latent vector, which is sent to the receiver,where the decoder reconstructs the latent vector into some other image chosen by the attacker.blurry. While they do not explain the exact experimental setting, the attack sounds similar to ourLVAE attack, which we find very successful. Also, in their paper the authors do not consider themore advanced V AE-GAN models and more complex datasets like CelebA.2.2 B ACKGROUND ON VAE S AND VAE-GAN SThe general architecture of a variational autoencoder consists of three components, as shown in Fig-ure 8. The encoderfenc(x)is a neural network mapping a high-dimensional input representationxinto a lower-dimensional (compressed) latent representation z. All possible values of zform alatent space. Similar values in the latent space should produce similar outputs from the decoder ina well-trained V AE. And finally, the decoder/generator fdec(z), which is a neural network map-ping the compressed latent representation back to a high-dimensional output ^x. Composing thesenetworks allows basic input reconstruction ^x=fdec(fenc(x)). 
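As a minimal sketch of this composition — written in PyTorch, which is an assumption (the paper's own experiments use TensorFlow), with layer sizes mirroring the simple MNIST architecture described in Section 5 (a single 512-unit ReLU hidden layer and a 50-dimensional latent) — fenc maps x to the parameters of q(z|x), sampling uses the reparametrization trick described below, and fdec maps z back to x̂:

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Sketch of the f_enc / f_dec composition (simple MNIST sizes from Sec. 5)."""
    def __init__(self, x_dim=784, z_dim=50):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 512), nn.ReLU())
        self.to_mu = nn.Linear(512, z_dim)
        self.to_logvar = nn.Linear(512, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 512), nn.ReLU(),
                                 nn.Linear(512, x_dim), nn.Sigmoid())

    def f_enc(self, x):
        h = self.enc(x)
        return self.to_mu(h), self.to_logvar(h)   # parameters of q(z|x)

    def f_dec(self, z):
        return self.dec(z)

    def forward(self, x):
        mu, logvar = self.f_enc(x)
        # Reparametrization trick: z = mu + sigma * eps with eps ~ N(0, I),
        # so the sampling step stays differentiable w.r.t. the encoder.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.f_dec(z), mu, logvar          # x_hat = f_dec(f_enc(x))
```

Because sampling is expressed as z = μ + σ·ε with ε drawn outside the computation graph, gradients flow through μ and σ, which is what lets both the training loss and the gradient-based attacks below differentiate through fenc.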
This composed architecture is usedduring training to backpropagate errors from the loss function.The variational autoencoder’s loss function LVAE enables the network to learn a latent representationthat approximates the intractable posterior distribution p(zjx):LVAE=DKL[q(zjx)jjp(z)] +Eq[logp(xjz)]: (1)q(zjx)is the learned approximation of the posterior distribution p(zjx).p(z)is the prior distributionof the latent representation z.DKLdenotes the Kullback–Leibler divergence. Eq[logp(xjz)]isthe variational lower bound, which in the case of input reconstruction is the cross-entropy H[x;^x]between the inputs xand their reconstructions ^x. In order to generate ^xthe V AE needs to sampleq(zjx)and then compute fdec(z).For the V AE to be fully differentiable while sampling from q(zjx), the reparametrization trick(Kingma & Welling, 2013) extracts the random sampling step from the network and turns it intoan input,". V AEs are often parameterized with Gaussian distributions. In this case, fenc(x)outputsthe distribution parameters and2. That distribution is then sampled by computing z=+"p2where"N(0;1)is the input random sample, which does not depend on any parameters of fenc,and thus does not impact differentiation of the network.The V AE-GAN architecture of Larsen et al. (2015) has the same fencandfdecpair as in the V AE.It also adds a discriminator fdiscthat is used during training, as in standard generative adversarialnetworks (Goodfellow et al., 2014). The loss function of fdecuses the disciminator loss instead ofcross-entropy for estimating the reconstruction error.3 P ROBLEM DEFINITIONWe provide a motivating attack scenario for adversaries against generative models, as well as aformal definition of an adversary in the generative setting.3.1 M OTIVATING ATTACK SCENARIOTo motivate the attacks presented below, we describe the attack scenario depicted in Figure 1. Inthis scenario, there are two parties, the sender and the receiver, who wish to share images with eachother over a computer network. In order to conserve bandwidth, they share a V AE trained on theinput distribution of interest, which will allow them to send only latent vectors z.3Under review as a conference paper at ICLR 2017Figure 2: Results for the L2optimization latent attack (see Section 4.3) on the V AE-GAN, targetinga specific image from the class 0. Shown are the first 12 non-zero images from the test SVHN dataset. The columns are, in order: the original image, the reconstruction of the original image, theadversarial example, the predicted class of the adversarial example, the reconstruction of the adver-sarial example, the predicted class of the reconstructed adversarial example, the reconstruction of thereconstructed adversarial example (see Section 4.5), and the predicted class of that reconstruction.The attacker’s goal is to convince the sender to send an image of the attacker’s choosing to thereceiver, but the attacker has no direct control over the bytes sent between the two parties. However,the attacker has a copy of the shared V AE. The attacker presents an image xto the sender whichresembles an image xthat the sender wants to share with the receiver. For example, the senderwants to share pictures of kittens with the receiver, so the attacker presents a web page to the senderwith a picture of a kitten, which is x. The sender chooses xand sends its corresponding zto thereceiver, who reconstructs it. 
However, because the attacker controlled the chosen image, when thereceiver reconstructs it, instead of getting a faithful reproduction ^xofx(e.g., a kitten), the receiversees some other image of the attacker’s choosing, ^xadv, which has a different meaning from x(e.g.,a request to send money to the attacker’s bank account).There are other attacks of this general form, where the sender and the receiver may be separatedby distance, as in this example, or by time, in the case of storing compressed images to disk forlater retrieval. In the time-separated attack, the sender and the receiver may be the same person ormultiple different people. In either case, if they are using the insecure channel of the V AE’s latentspace, the messages they share may be under the control of an attacker. For example, an attackermay be able to fool an automatic surveillance system if the system uses this type of compression tostore the video signal before it is processed by other systems. In this case, the subsequent analysisof the video signal could be on compromised data showing what the attacker wants to show.While we do not specifically attack their models, viable compression schemes based on deep neuralnetworks have already been proposed in the literature, showing promising results Toderici et al.(2015; 2016).3.2 D EFINING ADVERSARIAL EXAMPLES AGAINST GENERATIVE MODELSWe make the following assumptions about generating adversarial examples on a target generativemodel,Gtarg(x) =fdec(fenc(x)).Gtargis trained on inputs Xthat can naturally be labeled withsemantically meaningful classes Y, although there may be no such labels at training time, or thelabels may not have been used during training. Gtargnormally succeeds at generating an output^x=Gtarg(x)in classywhen presented with an input xfrom classy. In other words, whatevertarget output class the attacker is interested in, we assume that Gtargsuccessfully captures it in thelatent representation such that it can generate examples of that class from the decoder. This targetoutput class does not need to be from the most salient classes in the training dataset. For example, onmodels trained on MNIST, the attacker may not care about generating different target digits (whichare the most salient classes). The attacker may prefer to generate the same input digits in a differentstyle (perhaps to aid forgery). We also assume that the attacker has access to Gtarg. Finally, theattacker has access to a set of examples from the same distribution as Xthat have the target label4Under review as a conference paper at ICLR 2017xEncoderfenczDecoderfdecxClassifierfclassVAE-GANDiscriminatorfdisc(0, 1)yFigure 3: The V AE-GAN classifier architecture used to generate classifier-based adversarial exam-ples on the V AE-GAN. The V AE-GAN in the dashed box is the target network and is frozen whiletraining the classifier. The path x!fenc!z!fclass!^yis used to generate adversarialexamples in z, which can then be reconstructed by fdec.ytthe attacker wants to generate. This does not mean that the attacker needs access to the labeledtraining dataset (which may not exist), or to an appropriate labeled dataset with large numbers ofexamples labeled for each class y2Y (which may be hard or expensive to collect). 
The attacksdescribed here may be successful with only a small amount of data labeled for a single target classof interest.One way to generate such adversaries is by solving the optimization problemargminxL(x;x)s:t: ORACLE (Gtarg(x)) =yt, where O RACLE reliably discriminatesbetween inputs of class ytand inputs of other classes. In practice, a classifier trained by theattacker may server as O RACLE . Other types of adversaries from Section 2.1 can also be used toapproximate this optimization in natural ways, some of which we describe in Section 4.If the attacker only needs to generate one successful attack, the problem of determining if an attackis successful can be solved by manually reviewing the xand^xadvpairs and choosing whicheverthe attacker considers best. However, if the attacker wants to generate many successful attacks, anautomated method of evaluating the success of an attack is necessary. We show in Section 4.5 howto measure the effectiveness of an attack automatically using a classifier trained on z=fenc(x).4 A TTACK METHODOLOGYThe attacker would like to construct an adversarially-perturbed input to influence the latent repre-sentation in a way that will cause the reconstruction process to reconstruct an output for a differentclass. We propose three approaches to attacking generative models: a classifier-based attack, wherewe train a new classifier on top of the latent space zand use that classifier to find adversarial exam-ples in the latent space; an attack using LVAE to target the output directly; and an attack on the latentspace, z. All three methods are technically applicable to any generative architecture that relies on alearned latent representation z. Without loss of generality, we focus on the V AE-GAN architecture.4.1 C LASSIFIER ATTACKBy adding a classifier fclass to the pre-trained generative model5, we can turn the problem of gen-erating adversaries for generative models back into the previously solved problem of generatingadversarial examples for classifiers. This approach allows us to apply all of the existing attackson classifiers in the literature. However, as discussed below, using this classifier tends to producelower-quality reconstructions from the adversarial examples than the other two attacks due to theinaccuracies of the classifier.Step 1. The weights of the target generative model are frozen, and a new classifier fclass(z)!^yistrained on top of fencusing a standard classification loss Lclassier such as cross-entropy, as shownin Figure 3. This process is independent of how the original model is trained, but it requires a5This is similar to the process of semi-supervised learning in Kingma et al. (2014), although the goal isdifferent.5Under review as a conference paper at ICLR 2017training corpus pulled from approximately the same input distribution as was used to train Gtarg,with ground truth labels for at least two classes: ytandy~t, the negative class.Step 2. With the trained classifier, the attacker finds adversarial examples xusing the methodsdescribed in Section 4.4.Usingfclass to generate adversarial examples does not always result in high-quality reconstructions,as can be seen in the middle column of Figure 5 and in Figure 11. This appears to be due tothe fact that fclass adds additional noise to the process. 
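As an illustrative sketch of Step 1 — hypothetical PyTorch code reusing the VAE sketch above, not the authors' implementation — the classifier described in Section 5.1.1 (two 512-unit ReLU hidden layers over the 50-dimensional latent, with a 10-way softmax output) is trained on latent vectors from the frozen encoder:

```python
import torch
import torch.nn as nn

# Hypothetical f_class from Section 5.1.1: two 512-unit ReLU layers over the
# 50-dimensional latent representation, followed by a 10-way softmax output.
f_class = nn.Sequential(nn.Linear(50, 512), nn.ReLU(),
                        nn.Linear(512, 512), nn.ReLU(),
                        nn.Linear(512, 10))

def classifier_step(vae, f_class, x, y, opt):
    """One Step-1 update: the target generative model stays frozen."""
    with torch.no_grad():              # freeze f_enc (and the rest of the VAE)
        mu, _ = vae.f_enc(x)           # mean vector used as z, as in Section 5
    loss = nn.functional.cross_entropy(f_class(mu), y)  # L_classifier
    opt.zero_grad(); loss.backward(); opt.step()        # only f_class learns
    return loss.item()
```

Any fclass obtained this way inherits the extra noise just mentioned.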
For example, fclass sometimes confidentlymisclassifies latent vectors zthat represent inputs that are far from the training data distribution,resulting infdecfailing to reconstruct a plausible output from the adversarial example.4.2LVAE ATTACKOur second approach generates adversarial perturbations using the V AE loss function. The attackerchooses two inputs, xs(the source) and xt(the target), and uses one of the standard adversarialmethods to perturb xsintoxsuch that its reconstruction ^xmatches the reconstruction of xt, usingthe methods described in Section 4.4.The adversary precomputes the reconstruction ^xtby evaluating fdec(fenc(xt))once before per-forming optimization. In order to use LVAE in an attack, the second term (the reconstruction loss)ofLVAE (see Equation 1) is changed so that instead of computing the reconstruction loss between xand^x, the loss is computed between ^xand^xt. This means that during each optimization iteration,the adversary needs to compute ^x, which requires the full fdec(fenc(x))to be evaluated.4.3 L ATENT ATTACKOur third approach attacks the latent space of the generative model.Single latent vector target. This attack is similar to the work of Sabour et al. (2015), in whichthey use a pair of source image xsand target image xtto generate xthat induces the target networkto produce similar activations at some hidden layer las are produced by xt, while maintainingsimilarity between xsandx.For this attack to work on latent generative models, it is sufficient to compute zt=fenc(xt)andthen use the following loss function to generate adversarial examples from different source imagesxs, using the methods described in Section 4.4:Llatent =L(zt;fenc(x)): (2)L()is a distance measure between two vectors. We use the L2norm, under the assumption that thelatent space is approximately euclidean.We also explored a variation on the single latent vector target attack, which we describe in Sec-tion A.1 in the Appendix.4.4 M ETHODS FOR SOLVING THE ADVERSARIAL OPTIMIZATION PROBLEMWe can use a number of different methods to generate the adversarial examples. We initially evalu-ated both the fast gradient sign Goodfellow et al. (2014) method and an L2optimization method. Asthe latter produces much better results we focus on the L2optimization method, while we includesome FGS results in the Appendix. The attack can be used either in targeted mode (where we wanta specific class, yt, to be reconstructed) or untargeted mode (where we just want an incorrect classto be reconstructed). In this paper, we focus on the targeted mode of the attacks.L2optimization. The optimization-based approach, explored in Szegedy et al. (2013) and Carlini& Wagner (2016), poses the adversarial generation problem as the following optimization problem:argminxL(x;x) +L(x;yt): (3)As above,L()is a distance measure, and Lis one ofLclassier ,LVAE, orLlatent . The constantis used to balance the two loss contributions. For the LVAE attack, the optimizer must do a full6Under review as a conference paper at ICLR 2017reconstruction at each step of the optimizer. The other two attacks do not need to do reconstructionswhile the optimizer is running, so they generate adversarial examples much more quickly, as shownin Table 1.4.5 M EASURING ATTACK EFFECTIVENESSTo generate a large number of adversarial examples automatically against a generative model, theattacker needs a way to judge the quality of the adversarial examples. 
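As a concrete sketch of the L2 optimization attack of Equation 3, instantiated here with the latent loss of Equation 2 (swapping in Lclassifier or LVAE for the second term yields the other two attacks) — again hypothetical PyTorch rather than the authors' code, with the λ, step count, and learning rate taken from the MNIST settings of Section 5.1.3:

```python
import torch

def latent_attack(vae, x_s, x_t, lam=1.0, steps=1000, lr=0.1):
    """L2 optimization attack (Eq. 3) with the latent loss of Eq. 2.
    Defaults follow the MNIST settings reported in Section 5.1.3."""
    with torch.no_grad():
        z_t, _ = vae.f_enc(x_t)                 # precompute the target latent z_t
    x_adv = x_s.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        z_adv, _ = vae.f_enc(x_adv)
        # argmin_x*  lam * L(x*, x_s)  +  L_latent(z_t, f_enc(x*))   (Eq. 3)
        loss = lam * torch.dist(x_adv, x_s, p=2) + torch.dist(z_adv, z_t, p=2)
        opt.zero_grad(); loss.backward(); opt.step()
        x_adv.data.clamp_(0.0, 1.0)             # keep the adversary a valid image
    return x_adv.detach()
```

Note that this loop only evaluates fenc at each step; the LVAE variant would additionally evaluate fdec to compute the reconstruction loss, which is why it is the slowest of the three attacks (Table 1).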
We leverage fclass to estimatewhether a particular attack was successful.6Reconstruction feedback loop. The architecture is the same as shown in Figure 3. We use thegenerative model to reconstruct the attempted adversarial inputs xby computing:^x=fdec(fenc(x)): (4)Then,fclass is used to compute:^y=fclass(fenc(^x)): (5)The input adversarial examples xare not classified directly, but are first fed to the generative modelfor reconstruction. This reconstruction loop improves the accuracy of the classifier by 60% on av-erage against the adversarial attacks we examined. The predicted label ^yafter the reconstructionfeedback loop is compared with the attack target ytto determine if the adversarial example success-fully reconstructed to the target class. If the precision and recall of fclass are sufficiently high onyt,fclass can be used to filter out most of the failed adversarial examples while keeping most of thegood ones.We derive two metrics from classifier predictions after one reconstruction feedback loop. The firstmetric isASignoretarget , the attack success rate ignoring targeting, i.e., without requiring the out-put class of the adversarial example to match the target class:ASignoretarget =1NNXi=11^yi6=yi (6)Nis the total number of reconstructed adversarial examples; 1^yi6=yiis1when ^yi, the classificationof the reconstruction for image i, does not equal yi, the ground truth classification of the originalimage, and 0otherwise. The second metric is AStarget , the attack success rate including targeting(i.e., requiring the output class of the adversarial example to match the target class), which we definesimilarly as:AStarget =1NNXi=11^yi=yit: (7)Both metrics are expected to be higher for more successful attacks. Note that AStargetASignoretarget . When computing these metrics, we exclude input examples that have the sameground truth class as the target class.5 E VALUATIONWe evaluate the three attacks on MNIST (LeCun et al., 1998), SVHN (Netzer et al., 2011) andCelebA (Liu et al., 2015), using the standard training and validation set splits. The V AE and V AE-GAN architectures are implemented in TensorFlow (Abadi & et al., 2015). We optimized usingAdam with learning rate 0:001and other parameters set to default values for both the generativemodel and the classifier. For the V AE, we use two architectures: a simple architecture with a singlefully-connected hidden layer with 512 units and ReLU activation function; and a convolutional ar-chitecture taken from the original V AE-GAN paper Larsen et al. (2015) (but trained with only theV AE loss). We use the same architecture trained with the additional GAN loss for the V AE-GANmodel, as described in that work. For both V AE and V AE-GAN we use a 50-dimensional latent rep-resentation on MNIST, a 1024-dimensional latent representation on SVHN and 2048-dimensionallatent representation on CelebA.6Note that fclass here is being used in a different manner than when we use it to generate adversarialexamples. However, the network itself is identical, so we don’t distinguish between the two uses in the notation.7Under review as a conference paper at ICLR 2017Figure 4: Results for the L2optimization latent attack on the V AE-GAN, targeting the mean latentvector for 0. Shown are the first 12 non-zero images from the test MNIST data set. 
The columnsare, in order: the original image, the reconstruction of the original image, the adversarial example,the predicted class of the adversarial example, the reconstruction of the adversarial example, thepredicted class of the reconstructed adversarial example, the reconstruction of the reconstructedadversarial example (see Section 4.5), and the predicted class of that reconstruction.In this section we only show results where no sampling from latent space has been performed.Instead we use the mean vector as the latent representation z. As sampling can have an effect onthe resulting reconstructions, we evaluated it separately. We show the results with different numberof samples in Figure 22 in the Appendix. On most examples, the visible change is small and ingeneral the attack is still successful.5.1 MNISTBoth V AE and V AE-GAN by themselves reconstruct the original inputs well as show in Figure 9,although the quality from the V AE-GAN is noticeably better. As a control, we also generate randomnoise of the same magnitude as used for the adversarial examples (see Figure 13), to show that ran-dom noise does not cause the reconstructed noisy images to change in any significant way. Althoughwe ran experiments on both V AEs and V AE-GANs, we only show results for the V AE-GAN as itgenerates much higher quality reconstructions than the corresponding V AE.5.1.1 C LASSIFIER ATTACKWe use a simple classifier architecture to help generate attacks on the V AE and V AE-GAN models.The classifier consists of two fully-connected hidden layers with 512 units each, using the ReLUactivation function. The output layer is a 10 dimensional softmax. The input to the classifier isthe 50 dimensional latent representation produced by the V AE/V AE-GAN encoder. The classifierachieves 98:05% accuracy on the validation set after training for 100 epochs.To see if there are differences between classes, we generate targeted adversarial examples for eachMNIST class and present the results per-class. For the targeted attacks we used the optimizationmethod with lambda 0:001, where Adam-based optimization was performed for 1000 epochs witha learning rate of 0:1. The mean L2norm of the difference between original images and generatedadversarial examples using the classifier attack is 3:36, while the mean RMSD is 0:120.Numerical results in Table 2 show that the targeted classifier attack successfully fools the classifier.Classifier accuracy is reduced to 0%, while the matching rate (the ratio between the number ofpredictions matching the target class and the number of incorrectly classified images) is 100% , whichmeans that all incorrect predictions match the target class. However, what we are interested in (asper the attack definition from Section 3.2) is how the generative model reconstructs the adversarialexamples. If we look at the images generated by the V AE-GAN for class 0, shown in Figure 4, thetargeted attack is successful on some reconstructed images (e.g. one, four, five, six and nine arereconstructed as zeroes). But even when the classifier accuracy is 0%and matching rate is 100% ,an incorrect classification does not always result in a reconstruction to the target class, which showsthat the classifier is fooled by an adversarial example more easily than the generative model.Reconstruction feedback loop. 
The reconstruction feedback loop described in Section 4.5 canbe used to measure how well a targeted attack succeeds in making the generative model change the8Under review as a conference paper at ICLR 2017Figure 5: Left: representative adversarial examples with a target class of 0on the first 100non-zero images from the MNIST validation set. These were produced using the L2optimization latentattack (Section 4.3). Middle: V AE-GAN reconstructions from adversarial examples produced usingtheL2optimization classifier attack on the same set of 100validation images (those adversariesare not shown, but are qualitatively similiar, see Section 4.1). Right: V AE-GAN reconstructionsfrom the adversarial examples in the left column. Many of the classifier adversarial examples fail toreconstruct as zeros, whereas almost every adversarial example from the latent attack reconstructsas zero.reconstructed classes. Table 4 in the Appendix shows ASignoretarget andAStarget for all sourceand target class pairs. A higher value signifies a more successful attack for that pair of classes. Itis interesting to observe that attacking some source/target pairs is much easier than others (e.g. pair(4;0)vs.(0;8)) and that the results are not symmetric over source/target pairs. Also, some pairs dowell inASignoretarget , but do poorly in AStarget (e.g., all source digits when targeting 4). As canbe seen in Figure 11, the classifier adversarial examples targeting 4consistently fail to reconstructinto something easily recognizable as a 4. Most of the reconstructions look like 5, but the adversarialexample reconstructions of source 5s instead look like 0or3.5.1.2LVAE ATTACKFor generating adversarial examples using the LVAE attack, we used the optimization method with= 1:0, where Adam-based optimization was performed for 1000 epochs with a learning rate of 0:1.The meanL2norm of the difference between original images and generated adversarial exampleswith this approach is 3:68, while the mean RMSD is 0:131.We showASignoretarget andAStarget of theLVAE attack in Table 5 in the Appendix. Comparingwith the numerical evaluation results of the latent attack (below), we can see that both methodsachieve similar results on MNIST.5.1.3 L ATENT ATTACKTo generate adversarial examples using the latent attack, we used the optimization method with= 1:0, where Adam-based optimization was performed for 1000 epochs with a learning rateof0:1. The mean L2norm of the difference between original images and generated adversarialexamples using this approach is 2:96, while the mean RMSD is 0:105.Table 3 shows ASignoretarget andAStarget for all source and target class pairs. Comparing withthe numerical evaluation results of the classifier attack we can see that the latent attack performsmuch better. This result remains true when visually comparing the reconstructed images, shown inFigure 5.We also tried an untargeted version of the latent attack, where we change Equation 2 to maximizethe distance in latent space between the encoding of the original image and the encoding of theadversarial example. In this case the loss we are trying to minimize is unbounded, since the L2distance can always grow larger, so the attack normally fails to generate a reasonable adversarialexample.9Under review as a conference paper at ICLR 2017Figure 6: Left: V AE-GAN reconstructions of adversarial examples generated using the L2optimiza-tionLVAE attack (single image target). 
Right: V AE-GAN reconstructions of adversarial examplesgenerated using the L2optimization latent attack (single image target). Approximately 85out of100images are convincing zeros for the L2latent attack, whereas only about 5out of 100could bemistaken for zeros with the LVAE attack.Additionally, we also experimented with targeting latent representations of specific images from thetraining set instead of taking the mean, as described in Section 4.3. We show the numerical resultsin Table 3 and the generated reconstructions in Figure 15 (in the Appendix). It is also interestingto compare the results with LVAE, by choosing the same image as the target. Results for LVAE forthe same target images as in Table 3 are shown in Table 6 in the Appendix. The results are identicalbetween the two attacks, which is expected as the target image is the same – only the loss functiondiffers between the methods.5.2 SVHNThe SVHN dataset consists of cropped street number images and is much less clean than MNIST.Due to the way the images have been processed, each image may contain more than one digit; thetarget digit is roughly in the center. V AE-GAN produces high-quality reconstructions of the originalimages as shown in Figure 17 in the Appendix.For the classifier attack, we set = 105after testing a range of values, although we were unable tofind an effective value for this attack against SVHN. For the latent and LVAE attacks we set = 10 .In Table 10 we show ASignoretarget andAStarget for theL2optimization latent attack. The eval-uation metrics are less strong on SVHN than on MNIST, but it is still straightforward for an attackerto find a successful attack for almost all source/target pairs. Figure 2 supports this evaluation. Visualinspection shows that 11out of the 12adversarial examples reconstructed as 0, the target digit. Itis worth noting that 2out of the 12adversarial examples look like zeros (rows 1and11), and twoothers look like both the original digit and zero, depending on whether the viewer focuses on thelight or dark areas of the image (rows 4and7). TheL2optimization latent attack achieves muchbetter results than the LVAE attack (see Table 11 and Figure 6) on SVHN, while both attacks workequally well on MNIST.5.3 C ELEB AThe CelebA dataset consists of more than 200,000 cropped faces of celebrities, each annotatedwith 40 different attributes. For our experiments, we further scale the images to 64x64 and ignorethe attribute annotations. V AE-GAN reconstructions of original images after training are shown inFigure 19 in the Appendix.Since faces don’t have natural classes, we only evaluated the latent and LVAE attacks. We triedlambdas ranging from 0:1to0:75for both attacks. Figure 20 shows adversarial examples generated10Under review as a conference paper at ICLR 2017MNIST SVHNMethod MeanL2 Mean RMSD Time to attack MeanL2 Mean RMSD Time to attackL2Optimization Classifier Attack 3:36 0:120 277 1:77 0:032 274L2Optimization LVAE Attack 3:68 0:131 734 2:36 0:043 895L2Optimization Latent Attack 2:96 0:105 236 2:80 0:051 242Table 1: Comparison of mean L2norm and RMSD between the original images and the generatedadversarial examples for the different attacks. 
Time to attack is the mean number of seconds it takesto generate 1000 adversarial examples using the given attack method (with the same number ofoptimization iterations for each attack).using the latent attack and a lambda value of 0:5(L2norm between original images and generatedadversarial examples 9:78, RMSD 0:088) and the corresponding V AE-GAN reconstructions. Mostof the reconstructions reflect the target image very well. We get even better results with the LVAEattack, using a lambda value of 0:75(L2norm between original images and generated adversarialexamples 8:98, RMSD 0:081) as shown in Figure 21.Figure 7: Summary of different attacks on CelebA dataset: reconstructions of original images (top),reconstructions of adversarial examples generated using the latent attack (middle) and LVAE attack(bottom). Target reconstruction is shown on the right. Full results are in the Appendix.5.4 S UMMARY OF DIFFERENT ATTACK METHODSTable 1 shows a comparison of the mean distances between original images and generated adver-sarial examples for the three different attack methods. The larger the distance between the originalimage and the adversarial perturbation, the more noticeable the perturbation will tend to be, and themore likely a human observer will no longer recognize the original input, so effective attacks keepthese distances small while still achieving their goal. The latent attack consistently gives the bestresults in our experiments, and the classifier attack performs the worst.We also measure the time it takes to generate 1000 adversarial examples using the given attackmethod. TheLVAE attack is by far the slowest of the three, due to the fact that it requires computingfull reconstructions at each step of the optimizer when generating the adversarial examples. Theother two attacks do not need to run the reconstruction step during optimization of the adversarialexamples.6 C ONCLUSIONWe explored generating adversarial examples against generative models such as V AEs and V AE-GANs. These models are also vulnerable to adversaries that convince them to turn inputs intosurprisingly different outputs. We have also motivated why an attacker might want to attack gen-erative models. Our work adds further support to the hypothesis that adversarial examples are ageneral phenomenon for current neural network architectures, given our successful application ofadversarial attacks to popular generative models. In this work, we are helping to lay the foundationsfor understanding how to build more robust networks. Future work will explore defense and robusti-fication in greater depth as well as attacks on generative models trained using natural image datasetssuch as CIFAR-10 and ImageNet.ACKNOWLEDGMENTSThis material is in part based upon work supported by the National Science Foundation under GrantNo. TWC-1409915. Any opinions, findings, and conclusions or recommendations expressed in this11Under review as a conference paper at ICLR 2017material are those of the author(s) and do not necessarily reflect the views of the National ScienceFoundation.REFERENCESMart ́ın Abadi and Ashish Agarwal et al. TensorFlow: Large-scale machine learning on heteroge-neous systems, 2015. URL http://tensorflow.org/ . Software available from tensor-flow.org.Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. arXivpreprint arXiv:1608.04644 , 2016.Alexey Dosovitskiy, Jost Springenberg, Maxim Tatarchenko, and Thomas Brox. Learning to gen-erate chairs, tables and cars with convolutional networks. 
IEEE Transactions on Pattern Analy-sis and Machine Intelligence , PP(99):1–1, 2016. ISSN 0162-8828. doi: 10.1109/TPAMI.2016.2567384.I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, andY . Bengio. Generative Adversarial Networks. ArXiv e-prints , June 2014.Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarialexamples. arXiv preprint arXiv:1412.6572 , 2014.Ruitong Huang, Bing Xu, Dale Schuurmans, and Csaba Szepesv ́ari. Learning with a strong adver-sary. CoRR , abs/1511.03034, 2015.Nal Kalchbrenner, Aaron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, AlexGraves, and Koray Kavukcuoglu. Video pixel networks. arXiv preprint arXiv:1610.00527 , 2016.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. 2015.Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprintarXiv:1312.6114 , 2013.Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervisedlearning with deep generative models. In Advances in Neural Information Processing Systems ,pp. 3581–3589, 2014.Tejas D Kulkarni, William F Whitney, Pushmeet Kohli, and Josh Tenenbaum. Deep convolutionalinverse graphics network. In Advances in Neural Information Processing Systems , pp. 2539–2547,2015.Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. Adversarial examples in the physical world.CoRR , abs/1607.02533, 2016.Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, and Ole Winther. Autoencoding beyondpixels using a learned similarity metric. arXiv preprint arXiv:1512.09300 , 2015.Yann LeCun, L ́eon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied todocument recognition. Proceedings of the IEEE , 86(11):2278–2324, 1998.Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild.InProceedings of International Conference on Computer Vision (ICCV) , 2015.Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple andaccurate method to fool deep neural networks. 2016.Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Readingdigits in natural images with unsupervised feature learning. 2011.Anh Mai Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: Highconfidence predictions for unrecognizable images. CoRR , abs/1412.1897, 2014.12Under review as a conference paper at ICLR 2017Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Ko-ray Kavukcuoglu. Conditional image generation with pixelcnn decoders. arXiv preprintarXiv:1606.05328 , 2016.Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z Berkay Celik, and AnanthramSwami. The limitations of deep learning in adversarial settings. In Proceedings of the 1st IEEEEuropean Symposium on Security and Privacy , 2015.Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and AnanthramSwami. Practical black-box attacks against deep learning systems using adversarial examples.arXiv preprint arXiv:1602.02697 , 2016.Sara Sabour, Yanshuai Cao, Fartash Faghri, and David J. Fleet. Adversarial manipulation of deeprepresentations. CoRR , abs/1511.05122, 2015. URL http://arxiv.org/abs/1511.05122 .Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow,and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 , 2013.P. Tabacof, J. Tavares, and E. Valle. 
Adversarial Images for Variational Autoencoders. ArXiv e-prints , December 2016.George Toderici, Sean M O’Malley, Sung Jin Hwang, Damien Vincent, David Minnen, ShumeetBaluja, Michele Covell, and Rahul Sukthankar. Variable rate image compression with recurrentneural networks. arXiv preprint arXiv:1511.06085 , 2015.George Toderici, Damien Vincent, Nick Johnston, Sung Jin Hwang, David Minnen, Joel Shor, andMichele Covell. Full resolution image compression with recurrent neural networks. arXiv preprintarXiv:1608.05148 , 2016.A ̈aron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves,Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. Wavenet: A generative model forraw audio. CoRR , abs/1609.03499, 2016. URL http://arxiv.org/abs/1609.03499 .A A PPENDIXA.1 M EAN LATENT VECTOR TARGETED ATTACKA variant of the single latent vector targeted attack described in Section 4.3, that was not explored inprevious work to our knowledge is to take the mean latent vector of many target images and use thatvector as xt. This variant is more flexible, in that the attacker can choose different latent propertiesto target without needing to find the ideal input. For example, in MNIST, the attacker may wish tohave a particular line thickness or slant in the reconstructed digit, but may not have such an imageavailable. In that case, by choosing some images of the target class with thinner lines or less slant,and some with thicker lines or more slant, the attacker can find a target latent vector that closelymatches the desired properties.In this case, the attack starts by using fencto produce the target latent vector, zt, from the chosentarget images, x(t).zt=1jx(t)jjx(t)jXi=0fenc(xi(t)): (8)In this work, we choose to reconstruct “ideal” MNIST digits by taking the mean latent vector of allof the training digits of each class, and using those vectors as xt. Given a target class yt, a set ofexamplesXand their corresponding ground truth labels y, we create a subset x(t)as follows:x(t)=fxijxi2X^yi=ytg: (9)Both variants of this attack appear to be similarly effective, as shown in Figure 15 and Figure 5. Thetrade-off between the two in these experiments is between the simplicity of the first attack and theflexibility of the second attack.13Under review as a conference paper at ICLR 2017xEncoderfenczDecoderfdecxFigure 8: Variational autoencoder architecture.A.2 E VALUATION RESULTSFigure 9: Original Inputs and Reconstructions: The first 100 images from the validation setreconstructed by the V AE (left) and the V AE-GAN (right).Figure 10: Untargeted FGSLVAE Attack: V AE reconstructions (left) and V AE-GAN reconstruc-tions (right). Note the difference in reconstructions compared to Figure 9. Careful visual inspectionreveals that none of the V AE reconstructions change class, and only two of the V AE-GAN recon-structions change class (a 6to a0in the next-to-last row, and a 9to a4in the last row). CombiningFGS withLVAE does not seem to give an effective attack.14Under review as a conference paper at ICLR 2017Target 0 1 2 3 4 5 6 7 8 9Classifier accuracy 1:98% 0:00% 0:00% 0:00% 0:00% 0:00% 0:00% 0:00% 0:00% 0:00%Matching rate 95:06% 100:00% 100:00% 100:00% 100:00% 100:00% 100:00% 100:00% 100:00% 99:89%Table 2:L2Optimization Classifier Attack on MNIST: fclass accuracy on adversarial examplesagainst the V AE-GAN for each target class (middle row) and the matching rate between the predic-tionsfclass made and the adversarial target class (bottom row). 
The adversarial examples success-fully foolfclass into predicting the target class almost 100% of the time, which makes this attackseem like a strong attack, but the attack actually fails to generate good reconstructions in many cases.Reconstructions for target classes 0and4can be seen in Figure 4 and Figure 11.Source Target 0 Target 1 Target 2 Target 3 Target 4 Target 5 Target 6 Target 7 Target 8 Target 90 -85.54%(34.94%)100.00%(100.00%)100.00%(13.25%)75.90%(75.90%)96.39%(92.77%)100.00%(100.00%)96.39%(91.57%)0.00%(0.00%)100.00%(83.13%)1100.00%(100.00%)-100.00%(100.00%)100.00%(0.00%)100.00%(93.60%)100.00%(100.00%)100.00%(100.00%)100.00%(100.00%)100.00%(0.00%)100.00%(98.40%)2100.00%(100.00%)97.37%(55.26%)-100.00%(55.26%)97.37%(88.60%)95.61%(74.56%)100.00%(100.00%)99.12%(94.74%)100.00%(0.00%)100.00%(92.98%)3100.00%(100.00%)90.65%(89.72%)100.00%(100.00%)-100.00%(91.59%)94.39%(94.39%)100.00%(100.00%)85.05%(84.11%)100.00%(0.00%)90.65%(88.79%)4100.00%(100.00%)97.27%(67.27%)100.00%(100.00%)100.00%(18.18%)-100.00%(100.00%)100.00%(100.00%)100.00%(100.00%)100.00%(0.00%)100.00%(100.00%)5100.00%(100.00%)96.55%(80.46%)100.00%(100.00%)2.30%(2.30%)100.00%(96.55%)-100.00%(100.00%)98.85%(89.66%)100.00%(0.00%)95.40%(94.25%)6100.00%(100.00%)87.36%(80.46%)100.00%(100.00%)100.00%(11.49%)100.00%(97.70%)100.00%(100.00%)-100.00%(98.85%)100.00%(0.00%)100.00%(96.55%)7100.00%(100.00%)90.91%(82.83%)100.00%(100.00%)100.00%(16.16%)100.00%(79.80%)100.00%(98.99%)100.00%(100.00%)-100.00%(0.00%)100.00%(100.00%)8100.00%(100.00%)89.77%(71.59%)100.00%(100.00%)100.00%(35.23%)100.00%(97.73%)89.77%(62.50%)100.00%(100.00%)98.86%(92.05%)-98.86%(96.59%)9100.00%(100.00%)95.65%(75.00%)100.00%(100.00%)100.00%(18.48%)100.00%(97.83%)100.00%(95.65%)100.00%(100.00%)100.00%(100.00%)100.00%(0.00%)-Table 3: L2Optimization Latent Attack on MNIST (single latent vector target):ASignoretarget (AStarget in parentheses) after one reconstruction loop for different source andtarget class pairs on the V AE-GAN model. The latent representation of a random image from thetarget class is used to generate the target latent vector. 
Higher values indicate more successful attacksagainst the generative model.Source Target 0 Target 1 Target 2 Target 3 Target 4 Target 5 Target 6 Target 7 Target 8 Target 90 -40.96%(1.20%)6.02%(4.82%)10.84%(7.23%)75.90%(0.00%)6.02%(3.61%)28.92%(28.92%)37.35%(20.48%)6.02%(1.20%)10.84%(3.61%)199.20%(77.60%)-7.20%(5.60%)1.60%(1.60%)85.60%(0.00%)8.00%(5.60%)28.80%(28.00%)8.80%(7.20%)3.20%(1.60%)69.60%(0.80%)285.96%(80.70%)3.51%(2.63%)-29.82%(23.68%)78.95%(0.00%)72.81%(20.18%)72.81%(46.49%)35.09%(8.77%)41.23%(12.28%)68.42%(2.63%)393.46%(83.18%)26.17%(12.15%)27.10%(16.82%)-67.29%(0.00%)66.36%(62.62%)87.85%(22.43%)50.47%(27.10%)23.36%(8.41%)33.64%(8.41%)4100.00%(82.73%)70.00%(48.18%)28.18%(10.91%)84.55%(17.27%)-66.36%(31.82%)95.45%(71.82%)62.73%(37.27%)20.91%(0.91%)51.82%(44.55%)593.10%(89.66%)21.84%(1.15%)68.97%(11.49%)28.74%(18.39%)3.45%(0.00%)-20.69%(19.54%)80.46%(41.38%)22.99%(2.30%)44.83%(12.64%)629.89%(28.74%)44.83%(1.15%)24.14%(3.45%)59.77%(11.49%)77.01%(0.00%)10.34%(8.05%)-62.07%(8.05%)8.05%(0.00%)75.86%(4.60%)779.80%(65.66%)77.78%(26.26%)20.20%(8.08%)8.08%(4.04%)100.00%(0.00%)56.57%(23.23%)97.98%(17.17%)-38.38%(1.01%)17.17%(10.10%)894.32%(84.09%)96.59%(18.18%)60.23%(42.05%)57.95%(43.18%)100.00%(0.00%)93.18%(80.68%)100.00%(57.95%)100.00%(34.09%)-87.50%(26.14%)998.91%(79.35%)97.83%(33.70%)26.09%(1.09%)17.39%(2.17%)100.00%(0.00%)22.83%(21.74%)100.00%(30.43%)47.83%(43.48%)31.52%(4.35%)-Table 4:L2Optimization Classifier Attack on MNIST: ASignoretarget (AStarget in parenthe-ses) for all source and target class pairs using adversarial examples generated on the V AE-GANmodel. Higher values indicate more successful attacks against the generative model.15Under review as a conference paper at ICLR 2017Figure 11:L2Optimization Classifier Attack: Reconstructions of the first 100adversarial exam-ples targeting 4, demonstrating why the AStarget metric is 0for all source digits.Source Target 0 Target 1 Target 2 Target 3 Target 4 Target 5 Target 6 Target 7 Target 8 Target 90 -90.36%(14.46%)100.00%(100.00%)100.00%(98.80%)100.00%(61.45%)91.57%(90.36%)100.00%(96.39%)68.67%(50.60%)100.00%(91.57%)98.80%(37.35%)1100.00%(100.00%)-100.00%(100.00%)100.00%(100.00%)100.00%(99.20%)100.00%(100.00%)100.00%(97.60%)100.00%(96.00%)100.00%(100.00%)100.00%(96.00%)2100.00%(100.00%)84.21%(60.53%)-100.00%(100.00%)90.35%(71.93%)100.00%(85.96%)88.60%(88.60%)97.37%(76.32%)94.74%(94.74%)97.37%(35.09%)3100.00%(100.00%)75.70%(66.36%)100.00%(100.00%)-94.39%(52.34%)99.07%(99.07%)98.13%(82.24%)64.49%(53.27%)100.00%(96.26%)67.29%(31.78%)4100.00%(100.00%)100.00%(52.73%)100.00%(100.00%)100.00%(100.00%)-100.00%(97.27%)100.00%(100.00%)100.00%(99.09%)100.00%(100.00%)85.45%(83.64%)5100.00%(100.00%)96.55%(40.23%)100.00%(100.00%)100.00%(100.00%)93.10%(59.77%)-100.00%(95.40%)93.10%(71.26%)96.55%(96.55%)83.91%(51.72%)6100.00%(100.00%)97.70%(70.11%)100.00%(100.00%)100.00%(100.00%)100.00%(91.95%)100.00%(100.00%)-97.70%(67.82%)100.00%(98.85%)95.40%(50.57%)7100.00%(100.00%)85.86%(58.59%)100.00%(100.00%)100.00%(100.00%)100.00%(98.99%)100.00%(97.98%)100.00%(79.80%)-100.00%(98.99%)100.00%(96.97%)8100.00%(100.00%)69.32%(44.32%)100.00%(100.00%)100.00%(100.00%)54.55%(53.41%)96.59%(96.59%)95.45%(92.05%)73.86%(52.27%)-42.05%(29.55%)9100.00%(100.00%)100.00%(44.57%)100.00%(100.00%)100.00%(100.00%)96.74%(95.65%)100.00%(97.83%)100.00%(100.00%)100.00%(97.83%)100.00%(100.00%)-Table 5:L2OptimizationLVAE Attack on MNIST (single image target): ASignoretarget(AStarget in parentheses) for different source and target class pairs using adversarial examplesgenerated on 
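For reference, the ASignore-target and AStarget entries in these tables follow Equations 6 and 7 of Section 4.5, computed after one reconstruction feedback loop (Equations 4 and 5). A sketch of that computation, reusing the hypothetical vae.f_enc / vae.f_dec and f_class names from the sketches above:

```python
import torch

def attack_success_rates(vae, f_class, x_adv, y_true, y_target):
    """AS_ignore-target (Eq. 6) and AS_target (Eq. 7) after one
    reconstruction feedback loop (Eqs. 4 and 5)."""
    with torch.no_grad():
        mu, _ = vae.f_enc(x_adv)
        x_hat = vae.f_dec(mu)                     # Eq. 4: reconstruct x*
        mu_hat, _ = vae.f_enc(x_hat)
        y_hat = f_class(mu_hat).argmax(dim=1)     # Eq. 5: classify the reconstruction
    keep = y_true != y_target                     # exclude inputs already of class y_t
    as_ignore_target = (y_hat[keep] != y_true[keep]).float().mean().item()
    as_target = (y_hat[keep] == y_target).float().mean().item()
    return as_ignore_target, as_target
```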
the V AE-GAN model. Higher values indicate more successful attacks against thegenerative model.16Under review as a conference paper at ICLR 2017Figure 12: Untargeted FGS Classifer Attack: Adversarial examples (left) and their reconstruc-tions by the generative model (right) for the first 100 images from the MNIST validation set. Topresults are for V AE, while bottom results are for V AE-GAN. Note the difference in quality of thereconstructed adversarial examples.17Under review as a conference paper at ICLR 2017Figure 13: Original images with random noise added (top) and their reconstructions by V AE (bottomleft) and V AE-GAN (bottom right). The magnitude of the random noise is the same as for thegenerated adversarial noise shown in Figure 12. Random noise does not cause the reconstructedimages to change in a significant way.18Under review as a conference paper at ICLR 2017Source Target 0 Target 1 Target 2 Target 3 Target 4 Target 5 Target 6 Target 7 Target 8 Target 90 -85.54%(34.94%)100.00%(100.00%)100.00%(13.25%)75.90%(75.90%)96.39%(92.77%)100.00%(100.00%)96.39%(91.57%)0.00%(0.00%)100.00%(83.13%)1100.00%(100.00%)-100.00%(100.00%)100.00%(0.00%)100.00%(93.60%)100.00%(100.00%)100.00%(100.00%)100.00%(100.00%)100.00%(0.00%)100.00%(98.40%)2100.00%(100.00%)97.37%(55.26%)-100.00%(55.26%)97.37%(88.60%)95.61%(74.56%)100.00%(100.00%)99.12%(94.74%)100.00%(0.00%)100.00%(92.98%)3100.00%(100.00%)90.65%(89.72%)100.00%(100.00%)-100.00%(91.59%)94.39%(94.39%)100.00%(100.00%)85.05%(84.11%)100.00%(0.00%)90.65%(88.79%)4100.00%(100.00%)97.27%(67.27%)100.00%(100.00%)100.00%(18.18%)-100.00%(100.00%)100.00%(100.00%)100.00%(100.00%)100.00%(0.00%)100.00%(100.00%)5100.00%(100.00%)96.55%(80.46%)100.00%(100.00%)2.30%(2.30%)100.00%(96.55%)-100.00%(100.00%)98.85%(89.66%)100.00%(0.00%)95.40%(94.25%)6100.00%(100.00%)87.36%(80.46%)100.00%(100.00%)100.00%(11.49%)100.00%(97.70%)100.00%(100.00%)-100.00%(98.85%)100.00%(0.00%)100.00%(96.55%)7100.00%(100.00%)90.91%(82.83%)100.00%(100.00%)100.00%(16.16%)100.00%(79.80%)100.00%(98.99%)100.00%(100.00%)-100.00%(0.00%)100.00%(100.00%)8100.00%(100.00%)89.77%(71.59%)100.00%(100.00%)100.00%(35.23%)100.00%(97.73%)89.77%(62.50%)100.00%(100.00%)98.86%(92.05%)-98.86%(96.59%)9100.00%(100.00%)95.65%(75.00%)100.00%(100.00%)100.00%(18.48%)100.00%(97.83%)100.00%(95.65%)100.00%(100.00%)100.00%(100.00%)100.00%(0.00%)-Table 6:L2OptimizationLVAE Attack (mean reconstruction target): ASignoretarget(AStarget in parentheses) for all source and target class pairs using adversarial examples gener-ated on the V AE-GAN model. The mean reconstruction image for each target class (over all of theimages of that class in the training set) is used as the target reconstruction. 
Higher values indicatemore successful attacks against the generative model.Source Target 0 Target 1 Target 2 Target 3 Target 4 Target 5 Target 6 Target 7 Target 8 Target 90 -40.96%(10.84%)65.06%(65.06%)53.01%(46.99%)62.65%(54.22%)36.14%(36.14%)59.04%(59.04%)46.99%(46.99%)13.25%(12.05%)44.58%(27.71%)1100.00%(100.00%)-100.00%(100.00%)100.00%(100.00%)100.00%(100.00%)100.00%(100.00%)100.00%(100.00%)100.00%(100.00%)100.00%(100.00%)100.00%(96.80%)296.49%(96.49%)60.53%(59.65%)-95.61%(95.61%)78.07%(75.44%)98.25%(71.05%)94.74%(90.35%)71.05%(69.30%)52.63%(50.88%)75.44%(42.98%)3100.00%(100.00%)87.85%(66.36%)90.65%(90.65%)-85.98%(73.83%)95.33%(95.33%)79.44%(53.27%)65.42%(64.49%)59.81%(46.73%)70.09%(58.88%)499.09%(99.09%)67.27%(66.36%)96.36%(96.36%)100.00%(81.82%)-100.00%(98.18%)93.64%(93.64%)98.18%(95.45%)97.27%(92.73%)39.09%(39.09%)5100.00%(100.00%)79.31%(51.72%)100.00%(83.91%)70.11%(70.11%)80.46%(72.41%)-73.56%(73.56%)87.36%(73.56%)55.17%(52.87%)75.86%(65.52%)697.70%(97.70%)68.97%(50.57%)96.55%(96.55%)95.40%(71.26%)73.56%(73.56%)87.36%(77.01%)-88.51%(72.41%)90.80%(55.17%)91.95%(35.63%)7100.00%(97.98%)83.84%(83.84%)100.00%(100.00%)100.00%(100.00%)93.94%(90.91%)98.99%(96.97%)88.89%(81.82%)-100.00%(86.87%)50.51%(50.51%)8100.00%(100.00%)96.59%(78.41%)100.00%(100.00%)98.86%(95.45%)94.32%(86.36%)98.86%(98.86%)98.86%(93.18%)98.86%(73.86%)-87.50%(78.41%)9100.00%(100.00%)100.00%(76.09%)100.00%(100.00%)98.91%(96.74%)100.00%(100.00%)100.00%(98.91%)97.83%(97.83%)98.91%(98.91%)97.83%(94.57%)-Table 7:L2Optimization Latent Attack (mean latent vector target): ASignoretarget (AStargetin parentheses) for all source and target class pairs using adversarial examples generated on theV AE-GAN model. The mean latent vector for each target class (over all of the images of that classin the training set) is used as the target latent vector. Higher values indicate more successful attacksagainst the generative model.19Under review as a conference paper at ICLR 2017Figure 14:L2Optimization Latent Attack (mean latent vector targets): V AE-GAN reconstruc-tions of adversarial examples with target classes from 1through 9. 
Original examples which alreadybelong to the target class are excluded.Source Target 0 Target 1 Target 2 Target 3 Target 4 Target 5 Target 6 Target 7 Target 8 Target 90 -95.18%(9.64%)100.00%(100.00%)98.80%(93.98%)100.00%(48.19%)91.57%(89.16%)100.00%(89.16%)73.49%(43.37%)100.00%(87.95%)100.00%(25.30%)1100.00%(100.00%)-100.00%(100.00%)100.00%(100.00%)100.00%(92.80%)100.00%(97.60%)100.00%(98.40%)100.00%(76.00%)100.00%(100.00%)100.00%(90.40%)298.25%(98.25%)83.33%(48.25%)-100.00%(100.00%)88.60%(43.86%)99.12%(63.16%)74.56%(71.93%)99.12%(63.16%)93.86%(92.98%)99.12%(21.05%)399.07%(98.13%)57.01%(42.99%)99.07%(99.07%)-82.24%(36.45%)89.72%(88.79%)99.07%(61.68%)57.01%(37.38%)98.13%(92.52%)67.29%(18.69%)4100.00%(100.00%)100.00%(37.27%)100.00%(100.00%)100.00%(99.09%)-100.00%(80.00%)98.18%(93.64%)100.00%(94.55%)100.00%(99.09%)86.36%(80.00%)5100.00%(100.00%)97.70%(19.54%)100.00%(98.85%)98.85%(98.85%)85.06%(44.83%)-95.40%(88.51%)93.10%(45.98%)96.55%(96.55%)87.36%(34.48%)6100.00%(100.00%)96.55%(58.62%)100.00%(98.85%)100.00%(98.85%)100.00%(86.21%)100.00%(97.70%)-100.00%(56.32%)100.00%(96.55%)95.40%(43.68%)7100.00%(100.00%)80.81%(40.40%)100.00%(100.00%)100.00%(98.99%)100.00%(92.93%)100.00%(87.88%)100.00%(62.63%)-100.00%(97.98%)100.00%(88.89%)8100.00%(100.00%)44.32%(18.18%)100.00%(100.00%)100.00%(100.00%)30.68%(28.41%)78.41%(76.14%)89.77%(81.82%)75.00%(38.64%)-22.73%(15.91%)9100.00%(100.00%)98.91%(17.39%)100.00%(100.00%)100.00%(100.00%)97.83%(92.39%)100.00%(89.13%)100.00%(92.39%)98.91%(94.57%)100.00%(100.00%)-Table 8:L2OptimizationLVAE Attack (mean reconstruction target): ASignoretarget(AStarget in parentheses) for all source and target class pairs using adversarial examples gener-ated on the V AE-GAN model. The mean image for each target class (over all of the images of thatclass in the training set) is used as the target. Higher values indicate more successful attacks againstthe generative model.20Under review as a conference paper at ICLR 2017Figure 15:L2Optimization Latent Attack (single latent vector target): V AE-GAN reconstruc-tions of adversarial examples generated using the latent attack with target classes 0and7using tworandom targets in latent space per target class. Original examples which already belong to the targetclass are excluded. 
The stylistic differences in the reconstructions are clearly visible.

[Figure: t-SNE visualization of the latent space, with each point drawn as its class digit; the image content is not recoverable from the text extraction.]

Figure 16: L2 Optimization Latent Attack (single latent vector target): t-SNE plot of the latent space, with the addition of green circles representing the adversarial examples for target class 0.
In this plot, it appears that the adversarial examples cluster around 6 (yellow) and 0 (red).

Table 9: L2 Optimization L_VAE Attack on MNIST (single image target): AS_ignore-target (AS_target in parentheses) for different source and target class pairs using adversarial examples generated on the VAE-GAN model. Higher values indicate more successful attacks against the generative model.

Source | Target 0 | Target 1 | Target 2 | Target 3 | Target 4 | Target 5 | Target 6 | Target 7 | Target 8 | Target 9
0 | - | 92.77% (38.55%) | 100.00% (100.00%) | 100.00% (66.27%) | 100.00% (34.94%) | 100.00% (22.89%) | 100.00% (100.00%) | 79.52% (63.86%) | 97.59% (90.36%) | 100.00% (62.65%)
1 | 100.00% (100.00%) | - | 100.00% (100.00%) | 100.00% (99.20%) | 100.00% (90.40%) | 100.00% (0.80%) | 100.00% (100.00%) | 100.00% (100.00%) | 100.00% (100.00%) | 100.00% (100.00%)
2 | 97.37% (97.37%) | 97.37% (57.02%) | - | 100.00% (87.72%) | 98.25% (42.11%) | 100.00% (50.88%) | 100.00% (99.12%) | 97.37% (89.47%) | 89.47% (89.47%) | 100.00% (81.58%)
3 | 100.00% (100.00%) | 89.72% (85.05%) | 100.00% (100.00%) | - | 62.62% (48.60%) | 91.59% (45.79%) | 100.00% (99.07%) | 95.33% (90.65%) | 97.20% (94.39%) | 90.65% (79.44%)
4 | 100.00% (100.00%) | 95.45% (67.27%) | 100.00% (100.00%) | 100.00% (73.64%) | - | 100.00% (30.00%) | 100.00% (100.00%) | 100.00% (99.09%) | 100.00% (99.09%) | 99.09% (99.09%)
5 | 100.00% (100.00%) | 98.85% (79.31%) | 100.00% (100.00%) | 73.56% (73.56%) | 83.91% (34.48%) | - | 100.00% (100.00%) | 90.80% (87.36%) | 100.00% (100.00%) | 87.36% (82.76%)
6 | 100.00% (100.00%) | 86.21% (79.31%) | 100.00% (100.00%) | 100.00% (88.51%) | 95.40% (71.26%) | 10.34% (10.34%) | - | 100.00% (83.91%) | 100.00% (97.70%) | 100.00% (70.11%)
7 | 100.00% (100.00%) | 91.92% (79.80%) | 100.00% (100.00%) | 100.00% (87.88%) | 100.00% (63.64%) | 100.00% (58.59%) | 100.00% (100.00%) | - | 100.00% (100.00%) | 100.00% (100.00%)
8 | 100.00% (100.00%) | 88.64% (73.86%) | 100.00% (100.00%) | 100.00% (46.59%) | 95.45% (44.32%) | 96.59% (31.82%) | 100.00% (100.00%) | 96.59% (94.32%) | - | 95.45% (79.55%)
9 | 100.00% (100.00%) | 96.74% (72.83%) | 100.00% (100.00%) | 100.00% (59.78%) | 66.30% (63.04%) | 100.00% (28.26%) | 100.00% (100.00%) | 98.91% (98.91%) | 100.00% (100.00%) | -
Figure 17: Original Inputs and Reconstructions: The first 100 images from the SVHN validation set (left) reconstructed by VAE-GAN (right).

Table 10: L2 Optimization Latent Attack on SVHN (single latent vector target): AS_ignore-target (AS_target in parentheses) after one reconstruction loop for different source and target class pairs on the VAE-GAN model. The latent representation of a random image from the target class is used to generate the target latent vector. Higher values indicate more successful attacks against the generative model.

Source | Target 0 | Target 1 | Target 2 | Target 3 | Target 4 | Target 5 | Target 6 | Target 7 | Target 8 | Target 9
0 | - | 64.29% (40.00%) | 78.57% (61.43%) | 92.86% (80.00%) | 84.29% (57.14%) | 98.57% (98.57%) | 94.29% (38.57%) | 88.57% (54.29%) | 95.71% (11.43%) | 95.71% (25.71%)
1 | 76.80% (70.72%) | - | 74.59% (67.40%) | 93.37% (88.95%) | 75.69% (65.19%) | 98.34% (97.79%) | 86.74% (24.86%) | 46.96% (36.46%) | 96.13% (4.97%) | 96.13% (28.73%)
2 | 82.93% (65.85%) | 57.93% (42.68%) | - | 90.24% (86.59%) | 53.66% (46.34%) | 99.39% (98.17%) | 82.93% (14.02%) | 71.34% (57.32%) | 71.34% (6.71%) | 24.39% (23.17%)
3 | 92.17% (64.35%) | 58.26% (41.74%) | 83.48% (68.70%) | - | 84.35% (49.57%) | 96.52% (95.65%) | 53.91% (23.48%) | 90.43% (56.52%) | 93.04% (5.22%) | 93.91% (33.91%)
4 | 74.44% (55.56%) | 47.78% (43.33%) | 70.00% (61.11%) | 86.67% (77.78%) | - | 100.00% (98.89%) | 93.33% (35.56%) | 90.00% (36.67%) | 85.56% (14.44%) | 94.44% (27.78%)
5 | 75.31% (50.62%) | 59.26% (43.21%) | 88.89% (58.02%) | 97.53% (88.89%) | 72.84% (53.09%) | - | 37.04% (18.52%) | 80.25% (41.98%) | 32.10% (6.17%) | 92.59% (30.86%)
6 | 67.44% (47.67%) | 56.98% (27.91%) | 84.88% (55.81%) | 86.05% (79.07%) | 65.12% (39.53%) | 94.19% (94.19%) | - | 90.70% (33.72%) | 58.14% (10.47%) | 87.21% (22.09%)
7 | 87.34% (63.29%) | 55.70% (48.10%) | 79.75% (74.68%) | 92.41% (79.75%) | 69.62% (41.77%) | 97.47% (89.87%) | 93.67% (18.99%) | - | 91.14% (7.59%) | 97.47% (17.72%)
8 | 98.33% (63.33%) | 78.33% (38.33%) | 80.00% (63.33%) | 100.00% (88.33%) | 93.33% (48.33%) | 98.33% (96.67%) | 96.67% (35.00%) | 96.67% (50.00%) | - | 95.00% (31.67%)
9 | 87.88% (66.67%) | 72.73% (43.94%) | 92.42% (80.30%) | 93.94% (86.36%) | 80.30% (51.52%) | 95.45% (93.94%) | 98.48% (27.27%) | 92.42% (62.12%) | 93.94% (9.09%) | -

Table 11: L2 Optimization L_VAE Attack on SVHN (single image target): AS_ignore-target (AS_target in parentheses) after one reconstruction loop for different source and target class pairs on the VAE-GAN model. The latent representation of a random image from the target class is used to generate the target latent vector. Higher values indicate more successful attacks against the generative model.

Source | Target 0 | Target 1 | Target 2 | Target 3 | Target 4 | Target 5 | Target 6 | Target 7 | Target 8 | Target 9
0 | - | 30.00% (12.86%) | 32.86% (5.71%) | 34.29% (5.71%) | 28.57% (0.00%) | 30.00% (1.43%) | 30.00% (5.71%) | 30.00% (0.00%) | 30.00% (1.43%) | 31.43% (0.00%)
1 | 13.26% (1.10%) | - | 7.73% (1.66%) | 18.78% (4.97%) | 13.26% (3.31%) | 12.15% (0.00%) | 11.60% (0.55%) | 9.94% (1.10%) | 10.50% (1.10%) | 16.02% (0.55%)
2 | 23.17% (0.61%) | 13.41% (3.66%) | - | 17.07% (3.05%) | 14.63% (1.83%) | 14.63% (2.44%) | 15.24% (0.00%) | 15.24% (1.22%) | 14.02% (0.61%) | 15.24% (1.22%)
3 | 30.43% (0.87%) | 26.09% (7.83%) | 30.43% (2.61%) | - | 30.43% (0.00%) | 29.57% (6.96%) | 27.83% (0.00%) | 27.83% (1.74%) | 28.70% (2.61%) | 33.91% (6.09%)
4 | 21.11% (0.00%) | 15.56% (5.56%) | 16.67% (2.22%) | 25.56% (4.44%) | - | 16.67% (1.11%) | 18.89% (0.00%) | 16.67% (1.11%) | 18.89% (2.22%) | 22.22% (0.00%)
5 | 32.10% (0.00%) | 28.40% (3.70%) | 27.16% (3.70%) | 32.10% (8.64%) | 24.69% (2.47%) | - | 28.40% (6.17%) | 23.46% (0.00%) | 27.16% (3.70%) | 27.16% (0.00%)
6 | 27.91% (4.65%) | 25.58% (4.65%) | 26.74% (0.00%) | 33.72% (3.49%) | 30.23% (2.33%) | 20.93% (4.65%) | - | 31.40% (0.00%) | 24.42% (3.49%) | 32.56% (0.00%)
7 | 30.38% (0.00%) | 27.85% (12.66%) | 26.58% (10.13%) | 31.65% (5.06%) | 31.65% (0.00%) | 30.38% (0.00%) | 32.91% (0.00%) | - | 30.38% (0.00%) | 34.18% (1.27%)
8 | 40.00% (5.00%) | 35.00% (0.00%) | 33.33% (3.33%) | 43.33% (6.67%) | 40.00% (3.33%) | 35.00% (1.67%) | 41.67% (11.67%) | 38.33% (0.00%) | - | 36.67% (0.00%)
9 | 34.85% (6.06%) | 33.33% (12.12%) | 33.33% (9.09%) | 40.91% (4.55%) | 31.82% (3.03%) | 31.82% (0.00%) | 33.33% (0.00%) | 34.85% (0.00%) | 31.82% (1.52%) | -
Figure 18: L2 Optimization Latent Attack (single latent vector target): Nearest neighbors in latent space for generated adversarial examples (target class 0) on the first 100 images from the MNIST (left) and SVHN (right) validation sets.

Figure 19: Original images in the CelebA dataset (left) and their VAE-GAN reconstructions (right).

Figure 20: L2 Optimization Latent Attack on CelebA Dataset (single latent vector target): Adversarial examples generated for 100 images from the CelebA dataset (left) and their VAE-GAN reconstructions (right).

Figure 21: L2 Optimization L_VAE Attack on CelebA Dataset (single image target): Adversarial examples generated for 100 images from the CelebA dataset (left) and their VAE-GAN reconstructions (right).

Figure 22: Effect of sampling on adversarial reconstructions. Columns in order: original image, reconstruction of the original image (no sampling, just the mean), reconstruction of the original image (1 sample), reconstruction of the original image (12 samples), reconstruction of the original image (50 samples), adversarial example (latent attack), reconstruction of the adversarial example (no sampling, just the mean), reconstruction of the adversarial example (1 sample), reconstruction of the adversarial example (12 samples), reconstruction of the adversarial example (50 samples).
EXPLORING THE APPLICATION OF DEEP LEARNING FOR SUPERVISED LEARNING PROBLEMS

Jose Rozanec
Universidad de Buenos Aires

Gilad Katz, Eui Chul Richard Shin & Dawn Song
University of California, Berkeley

ABSTRACT
One of the main difficulties in applying deep neural nets (DNNs) to new domains is the need to explore multiple architectures in order to discover ones that perform well. We analyze a large set of DNNs across multiple domains and derive insights regarding their effectiveness. We also analyze the characteristics of various DNNs and the general effect they may have on performance. Finally, we explore the application of meta-learning to the problem of architecture ranking. We demonstrate that by using topological features and by modeling the changes in the weights, biases and activation function layers during the initial training steps, we are able to rank architectures based on their predicted performance. We consider this work to be a first step in the important and challenging direction of exploring the space of different neural network architectures.

1 INTRODUCTION
Recent advances in deep neural networks (DNNs) have led to breakthroughs in fields such as image classification (He et al., 2015; Krizhevsky et al., 2012) and speech recognition (Yu et al., 2010; Dahl et al., 2012). One reason for the effectiveness of DNNs is their ability to integrate low-, mid- and high-level features in a natural way (Zeiler & Fergus, 2014). While recent work such as Simonyan & Zisserman (2014) suggests that in many cases the depth of the architecture is crucial, the emergence of more complex architectures (He et al., 2015; Szegedy et al., 2015) demonstrates that depth alone often does not suffice.

While DNNs have been highly effective in several domains, their application in additional fields is yet to become widespread. We argue that this is due to two challenges. The first is the difficulty of designing effective architectures for domains in which there is little or no previous knowledge on the application of deep learning. Moreover, since designing DNN architectures is not intuitive for most people, this task is likely to fall to experts whose time is in high demand. The second challenge, which is strongly coupled with the first, is the large amount of computing power and time required to evaluate multiple DNNs. These traits constrain the number of DNN architectures that can be evaluated, thus further limiting one's ability to explore new architectures or respond to changing circumstances.

In this study we explore the possibility of applying architectures that are effective in one domain to another. We do so by generating a large number of architectures and evaluating their performance on multiple tabular datasets in order to determine whether the architectures are transferable. We also explore the feasibility of architectures with parallel layers and compare their effectiveness to that of their "linear" counterparts. Our results show that while architectures do not perform well across multiple datasets, parallel architectures are surprisingly effective.

When attempting to apply DNNs to an unknown domain, one way of approaching the problem would be to randomly "sample" various architectures and analyze their performance distribution. The top-performing architectures found in the sampling can form the base for future exploration, while the variance in performance can assist in determining the number of architectures that need to be sampled.
We explore a meta-learning approach that may improve the efficiency of this process by ranking the architectures based on their expected performance. Our approach models the topology of the DNN as well as the changes in its weights, biases and activation function layers throughout the initial training steps, and uses this information to rank the architectures by their relative performance. Preliminary results are encouraging.

While we consider this study to be an important first step, we feel obliged to point out that this work was done in a limited setting. To enable the generation of multiple DNN architectures with diverse topologies, we applied uniform and fixed parameters such as layer sizes and learning rates. As a result, the architecture space we explore is limited. Validating our results on a more diverse set of architectures with multiple hyperparameter configurations will require additional experimentation. We plan to address these issues in future work.

Our contributions are as follows:
- We explore DNNs across multiple datasets, evaluate their effectiveness and analyze whether some perform well across datasets.
- We systematically evaluate a large number of architectures over multiple supervised-classification datasets and derive insights regarding the design and application of DNNs with parallel layers for general classification problems.
- We present a novel meta learning-based ranking method that utilizes both topological features and the weights, biases and activation function layers of the various components of the DNN architecture during the initial training phase. To the best of our knowledge, this is the first time these characteristics have been used in a meta-learning scheme. Preliminary results of this approach are promising.

2 RELATED WORK
We review two areas of research whose aim is to better understand and improve the performance of DNN architectures. The first area of research focuses on the exploration and analysis of DNN architectures. The second is automatic parameter tuning.

2.1 EXPLORATION AND ANALYSIS OF DNN ARCHITECTURES
Despite their remarkable success in various domains, the inner workings of DNNs remain to some degree a "black box". Multiple studies have attempted to provide insight into this matter. In Jarrett et al. (2009), the authors analyze convolutional neural networks (CNNs) and derive insights regarding the architecture design and the contribution of its different components. Another work aimed at better understanding CNNs is presented in Shang et al. (2016). The authors analyze widely used CNN architectures and derive insights into their possible shortcomings. To address these shortcomings, they propose a new version of the popular ReLU activation scheme.

The exploration of DNN architectures has also taken place for recurrent neural networks (RNNs). In Zaremba (2015), the authors explore various modifications to LSTM architectures to improve their performance, and propose several enhancements to the architecture. Another study, Wu & King (2016), aims to determine the reasons for the effectiveness of LSTMs and to identify the contribution of their different elements. Based on their conclusions, the authors propose a simplified version of LSTM.

2.2 AUTOMATIC DNN PARAMETER TUNING
The ability to automatically tune the hyperparameters of a DNN architecture is important not only because of its ability to improve performance, but also due to the considerable time it can potentially save.
In Maclaurin et al. (2015), the authors demonstrate how information extracted from stochastic gradient descent can be used to efficiently tune multiple parameters in the architecture. An additional work that analyzes the gradient is presented in Duvenaud et al. (2016), where the information is used to determine when to terminate the training of the architecture in order to avoid over-fitting. A different optimization approach is presented in Mendoza et al., where the authors define a large set of hyperparameters (batch size, learning rate, activation types, etc.) and apply Bayesian optimization on top-performing configurations. The approach is only applied to feed-forward networks and outperforms human experts by 10%, using the AUC measure.

Additional types of optimization have also been proposed in recent years. In Jin et al. (2016), the authors focus on setting the size of hidden layers in RNNs. They accomplish this by converting the optimization problem into a subset selection problem. An important aspect of this approach is that it takes time constraints into account, thus enabling solutions that are feasible given available resources. Another approach, in which one long short-term memory network (LSTM) is used to optimize another, was proposed by Andrychowicz et al. (2016). The two networks have shared parameters but separate hidden states, and the optimizer network modifies both its own weights and those of the optimized network simultaneously. Finally, an approach that automatically adjusts the learning rates of the neural net was presented in Schaul et al. (2013). The approach has been shown to be effective on both convex and non-convex learning tasks.

Recent work by Li et al. (2016) proposes an exploration/exploitation scheme for hyperparameter tuning. The authors apply a multi-arm bandits algorithm, with each arm representing a parameter configuration. A process of successive halving (Jamieson & Talwalkar, 2015), in which a certain percentage of the lowest-performing configurations is dropped every n steps, enables the framework to explore promising directions (a sketch of this scheme is shown below). We consider this approach complementary to our proposed meta-learning approach, as the former enables exploration of a large number of configurations while the latter can reduce the time required to assess their performance.
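To make the successive-halving scheme concrete, the following is a minimal sketch. The train_steps and evaluate callables, the step budget n and the drop fraction are our own illustrative assumptions, not details taken from Li et al. (2016) or Jamieson & Talwalkar (2015).

```python
import math

def successive_halving(configs, train_steps, evaluate, n=100, drop_frac=0.5):
    """Repeatedly train all surviving configurations for n more steps and
    drop the lowest-performing fraction, in the spirit of successive halving.

    configs     -- list of hyperparameter configurations (hypothetical objects)
    train_steps -- train_steps(config, n): continues training config for n steps
    evaluate    -- evaluate(config): returns current validation accuracy
    """
    survivors = list(configs)
    while len(survivors) > 1:
        for cfg in survivors:
            train_steps(cfg, n)                       # advance every surviving arm
        scored = sorted(survivors, key=evaluate, reverse=True)
        keep = max(1, math.ceil(len(scored) * (1 - drop_frac)))
        survivors = scored[:keep]                     # discard the worst fraction
    return survivors[0]
```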
3 PROBLEM DEFINITION
As mentioned in Section 1, one of the challenges in applying deep learning to a new field is the need to design and test multiple DNN architectures. Only by iterative testing can practitioners discover the capabilities and limitations of deep learning in the domain. Even with ever-increasing computing power, the high computational cost of this process currently presents a significant barrier for most practitioners.

This limitation leads us to explore the following questions:
1. Would DNN architectures that perform well on one general supervised classification problem also be effective when applied to datasets in other domains?
2. What types of architectures are effective for general supervised learning problems? Should practitioners consider other types of architectures besides "deep" ones?
3. Can DNN architectures outperform "conventional" machine learning classifiers in general supervised problems?
4. Is it possible to identify top-performing networks in the early stages of training? If possible, such a technique could preserve valuable computing resources.

We attempt to begin addressing these questions in the subsequent sections of this study. We iteratively evaluate a large number of DNN architectures on a set of supervised classification problems. These datasets differ from those of image and speech classification in that they consist of tabular data with both numeric and discrete features. These differences make it unclear what types of architectures are likely to perform well on these domains. The datasets we analyze were selected because of their diversity in terms of size and of feature number and composition. These traits also enable us to better understand the difficulties in applying DNN architectures across multiple domains.

In order to provide meaningful results, the set of architectures we evaluate is also diverse. We therefore automatically generate a diverse set of architectures with various topological traits. Because little information is available on the application of deep learning to general supervised classification problems, we choose to explore not only linear architectures but also architectures with parallel layers. While the generated set is diverse, additional work is required in order to model additional types of architectures. We elaborate on these points further in the subsequent section.

4 GENERATING MULTIPLE DNN ARCHITECTURES
In order to effectively explore the architecture space, we require a large and diverse set. We create this set by automatically generating a large number of architectures and training each of them on all training set datasets. Our generation algorithm, presented in Algorithm 1, generates both "deep" and "wide" architectures with parallel layers (see Figure 1(b)). Next we describe the generation process.

We consider DNN architectures to consist of components. We define a component as any part of an architecture, be it a layer, normalization or activation function. In this study we consider the following components: fully-connected layers, softmax, batch normalization, dropout and the ReLU, sigmoid and tanh activation functions.

[Figure 1: An example of the architectures that can be derived from an existing one: (a) inserting a component between two existing components; (b) inserting a component in parallel.]

We begin the generation process with a "basic" architecture consisting of only two components: a fully-connected input layer and an output softmax layer. We then expand the set of possible architectures by iteratively applying the following steps:
1. For each pair of components in the architecture, identify all components that could be inserted between them (Figure 1(a)).
2. For each pair of components in the architecture, identify all components that could be inserted in parallel to one of them (Figure 1(b)).
3. For each of the components identified in the previous steps, generate a new copy of the architecture and perform the corresponding insertion.

Our proposed architecture generation approach enables us to generate the topological representation of every possible neural network that consists of the predefined components. However, we do not generate multiple hyperparameter configurations for each topology, and we use fixed parameters for each component. We plan to address this limitation in future work, possibly by using an approach similar to the one presented in Li et al. (2016). It is also important to point out that we currently do not support weight-sharing and therefore do not consider CNN and RNN architectures.
Given the characteristics of the analyzed data, we do not consider these architecture types likely to produce meaningful results.

Another important aspect of our architecture generation approach is that we generate architectures with connections between layers of various depths. An example of this is shown in Figure 1(b), where we connect layers of depths 1 and 2. This setting enables us to systematically explore more complex designs than those commonly used. We analyze these architectures further in Section 6.

As the number of possible architectures grows exponentially, we limit the total number of architectures that we generate by constraining the maximal number of components in an architecture and the number of parallel layers an architecture may contain. The specific settings used in our experiments are presented in Section 6.1. These settings were chosen in order to ensure a diverse set of both deep and wide architectures given the time and computing-power constraints, and we plan to change them in future work to further diversify the set of generated architectures. To select the architectures from which additional ones will be generated, we apply a priority queue. We first sort the architectures by the number of their activation layers (in descending order), with a secondary sorting based on the total number of components (in ascending order). This setting prioritizes the creation of deeper architectures with multiple activation layers. For each architecture in the final set, we generate the meta-features described in Section 5. The algorithm for the architecture generation is presented in Algorithm 1.

Algorithm 1 Automatic architecture generation
1:  procedure ArchitectureGeneration(arcQueue, initArc)
2:    architecturesSet <- {initArc}
3:    arcQueue <- {initArc}
4:    while arcQueue is not empty do
5:      newArchitectures <- empty set
6:      architecture <- arcQueue.pop()
7:      for each pair P(c_i, c_j), i != j, of components in {c_1, c_2, ..., c_n} do
8:        candidateComponents <- proposeInsertBetweenCandidates(P(c_i, c_j))
9:        for each candidate in candidateComponents do
10:         newArchitecture <- insertBetween(architecture, P(c_i, c_j), candidate)
11:         newArchitectures <- newArchitectures U {newArchitecture}
12:       candidateComponents <- proposeInsertAsideCandidates(P(c_i, c_j))
13:       for each candidate in candidateComponents do
14:         newArchitecture <- insertAside(architecture, P(c_i, c_j), candidate)
15:         newArchitectures <- newArchitectures U {newArchitecture}
16:     newArchitectures <- filter(newArchitectures)
17:     arcQueue <- arcQueue U newArchitectures
18:     architecturesSet <- architecturesSet U newArchitectures
19:   return architecturesSet
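For readers who prefer code, the following is a compact Python sketch of the expansion loop in Algorithm 1. The helper callables (propose_between, propose_aside, insert_between, insert_aside, keep) and the components attribute are hypothetical stand-ins for the operations named above; architectures are assumed to be hashable objects, and for simplicity we use a plain FIFO queue rather than the priority queue described in the text.

```python
from collections import deque
from itertools import permutations

def generate_architectures(init_arc, propose_between, propose_aside,
                           insert_between, insert_aside, keep):
    """Breadth-first expansion of an initial architecture (a sketch of
    Algorithm 1). `keep` filters out architectures that violate the
    size and parallel-width constraints."""
    all_archs = {init_arc}
    queue = deque([init_arc])
    while queue:
        arc = queue.popleft()
        new_archs = set()
        # consider every ordered pair of components in the architecture
        for ci, cj in permutations(arc.components, 2):
            for cand in propose_between(arc, ci, cj):
                new_archs.add(insert_between(arc, ci, cj, cand))
            for cand in propose_aside(arc, ci, cj):
                new_archs.add(insert_aside(arc, ci, cj, cand))
        new_archs = {a for a in new_archs if keep(a)}
        queue.extend(new_archs - all_archs)   # only expand unseen architectures
        all_archs |= new_archs
    return all_archs
```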
5 META-LEARNING FOR ARCHITECTURE RANKING
Our goal is to determine whether analyzing the topology of a DNN architecture, as well as the transformations it undergoes in its early training iterations, can be used to predict its performance. To this end we develop a novel machine learning-based approach that generates a set of features for each analyzed architecture. Once the features are generated, we use a ranking classifier to assign a score to each architecture. The classifier is trained on a large corpus of datasets (additional information is provided in Section 6.1).

We apply meta-learning (Vilalta & Drissi, 2002) to predict the performance of the DNN architectures. Meta-learning is a branch of machine learning in which an algorithm "learns how to learn" by extracting information on the learning process of another algorithm. The features extracted in this process are called meta-features. We generate three types of meta-features: dataset-based, topology-based and training-based. We hypothesize that these groups represent the elements that affect the performance of the DNN architecture: the data on which it is trained, the structure of the network and the changes in its weights, biases and activation functions throughout the training process. We provide a full overview of the meta-feature groups below and detailed information in Appendix A.

Dataset-based meta-features. As explained in Section 3, the datasets we use in the evaluation vary significantly in size and feature composition. These meta-features attempt to represent the multiple characteristics that may affect the performance of deep learning algorithms. We generate three types of meta-features:
1. General information: general statistics on the analyzed dataset: the number of instances and classes, the number and type of features, and statistics on the correlations among various features.
2. Entropy-based measures: we partition the dataset's features based on their type (discrete, numeric, etc.) and calculate statistics on the Information Gain (IG) of the features in each group.
3. Feature diversity: we partition the dataset into type-based groups and use the chi-squared and paired-t tests to calculate the similarity of each pair in each group. We then generate meta-features using the tests' statistic values.

Topology-based meta-features. Our generated architectures vary significantly in size, depth and width. Since these traits are likely to affect their performance, we use the meta-features of this group to quantify and model them. The meta-features can be partitioned into two groups:
1. Architecture composition: general statistics on the number and types of layers and functions that make up the architecture, statistics on layer composition as a function of depth, etc.
2. Connectivity-based measures: for each layer in the architecture, we calculate various measures that are frequently used for graph analysis. These measures include statistics on the number and ratio of incoming and outgoing edges (overall, per depth and per type) and node-centrality evaluation measures. A sketch of this computation is shown below.

Training-based meta-features. The goal of these meta-features is to model the transformations undergone by the DNN during the course of its training. These meta-features consist of statistics on the weights, biases and activation function layers of the various components in the architecture. These meta-features can be partitioned into two groups:
1. Static evaluation: general statistics on the distribution of the various values across different depths and layer types. These features provide "snapshot" information on the training status of the architecture at multiple training steps.
2. Time series-based evaluation: we compare the values obtained in the various training iterations to those obtained earlier, calculating ratios and modeling the changes in value distributions over time.

A full description of all meta-features is provided in Appendix A.
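As an illustration of the connectivity-based measures, here is a sketch that computes a small subset of the Table 8 meta-features with networkx, assuming the architecture is represented as a directed acyclic graph of components; the function and variable names are ours, not the paper's.

```python
import networkx as nx
import numpy as np

def topology_meta_features(G: nx.DiGraph) -> dict:
    """Compute a few topology-based meta-features for an architecture
    given as a directed graph of components (a sketch; only a subset
    of the features listed in Appendix A is shown)."""
    def stats(xs):
        return {"max": np.max(xs), "min": np.min(xs),
                "avg": np.mean(xs), "stdev": np.std(xs)}

    in_deg = [d for _, d in G.in_degree()]        # incoming edges per component
    out_deg = [d for _, d in G.out_degree()]      # outgoing edges per component
    btw = list(nx.betweenness_centrality(G).values())

    feats = {"numOfVertices": G.number_of_nodes(),
             "numOfEdges": G.number_of_edges()}
    for name, vals in [("IncomingEdges", in_deg),
                       ("OutgoingEdges", out_deg),
                       ("Betweenness", btw)]:
        for k, v in stats(vals).items():
            feats[k + name] = v
    return feats
```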
6 EXPERIMENTS AND ANALYSIS
6.1 EXPERIMENTAL SETUP
We conduct our experiments on 13 supervised classification datasets in tabular form. We selected these datasets since they represent common supervised-learning problems that are not often addressed by deep learning. In addition, their feature composition consists of both numeric and discrete features, a trait that makes them different from image and speech classification datasets. The datasets vary significantly in size, in the number and type of features (some contain only numerical features while others also contain discrete features) and in class imbalance, traits we hypothesize will make learning across domains more challenging. All datasets are available on the OpenML repository and their properties are presented in Appendix B.

We use the following settings (a sketch of the leave-one-out ranking procedure follows this list):
- For each dataset, we train the same set of 11,170 architectures, generated as described in Section 4. The maximal width (number of parallel layers) allowed for an architecture was set to 4, and we terminated the generation process upon reaching the predefined number of architectures. The deepest architectures generated by this approach have 8 activation layers and 14 components overall.
- For architecture training, all datasets were randomly partitioned into training, validation and test sets. 80% of the data points were used for training and the remaining two sets were assigned 10% each. The same split was used for all the architectures explored for each dataset. Original class ratios were maintained in all sets.
- All generated architectures were trained until convergence, with the time of termination determined by performance on the validation set.
- The training-based meta-features were only extracted for the following steps: 20, 40, 60, 80 and 100.
- We used a leave-one-out (LOO) cross-validation approach to train the ranking classifier: for each evaluated dataset d_i, we train the ranking classifier using the meta-features from all d_j in D where i != j. This setting enables us to test whether a meta-model trained on one dataset can be effectively applied to another.
- We randomly split the generated architectures into two groups. The first group, consisting of 70% of the architectures, is used for training. We use the remaining 30% to evaluate the performance of our approach on each dataset.
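A sketch of the leave-one-dataset-out ranking procedure referenced in the settings above; the data layout and names are illustrative assumptions, and scikit-learn's Random Forest stands in for our ranking classifier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def loo_rank(meta_X, labels, datasets):
    """Leave-one-dataset-out training of the ranking meta-classifier.
    meta_X   -- array of shape (n_architectures, n_meta_features)
    labels   -- array; 1 for a 'good' architecture, 0 otherwise
    datasets -- array with the dataset id of each row
    Returns, per held-out dataset, architecture indices sorted by the
    predicted probability of being 'good' (best first)."""
    rankings = {}
    for d in np.unique(datasets):
        train = datasets != d
        clf = RandomForestClassifier(n_estimators=100)
        clf.fit(meta_X[train], labels[train])
        scores = clf.predict_proba(meta_X[~train])[:, 1]
        rankings[d] = np.argsort(-scores)
    return rankings
```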
6.2 ANALYSIS
We begin by analyzing the accuracy distribution of the generated architectures across the datasets. We found that the distribution of accuracies varies significantly across the different datasets, with some datasets having accuracies in the range of 45%-90% while others are in the range of 89%-95%. This difference has a significant impact on one's ability to apply architectures that are effective in one domain to another, as we confirm with the next experiment. Examples of accuracy distributions are presented in Figures 2 and 3, and plots for all datasets are presented in Appendix D.

[Figure 2: Accuracies plot for the dataset Ailerons. Figure 3: Accuracies plot for the dataset Contraceptive.]

Analyzing the performance differences of "parent–child" architectures. In order to determine whether our architecture generation method is effective, we analyzed the differences in accuracy between every architecture and its descendant. Our reason for performing this analysis is as follows: if making incremental additions to an existing architecture does not significantly change its performance, then we simply generate a large number of architectures which are nearly identical in performance.

The results of our analysis are presented in Table 1. For every "parent–child" pair we calculate the difference in accuracy on the test set. We then calculate the maximal and average changes in accuracy for each dataset. It is clear from the results that the changes in accuracy are significant, especially given the fact that changes are accumulated over time (deeper architectures are the result of multiple modifications).

Next we analyze the "parent–child" architectures with the maximal differences in order to determine whether the addition of a particular component is most likely to induce large changes in accuracy. Our results, presented in Table 2, show that no single component type can be consistently credited with inducing large changes.

Applying architectures across datasets. We attempt to determine whether it is possible to find architectures that perform well across multiple datasets. For each of the generated architectures, we calculate its performance-based ranking (i.e. its position in a list ordered by the accuracy measure) on each of the datasets. Then, for each dataset we test the performance of the architecture with the best average ranking on the remaining datasets (a sketch of this procedure is shown below). We compare the performance of this architecture to that of the best evaluated architecture and to that of the best architecture found by our meta-learning model (described in the following section). The results, presented in Table 3, show significant differences in performance and lead us to conclude that in most cases DNN architectures do not perform well across multiple datasets.
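A sketch of the average-ranking computation described in the preceding paragraph; the accuracy-matrix representation is our own simplification.

```python
import numpy as np

def best_average_rank(acc):
    """Cross-dataset ranking experiment (a sketch). `acc` is an
    (n_architectures, n_datasets) accuracy matrix. For each held-out
    dataset, return the architecture with the best average rank on
    the remaining datasets."""
    # rank architectures per dataset: 0 = most accurate
    ranks = np.argsort(np.argsort(-acc, axis=0), axis=0)
    picks = {}
    for d in range(acc.shape[1]):
        others = np.delete(ranks, d, axis=1)      # drop the held-out dataset
        picks[d] = int(np.argmin(others.mean(axis=1)))
    return picks
```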
Table 1: Analyzing the differences in accuracy for the architecture parent–child pairs for each dataset.
Dataset | Max difference | Average difference
Contraceptive | 5% | 1.8%
Seismic bumps | 4.9% | 1.1%
Page Blocks | 7.4% | 1.4%
Wind | 35% | 3.2%
Puma 32 | 19.2% | 1.8%
CPU act | 40% | 3.3%
Delta elevators | 39.5% | 2.7%
Mammography | 3% | 1.1%
Ailerons | 17.4% | 5.7%
Bank marketing | 3.5% | 0.8%
German Credit | 5% | 1%
Space | 11.5% | 2.5%
Cardiography | 11.5% | 1%

Table 2: The component types added in the parent–child pairs with the maximal accuracy differences, and the number of appearances of each.
Component type | Number of appearances
Dropout | 2
Sigmoid | 3
TanH | 2
Fully connected | 2
ReLU | 1
Batchnorm | 3

Comparing the performance of DNN architectures to that of "conventional" classifiers. As a point of reference to "classical" machine learning approaches for classifying tabular data, in Table 3 we also present the performance of the Random Forest algorithm (using the Weka (Hall et al., 2009) implementation with the default parameters). It is clear that neither Random Forest nor the DNN architectures consistently outperforms the other. We intend to explore the factors that cause these differences in performance in future work.

Table 3: Comparison of the accuracy of the best average-ranking architectures to the top-ranking architecture found by our approach for each dataset.
Dataset | Best architecture | Top ranked (best found by model) | Architecture with best average ranking | Random Forest
Contraceptive | 84.5% | 84% | 79.7% | 76.4%
Seismic bumps | 95% | 94.1% | 92.1% | 93.4%
Page Blocks | 97% | 95.2% | 89.6% | 97.9%
Wind | 88% | 84.3% | 54% | 86.5%
Puma 32 | 70% | 67% | 50.7% | 88.1%
CPU act | 91% | 87.7% | 70% | 93.7%
Delta elevators | 90% | 88.7% | 79.2% | 87.7%
Mammography | 99% | 98.9% | 97% | 98.8%
Ailerons | 89% | 86.2% | 59% | 88.6%
Bank marketing | 96% | 95% | 94% | 90.5%
German Credit | 77.1% | 73.6% | 68.2% | 76.9%
Space | 69.6% | 66.8% | 56.5% | 84%
Cardiography | 94.5% | 93.7% | 86.4% | 95.5%

Analyzing the performance of architectures with parallel layers. Next we explore whether architectures with parallel layers outperform similar non-parallel architectures. We analyze the 100 top-performing architectures of each dataset and calculate the percentage of architectures with parallel layers. The results, presented in Appendix C, show that this type of architecture constitutes on average 62% of the top-performing architectures.

To determine whether the benefit of applying parallel layers is significant, we randomly chose one of our datasets (Ailerons) and identified the 100 top-performing architectures with parallel layers. From this set we randomly sampled 10 architectures and compared the performance of each of them to those of all of their possible serial counterparts, created by iteratively removing all but one of the different parallel layers. Our results, presented in Table 4, show that architectures with parallel layers significantly outperform all of their serial counterparts.

Considering the same sample of parallel architectures, we analyzed whether architecture performance can be improved by adding batch normalization before, after, or both before and after each activation function. As shown by the results in Table 4, we did not find evidence that the addition of batch normalization improves the performance of architectures with parallel layers. We find this fact surprising and intend to explore it further in future work. An example of one of the parallel architectures is presented in Figure 4 in Appendix C.

Finally, we also analyze the component composition of the 100 top-performing architectures for each dataset. The most interesting conclusion of this analysis is that relatively shallow architectures (around 4 fully-connected layers) seem to yield the best performance on average for all datasets. The full analysis of the architecture components is presented in Table 12 in Appendix C.

Table 4: Comparison of the performance of parallel architectures to their serial counterparts.
 | Parallel Architectures | Serial versions | Parallel with batchnorm (before) | Parallel with batchnorm (after) | Parallel with batchnorm (before & after)
Average | 87.6% | 71.8% | 70.4% | 77.4% | 76.5%
Standard Deviation | 0.39% | 7.8% | 9.9% | 4.2% | 3.6%

6.3 EVALUATING THE META-LEARNING APPROACH
We analyze the performance of our meta-learning model as a classifier that ranks architectures based on their performance. For these experiments, we use the following settings (a sketch of the precision@X computation follows this list):
- We define the top 5% of architectures of each dataset as "good" and label the remaining as "bad". We use this setting due to the large variance in the performance of the DNN architectures on the different datasets (see Appendix D for full details). We also intend to experiment with other labeling methods in future work.
- We use the precision@X measure as the evaluation metric. We calculate it by ranking all architectures according to the confidence of the meta-classifier (i.e. the classifier trained on the meta-features) in them being "good". Then, for the X top-ranking architectures, we calculate the actual percentage of "good" architectures among them.
- We conduct a separate evaluation of the training-based meta-features and of the dataset-based and topological meta-features. Since the training-based features are more computationally expensive to compute, we find it interesting to compare their performance to that of the other types of meta-features. In our experiments we denote the full set as ML_full, the training-based meta-features as ML_train and the topological and dataset-based meta-features as ML_data+top.
- We use the Random Forest algorithm for the training of the meta-model.
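A sketch of the precision@X computation described in the settings above; the names are illustrative.

```python
import numpy as np

def precision_at(scores, is_good, X):
    """precision@X: rank architectures by the meta-classifier's
    confidence that they are 'good' and return the fraction of truly
    'good' architectures among the X top-ranked (a sketch)."""
    order = np.argsort(-np.asarray(scores))        # highest confidence first
    return float(np.mean(np.asarray(is_good)[order[:X]]))
```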
The results of our evaluation are presented in Table 5. We show that we are able to identify multiple architectures in the top-ranking spots at a much higher rate than their share of the population. It is also clear that the joint set of all meta-features outperforms both of the examined subsets.

Next we conduct random sampling over architectures, and compare the performance of the sampled architectures to those obtained by ranking all architectures using the proposed meta-classifier. Our goal is to determine the probability that N randomly-sampled architectures will contain at least one architecture that outperforms all of the top M items ranked by the meta-classifier. We conduct the experiment as follows: for each dataset, we randomly sample a fixed number of architectures and identify the one with the highest performance among those sampled. We then check whether this architecture outperforms all those in the ranked list provided by the meta-learning model. We repeat this process 50,000 times for each dataset and calculate the probability of this scenario (a sketch of this estimate is shown at the end of this subsection). The results, presented in Table 6, show that our model outperforms random sampling for all datasets, often by a large margin. However, further experimentation is required to fully determine the effectiveness of the meta-learning approach.

Finally, we analyze the results in order to determine the effectiveness of the different meta-features used by our model. The analysis was carried out by running LASSO logistic regression and analyzing the weights assigned to the various meta-features. Based on this analysis we reach the following conclusions:
- The dataset-based meta-features had the smallest contribution to the performance. While this is somewhat surprising given the fact that DNNs perform very differently on datasets with different characteristics, we conclude that the model focuses on the way in which the architecture is trained on the data (i.e. weights and activations).
- The topological meta-features that had the largest contribution were those modeling the depth of the network, the number of parallel layers and those counting the number of various components.
- The ranking model uses a large number of training-based meta-features, of all types described in Appendix A. However, among the training-based meta-features the model includes only weight- and activation-based ones; the bias-based meta-features are almost never used.
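Returning to the sampling experiment above, the following sketch shows how the 50,000-trial probability estimate can be computed; the function signature and defaults are our own assumptions.

```python
import numpy as np

def sampling_beats_model(acc, model_best_acc, sample_size, trials=50_000, rng=None):
    """Monte Carlo estimate of the probability that the best of
    `sample_size` randomly sampled architectures outperforms every
    architecture in the model's ranked list. `acc` holds the
    accuracies of all candidate architectures on one dataset (a sketch)."""
    rng = rng or np.random.default_rng(0)
    wins = 0
    for _ in range(trials):
        sample = rng.choice(acc, size=sample_size, replace=False)
        wins += sample.max() > model_best_acc
    return wins / trials
```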
Table 5: The evaluation results of different approaches using the precision@X metric. full, train and d+t denote ML_full (all meta-features), ML_train (training-based meta-features only) and ML_data+top (dataset-based and topological meta-features), respectively. Best results are in bold.
Dataset | precision@5 (full/train/d+t) | precision@10 (full/train/d+t) | precision@20 (full/train/d+t) | precision@50 (full/train/d+t)
Contraceptive | 20% / 20% / 0% | 20% / 10% / 20% | 20% / 5% / 15% | 20% / 10% / 8%
Seismic Bumps | 20% / 40% / 20% | 20% / 20% / 10% | 25% / 20% / 15% | 12% / 16% / 12%
Page Blocks | 40% / 20% / 0% | 30% / 20% / 0% | 20% / 15% / 0% | 16% / 14% / 14%
Wind | 40% / 0% / 40% | 20% / 20% / 30% | 10% / 15% / 25% | 12% / 16% / 20%
Puma32 | 20% / 20% / 0% | 10% / 20% / 20% | 15% / 20% / 10% | 16% / 10% / 10%
CPU Act | 40% / 20% / 20% | 30% / 20% / 20% | 30% / 15% / 10% | 22% / 12% / 16%
Delta Elevators | 20% / 20% / 20% | 20% / 20% / 10% | 15% / 25% / 20% | 20% / 20% / 12%
Mammography | 20% / 0% / 0% | 20% / 20% / 0% | 20% / 15% / 5% | 20% / 10% / 12%
Ailerons | 40% / 40% / 40% | 30% / 30% / 20% | 30% / 20% / 20% | 28% / 22% / 26%
Bank Marketing | 20% / 0% / 20% | 30% / 10% / 20% | 20% / 10% / 10% | 10% / 14% / 10%
German Credit | 40% / 20% / 20% | 40% / 10% / 10% | 20% / 10% / 10% | 14% / 10% / 10%
Space | 20% / 0% / 0% | 10% / 10% / 0% | 15% / 10% / 10% | 18% / 14% / 10%
Cardiography | 20% / 0% / 20% | 20% / 10% / 10% | 20% / 15% / 20% | 18% / 14% / 16%

7 CONCLUSIONS AND FUTURE WORK
In this study we have explored several aspects of applying DNNs to supervised classification problems. Our results demonstrate the difficulty of applying DNN architectures that are effective in one domain to another. We also systematically compare the performance of architectures with parallel layers to that of similar linear architectures and demonstrate that the former outperform the latter in many cases. We present a novel approach for predicting the performance of a DNN architecture by analyzing its topology and the changes in its weights, biases and activation function values during the early phases of training. Our hope is that this work can lay the foundation for a better understanding of the space of DNN architectures.

For future work we consider several directions. First, we plan to add components to the ones currently used in our automatic architecture generation method in order to enable further exploration. In addition, we will seek to enhance our approach by adding automatic parameter tuning methods. This will enable us to efficiently explore multiple configurations and possibly identify higher-performing architectures. We are also considering the use of an exploration/exploitation scheme along the lines presented in Li et al. (2016) to enable us to efficiently explore larger architecture spaces.

Table 6: The probabilities of finding an architecture that outperforms all those in the ranked list when randomly sampling a set of architectures. The size of the list ranked by our algorithm is always 10 (i.e. for sample size 20 we test a set two times the size of the ranked list).
Dataset | Sample size 10 | Sample size 20
Contraceptive | 1.7% | 3.2%
Seismic bumps | 11.5% | 22%
Page Blocks | 14.8% | 27.7%
Wind | 24.3% | 41.5%
Puma 32 | 20.7% | 36.5%
CPU act | 3.4% | 6.7%
Delta elevators | 33.3% | 55.5%
Mammography | 7.5% | 14.3%
Ailerons | 13.9% | 25.5%
Bank marketing | 5.6% | 10.4%
German Credit | 11.9% | 22.9%
Space | 20.2% | 36.3%
Cardiography | 5.6% | 11.2%

Another approach we plan to explore is to make the search over network architectures a fully-differentiable problem, by encoding the problem only using mechanisms that enable such a search. As an example, let us imagine that we want to decide the best number of internal hidden layers to use in a multi-layer fully-connected neural net. For this, we could create multiple parallel stacks of layers with the same input at the bottom (e.g. the features for each data point) and the same kind of output at the end (e.g. probabilities over the possible classes), and then use a softmax to take a weighted sum of the outputs from each of the parallel stacks. By using a penalty on the negative entropy of this weighted sum, and increasing the penalty over time, the network should learn to produce the output using only one of the parallel stacks, which we can then use at inference time. We can also train multiple models simultaneously using this method, and introduce additional penalties to ensure that the multiple models explore different architectures during training, to enable a more diverse search.
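The following PyTorch sketch illustrates the differentiable depth-search idea just described; all layer sizes are illustrative assumptions, and the entropy term would be added to the task loss with a coefficient that grows over training so that the mixture collapses onto a single stack.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftStackSelector(nn.Module):
    """Several parallel fully-connected stacks of different depths share
    the same input and output shape; a learned softmax mixes their
    outputs (a sketch of the idea above, not the paper's implementation)."""
    def __init__(self, in_dim=20, hidden=64, n_classes=2, max_depth=4):
        super().__init__()
        def stack(depth):
            layers, d = [], in_dim
            for _ in range(depth):
                layers += [nn.Linear(d, hidden), nn.ReLU()]
                d = hidden
            layers.append(nn.Linear(d, n_classes))
            return nn.Sequential(*layers)
        self.stacks = nn.ModuleList([stack(k) for k in range(1, max_depth + 1)])
        self.logits = nn.Parameter(torch.zeros(max_depth))  # mixture weights

    def forward(self, x):
        w = F.softmax(self.logits, dim=0)
        out = sum(wk * s(x) for wk, s in zip(w, self.stacks))
        entropy = -(w * torch.log(w + 1e-8)).sum()
        # add beta * entropy to the loss; an increasing beta drives the
        # mixture weights toward a single stack
        return out, entropy
```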
REFERENCES
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474, 2016.
George E Dahl, Dong Yu, Li Deng, and Alex Acero. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Transactions on Audio, Speech, and Language Processing, 20(1):30–42, 2012.
David Duvenaud, Dougal Maclaurin, and Ryan P Adams. Early stopping as nonparametric variational inference. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, pp. 1070–1077, 2016.
Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H Witten. The WEKA data mining software: an update. ACM SIGKDD Explorations Newsletter, 11(1):10–18, 2009.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
Kevin Jamieson and Ameet Talwalkar. Non-stochastic best arm identification and hyperparameter optimization. Preprint, 2015.
Kevin Jarrett, Koray Kavukcuoglu, Yann LeCun, et al. What is the best multi-stage architecture for object recognition? In 2009 IEEE 12th International Conference on Computer Vision, pp. 2146–2153. IEEE, 2009.
Junqi Jin, Ziang Yan, Kun Fu, Nan Jiang, and Changshui Zhang. Optimizing recurrent neural networks architectures under time constraints. arXiv preprint arXiv:1608.07892, 2016.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. Efficient hyperparameter optimization and infinitely many armed bandits. arXiv preprint arXiv:1603.06560, 2016.
Dougal Maclaurin, David Duvenaud, and Ryan P Adams. Gradient-based hyperparameter optimization through reversible learning. In Proceedings of the 32nd International Conference on Machine Learning, 2015.
Hector Mendoza, Aaron Klein, Matthias Feurer, Jost Tobias Springenberg, and Frank Hutter. Towards automatically-tuned neural networks.
Tom Schaul, Sixin Zhang, and Yann LeCun. No more pesky learning rates. ICML (3), 28:343–351, 2013.
Wenling Shang, Kihyuk Sohn, Diogo Almeida, and Honglak Lee. Understanding and improving convolutional neural networks via concatenated rectified linear units. arXiv preprint arXiv:1603.05201, 2016.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015.
Ricardo Vilalta and Youssef Drissi. A perspective view and survey of meta-learning. Artificial Intelligence Review, 18(2):77–95, 2002.
Zhizheng Wu and Simon King. Investigating gated recurrent networks for speech synthesis. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5140–5144. IEEE, 2016.
Dong Yu, Li Deng, and George Dahl. Roles of pre-training and fine-tuning in context-dependent DBN-HMMs for real-world speech recognition. In Proc. NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2010.
Wojciech Zaremba. An empirical exploration of recurrent network architectures. 2015.
Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pp. 818–833. Springer, 2014.

A THE META-FEATURES USED BY OUR APPROACH
We extract three types of meta-features for the training: dataset-based, topological and training-based. We now provide a complete description of the features used. Note that we calculate the meta-features in Table 9 three times: for the weights, the biases and the activation functions.

Table 7: Description of the dataset-based meta-features used by our approach.
Feature name | Description
numOfInstances | The number of instances in the dataset.
numOfClasses | The number of classes in the dataset.
numOfFeatures | The number of features in the dataset.
numOfNumericFeatures | The number of numeric (continuous) features in the dataset.
numOfDiscreteFeatures | The number of discrete (non-numeric) features in the dataset.
ratioNumericFeatures | The percentage of the numeric features of all features.
ratioDiscreteFeatures | The percentage of the discrete features of all features.
{max, min, avg, stdev}DiscFeatVals | Statistics on the number of possible values for a discrete feature.
{max, min, avg, stdev}IGVal | For every feature we calculate the information gain. We then generate statistics on the set of values.
{max, min, avg, stdev}NumericIGVal | Same as the previous meta-feature, but calculated only for numeric features.
{max, min, avg, stdev}DiscreteIGVal | Same as the previous meta-feature, but calculated only for discrete features.
{max, min, avg, stdev}PairedTT | For every pair of numeric features we calculate the statistic of a paired-t test. We then generate statistics on the values.
{max, min, avg, stdev}ChiSquareAll | For every pair of features we calculate the statistic of a chi-square test. We then generate statistics on the values.
{max, min, avg, stdev}ChiSquareDisc | For every pair of discrete features we calculate the statistic of a chi-square test. We then generate statistics on the values.

Table 8: Description of the topological meta-features used by our approach.
Feature name | Description
numOfVertices | The number of vertices in the architecture.
numOfEdges | The number of edges in the architecture.
{max, min, avg, stdev}IncomingEdges | Statistics on the number of incoming edges, calculated over all components.
{max, min, avg, stdev}OutgoingEdges | Statistics on the number of outgoing edges, calculated over all components.
{max, min, avg, stdev}DepthsPerVertex | Because of the parallel layers, a vertex may have multiple depths. We calculate statistics on these values across all components.
{max, min, avg, stdev}VerticesPerDepth | For each depth in the architecture, we count the number of components that are at the said depth. We then calculate statistics across all depths.
{max, min, avg, stdev}Betweenness | For every component in the architecture, we calculate its betweenness centrality measure. We then calculate statistics across all components.
Table 9: Description of the training-based meta-features used by our approach.

{max, min, avg, stdev}GlobalMax | For every component, get the maximal value of the analyzed trait. We then calculate statistics over all components.
{max, min, avg, stdev}GlobalMaxRatio | For each meta-feature in the previous line, divide its value by the value of the same meta-feature calculated at initialization.
{max, min, avg, stdev}GlobalMin | For every component, get the minimal value of the analyzed trait. We then calculate statistics over all components.
{max, min, avg, stdev}GlobalMinRatio | For each meta-feature in the previous line, divide its value by the value of the same meta-feature calculated at initialization.
{max, min, avg, stdev}GlobalAvg | For every component, get the average of the values of the analyzed trait. We then calculate statistics over all components.
{max, min, avg, stdev}GlobalAvgRatio | For each meta-feature in the previous line, divide its value by the value of the same meta-feature calculated at initialization.
{max, min, avg, stdev}GlobalStdev | For every component, get the standard deviation of the values of the analyzed trait. We then calculate statistics over all components.
{max, min, avg, stdev}GlobalStdevRatio | For each meta-feature in the previous line, divide its value by the value of the same meta-feature calculated at initialization.
{max, min, avg, stdev}ByTypeMax | For each type of component, get the maximal value of the analyzed trait. We generate separate meta-features for each component type (i.e. multiple sets of features are generated).
{max, min, avg, stdev}ByTypeMaxRatio | For each meta-feature in the previous line, divide its value by the value of the same meta-feature calculated at initialization.
{max, min, avg, stdev}ByTypeMin | For each type of component, get the minimal value of the analyzed trait. We generate separate meta-features per type (i.e. multiple sets of features are generated).
{max, min, avg, stdev}ByTypeMinRatio | For each meta-feature in the previous line, divide its value by the value of the same meta-feature calculated at initialization.
{max, min, avg, stdev}ByTypeAvg | For each type of component, get the average of the values of the analyzed trait. We generate separate meta-features per type (i.e. multiple sets of features are generated).
{max, min, avg, stdev}ByTypeAvgRatio | For each meta-feature in the previous line, divide its value by the value of the same meta-feature calculated at initialization.
{max, min, avg, stdev}ByTypeStdev | For each type of component, get the standard deviation of the values of the analyzed trait. We generate separate meta-features per type (i.e. multiple sets of features are generated).
{max, min, avg, stdev}ByTypeStdevRatio | For each meta-feature in the previous line, divide its value by the value of the same meta-feature calculated at initialization.
{max, min, avg, stdev}ByDepthMax | For all components at a given depth, identify the maximal value. Then generate the statistics across all depths.
{max, min, avg, stdev}ByDepthMaxRatio | For each meta-feature in the previous line, divide its value by the value of the same meta-feature calculated at initialization.
{max, min, avg, stdev}ByDepthMin | For all components at a given depth, identify the minimal value. Then generate the statistics across all depths.
{max, min, avg, stdev}ByDepthMinRatio | For each meta-feature in the previous line, divide its value by the value of the same meta-feature calculated at initialization.
{max, min, avg, stdev}ByDepthAvg | For all components at a given depth, identify the average value. Then generate the statistics across all depths.
{max, min, avg, stdev}ByDepthAvgRatio | For each meta-feature in the previous line, divide its value by the value of the same meta-feature calculated at initialization.
{max, min, avg, stdev}ByDepthStdev | For all components at a given depth, identify the standard deviation of the values. Then generate the statistics across all depths.
{max, min, avg, stdev}ByDepthStdevRatio | For each meta-feature in the previous line, divide its value by the value of the same meta-feature calculated at initialization.
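The Global*/Ratio pattern of Table 9 is mechanical enough that a short sketch may help. The following is an illustrative numpy version under the assumption that each component's analyzed trait (e.g. its weights) is available as a flat array, both currently and as captured at initialization; it is not the paper's code.

```python
import numpy as np

AGGS = [('Max', np.max), ('Min', np.min), ('Avg', np.mean), ('Stdev', np.std)]

def global_training_features(per_component, per_component_init):
    """per_component: list of arrays, one per component (current trait values);
    per_component_init: the same arrays captured at initialization."""
    feats = {}
    for agg_name, agg in AGGS:
        now = [agg(v) for v in per_component]        # one value per component
        init = [agg(v) for v in per_component_init]
        for stat_name, stat in AGGS:
            feats[f'{stat_name.lower()}Global{agg_name}'] = stat(now)
            # the *Ratio variant divides by the same statistic at initialization
            feats[f'{stat_name.lower()}Global{agg_name}Ratio'] = stat(now) / (stat(init) + 1e-12)
    return feats
```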
B FULL INFORMATION ON THE DATASETS USED IN THE EVALUATION

Table 10: The characteristics of the datasets used in the experiments.

Name | Num of Data Points | % of Minority Class | Num of Features | % of Numeric Features
German Credit | 1,000 | 30% | 20 | 30%
Contraceptive | 1,473 | 22.6% | 9 | 66.6%
Cardiography | 2,126 | 22.1% | 22 | 100%
Seismic bumps | 2,584 | 6.5% | 18 | 77%
Space | 3,107 | 49.5% | 6 | 100%
Page Blocks | 5,473 | 9.3% | 10 | 100%
Wind | 6,574 | 46.7% | 14 | 100%
Puma 32 | 8,192 | 49.6% | 32 | 100%
CPU act | 8,192 | 30.2% | 21 | 100%
Delta elevators | 9,517 | 49.7% | 6 | 100%
Mammography | 11,183 | 2.3% | 6 | 100%
Ailerons | 13,750 | 42.3% | 40 | 100%
Bank marketing | 45,211 | 11.6% | 16 | 43.75%

C ANALYSIS OF THE PERFORMANCE OF PARALLEL LAYERS

For each dataset, we analyze the 100 top-performing architectures and determine the percentage of architectures with parallel layers. The results, presented in Table 11, show that the percentage is significant. In Table 12 we analyze the component composition of these architectures. The most interesting point (in our view) is that the number of fully-connected layers is about half of the possible maximum. We take this as an indication that the creation of very deep DNNs may not be required for tabular datasets of the type analyzed in this work.
In Figure 4 we present an example of an architecture with parallel layers that was among the 100 top-performing on the Ailerons dataset.

Table 11: The percentage of architectures with parallel layers in the 100 top-performing architectures for each dataset.

Dataset | % of architectures with parallel layers
Contraceptive | 61%
Seismic bumps | 60%
Page Blocks | 65%
Wind | 61%
Puma 32 | 59%
CPU act | 64%
Delta elevators | 73%
Mammography | 61%
Ailerons | 61%
Bank marketing | 62%
German Credit | 59%
Space | 49.6%
Cardiography | 64%
Average | 61%

Table 12: The average number of component types per architecture for the 100 top-performing architectures of each dataset.

Dataset | Concat | FC | Batchnorm | Dropout | ReLU | Sigmoid | Tanh | Softmax
Contraceptive | 0.53 | 3.54 | 2.57 | 0.91 | 1.71 | 0.51 | 0.43 | 1
Seismic bumps | 0.47 | 3.53 | 1.75 | 1.62 | 1.68 | 0.64 | 0.48 | 1
Page Blocks | 0.67 | 3.46 | 1.59 | 0.35 | 1.22 | 0.56 | 0.6 | 1
Wind | 0.57 | 3.65 | 2.47 | 0.3 | 1.67 | 0.34 | 0.52 | 1
Puma 32 | 0.55 | 3.56 | 1.84 | 0.94 | 1.6 | 0.46 | 0.57 | 1
CPU act | 0.65 | 3.95 | 3.23 | 0.2 | 1.91 | 0.3 | 0.63 | 1
Delta elevators | 0.64 | 3.4 | 1.81 | 0.49 | 1.34 | 0.38 | 0.59 | 1
Mammography | 0.69 | 3.6 | 3 | 0.22 | 1.63 | 0.36 | 0.49 | 1
Ailerons | 0.62 | 3.68 | 2.34 | 0.69 | 1.58 | 0.39 | 0.55 | 1
Bank marketing | 0.52 | 3.41 | 2.23 | 1.19 | 1.82 | 0.53 | 0.35 | 1
German Credit | 0.5 | 3.66 | 2.39 | 1.1 | 2 | 0.42 | 0.31 | 1
Space | 0.63 | 3.8 | 3.32 | 0.28 | 2.29 | 0.35 | 0.35 | 1
Cardiography | 0.55 | 3.61 | 3.29 | 0.1 | 2.34 | 0.26 | 0.35 | 1

[Figure 4: An example of an architecture with parallel layers, composed of Input, Batchnorm, Fully Connected, ReLU, Sigmoid, Concat, Dropout and Output components.]

D ACCURACY DISTRIBUTION OF THE GENERATED ARCHITECTURES ACROSS THE EVALUATED DATASETS

[Figures 5-14: accuracy distributions of the generated architectures for the Ailerons, Contraceptive, Delta elevators, Page blocks, Seismic bumps, Bank marketing, CPU, Mammography, Puma 32NH and Wind datasets.]
Under review as a conference paper at ICLR 2017

FINDING A JACK-OF-ALL-TRADES: AN EXAMINATION OF SEMI-SUPERVISED LEARNING IN READING COMPREHENSION

Rudolf Kadlec, Ondrej Bajgar, Peter Hrincar & Jan Kleindienst
IBM Watson
V Parku 4, 140 00 Prague, Czech Republic
{rudolf_kadlec, obajgar, phrincar, jankle}@cz.ibm.com
(These authors contributed equally to this work.)

ABSTRACT

Deep learning has proven useful on many NLP tasks including reading comprehension. However, it requires large amounts of training data which are not available in some domains of application. Hence we examine the possibility of using data-rich domains to pre-train models and then apply them in domains where training data are harder to get. Specifically, we train a neural-network-based model on two context-question-answer datasets, the BookTest and CNN/Daily Mail, and we monitor transfer to subsets of bAbI, a set of artificial tasks designed to test specific reasoning abilities, and of SQuAD, a question-answering dataset which is much closer to real-world applications. Our experiments show very limited transfer if the model is not shown any training examples from the target domain; however, the results are encouraging if the model is shown at least a few target-domain examples. Furthermore we show that the effect of pre-training is not limited to word embeddings.

1 INTRODUCTION

Machine intelligence has had some notable successes, however often in narrow domains which are sometimes of little practical use to humans – for instance games like chess (Campbell et al., 2002) or Go (Silver et al., 2016). If we aimed to build a general AI that would be able to efficiently assist humans in a wide range of settings, we would want it to have a much larger set of skills – among them would be an ability to understand human language, to perform common-sense reasoning and to be able to generalize its abilities to new situations like humans do.

If we want to achieve this goal through Machine Learning, we need data to learn from – a lot of data if the task at hand is complex, which is the case for many useful tasks. One way to achieve wide applicability would be to provide training data for each specific task we would like the machine to perform. However it is unrealistic to obtain a sufficient amount of training data for some domains – it may for instance require expensive human annotation, or all domains of application may be difficult to predict in advance – while the amount of training data in other domains is practically unlimited (e.g. in language modelling or cloze-style question answering).

The way to bridge this gap – and to achieve the aforementioned adaptability – is transfer learning (Pan & Yang, 2010) and the closely related semi-supervised learning (Zhu & Goldberg, 2009), which allow the system to acquire a set of skills on domains where data are abundant and then use these skills to succeed on previously unseen domains. Despite how important generalization is for general AI, a lot of research keeps focusing on solving narrow tasks.

In this paper we would like to examine transfer of learnt skills and knowledge within the domain of text comprehension, a field that has lately attracted a lot of attention within the NLP community (Hermann et al., 2015; Hill et al., 2015; Kobayashi et al., 2016; Kadlec et al., 2016b; Chen et al., 2016; Sordoni et al., 2016; Dhingra et al., 2016; Trischler et al., 2016; Weissenborn, 2016; Cui et al., 2016b;a; Li et al., 2016; Shen et al., 2016).
Specifically, we would like to address the following researchquestions:1.Whether we could train models on natural-language tasks where data are abundant andtransfer the learnt skills to tasks where in-domain training data may be difficult to obtain.We will first look into what reasoning abilities a model learns from two large-scale reading-comprehension datasets using artificial tasks, and then check whether it can transfer its skillsto real world tasks. Spoiler: both these transfers are very poor if we allow no training at allon the target task.2.Whether pre-training on large-scale datasets does help if we allow the model to train on asmall sample of examples from the target tasks. Here the results are much more positive.3.Finally we examine whether the benefits of pre-training are concentrated in any particularpart of the model - namely the word-embedding part or the context encoder (the reasoningpart). It turns out that pre-training is useful for both components.Although our results do not improve current state of the art in any of the studied tasks, they show aclear positive effect of large-dataset pre-training on the performance of our baseline machine-learningmodel. Previous studies of transfer learning and semi-supervised learning in NLP focused on textclassification (Dai & Le, 2015; Mou et al., 2016) and various parsing tasks (Collobert et al., 2011;Hashimoto et al., 2016). To our knowledge this work is the first study of transfer learning in readingcomprehension, and we hope it will stimulate further work in this important area.We will first briefly introduce the datasets we will be using on the pre-training and target sides,then our baseline model and afterwards in turn describe the method and results of each of the threeexperiments.2 D ATASETS2.1 P RE-TRAINING DATASETSWe have mentioned that for the model pre-training we would want to use a task where training dataare abundant. An example of such task is context-dependent cloze-style-question answering since thetraining data for this task can be generated automatically from a suitable corpus. We will use twosuch pre-training datasets in our experiments: the BookTest (Bajgar et al., 2016) and the CNN/DailyMail (CNN/DM) news dataset (Hermann et al., 2015).The task associated with both datasets is to answer a cloze-style question (i.e. fill in a blank ina sentence) the answer to which needs to be inferred from a context document provided with thequestion.2.1.1 B OOK TESTIn the BookTest dataset, the context document is formed from 20 consecutive sentences from a book.The question is then formed by omitting a common noun or a named entity from the subsequent 21stsentence. Among datasets of this kind, the BookTest is among the largest with more than 14 milliontraining examples coming from 3555 copyright-free books avalable thanks to Project Gutenberg.2.1.2 CNN/D AILY MAILIn the CNN/DM dataset the context document is formed from a news article while the cloze-stylequestion is formed by removing a named entity from one of the short summary sentences which oftenappear at the top of the article.To stop the model from using world knowledge from outside the context article (and hence truly testthe comprehension of the article), all named entities were replaced by anonymous tags, which arefurther shuffled for each example. 
2.1.2 CNN/DAILY MAIL

In the CNN/DM dataset the context document is formed from a news article, while the cloze-style question is formed by removing a named entity from one of the short summary sentences which often appear at the top of the article.

To stop the model from using world knowledge from outside the context article (and hence truly test the comprehension of the article), all named entities were replaced by anonymous tags, which are further shuffled for each example. This may make the comprehension more difficult; however, since the answer is always one of the anonymized entities, it also reduces the number of possible answers, making guessing easier.
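A minimal sketch of this anonymization step is given below. It assumes an upstream NER step supplies the entity strings and treats entities as single tokens for simplicity; the per-example shuffle is the key point, since it prevents the tags themselves from carrying information across examples. This is our illustration, not the dataset's released preprocessing code.

```python
import random

def anonymize(tokens, entities):
    """Replace each entity occurrence by an @entityN tag, shuffled per example."""
    ids = list(range(len(entities)))
    random.shuffle(ids)                              # new tag assignment for every example
    mapping = {ent: f'@entity{ids[k]}' for k, ent in enumerate(entities)}
    return [mapping.get(tok, tok) for tok in tokens]

# anonymize('Obama met Putin in Prague'.split(), ['Obama', 'Putin'])
# -> e.g. ['@entity1', 'met', '@entity0', 'in', 'Prague']
```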
2.2 TARGET DATASETS

2.2.1 BABI

The first target dataset are the bAbI tasks (Weston et al., 2016) – a set of artificial tasks, each of which is designed to test a specific kind of reasoning. This toy dataset will allow us to observe what particular skills the model may be learning from each of the three training datasets.

For our experiments we will be using an architecture designed to select one word from the context document as the answer. Hence we have selected Tasks 1, 2, 3, 4, 5, 11, 12, 13, 14 and 16, which fulfill this requirement, and added Task 15, which required a slight modification. Furthermore, because both pre-training datasets are cloze-style, we also converted the bAbI task questions into cloze style (e.g. "Where is John?" to "John is in the XXXXX.").

For the models pre-trained on CNN/DM we also anonymized the tasks in a way similar to the pre-training dataset, i.e. we replaced all names of characters and also all words that can appear as answers for the given task by anonymous tags in the style of CNN/DM. This gives even models that have not seen any training examples from the target domain a chance to answer the questions. Full details about these alterations can be found in Appendix A.

2.2.2 SQUAD

Secondly, we will look at transfer to the SQuAD dataset (Rajpurkar et al., 2016); here the associated task may already be useful in the real world. Although cloze-style questions have the huge advantage of being automatically generatable from a suitable corpus – the path taken by CNN/DM and the BookTest – in practice humans would use a proper question, not its cloze-style substitute. This brings us to the need for transfer from the data-rich cloze-style training to the domain of proper questions, where data are much scarcer due to the necessary human annotation.

The SQuAD dataset is a great target dataset to use for this. As opposed to the bAbI tasks, the goal of this dataset is actually a problem whose solving would be useful to humans: answering natural questions based on a natural-language encyclopedic knowledge base.

For our experiments we selected only a subset of the SQuAD training and development examples where the answer is a single word, since this is an inherent assumption of our machine-learning model. This way we extracted 28,346 training examples out of the original 100,000 examples and 3,233 development examples out of 10,570.

3 MACHINE LEARNING MODEL: AS READER

We perform our experiments using the Attention Sum Reader (AS Reader) (Kadlec et al., 2016b) model. The AS Reader is simple to implement while achieving strong performance on several text-comprehension tasks (Kadlec et al., 2016b; Bajgar et al., 2016; Chu et al., 2016). Since the AS Reader is a building block of many recent text-comprehension models (Trischler et al., 2016; Sordoni et al., 2016; Dhingra et al., 2016; Cui et al., 2016b;a; Shen et al., 2016; Munkhdalai & Yu, 2016), it is a good representative of current research in this field.

A high-level structure of the AS Reader is shown in Figure 1. The words from the document and the question are first converted into vector embeddings using a look-up matrix. The document is then read by a bidirectional Gated Recurrent Unit (GRU) network (Cho et al., 2014). A concatenation of the hidden states of the forward and backward GRUs at each word is then used as a contextual embedding of this word, intuitively representing the context in which the word appears. We can also understand it as representing the set of questions to which this word may be an answer.

Similarly, the question is read by a bidirectional GRU, but in this case only the final hidden states are concatenated to form the question embedding.

The attention over each word in the context is then calculated as the dot product of its contextual embedding with the question embedding. This attention is then normalized by the softmax function and summed across all occurrences of each answer candidate. The candidate with the most accumulated attention is selected as the final answer. For a more detailed description of the model including equations, see Kadlec et al. (2016b).

[Figure 1: Structure of the AS Reader model: word embeddings feed a bidirectional-GRU document encoder and a bidirectional-GRU question encoder, whose outputs are combined into an attention over the document.]
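The final attention-sum step is simple enough to show directly. Below is a numpy sketch of that step alone, assuming the GRU encoders have already produced the contextual embeddings and the question vector; it follows the description above but is our illustration, not the authors' code.

```python
import numpy as np

def attention_sum(contextual_emb, question_emb, doc_tokens, candidates):
    """contextual_emb: (doc_len, 2*hidden); question_emb: (2*hidden,)."""
    scores = contextual_emb @ question_emb           # dot-product attention per position
    att = np.exp(scores - scores.max())
    att /= att.sum()                                 # softmax over document positions
    # sum the attention over all occurrences of each candidate (the "attention sum")
    totals = {c: att[[i for i, t in enumerate(doc_tokens) if t == c]].sum()
              for c in candidates}
    return max(totals, key=totals.get)               # most accumulated attention wins
```

Summing over occurrences is what lets a word that appears many times in the document accumulate evidence, which is a good fit for cloze-style answers drawn from the context.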
4 EXPERIMENTS: TRANSFER LEARNING IN TEXT COMPREHENSION

Now let us turn in more detail to the three kinds of experiments that we performed.

4.1 PRE-TRAINED WITHOUT TARGET ADJUSTMENT

In the first experiment we tested how a model trained on one of the large-scale pre-training datasets performs on the bAbI tasks without any opportunity to train on bAbI. Since the BookTest and CNN/DM tasks involve only cloze-style questions, we cannot expect a model trained on them to answer natural ?-style questions. Hence we did not study the transfer to SQuAD in this case, only the transfer to the (cloze-converted) bAbI tasks.

4.1.1 METHOD

First we tested how the AS Reader architecture (Kadlec et al., 2016b) can handle the tasks if trained directly on the bAbI training data for each task. Then we tested the degree of transfer from the BookTest and CNN/DM data to the 11 selected bAbI tasks.

In the first part of the experiment we trained a separate instance of the AS Reader on the 10,000-example version of the bAbI training data for each of the 11 tasks (for more details see Appendix B.1). On 8 of them the architecture was able to learn the task with accuracy at least 95% [1] (results for each task can be found in Table 4 in Appendix C). Hence, given appropriate training, the AS Reader is capable of the reasoning needed to solve most of the selected bAbI tasks. Now that we know the AS Reader is powerful enough to learn the target tasks, we can turn to transfer from the two large-scale datasets.

The main part of this first experiment was then straightforward: we pre-trained multiple models on the BookTest and CNN/DM datasets and then simply evaluated them on the test datasets of the 11 selected bAbI tasks.

[1] It should be noted that there are several machine learning models that perform better than the AS Reader in the 10k weakly supervised setting, e.g. (Sukhbaatar et al., 2015; Xiong et al., 2016; Graves et al., 2016); however, they often need significant fine-tuning, whereas we trained a plain AS Reader model without any modifications. Hyperparameter and feature fine-tuning could probably further increase its performance on individual tasks, however that goes directly against the idea of generality that is at the heart of this work. For comparison with the state of the art we include results of DMN+ (Xiong et al., 2016) in Table 1, which had the best average performance over the original 20 tasks.

4.1.2 RESULTS

Table 1 summarizes the results of this experiment. Both the models trained on the BookTest and those trained on the CNN/DM dataset perform quite poorly on bAbI and achieve much lower accuracy than the models trained directly on each individual bAbI task. However, there is some transfer between the tasks, since the AS Reader trained on either the BookTest or CNN/DM outperforms a random baseline [2] and even an improved baseline which selects the most frequent word from the context that also appears as an answer in the training data for this task.

Table 1: The mean performance across the 11 bAbI tasks. The first two columns show a random baseline [2] and a baseline that selects the most frequent word from the context which also appears as an answer in the training data for the task. The following three columns show the performance of the AS Reader trained on different datasets; the last column shows the results of DMN+ (Xiong et al., 2016), the state-of-the-art model on the bAbI 10k dataset. For more detailed results listing per-task accuracies see Appendix C.

Model | Rnd. | Most freq. cand. | AS Reader | AS Reader | AS Reader | DMN+
Train dataset | not trained | bAbI 10k | BookTest 14M | CNN/DM 1.2M | bAbI 10k | bAbI 10k
bAbI mean (11 tasks) | 6.1 | 29.9 | 34.8 | 38.1 | 92.7 | 95.7

The results also show that the models trained on CNN/DM perform somewhat better on most tasks than the BookTest models. This may be due to the fact that the bAbI tasks generally require the model to summarize information from the context document, which is also what the CNN/DM dataset is testing. On the other hand, the BookTest requires prediction of a possible continuation of a story, where the required kind of reasoning is much less clear but certainly different from pure summarization. Another explanation for the better performance of CNN/DM models might be that they solve a slightly simpler task, since the candidate answers were already pre-selected in the entity-anonymization step.

Readers interested in how the training-dataset size affects this kind of transfer can check Kadlec et al. (2016a), where we show that the target-task performance is a bit better if we use the large BookTest as opposed to its smaller subset, the Children's Book Test (CBT) (Hill et al., 2015).

The conclusion from this experiment is that the skills learned from two large-scale datasets generalize surprisingly poorly to even simple toy tasks. This may make us ask whether most teams' focus on solving narrow tasks is truly beneficial if the skills learnt on these tasks are hard to apply elsewhere. However, it also brings us to our next experiment, where we try to provide some help to the struggling pre-trained models.

[2] The random baseline selects uniformly at random among all unique words contained in the context document.

4.2 PRE-TRAINED WITH TARGET ADJUSTMENT

After showing that the skills learnt from the BookTest and CNN/DM datasets are by themselves insufficient for solving the toy tasks, the next natural question is whether they are useful if helped by training on a small sample of examples from the target task. We call this additional phase of training target adjustment. For this experiment we again use the bAbI tasks; however, we also test transfer to a subset of the SQuAD dataset, which is much closer to real-world natural-language question answering.

The results presented in this and the following section are based on training 3701 model instances.

4.2.1 METHOD
Common to the bAbI and SQuAD datasets. In this experiment we started with a pre-trained model which we used in the previous experiment. However, after it finished training on one of the large pre-training datasets, we allowed it to train on a subset of training examples from the target dataset. We tried subsets of various sizes, ranging from a single example to thousands. We tried training four different pre-trained models and also, for comparison, four randomly-initialized models with the same hyperparameters (see Appendix B.2 for details). The experiment with each task-model couple was run on 4 different data samples of each size, randomly drawn from the training dataset of the task, to account for variations between these random samples – which may be substantial given the small sample size. [3]

bAbI. For each of these models we observed the test accuracy at the best-validation epoch and compared this number between the randomly initialized and pre-trained models. Validation was done using 100 examples which were set aside from the task's original 10k training data. [4] We performed the experiment with models pre-trained on the BookTest and also on CNN/DM.

SQuAD subset. In the SQuAD experiment, we trained the model on a subset of the original training dataset where answers were only single words, and on its sub-subsets. We report the best-validation accuracy on a development set filtered in the same way. This experiment was performed only with the models pre-trained on the BookTest.

[3] We are planning to release the split training datasets soon.
[4] The other models trained on the full 10k dataset usually use 1000 validation examples (Sukhbaatar et al., 2015; Xiong et al., 2016); however, we wanted to focus on the low-data regime and thus used 10 times fewer examples.

[Figure 2: Sub-figure (a) shows the average across the 11 bAbI tasks of the best-validation model's test accuracy. (b) shows the test accuracy on SQuAD of each model we trained (the points); the lines join the accuracies of the best-validation models for each training size.]
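The sampling protocol just described is easy to misread, so here is a schematic of it: for each training-set size, 4 random draws are made and a model is fine-tuned on each, keeping the best-validation test accuracy. The train_and_validate callable is a placeholder for restoring a pre-trained (or random) AS Reader and fine-tuning it; this is our sketch of the protocol, not the authors' experiment code.

```python
import random

def target_adjust(train_examples, sizes, train_and_validate, draws=4):
    """sizes: e.g. [1, 10, 100, 500, 1000, 5000]; returns accuracies per size."""
    results = {}
    for n in sizes:
        accs = []
        for _ in range(draws):                       # 4 random samples of each size
            sample = random.sample(train_examples, n)
            accs.append(train_and_validate(sample))  # best-validation test accuracy
        results[n] = accs
    return results
```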
4.2.2 RESULTS

The results of these experiments are summarized in Figures 2 and 3.

[Figure 3: Examples of 3 bAbI tasks (Tasks 1, 4 and 5) where pre-training seems to help, plotting test accuracy against the number of training examples for BookTest and CNN/DM pre-trained and randomly initialized models. Note that the tasks may be easier for the CNN/DM models due to answer anonymization, which restricts the choice of possible answers.]

bAbI. Sub-figure 2a shows the mean test accuracy of the models that achieved the best validation result for each single task. The results for both the BookTest and CNN/DM experiments confirm the positive effect of pre-training compared to the randomly initialized baseline. Figure 3 shows performance on selected bAbI tasks where pre-training has a clearly positive effect; such a plot for each of the target tasks is provided in Appendix C.2 (Figure 4).

Note that the CNN/DM models cannot be directly compared to the BookTest results due to entity anonymization, which seems to simplify the task when the model is trained on smaller datasets.

Since our evaluation methodology with different training-set sizes is novel, we can compare our result only to MemN2N (Sukhbaatar et al., 2015) trained on a 1k dataset.
MemN2N is the only weakly supervised model that reports accuracy when trained on fewer than 10k examples. MemN2N achieves an average accuracy of 93.2% [5] on the eleven selected tasks. This is substantially better than both our random baseline (78.0%) and the BookTest-pre-trained model (79.5%); however, our model is not tuned in any way towards this particular task. One important conceptual difference is that the AS Reader processes the whole context as one sequence of words, whereas MemN2N receives the context split into single sentences, which simplifies the task for the network.

SQuAD subset. The results of the SQuAD experiment also confirm the positive effect of pre-training; see Sub-figure 2b. For now compare just the lines showing the performance of the fully pre-trained model and the randomly initialized model; the meaning of the remaining two lines shall become clear in the next section. More detailed statistics about the results of this experiment can be found in Appendix D.

We should note that the performance of our model is not competitive with the state-of-the-art models on this dataset. For instance, the DCR model (Yu et al., 2016) trained on our SQuAD subset achieves a validation accuracy of 74.9% on this task, which is better than our randomly initialized (35.4%) and pre-trained (51.6%) models. [6] However, the DCR model is designed specifically for the SQuAD task; for instance, it utilizes features that are not used by our model.

[5] MemN2N trained on each single task with PE LS RN features; see (Sukhbaatar et al., 2015) for details.
[6] We would like to thank Yu et al. (2016) for training their system on our dataset.

4.3 PARTIALLY PRE-TRAINED MODEL

Since our previous experiment confirmed the positive effect of pre-training when followed by target-domain adjustment, we wondered which part of the model contains the knowledge transferable to new domains. To examine this we performed the following experiment.

4.3.1 METHOD

Our machine learning model, the AS Reader, consists of two main parts: the word-embedding look-up and the bidirectional GRUs used to encode the document and question (see Figure 1). Therefore a natural question is what the contribution of each of these parts is.

To test this we created two models out of each pre-trained model used in the previous experiment. The first model variant uses the pre-trained word embeddings from the original model while the GRU encoders are randomly initialized. We say that this model has pre-trained embeddings. The second model variant uses the opposite setting, where the word embeddings are randomly initialized while the encoders are taken from a pre-trained model. We call this pre-trained encoders.

bAbI. For this experiment we selected only a subset of tasks with a training set of 100 examples where there was a significant difference in accuracy between the randomly-initialized and pre-trained models. For evaluation we use the same methodology as in the previous experiment, that is, we report the accuracy of the best-validation model averaged over 4 training splits.

SQuAD subset. We evaluated both model variants on all training sets from the previous SQuAD experiment using the same methodology.
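Building the two variants amounts to copying only part of the pre-trained parameters into a freshly initialized model. The PyTorch-style sketch below shows one way this could be done; the substring-based module names are illustrative assumptions about how the embedding and GRU parameters are named, not the authors' code.

```python
def partially_pretrained(fresh_model, pretrained_state, part):
    """part: 'embeddings' keeps only the pre-trained look-up; 'encoders' keeps only the GRUs."""
    keep = 'embedding' if part == 'embeddings' else 'gru'
    state = fresh_model.state_dict()                 # randomly initialized weights
    for name, tensor in pretrained_state.items():
        if keep in name:                             # overwrite just the chosen part
            state[name] = tensor
    fresh_model.load_state_dict(state)
    return fresh_model
```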
Table 2: The effect of pre-training different components of the model for selected tasks. The first row shows the performance (average test accuracy across all trained model instances in each category) of a randomly initialized baseline model. The following three rows show the increase in accuracy (measured in percent absolute) when the model is initialized with weights pre-trained on the BookTest. The last line shows results for models initialized with Google News word2vec word embeddings (Mikolov et al., 2013).

Model variant | bAbI Task 1 (100 ex.) | Task 5 | Task 11 | Task 14 | SQuAD (28k ex.)
Random init | 53% | 66% | 71% | 33% | 31%
Pre-trained encoders | +6 | +25 | +4 | +2 | +4
Pre-trained embeddings | +17 | +6 | +8 | +8 | +10
Pre-trained full | +34 | +22 | +14 | +13 | +17
Pre-trained word2vec | -2 | +5 | +1 | -1 | +5

4.3.2 RESULTS

bAbI. Table 2 shows the improvement of pre-trained models over a randomly initialized baseline. In most cases (all except Task 5) the fully pre-trained model achieved the best accuracy.

SQuAD subset. The accuracies of the four model variants are plotted in Figure 2b together with the results of the previous SQuAD experiment. The graph shows that both pre-trained embeddings and pre-trained encoders alone improve performance over the randomly initialized baseline; however, the fully pre-trained model is always the best.

The overall result of this experiment is that both pre-training of the word embeddings and pre-training of the encoder parameters are important, since the fully pre-trained model outperforms both partially pre-trained variants.

5 CONCLUSION

Our experiments show that transfer from two large cloze-style question-answering datasets to our two target tasks is surprisingly poor if the models are not provided with any examples from the target domain. However, we show that pre-trained models perform significantly better than a randomly initialized model if they are shown at least a few training examples from the target domain.

The usefulness of pre-trained word embeddings is well known in the NLP community; however, we show that the power of our pre-trained model does not lie just in the embeddings. This suggests that once the text-comprehension community agrees on a sufficiently versatile model, much larger parts of the model could start being reused than just the word embeddings.

The generalization of skills from a training domain to new tasks is an important ingredient of any system we would want to call intelligent. This work is an early step in exploring this direction.

REFERENCES

Ondrej Bajgar, Rudolf Kadlec, and Jan Kleindienst. Embracing data abundance: BookTest dataset for reading comprehension. arXiv preprint arXiv:1610.00956, 2016.

Murray Campbell, A. Joseph Hoane, and Feng-hsiung Hsu. Deep Blue. Artificial Intelligence, 134(1):57–83, 2002.

Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the CNN / Daily Mail reading comprehension task. In Association for Computational Linguistics (ACL), 2016.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Empirical Methods in Natural Language Processing (EMNLP), 2014. URL http://arxiv.org/abs/1406.1078v3.

Zewei Chu, Hai Wang, Kevin Gimpel, and David McAllester. Broad context language modeling as reading comprehension. 2016.

Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2461–2505, 2011.

Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. Attention-over-attention neural networks for reading comprehension. 2016a. URL http://arxiv.org/abs/1607.04423.

Yiming Cui, Ting Liu, Zhipeng Chen, Shijin Wang, and Guoping Hu. Consensus attention-based neural networks for Chinese reading comprehension. 2016b.
Andrew M. Dai and Quoc V. Le. Semi-supervised sequence learning. In NIPS, 2015. URL http://arxiv.org/abs/1511.01432.

Bhuwan Dhingra, Hanxiao Liu, William W. Cohen, and Ruslan Salakhutdinov. Gated-attention readers for text comprehension. 2016. URL http://arxiv.org/abs/1606.01549.

Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adrià Puigdomènech Badia, Karl Moritz Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, and Demis Hassabis. Hybrid computing using a neural network with dynamic external memory. Nature, 2016. doi: 10.1038/nature20101.

Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. A joint many-task model: Growing a neural network for multiple NLP tasks. Submitted to ICLR 2017, 2016.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1684–1692, 2015.

Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The Goldilocks principle: Reading children's books with explicit memory representations. arXiv preprint arXiv:1511.02301, 2015.

Rudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst. From particular to general: A preliminary case study of transfer learning in reading comprehension. MAIN Workshop at NIPS, 2016a.

Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Neural text understanding with attention sum reader. In Proceedings of ACL, 2016b.

Sosuke Kobayashi, Ran Tian, Naoaki Okazaki, and Kentaro Inui. Dynamic entity representation with max-pooling improves machine reading. In Proceedings of the North American Chapter of the Association for Computational Linguistics and Human Language Technologies (NAACL-HLT), 2016.

Peng Li, Wei Li, Zhengyan He, Xuguang Wang, Ying Cao, Jie Zhou, and Wei Xu. Dataset and neural recurrent sequence labeling model for open-domain factoid question answering. 2016. URL https://arxiv.org/abs/1607.06275.

Tomas Mikolov, Greg Corrado, Kai Chen, and Jeffrey Dean. Efficient estimation of word representations in vector space. In Proceedings of the International Conference on Learning Representations (ICLR 2013), 2013. URL http://arxiv.org/pdf/1301.3781v3.pdf.

Lili Mou, Zhao Meng, Rui Yan, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. How transferable are neural networks in NLP applications? In EMNLP, 2016.

Tsendsuren Munkhdalai and Hong Yu. Reasoning with memory augmented neural networks for language comprehension. 2016. URL https://arxiv.org/abs/1610.06454v1.

Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359, October 2010. doi: 10.1109/TKDE.2009.191.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. 2016. URL http://arxiv.org/abs/1606.05250.

Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. ReasoNet: Learning to stop reading in machine comprehension. 2016. URL http://arxiv.org/abs/1609.05284.
David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016. doi: 10.1038/nature16961.

Alessandro Sordoni, Phillip Bachman, and Yoshua Bengio. Iterative alternating neural attention for machine reading. 2016.

Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. pp. 1–11, 2015. URL http://arxiv.org/abs/1503.08895.

Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the EpiReader. 2016. URL http://arxiv.org/abs/1606.02270.

Dirk Weissenborn. Separating answers from queries for neural reading comprehension. 2016. URL http://arxiv.org/abs/1607.03316.

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. 2016. URL https://arxiv.org/abs/1502.05698.

Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. In ICML, 2016. URL http://arxiv.org/abs/1603.01417.

Yang Yu, Wei Zhang, Kazi Hasan, Mo Yu, Bing Xiang, and Bowen Zhou. End-to-end reading comprehension with dynamic answer chunk ranking. 2016. URL http://arxiv.org/abs/1610.09996.

Xiaojin Zhu and Andrew B. Goldberg. Introduction to semi-supervised learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 3(1):1–130, 2009.

A CLOZE-STYLE BABI DATASET

Since our AS Reader architecture is designed to select a single word from the context document as an answer (the task of the CBT and BookTest), we selected 10 bAbI tasks that fulfill this requirement out of the original 20. These tasks are: 1. single supporting fact, 2. two supporting facts, 3. three supporting facts, 4. two argument relations, 5. three argument relations, 11. basic coreference, 12. conjunction, 13. compound coreference, 14. time reasoning and 16. basic induction.

Task 15 needed a slight modification to satisfy this requirement: we converted the answers into plural (e.g. "Q: What is Gertrude afraid of? A: wolf." was converted into "A: wolves", which also seems to be the more natural way to formulate the answer to such a question).

Also, since the CBT and BookTest train the model for cloze-style question answering, we modify the original bAbI dataset by reformulating the questions into cloze style. For example, we translate the question "Where is John?" to "John is in the XXXXX."

For the models pre-trained on CNN/DM we also replace two kinds of words by anonymized tags (e.g. "@entity56") in a style similar to the pre-training dataset. Specifically we replace two (largely overlapping) categories of words:

1. Proper names of story characters (e.g. John, Sandra)
2. Any word that can appear as an answer for the particular task (e.g. kitchen, garden if the task is asking about locations).
B METHOD DETAILS

B.1 DIRECT TRAINING ON BABI – METHOD

Here we give a more detailed description of the method we used to arrive at our results, highlighting only the facts particular to this experiment. A more detailed general description of training the AS Reader is given in (Kadlec et al., 2016b).

The results given for the AS Reader trained on bAbI are each for a single model with 64 hidden units in each direction of the GRU context encoder and embedding dimension 32, trained on the 10k training data provided with that particular task.

The results for the AS Reader trained on the BookTest and the CNN/DM are for a greedy ensemble consisting of 4 models whose predictions were simply averaged. The models and the ensemble were all validated on the validation set corresponding to the training dataset. The performance on the bAbI tasks oscillated notably during training; however, the ensemble averaging does somewhat mitigate this, yielding more representative numbers.
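The prediction averaging behind this ensemble is straightforward; here is a sketch of it, assuming each member model exposes a probability distribution over answer candidates. The predict_proba method is a hypothetical interface of ours, not the AS Reader's actual API.

```python
import numpy as np

def ensemble_predict(models, example, candidates):
    """Average the member models' candidate distributions and pick the argmax."""
    probs = np.mean([m.predict_proba(example) for m in models], axis=0)
    return candidates[int(np.argmax(probs))]
```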
B.2 HYPERPARAMETERS FOR THE TARGET-ADJUSTMENT EXPERIMENTS

Table 3 lists the hyperparameters of the pre-trained AS Reader instances used in our experiments with target adjustment.

Table 3: Hyperparameters for both the randomly initialized and the pre-trained models.

Dataset | Hid. Units | Emb. | L. rate | Dropout
BookTest | 768 | 256 | 0.0001 | 0
BookTest | 384 | 384 | 0.0005 | 0.2
BookTest | 384 | 384 | 0.0005 | 0.4
BookTest | 512 | 384 | 0.0001 | 0
CNN/DM | 128 | 128 | 0.001 | 0
CNN/DM | 256 | 128 | 0.001 | 0
CNN/DM | 384 | 128 | 0.001 | 0
CNN/DM | 384 | 384 | 0.001 | 0

C DETAILED RESULTS

C.1 EXPERIMENTS WITHOUT TARGET ADJUSTMENT

Table 4 shows detailed results for the experiments on models which were just pre-trained on one of the pre-training datasets without any target adjustment. It also shows several baselines and the results of a state-of-the-art model.

C.2 TARGET-ADJUSTMENT EXPERIMENTS

C.2.1 RESULTS FOR ALL BABI TASKS

Figure 4 shows the test accuracies of all models that we trained in the target-adjustment experiments, as well as lines joining the accuracies of the best-validation models.

Table 4: Performance of the AS Reader when trained on the bAbI 10k, BookTest and CNN/DM datasets and then evaluated on bAbI test data. The Dynamic Memory Network (DMN+) is the state-of-the-art model in a weakly supervised setting on the bAbI 10k dataset; its results are taken from (Xiong et al., 2016). MemN2N (Sukhbaatar et al., 2015) is the state-of-the-art model on the 1k training dataset; for completeness we also include its results with the 10k training data. Each column lists a model and its training dataset.

Task | Random (not trained) | Rnd cand. (bAbI 10k) | MemN2N PE LS RN (bAbI 1k) | MemN2N PE LS LW RN (bAbI 10k) | DMN+ (bAbI 10k) | AS Reader (bAbI 10k) | AS Reader (BookTest 14M) | AS Reader (DM+CNN 1.2M)
1 Single supporting fact | 7.80 | 31.20 | 100.00 | 100.00 | 100.00 | 100.00 | 37.30 | 51.50
2 Two supporting facts | 4.40 | 26.96 | 91.70 | 99.70 | 99.70 | 91.90 | 25.80 | 28.90
3 Three supporting facts | 3.40 | 19.14 | 59.70 | 97.90 | 98.90 | 86.00 | 22.20 | 27.40
4 Two-argument relations | 10.50 | 33.58 | 97.20 | 100.00 | 100.00 | 100.00 | 50.30 | 54.90
5 Three-argument relations | 4.40 | 21.42 | 86.90 | 99.20 | 99.50 | 99.80 | 67.60 | 68.10
11 Basic coreference | 6.20 | 30.42 | 99.10 | 99.90 | 100.00 | 100.00 | 33.00 | 20.80
12 Conjunction | 6.70 | 27.25 | 99.80 | 100.00 | 100.00 | 100.00 | 30.40 | 37.70
13 Compound coreference | 5.60 | 27.73 | 99.60 | 100.00 | 100.00 | 100.00 | 33.80 | 14.00
14 Time reasoning | 5.00 | 27.82 | 98.30 | 99.90 | 99.80 | 95.00 | 27.60 | 50.50
15 Basic deduction | 5.20 | 37.20 | 100.00 | 100.00 | 100.00 | 96.70 | 39.90 | 17.60
16 Basic induction | 7.50 | 45.65 | 98.70 | 48.20 | 54.70 | 50.30 | 15.10 | 48.00
bAbI mean (11 tasks) | 6.06 | 29.85 | 93.73 | 94.98 | 95.69 | 92.70 | 34.82 | 38.13
[Figure 4: The test accuracies of all models that we trained in the target-adjustment experiments, with one panel per bAbI task (Tasks 1–5 and 11–16) plotting test accuracy against the number of training examples for BookTest pre-trained, BookTest random, CNN/DM pre-trained and CNN/DM random models. The line joins the test accuracies of the best-validation models of each model type.]

C.2.2 AVERAGE OVER ALL MODELS TRAINED ON BABI TASKS

Figure 5 plots the mean accuracy of all models trained in our experiments. This suggests that pre-training helped all models, not only the top-performing ones selected by validation, as already shown in Figure 2a.

[Figure 5: The average of the mean test accuracies across the 11 bAbI tasks. For the average of the best-validation results see Figure 2a.]

D MEANS, STANDARD DEVIATIONS AND P-VALUES BY EXPERIMENT

Table 5 shows the mean accuracy across all models trained for each combination of task, pre-training dataset and target-adjustment dataset size. Table 6 shows the corresponding standard deviations. Table 7 then shows the p-value of a test of whether the expected accuracy of pre-trained models is greater than the expected accuracy of randomly initialized models. This shows that the pre-trained models are statistically significantly better for all target-adjustment set sizes on the SQuAD dataset. On bAbI the BookTest pre-trained models perform convincingly better, especially for target-adjustment dataset sizes 100, 500 and 1000, with Task 16 being the main exception, because the AS Reader struggles to learn it in any setting. For the CNN+DM pre-training the results are not conclusive.
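For reference, a significance test of this kind could be computed per (task, set-size) cell roughly as below, given the accuracies of the pre-trained and randomly initialized runs. The paper does not name its exact test, so the one-sided Welch t-test here (scipy >= 1.6 for the alternative argument) is only one reasonable choice, not a statement of the authors' procedure.

```python
from scipy import stats

def pvalue_pretrained_better(acc_pretrained, acc_random):
    """One-sided test that pre-trained accuracy exceeds randomly initialized accuracy."""
    t, p = stats.ttest_ind(acc_pretrained, acc_random,
                           equal_var=False, alternative='greater')
    return p
```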
On bAbI, the BookTest pre-trained models perform convincingly better, especially for target-adjustment dataset sizes 100, 500 and 1000, with Task 16 being the main exception because the AS Reader struggles to learn it in any setting. For the CNN+DM pre-training the results are not conclusive.

Table 5: Mean test accuracy for each combination of task, model type and target-adjustment set size. Columns give the target-adjustment set size: 0, 1, 10, 100, 500, 1000, 5000, 10000, 28174.

Task     Pretraining  Model        0      1      10     100    500    1000   5000   10000  28174
SQuAD    BookTest     pre-trained  0.025  0.027  0.049  0.122  NA     0.245  NA     0.388  0.484
SQuAD    BookTest     rand. init.  0.004  0.006  0.018  0.042  NA     0.107  NA     0.214  0.315
Task 1   BookTest     pre-trained  0.356  0.383  0.459  0.870  0.992  0.995  0.999  NA     NA
Task 1   BookTest     rand. init.  0.010  0.327  0.431  0.529  0.888  0.916  0.976  NA     NA
Task 1   CNN+DM       pre-trained  0.295  0.385  0.519  0.689  0.969  0.985  0.990  NA     NA
Task 1   CNN+DM       rand. init.  0.100  0.354  0.450  0.582  0.954  0.941  0.977  NA     NA
Task 2   BookTest     pre-trained  0.206  0.295  0.318  0.339  0.398  0.410  0.755  0.783  NA
Task 2   BookTest     rand. init.  0.003  0.225  0.290  0.332  0.358  0.361  0.528  0.645  NA
Task 2   CNN+DM       pre-trained  0.177  0.265  0.288  0.359  0.410  0.398  0.539  0.586  NA
Task 2   CNN+DM       rand. init.  0.005  0.280  0.320  0.380  0.371  0.396  0.478  0.469  NA
Task 3   BookTest     pre-trained  0.159  0.192  0.227  0.314  0.440  0.508  0.759  0.857  NA
Task 3   BookTest     rand. init.  0.005  0.135  0.182  0.219  0.370  0.419  0.542  0.482  NA
Task 3   CNN+DM       pre-trained  0.164  0.213  0.222  0.303  0.450  0.489  0.585  0.687  NA
Task 3   CNN+DM       rand. init.  0.001  0.175  0.227  0.272  0.385  0.429  0.551  0.563  NA
Task 4   BookTest     pre-trained  0.452  0.490  0.545  0.631  0.986  0.989  1.000  NA     NA
Task 4   BookTest     rand. init.  0.032  0.532  0.556  0.582  0.846  0.982  0.993  NA     NA
Task 4   CNN+DM       pre-trained  0.323  0.413  0.596  0.766  0.946  0.986  0.992  NA     NA
Task 4   CNN+DM       rand. init.  0.234  0.536  0.554  0.593  0.926  0.990  0.986  NA     NA
Task 5   BookTest     pre-trained  0.601  0.604  0.632  0.877  0.983  0.982  0.991  NA     NA
Task 5   BookTest     rand. init.  0.013  0.162  0.295  0.635  0.964  0.973  0.989  NA     NA
Task 5   CNN+DM       pre-trained  0.448  0.492  0.581  0.842  0.969  0.984  0.989  NA     NA
Task 5   CNN+DM       rand. init.  0.185  0.252  0.350  0.844  0.982  0.984  0.988  NA     NA
Task 11  BookTest     pre-trained  0.334  0.415  0.620  0.847  0.986  0.988  0.998  NA     NA
Task 11  BookTest     rand. init.  0.008  0.540  0.692  0.711  0.922  0.951  0.974  NA     NA
Task 11  CNN+DM       pre-trained  0.119  0.492  0.671  0.762  0.820  0.972  0.977  NA     NA
Task 11  CNN+DM       rand. init.  0.207  0.679  0.737  0.734  0.853  0.934  0.980  NA     NA
Task 12  BookTest     pre-trained  0.307  0.429  0.694  0.786  0.988  0.991  0.999  NA     NA
Task 12  BookTest     rand. init.  0.006  0.499  0.705  0.721  0.917  0.966  0.962  NA     NA
Task 12  CNN+DM       pre-trained  0.236  0.518  0.650  0.779  0.866  0.968  0.970  NA     NA
Task 12  CNN+DM       rand. init.  0.009  0.661  0.765  0.735  0.855  0.921  0.965  NA     NA
Task 13  BookTest     pre-trained  0.330  0.505  0.793  0.944  0.959  0.976  0.998  NA     NA
Task 13  BookTest     rand. init.  0.004  0.617  0.920  0.937  0.950  0.966  0.992  NA     NA
Task 13  CNN+DM       pre-trained  0.114  0.612  0.830  0.942  0.949  0.946  0.975  NA     NA
Task 13  CNN+DM       rand. init.  0.094  0.828  0.941  0.944  0.951  0.961  0.971  NA     NA
Task 14  BookTest     pre-trained  0.270  0.266  0.273  0.465  0.775  0.807  0.896  0.912  NA
Task 14  BookTest     rand. init.  0.007  0.228  0.277  0.328  0.597  0.675  0.852  0.905  NA
Task 14  CNN+DM       pre-trained  0.280  0.314  0.351  0.458  0.677  0.790  0.840  0.904  NA
Task 14  CNN+DM       rand. init.  0.054  0.247  0.297  0.337  0.543  0.788  0.901  0.929  NA
Task 15  BookTest     pre-trained  0.085  0.417  0.436  0.491  0.544  0.546  0.689  0.853  NA
Task 15  BookTest     rand. init.  0.003  0.414  0.430  0.496  0.517  0.523  0.584  0.834  NA
Task 15  CNN+DM       pre-trained  0.563  0.604  0.591  0.608  0.611  0.635  0.644  0.597  NA
Task 15  CNN+DM       rand. init.  0.392  0.469  0.534  0.587  0.623  0.630  0.656  0.658  NA
Task 16  BookTest     pre-trained  0.036  0.456  0.451  0.465  0.469  0.474  0.528  0.566  NA
Task 16  BookTest     rand. init.  0.001  0.363  0.449  0.460  0.469  0.475  0.489  0.519  NA
Task 16  CNN+DM       pre-trained  0.444  0.467  0.468  0.474  0.480  0.505  0.519  0.547  NA
Task 16  CNN+DM       rand. init.  0.280  0.428  0.480  0.476  0.483  0.489  0.489  0.496  NA

Table 6: Standard deviation in accuracies for each combination of task, model type and target-adjustment set size. Columns as in Table 5.

Task     Pretraining  Model        0      1      10     100    500    1000   5000   10000  28174
SQuAD    BookTest     pre-trained  0.025  0.027  0.049  0.122  NA     0.245  NA     0.388  0.484
SQuAD    BookTest     rand. init.  0.004  0.006  0.018  0.042  NA     0.107  NA     0.214  0.315
Task 1   BookTest     pre-trained  0.356  0.383  0.459  0.870  0.992  0.995  0.999  NA     NA
Task 1   BookTest     rand. init.  0.010  0.327  0.431  0.529  0.888  0.916  0.976  NA     NA
Task 1   CNN+DM       pre-trained  0.295  0.385  0.519  0.689  0.969  0.985  0.990  NA     NA
Task 1   CNN+DM       rand. init.  0.100  0.354  0.450  0.582  0.954  0.941  0.977  NA     NA
Task 2   BookTest     pre-trained  0.206  0.295  0.318  0.339  0.398  0.410  0.755  0.783  NA
Task 2   BookTest     rand. init.  0.003  0.225  0.290  0.332  0.358  0.361  0.528  0.645  NA
Task 2   CNN+DM       pre-trained  0.177  0.265  0.288  0.359  0.410  0.398  0.539  0.586  NA
Task 2   CNN+DM       rand. init.  0.005  0.280  0.320  0.380  0.371  0.396  0.478  0.469  NA
Task 3   BookTest     pre-trained  0.159  0.192  0.227  0.314  0.440  0.508  0.759  0.857  NA
Task 3   BookTest     rand. init.  0.005  0.135  0.182  0.219  0.370  0.419  0.542  0.482  NA
Task 3   CNN+DM       pre-trained  0.164  0.213  0.222  0.303  0.450  0.489  0.585  0.687  NA
Task 3   CNN+DM       rand. init.  0.001  0.175  0.227  0.272  0.385  0.429  0.551  0.563  NA
Task 4   BookTest     pre-trained  0.452  0.490  0.545  0.631  0.986  0.989  1.000  NA     NA
Task 4   BookTest     rand. init.  0.032  0.532  0.556  0.582  0.846  0.982  0.993  NA     NA
Task 4   CNN+DM       pre-trained  0.323  0.413  0.596  0.766  0.946  0.986  0.992  NA     NA
Task 4   CNN+DM       rand. init.  0.234  0.536  0.554  0.593  0.926  0.990  0.986  NA     NA
Task 5   BookTest     pre-trained  0.601  0.604  0.632  0.877  0.983  0.982  0.991  NA     NA
Task 5   BookTest     rand. init.  0.013  0.162  0.295  0.635  0.964  0.973  0.989  NA     NA
Task 5   CNN+DM       pre-trained  0.448  0.492  0.581  0.842  0.969  0.984  0.989  NA     NA
Task 5   CNN+DM       rand. init.  0.185  0.252  0.350  0.844  0.982  0.984  0.988  NA     NA
Task 11  BookTest     pre-trained  0.334  0.415  0.620  0.847  0.986  0.988  0.998  NA     NA
Task 11  BookTest     rand. init.  0.008  0.540  0.692  0.711  0.922  0.951  0.974  NA     NA
Task 11  CNN+DM       pre-trained  0.119  0.492  0.671  0.762  0.820  0.972  0.977  NA     NA
Task 11  CNN+DM       rand. init.  0.207  0.679  0.737  0.734  0.853  0.934  0.980  NA     NA
Task 12  BookTest     pre-trained  0.307  0.429  0.694  0.786  0.988  0.991  0.999  NA     NA
Task 12  BookTest     rand. init.  0.006  0.499  0.705  0.721  0.917  0.966  0.962  NA     NA
Task 12  CNN+DM       pre-trained  0.236  0.518  0.650  0.779  0.866  0.968  0.970  NA     NA
Task 12  CNN+DM       rand. init.  0.009  0.661  0.765  0.735  0.855  0.921  0.965  NA     NA
Task 13  BookTest     pre-trained  0.330  0.505  0.793  0.944  0.959  0.976  0.998  NA     NA
Task 13  BookTest     rand. init.  0.004  0.617  0.920  0.937  0.950  0.966  0.992  NA     NA
Task 13  CNN+DM       pre-trained  0.114  0.612  0.830  0.942  0.949  0.946  0.975  NA     NA
Task 13  CNN+DM       rand. init.  0.094  0.828  0.941  0.944  0.951  0.961  0.971  NA     NA
Task 14  BookTest     pre-trained  0.270  0.266  0.273  0.465  0.775  0.807  0.896  0.912  NA
Task 14  BookTest     rand. init.  0.007  0.228  0.277  0.328  0.597  0.675  0.852  0.905  NA
Task 14  CNN+DM       pre-trained  0.280  0.314  0.351  0.458  0.677  0.790  0.840  0.904  NA
Task 14  CNN+DM       rand. init.  0.054  0.247  0.297  0.337  0.543  0.788  0.901  0.929  NA
Task 15  BookTest     pre-trained  0.085  0.417  0.436  0.491  0.544  0.546  0.689  0.853  NA
Task 15  BookTest     rand. init.  0.003  0.414  0.430  0.496  0.517  0.523  0.584  0.834  NA
Task 15  CNN+DM       pre-trained  0.563  0.604  0.591  0.608  0.611  0.635  0.644  0.597  NA
Task 15  CNN+DM       rand. init.  0.392  0.469  0.534  0.587  0.623  0.630  0.656  0.658  NA
Task 16  BookTest     pre-trained  0.036  0.456  0.451  0.465  0.469  0.474  0.528  0.566  NA
Task 16  BookTest     rand. init.  0.001  0.363  0.449  0.460  0.469  0.475  0.489  0.519  NA
Task 16  CNN+DM       pre-trained  0.444  0.467  0.468  0.474  0.480  0.505  0.519  0.547  NA
Task 16  CNN+DM       rand. init.  0.280  0.428  0.480  0.476  0.483  0.489  0.489  0.496  NA

Table 7: One-sided p-value for whether the mean accuracy of the pre-trained models is greater than the accuracy of the randomly initialized ones, for each combination of task and pre-training dataset. p-values below 0.05 are marked in green. Columns as in Table 5.

Task     Pretraining  0          1         10        100       500       1000      5000      10000     28174
SQuAD    BookTest     1.01e-45   4.07e-05  7.40e-05  7.82e-08  NA        5.17e-08  NA        3.93e-08  8.52e-03
Task 1   BookTest     3.34e-83   1.81e-03  1.33e-01  2.35e-19  9.41e-04  1.67e-02  1.32e-01  NA        NA
Task 2   BookTest     1.24e-34   3.86e-07  7.29e-03  2.59e-01  1.39e-08  2.63e-06  7.54e-09  2.04e-01  NA
Task 3   BookTest     9.84e-55   1.27e-05  7.66e-03  1.48e-03  3.18e-04  2.18e-03  2.16e-04  1.03e-01  NA
Task 4   BookTest     7.25e-78   9.50e-01  9.71e-01  1.04e-05  6.38e-03  1.70e-02  1.81e-02  NA        NA
Task 5   BookTest     6.55e-115  9.88e-22  8.87e-19  5.25e-05  3.66e-03  8.61e-02  5.65e-03  NA        NA
Task 11  BookTest     6.78e-152  1.00e+00  9.94e-01  4.07e-09  2.50e-04  2.28e-02  6.37e-02  NA        NA
Task 12  BookTest     2.27e-90   9.10e-01  6.46e-01  1.89e-05  2.78e-04  1.43e-02  2.36e-02  NA        NA
Task 13  BookTest     5.30e-91   9.75e-01  9.99e-01  2.88e-02  2.74e-02  1.03e-01  7.06e-02  NA        NA
Task 14  BookTest     1.97e-200  1.01e-03  6.79e-01  2.22e-14  3.40e-05  2.93e-03  3.66e-06  3.97e-01  NA
Task 15  BookTest     3.64e-09   4.75e-01  4.12e-01  6.70e-01  1.68e-03  3.70e-03  1.03e-05  4.54e-01  NA
Task 16  BookTest     1.81e-05   8.28e-04  4.38e-01  2.72e-01  4.89e-01  5.71e-01  7.40e-03  NA        NA
Task 1   CNN+DM       9.43e-09   2.99e-01  1.11e-01  1.05e-01  9.54e-02  1.45e-01  3.97e-03  NA        NA
Task 2   CNN+DM       9.38e-17   6.93e-01  9.02e-01  9.15e-01  1.05e-03  4.20e-01  2.64e-03  8.49e-02  NA
Task 3   CNN+DM       2.42e-16   4.95e-02  6.30e-01  1.75e-01  2.13e-03  6.59e-04  4.68e-02  1.24e-01  NA
Task 4   CNN+DM       5.84e-03   9.70e-01  1.37e-01  4.83e-03  3.33e-01  8.84e-01  1.08e-01  NA        NA
Task 5   CNN+DM       1.17e-10   7.00e-03  7.93e-04  5.20e-01  9.70e-01  5.66e-01  1.83e-01  NA        NA
Task 11  CNN+DM       1.00e+00   9.84e-01  9.73e-01  2.58e-01  7.17e-01  1.45e-01  6.95e-01  NA        NA
Task 12  CNN+DM       1.93e-14   9.32e-01  9.92e-01  2.57e-02  4.06e-01  6.65e-02  2.09e-01  NA        NA
Task 13  CNN+DM       8.69e-02   9.61e-01  9.72e-01  9.89e-01  6.22e-01  9.44e-01  2.83e-01  NA        NA
Task 14  CNN+DM       2.17e-12   6.64e-02  1.11e-01  2.05e-02  3.66e-02  4.52e-01  9.10e-01  8.24e-01  NA
Task 15  CNN+DM       1.36e-52   5.30e-03  3.48e-02  7.21e-02  8.36e-01  3.09e-01  8.47e-01  9.84e-01  NA
Task 16  CNN+DM       6.39e-35   4.56e-02  9.66e-01  5.95e-01  7.19e-01  4.09e-02  2.51e-02  2.22e-03  NA
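For concreteness, a comparison of the kind reported in Table 7 can be computed from per-run accuracies with a one-sided two-sample test. The sketch below is an assumption-laden illustration: the exact statistical test used is not specified here, so Welch's t-test is used as one reasonable stand-in, and the accuracy arrays are made up.

```python
# A minimal sketch of the one-sided comparison behind Table 7.
# Assumption: `pretrained` and `random_init` are arrays of per-run test
# accuracies for one (task, pre-training set, set-size) cell; the exact
# test used is not specified here, so Welch's t-test is one choice.
import numpy as np
from scipy import stats

def one_sided_p_value(pretrained, random_init):
    """P-value for H1: mean(pretrained) > mean(random_init)."""
    t, p = stats.ttest_ind(pretrained, random_init,
                           equal_var=False,        # Welch's correction
                           alternative="greater")  # one-sided test
    return p

# Example with hypothetical accuracies for a single table cell:
rng = np.random.default_rng(0)
pre = rng.normal(0.87, 0.02, size=20)   # hypothetical pre-trained runs
rnd = rng.normal(0.53, 0.05, size=20)   # hypothetical random-init runs
print(one_sided_p_value(pre, rnd))      # small p => pre-training helps
```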
S1J0E-71l
Under review as a conference paper at ICLR 2017

SURPRISAL-DRIVEN FEEDBACK IN RECURRENT NETWORKS

Kamil Rocki
IBM Research
San Jose, CA 95120, USA
kmrocki@us.ibm.com

ABSTRACT

Recurrent neural nets are widely used for predicting temporal data. Their inherent deep feedforward structure allows learning complex sequential patterns. It is believed that top-down feedback might be an important missing ingredient which in theory could help disambiguate similar patterns depending on broader context. In this paper, we introduce surprisal-driven recurrent networks, which take into account past error information when making new predictions. This is achieved by continuously monitoring the discrepancy between the most recent predictions and the actual observations. Furthermore, we show that this architecture outperforms other stochastic and fully deterministic approaches on the enwik8 character-level prediction task, achieving 1.37 BPC.

1 INTRODUCTION

Based on human performance on the same task, it is believed that an important ingredient which is missing in state-of-the-art variants of recurrent networks is top-down feedback. Despite evidence of its existence, it is not entirely clear how the mammalian brain might implement such a mechanism. It is important to understand what kind of top-down interaction contributes to improved prediction capability in order to tackle more challenging AI problems requiring interpretation of deeper contextual information. Furthermore, it might provide clues as to what makes human cognitive abilities so unique. Existing approaches which consider top-down feedback in neural networks are primarily focused on stacked layers of neurons, where higher-level representations constitute a top-down signal source. In this paper, we propose that the discrepancy between the most recent predictions and observations might be effectively used as a feedback signal affecting further predictions. It is very common to use such a discrepancy during the learning phase as the error which is subject to minimization, but not during inference. We show that it is also possible to use such a top-down signal without losing generality of the algorithm, and that it improves generalization capabilities when applied to the Long Short-Term Memory (Hochreiter & Schmidhuber, 1997) architecture. It is important to point out that the feedback idea presented here applies only to temporal data.

1.1 SUMMARY OF CONTRIBUTIONS

The main contributions of this work are:

- the introduction of a novel way of incorporating the most recent misprediction measure as an additional input signal
- extending state-of-the-art performance on character-level text modeling using the Hutter Wikipedia dataset.

1.2 RELATED WORK

There exist other approaches which attempted to introduce top-down input for improving predictions. One such architecture is the Gated-Feedback RNN (Chung et al., 2015). An important difference between the architecture proposed here and theirs is the source of the feedback signal. In GF-RNN it is assumed that there exist higher-level representation layers, and they constitute the feedback source. On the other hand, here, feedback depends directly on the discrepancy between past predictions and the current observation and operates even within a single layer. Another related concept is Ladder Networks (Rasmus et al., 2015), where top-down connections contribute to improved semi-supervised learning performance.

2 FEEDBACK: MISPREDICTION-DRIVEN PREDICTION
[Figure 1 residue: 16 numbered enwik8 text fragments (wiki markup, XML tags, timestamps, external links) with the per-character surprisal plotted above each sequence.]

Figure 1: Illustration of the s_t signal on a typical batch of 16 sequences of length 100 from the enwik8 dataset. The y-axis is negative log probability in bits. Intuitively, the surprisal signal is low when a text fragment is highly predictable (e.g. in the <timestamp> part of sequence no. 10, the tag itself is highly predictable, whereas the exact date cannot be predicted and should not be the focus of attention). The main idea presented in this paper is that the feedback signal s_t should be able to help in distinguishing predictable and inherently unpredictable parts during the inference phase.

2.1 NOTATION

The following notation is used throughout the section:

x - inputs
h - hidden units
y - outputs
p - output probabilities (normalized y)
s - surprisal
t - time step
W - feedforward x -> h connection matrix
U - recurrent h -> h connection matrix
V - feedback s -> h connection matrix
S - truncated BPTT length
M - number of inputs
N - number of hidden units
"·" denotes matrix multiplication; "⊙" denotes elementwise multiplication
σ(·), tanh(·) - elementwise nonlinearities
δx = ∂E/∂x

In the case of the LSTM, the following concatenated representations are used:

g_t = [i_t; f_t; o_t; u_t],  b = [b_i; b_f; b_o; b_u],  U = [U_i; U_f; U_o; U_u],  W = [W_i; W_f; W_o; W_u],  V = [V_i; V_f; V_o; V_u]   (1)

2.2 SIMPLE RNN WITHOUT FEEDBACK

First, we show a simple recurrent neural network architecture without feedback which serves as a basis for demonstrating our approach. It is illustrated in Fig. 2 and formulated as follows:

h_t = tanh(W · x_t + U · h_{t-1} + b)   (2)

[Figure 2 diagram: a tanh RNN cell with feedforward input x_t, internal state h_{t-1}, and output y_t.]

Figure 2: Simple RNN; h - internal (hidden) states; x are inputs, y are optional outputs to be emitted.
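As a point of reference before the feedback is added, Eq. 2 is a single matrix-vector update per time step. A minimal NumPy sketch follows; the sizes M and N and the initialization scale are illustrative assumptions, not values from this paper.

```python
# Minimal sketch of the plain RNN step in Eq. 2 (no feedback yet).
# Shapes follow the notation above: M inputs, N hidden units (sizes assumed).
import numpy as np

M, N = 205, 512                      # illustrative sizes, not from the paper
rng = np.random.default_rng(0)
W = rng.normal(0, 0.01, (N, M))      # feedforward x -> h
U = rng.normal(0, 0.01, (N, N))      # recurrent  h -> h
b = np.zeros(N)

def rnn_step(x_t, h_prev):
    """h_t = tanh(W x_t + U h_{t-1} + b)."""
    return np.tanh(W @ x_t + U @ h_prev + b)

h = np.zeros(N)
x = np.eye(M)[42]                    # a one-hot input character
h = rnn_step(x, h)
```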
2.3 FEEDBACK AUGMENTED RECURRENT NETWORKS

[Figure 3 diagram: the RNN cell of Figure 2 augmented with an error-feedback input s_t entering through V.]

Figure 3: Surprisal-Feedback RNN; s_t represents surprisal (in the information-theoretic sense) - the discrepancy between the prediction at time step t-1 and the actual observation at time step t; it constitutes an additional input signal to be considered when making a prediction for the next time step.

Figure 3 presents the main idea of surprisal-driven feedback in recurrent networks. In addition to the feedforward and recurrent connections W and U, we add one additional matrix V. One more input signal, namely V · s_t, is considered when updating the hidden states of the network. We propose that the discrepancy s_t between the most recent predictions p_{t-1} and observations x_t might be effectively used as a feedback signal affecting further predictions. Such information is usually used during the learning phase as an error signal, but not during inference. Our hypothesis is that it represents an important source of information which can and should be used during the inference phase, and that it brings benefits in the form of improved generalization capability. Figure 1 presents examples of the feedback signal being considered. Intuitively, when surprisal is near zero, the sum of input signals is the same as in a typical RNN. The next subsections provide a mathematical description of the feedback architecture in terms of the forward and backward passes of the Back Propagation Through Time (BPTT) (Werbos, 1990) algorithm.

2.4 FORWARD PASS

Set h_0, c_0 to zero and p_0 to the uniform distribution, or carry over the last state to emulate full BPTT.

p^i_0 = 1/M,  ∀i ∈ {0, 1, .., M-1},  t = 0   (3)

for t = 1:1:S-1

I. Surprisal part

s_t = -Σ_i log p^i_{t-1} · x^i_t   (4)

IIa. Computing hidden activities, simple RNN

h_t = tanh(W · x_t + U · h_{t-1} + V · s_t + b)   (5)

IIb. Computing hidden activities, LSTM (to be used instead of IIa)

f_t = σ(W_f · x_t + U_f · h_{t-1} + V_f · s_t + b_f)   (6)
i_t = σ(W_i · x_t + U_i · h_{t-1} + V_i · s_t + b_i)   (7)
o_t = σ(W_o · x_t + U_o · h_{t-1} + V_o · s_t + b_o)   (8)
u_t = tanh(W_u · x_t + U_u · h_{t-1} + V_u · s_t + b_u)   (9)
c_t = (1 - f_t) ⊙ c_{t-1} + i_t ⊙ u_t   (10)
ĉ_t = tanh(c_t)   (11)
h_t = o_t ⊙ ĉ_t   (12)

III. Outputs

y_t = W_y · h_t + b_y   (13)

Softmax normalization

p^i_t = e^{y^i_t} / Σ_i e^{y^i_t}   (14)
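To make the forward pass concrete, here is a NumPy sketch of one surprisal-driven step for the simple-RNN variant (Eqs. 4, 5, 13, 14). It treats s_t as the scalar of Eq. 4; the sizes and initialization are illustrative assumptions, not the training setup used in the experiments.

```python
# Sketch of one surprisal-driven forward step (Eqs. 4, 5, 13, 14).
# x_t is one-hot, so the surprisal s_t reduces to -log p_{t-1}[observed char].
import numpy as np

M, N = 205, 512                       # illustrative sizes
rng = np.random.default_rng(0)
W, U = rng.normal(0, .01, (N, M)), rng.normal(0, .01, (N, N))
V = rng.normal(0, .01, (N, 1))        # feedback s -> h (s_t a scalar here)
Wy = rng.normal(0, .01, (M, N))
b, by = np.zeros(N), np.zeros(M)

def step(x_t, h_prev, p_prev):
    s_t = -np.sum(np.log(p_prev) * x_t)            # Eq. 4: surprisal
    h_t = np.tanh(W @ x_t + U @ h_prev             # Eq. 5: feedback enters
                  + (V * s_t).ravel() + b)         #        through V * s_t
    y_t = Wy @ h_t + by                            # Eq. 13
    p_t = np.exp(y_t - y_t.max())                  # Eq. 14 (stabilized)
    return h_t, p_t / p_t.sum(), s_t

h, p = np.zeros(N), np.full(M, 1.0 / M)            # Eq. 3: uniform p_0
x = np.eye(M)[7]
h, p, s = step(x, h, p)
```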
2.5 BACKWARD PASS

for t = S-1:-1:1

I. Backprop through predictions

Backprop through the softmax and cross-entropy error, and accumulate:

∂E_t/∂y_t = ∂E_t/∂y_t + p_{t-1} - x_t   (15)

y -> W_y, b_y:

∂E/∂W_y = ∂E/∂W_y + h_t^T · ∂E_t/∂y_t   (16)
∂E/∂b_y = ∂E/∂b_y + Σ_{i=1}^{M} ∂E^i_t/∂y^i_t   (17)

y -> h:

∂E_t/∂h_t = ∂E_t/∂h_t + ∂E_t/∂y_t · W_y^T   (18)

IIa. Backprop through the hidden nonlinearity (simple RNN version)

∂E_t/∂h_t = ∂E_t/∂h_t + ∂E_t/∂h_t ⊙ tanh'(h_t)   (19)
∂E_t/∂g_t = ∂E_t/∂h_t   (20)

IIb. Backprop through c, h, g (LSTM version)

Backprop through the memory cells (keep gradients from the previous iteration):

∂E_t/∂c_t = ∂E_t/∂c_t + ∂E_t/∂h_t ⊙ o_t ⊙ tanh'(ĉ_t)   (21)

Carry the error over to ∂E_t/∂c_{t-1}:

∂E_t/∂c_{t-1} = ∂E_t/∂c_{t-1} + ∂E_t/∂c_t ⊙ (1 - f_t)   (22)

Propagate the error through the gates:

∂E_t/∂o_t = ∂E_t/∂h_t ⊙ ĉ_t ⊙ σ'(o_t)   (23)
∂E_t/∂i_t = ∂E_t/∂c_t ⊙ u_t ⊙ σ'(i_t)   (24)
∂E_t/∂f_t = -∂E_t/∂c_t ⊙ c_{t-1} ⊙ σ'(f_t)   (25)
∂E_t/∂u_t = ∂E_t/∂c_t ⊙ i_t ⊙ tanh'(u_t)   (26)

Carry the error over to ∂E_t/∂h_{t-1}:

∂E_t/∂h_{t-1} = ∂E_t/∂g_t · U^T   (27)

III. Backprop through linearities

∂E_t/∂b = ∂E_t/∂b + Σ_{i=1}^{N} ∂E_t/∂g^i_t   (28)
∂E/∂U = ∂E/∂U + h_{t-1}^T · ∂E_t/∂g_t   (29)
∂E/∂W = ∂E/∂W + x_t^T · ∂E_t/∂g_t   (30)
∂E/∂x = ∂E/∂x + ∂E_t/∂g_t · W^T   (31)

IV. Surprisal part

∂E/∂V = ∂E/∂V + s_t^T · ∂E_t/∂g_t   (32)
∂E/∂s_t = ∂E/∂g_t · V^T   (33)
∂E_t/∂p_{t-1} = -∂E_t/∂s_t ⊙ x_t   (34)

Adjust ∂E_t/∂p_{t-1} according to the sum of gradients and carry over to ∂E_t/∂y_{t-1}:

∂E_t/∂y_{t-1} = ∂E_t/∂p_{t-1} ⊙ p_{t-1} - Σ_{i=1}^{M} ∂E_t/∂p^i_{t-1}   (35)

[Figure 4 residue: training progress on enwik8, bits/character against wall-clock time (4h to 72h), train and test curves for Standard LSTM and Feedback LSTM.]

Figure 4: Training progress on the enwik8 corpus, bits/character.

3 EXPERIMENTS

We ran experiments on the enwik8 dataset. It constitutes the first 10^8 bytes of an English Wikipedia dump (with all extra symbols present in the XML), also known as the Hutter Prize challenge dataset^2. The first 90% of each corpus was used for training, the next 5% for validation and the last 5% for reporting test accuracy. In each iteration, sequences of length 10000 were randomly selected. The learning algorithm used was Adagrad^1 with a learning rate of 0.001. Weights were initialized using so-called Xavier initialization (Glorot & Bengio, 2010). The sequence length for BPTT was 100 and the batch size 128; states were carried over for the entire sequence of 10000, emulating full BPTT. The forget bias was set initially to 1 and the other parameters to zero. The algorithm was written in C++ and CUDA 8 and ran on a GTX Titan GPU for up to 10 days. Table 1 presents results comparing existing state-of-the-art approaches to the introduced Feedback LSTM algorithm, which outperforms all other methods despite not having any regularizer.

Table 1: Bits per character on the Hutter Wikipedia dataset (test data).

                                                   BPC
mRNN (Sutskever et al., 2011)                      1.60
GF-RNN (Chung et al., 2015)                        1.58
Grid LSTM (Kalchbrenner et al., 2015)              1.47
Standard LSTM^4                                    1.45
MI-LSTM (Wu et al., 2016)                          1.44
Recurrent Highway Networks (Zilly et al., 2016)    1.42
Array LSTM (Rocki, 2016)                           1.40
Feedback LSTM^3                                    1.39
Hypernetworks (Ha et al., 2016)                    1.38
Feedback LSTM + Zoneout (Krueger et al., 2016)     1.37

Footnotes: ^1 with a modification taking into consideration only a recent window of gradient updates. ^2 http://mattmahoney.net/dc/text.html. ^3 This method does not belong to the 'dynamic evaluation' group: 1. It never actually sees test data during training. 2. It does not adapt weights during testing. ^4 our implementation.

4 SUMMARY

We introduced a feedback recurrent network architecture which takes advantage of the temporal nature of the data and monitors the discrepancy between predictions and observations. This prediction error information, also known as surprisal, is used when making new guesses. We showed that combining the commonly used feedforward and recurrent signals with such a feedback signal improves the generalization capabilities of the Long Short-Term Memory network. It outperforms other stochastic and fully deterministic approaches on enwik8 character-level prediction, achieving 1.37 BPC.

5 FURTHER WORK

It is still an open question what the feedback should really constitute, as well as how it should interact with lower-level neurons (additive, multiplicative or another type of connection). Further improvements may be possible with the addition of regularization. Another research direction is incorporating sparsity in order to improve the disentangling of sources of variation in temporal data.

ACKNOWLEDGEMENTS

This work has been supported in part by the Defense Advanced Research Projects Agency (DARPA).

REFERENCES

Junyoung Chung, Çağlar Gülçehre, KyungHyun Cho, and Yoshua Bengio. Gated feedback recurrent neural networks. CoRR, abs/1502.02367, 2015.
Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS 2010), 2010.
David Ha, Andrew Dai, and Quoc V. Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, November 1997.
Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. Grid long short-term memory. CoRR, abs/1507.01526, 2015.
David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron C. Courville, and Chris Pal. Zoneout: Regularizing RNNs by randomly preserving hidden activations. CoRR, abs/1606.01305, 2016.
Antti Rasmus, Harri Valpola, Mikko Honkala, Mathias Berglund, and Tapani Raiko. Semi-supervised learning with ladder network. CoRR, abs/1507.02672, 2015.
Kamil Rocki. Recurrent memory array structures. arXiv preprint arXiv:1607.03085, 2016.
Ilya Sutskever, James Martens, and Geoffrey Hinton. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 1017-1024, New York, NY, USA, June 2011. ACM.
P. Werbos. Backpropagation through time: what does it do and how to do it. In Proceedings of IEEE, volume 78, pp. 1550-1560, 1990.
Yuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio, and Ruslan Salakhutdinov. On multiplicative integration with recurrent neural networks. CoRR, abs/1606.06630, 2016.
Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutník, and Jürgen Schmidhuber. Recurrent highway networks, 2016.
HyEeMu_xx
Under review as a conference paper at ICLR 2017

PROGRESSIVE ATTENTION NETWORKS FOR VISUAL ATTRIBUTE PREDICTION

Paul Hongsuck Seo†, Zhe Lin‡, Scott Cohen‡, Xiaohui Shen‡ & Bohyung Han†
†POSTECH, Korea
‡Adobe Research
{hsseo, bhhan}@postech.ac.kr
{zlin, scohen, xshen}@adobe.com

ABSTRACT

We propose a novel attention model which can accurately attend to target objects of various scales and shapes in images. The model is trained to gradually suppress irrelevant regions in an input image via a progressive attentive process over multiple layers of a convolutional neural network. The attentive process in each layer determines whether to pass or suppress features at certain spatial locations for use in the next layer. We further employ local contexts to estimate the attention probability at each location, since it is difficult to infer accurate attention by observing a feature vector from a single location only. Experiments on synthetic and real datasets show that the proposed attention network outperforms traditional attention methods in visual attribute prediction tasks.

1 INTRODUCTION

Attentive mechanisms often play important roles in modern neural networks (NNs), especially in computer vision tasks. Many visual attention models have been introduced in the previous literature, and they have shown that attaching an attention to NNs can improve the accuracy in various tasks such as image classification (Jaderberg et al., 2015; Ba et al., 2015; Mnih et al., 2014; Larochelle & Hinton, 2010), image generation (Gregor et al., 2015), image caption generation (Xu et al., 2015) and visual question answering (Yang et al., 2015; Andreas et al., 2016; Xu & Saenko, 2015).

There are several motivations for incorporating attentive mechanisms in NNs. One of them is that they are analogous to the perceptual process of human beings. The human visual system concentrates attention on a region of interest instead of processing an entire scene. Likewise, in a neural attention model, we can focus processing only on attended areas of the input image. This benefits us in terms of computational resources; the number of hidden units may be reduced, since the hidden activations only need to encode the region with attention (Mnih et al., 2014).

Another important motivation is that some computer vision tasks, e.g. visual question answering (VQA), require identifying the object for accurate attribute prediction. For example, when the input image contains multiple objects, the task should focus on the object specified by the question. Figure 1 illustrates an example task to predict the color (answer) of a given input number (query). The query specifies a particular object in the input image (number 7 in this example) for answering its attribute (red). To address this type of task, the network architecture should incorporate an attentive mechanism either explicitly or implicitly.

One of the most popular attention mechanisms for NNs is the soft attention method (Xu et al., 2015), which aggregates responses in a feature map weighted by their attention probabilities (see Appendix A for more details). This process results in a single attended feature vector. Since the soft attention method is fully differentiable, the entire network can be trained end-to-end with standard backpropagation. However, it can only model attention to local regions with a certain size depending on the receptive field of the layer chosen for attention.
This makes the soft attention method inappropriate for complicated cases, where objects involve significant variations in their scales and shapes.

(a) input image (b) first attention (c) second attention (d) third attention (e) final attention

Figure 1: An example reference problem (with the query 7 and the answer red) and intermediate attention maps using our progressive attention model. It shows that attention is gradually refined through the network layers to resolve the reference problem. Distracting patterns at smaller scales are suppressed at earlier layers, while those at larger scales (e.g. 9) are suppressed at later layers with larger receptive fields. All attended images are independently rescaled for visualization.

To overcome this limitation, we propose a novel attention network, referred to as the progressive attention network (PAN), which enables precise attention over objects of different scales and shapes by attaching attentive mechanisms to multiple layers within a convolutional neural network (CNN). More specifically, the proposed network forces attention prediction in intermediate feature maps by forwarding the attended feature maps in each layer to the subsequent layers in the CNN. Since a feature to be attended in the current feature map is obtained by combining lower-level features with smaller receptive fields, the network can learn to distill the precise spatial support relevant to the target objects as final attention. The contribution of this work is three-fold:

- A novel attention model (progressive attention network) which can be learned to predict attention matching the accurate scale and shape of a target object
- Use of local contexts to improve the stability of the progressive attention model
- Achievement of significant performance improvement over traditional soft and hard attention approaches in query-specific visual attribute prediction tasks

The rest of this paper is organized as follows. We first review related work in Section 2. In Section 3, we describe the proposed model with local context information. We then present our experimental results on several datasets in Section 4 and conclude the paper in Section 5.

2 RELATED WORK

Attention on Features  The most straightforward attention mechanism is a feature-based method, which selects a subset of features by explicitly attaching an attention model to NN architectures. Approaches relying on this attention mechanism have improved performance in many tasks (Xu et al., 2015; Yang et al., 2015; Andreas et al., 2016; Xu & Saenko, 2015; Bahdanau et al., 2015; Luong et al., 2015; Weston et al., 2015; Graves et al., 2014). For example, they have been used to handle sequences of variable lengths in neural machine translation models (Bahdanau et al., 2015; Luong et al., 2015), speech recognition (Chorowski et al., 2014) and handwriting generation (Graves, 2013), and to manage memory access mechanisms for memory networks (Weston et al., 2015) and neural turing machines (Graves et al., 2014). When applied to computer vision tasks to resolve reference problems, these models are designed to pay attention to CNN features corresponding to subregions in the input image. Image caption generation and visual question answering are typical examples benefiting from this attention mechanism (Xu et al., 2015; Yang et al., 2015; Andreas et al., 2016; Xu & Saenko, 2015).

Attention by Image Transformation  Another stream of attention models is based on image transformations.
These approaches transform a regular grid and sample from the input image with the transformed grid, whose elements correspond to locations in the input image. Ba et al. (2015) and Mnih et al. (2014) transform an input image with predicted translation parameters (t_x and t_y) and a fixed scale factor (ŝ < 1) for image classification or multiple object recognition. A scale factor is also predicted in (Gregor et al., 2015) for image generation, where the network uses Gaussian filters for sampling. Spatial transformer networks (STNs) predict all six parameters of an affine transformation matrix, and even extend it to a projective transformation and a 16-point thin plate spline transformation (Jaderberg et al., 2015). Because all the transformations used in (Jaderberg et al., 2015) involve scale factors, STNs are capable of dealing with objects of different sizes. However, an STN is limited when there are multiple candidate regions for attention. Our model overcomes this problem by formulating attention as progressive filtering on feature maps instead of assuming that objects can be roughly aligned by a single spatial transformation.

Multiple Attention Processes  There have been several approaches iteratively performing attentive processes to resolve relations between targets. Yang et al. (2015) iteratively attend to images conditioned on the previous attention states for visual question answering, as the objects of interest are often not specified explicitly in questions but implicitly, in relational expressions about the target objects. Also, Weston et al. (2015) and Graves et al. (2014) apply attention mechanisms to memory cells iteratively to retrieve different values stored in the memory. Our proposed model is similar in the spirit of iterative attention, but aims at attending to a single target object by operating on multiple CNN layers progressively, i.e., attention information is predicted progressively from feature maps through multiple layers of the CNN to capture the fine shape of the target object.

In (Jaderberg et al., 2015), the authors also conducted an experiment with a network with multiple transformer layers. However, the attention shapes of STNs are still constrained to the type of transformation regardless of the number of transformers. In contrast, the quality of the attention shapes is improved through the progressive attention process in the proposed method. Stollenga et al. (2014) introduced a deep network which manipulates intermediate features of a fixed classifier through a channel-wise attention process. Although the channel-wise attention process is used at multiple layers of the network to manipulate the intermediate feature representations, they never explored a spatial attention process. More importantly, this method requires an accurate pretrained classifier for the target classes prior to learning attention, while pretraining a general query-specific attribute classifier is not trivial. It is also notable that both (Jaderberg et al., 2015) and (Stollenga et al., 2014) target simple classification tasks without queries, while we aim to tackle the query-specific attribute prediction task, where answers from a single input image can be very different depending on the input query.

Training Attention Models  The networks with soft attention are fully differentiable and thus trainable end-to-end by backpropagation.
Xu et al. (2015) and Zaremba & Sutskever (2015) introduced a stochastic hard attention, where the network explicitly selects a single feature based on the predicted attention probability map. Because the explicit selection (or sampling) procedure is not differentiable, the REINFORCE learning rule (Williams, 1992) is used to make the networks trainable. Transformation-based attention models (Ba et al., 2015; Mnih et al., 2014) are mostly trained by the REINFORCE learning rule, but STN (Jaderberg et al., 2015) proposed a fully differentiable formulation which made it possible to train end-to-end. Compared to these attention networks, the proposed network is also trainable end-to-end by standard backpropagation without any extra techniques, since every operation within the network is differentiable.

3 PROGRESSIVE ATTENTION NETWORKS

To overcome the limitation of existing attention models in handling variable object scales and shapes, we propose a progressive attention mechanism. In the proposed model, irrelevant features at different scales are suppressed by attention filtering steps in different CNN layers, and computation is focused on the features corresponding to regions of interest. At each attention layer, the model predicts an attention map given the input query and the current feature map via an attention module, and the attention map is then multiplied to the feature map channel-wise to obtain the attended feature map. In each layer, the attended feature map is then forwarded to the next layer of the CNN to construct the following feature map, as illustrated in Figure 2. This progressive attention process allows us to estimate precise details of attention areas while maintaining deep representations appropriate for high-level inference tasks.

[Figure 2 diagram: feature map f^l -> attention probability α^l -> attended feature map f̂^l -> next convolution layer g^{l+1}_CNN; the final attended feature f_att is sum-pooled and fed to the attribute classifier.]

Figure 2: Overall procedure of progressive attention. Attentive processes are repeatedly applied to feature maps at multiple layers, and the resulting attended feature maps are used as input feature maps for the next convolution layers in the CNN. Attention probabilities α^l are estimated from the feature maps and the input query. In the last attention layer, the attended feature maps are aggregated to a single feature vector (by sum pooling) and fed to the final attribute classifier.

3.1 PROGRESSIVE ATTENTIVE PROCESS

Let f^l ∈ R^{H_l × W_l × C_l} be an output feature map of a layer l ∈ {0, ..., L} in the CNN with width W_l, height H_l and C_l channels, and let f^l_{i,j} ∈ R^{C_l} be the feature at (i, j) of the feature map f^l. In the proposed PAN, an attentive process is applied to multiple layers of the CNN, and we obtain the attended feature map f̂^l = [f̂^l_{i,j}], which is given by

f̂^l_{i,j} = α^l_{i,j} · f^l_{i,j}   (1)

Here, the attention probability α^l_{i,j} for a feature f^l_{i,j} is calculated by

s^l_{i,j} = g^l_att(f^l_{i,j}, q; θ^l_att)   and   α^l_{i,j} = softmax_{i,j}(s^L) if l = L, σ(s^l_{i,j}) otherwise   (2)

where g^l_att(·) denotes the attention function with a set of parameters θ^l_att for layer l, s^l_{i,j} is the attention score at (i, j) in layer l, q is the query, and σ(·) is a sigmoid function. The attention probability at each location is independent of the others in the same feature map; a sigmoid function is employed to constrain the attention probabilities between 0 and 1.
For the last layer of attention, we use a softmax function over the entire spatial region for the final aggregation of features.

Unlike the soft attention model (see Appendix A), in the intermediate attention layers the attended feature map f̂^l is not summed up to generate a single vector representation of the attended regions. Instead, the attended feature map is forwarded to the next layer as an input to compute the next feature map, which is given by

f^{l+1} = g^{l+1}_CNN(f̂^l; θ^{l+1}_CNN)   (3)

where g^{l+1}_CNN(·) denotes the next CNN operations parameterized by θ^{l+1}_CNN.

This feedforward procedure with attentive processes in the CNN is repeated from the input of the CNN, i.e., f^0 = I, until f̂^L is obtained. Then, the attended feature f_att is finally retrieved by summing up all the features in the final attended feature map f̂^L, as in soft attention:

f_att = Σ_i^H Σ_j^W f̂^L_{i,j} = Σ_i^H Σ_j^W α^L_{i,j} f^L_{i,j}   (4)

The attended feature f_att obtained by this process is then used as the input to the visual attribute classifier, as illustrated in Figure 2.

In our models, we place the attention layers at the outputs of max pooling layers instead of at every layer in the CNN, because the reduction of feature resolution within the CNN mainly comes from the pooling layers. In practice, we can also skip the first few pooling layers and attach the attention module only to the outputs of the last K pooling layers.
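To make Eqs. (1)-(4) concrete, the following NumPy sketch runs the attentive filtering for two hypothetical attention layers. The tiny per-location MLP standing in for g_att, its sizes, and the omission of the convolution of Eq. (3) are illustrative assumptions, not the exact architecture used in our experiments.

```python
# Sketch of the progressive attentive process (Eqs. 1-4).
# g_att, layer sizes and the query size below are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attend(f, q, Wf, Wq, w, last=False):
    """Eq. 2: score each location from its feature and the query,
    then squash with a sigmoid (intermediate) or softmax (last layer)."""
    s = np.tanh(f @ Wf + q @ Wq) @ w          # g_att: 2-layer MLP per location
    if last:
        a = np.exp(s - s.max()); a /= a.sum() # softmax over all (i, j)
    else:
        a = sigmoid(s)                        # independent per location
    return a[..., None] * f                   # Eq. 1: channel-wise filtering

rng = np.random.default_rng(0)
q = np.eye(10)[3]                             # one-hot query
f = rng.normal(size=(16, 16, 32))             # feature map of some layer l
for last in (False, True):                    # two attention layers as a demo
    Wf, Wq = rng.normal(0, .1, (32, 64)), rng.normal(0, .1, (10, 64))
    w = rng.normal(0, .1, 64)
    f = attend(f, q, Wf, Wq, w, last=last)    # Eq. 3 would apply conv+pool here
f_att = f.sum(axis=(0, 1))                    # Eq. 4: final aggregation
```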
[Figure 3 diagrams: attention score estimation from a single feature (a) and from a neighborhood of features (b).]

Figure 3: Attention estimation (a) without local context and (b) with local context. In (a), α^l_{i,j} is predicted from f^l_{i,j} only, while in (b) its spatially adjacent features are also used to estimate α^l_{i,j}.

3.2 MULTI-RESOLUTION ATTENTION ESTIMATION

In Eq. (3), the resolution of the attention probability map α^l depends on the size of the feature map in the corresponding layer. Due to the nature of a CNN with convolution and pooling layers, the resolution of α^l decreases with the increasing depth of a layer. Since the attentive processes are performed over multiple layers recursively in our framework, it is possible to attend to regions of specific sizes and shapes. Note that the proposed network can exploit high-level semantics in deep representations for inference without losing attention resolution.

The progressive attention model is still very effective in predicting fine attention shapes, as the attention information is aggregated over multiple layers to suppress irrelevant structures at different granularities. In lower layers, features whose receptive fields contain small distractors are suppressed first. Meanwhile, the features from a part of a large distractor remain intact but are passed to the next layer, delaying their suppression. In higher layers, features of these large distractors get low attention probability, as each feature contains information from larger receptive fields, allowing the attention module to distinguish whether the feature is from a distractor or from the target object. This phenomenon is well demonstrated in the qualitative results in our experiments (Section 4). An additional benefit of progressive attention is that inference is more straightforward, since it is a pure feedforward network.

3.3 LOCAL CONTEXT

A basic version of PAN discussed so far predicts an attention probability α^l_{i,j} based solely on the feature f^l_{i,j} at a single feature map location. We can improve the quality of attention estimation by allowing the attention layers to observe a local context of the target feature. The local context F^l_{i,j} of a feature f^l_{i,j} is composed of its spatially adjacent features. For example, the local context can be given by F^l_{i,j} = {f^l_{s,t} | i-δ ≤ s ≤ i+δ, j-δ ≤ t ≤ j+δ}, as illustrated in Figure 3. The attention score is now predicted by the attention network with local context as

s^l_{i,j} = g^l_att(F^l_{i,j}, q; θ^l_att)   (5)

In this architecture, the area of the local context is given by the filter size corresponding to the composite operation of convolution followed by pooling in the next layer. The local context does not need to be considered in the last layer of attention, since its activations are used to compute the final attended feature map. Local context improves attention prediction, as it enables the centroid feature to be compared with surrounding features, which makes the estimated attention more discriminative.
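A sketch of how the local context of Eq. (5) can be gathered with δ = 2; zero-padding at the borders is our assumption for illustration, since boundary handling is not specified above.

```python
# Sketch of collecting the local context F_{i,j} of Eq. 5 with delta = 2.
# Zero-padding at feature map borders is an assumption for illustration.
import numpy as np

def local_context(f, i, j, delta=2):
    """Return the (2*delta+1)^2 neighbouring feature vectors around (i, j)."""
    padded = np.pad(f, ((delta, delta), (delta, delta), (0, 0)))
    # location (i, j) sits at (i+delta, j+delta) in the padded map
    window = padded[i:i + 2 * delta + 1, j:j + 2 * delta + 1]
    return window.reshape(-1)     # flattened context, fed to g_att with q

f = np.random.default_rng(1).normal(size=(16, 16, 32))
F_ij = local_context(f, i=0, j=5)             # shape: (5 * 5 * 32,)
```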
3.4 TRAINING PROGRESSIVE ATTENTION NETWORKS

Training a PAN is as simple as training a soft attention network (Xu et al., 2015), because every operation within the network is differentiable. The entire network is trained end-to-end by standard backpropagation, minimizing the binary cross-entropies of the object-specific visual attributes. When we train it from a pretrained CNN, the CNN part should always be fine-tuned together, since the intermediate attention maps may change the input distributions of their associated layers in the CNN.

(a) MREF (b) MDIST (c) MBG

Figure 4: Examples of the MREF datasets.

[Figure 5 residue: model diagrams. The shared backbone is conv1-conv4 (3×3@32), each followed by 2×2 pooling, topped by an fc classification layer; STN attaches a transformer layer, SAN a soft attention layer and HAN a hard attention layer at the top, while PAN inserts attention layers att1-att4 after the pooling layers. The attention function consists of a fusion fc layer (32 activations) and an estimation fc layer (1 activation).]

(a) Network architectures of models on MREF. Arrows represent direct connections to the next layer without attention. (b) Architecture of the attention function g^l_att(·). Local contexts F^l_{i,j} are used only in PAN-CTX.

Figure 5: Detailed illustration of network architectures in the MNIST Reference experiments.

4 EXPERIMENTS

4.1 MNIST REFERENCE

Datasets  We conduct experiments on a synthetic dataset created from MNIST (LeCun et al., 1998). The synthetic dataset is referred to as MNIST Reference (MREF; Figure 4a), where each training example is a triple of an image, a query number and its color label. The task on this dataset is to predict the color of the number identified by a query. Five to nine distinct MNIST numbers with different colors in {green, yellow, white, red, blue} and scales in [0.5, 3.0] are randomly sampled and located in each 100×100 image. When coloring numbers, Gaussian noise is added to the reference color value. To simulate more realistic situations, we made two variants of MREF by changing the backgrounds to either distractors (MDIST; Figure 4b) or natural images (MBG; Figure 4c). Background images in MDIST are constructed from randomly cropped 5×5 patches of MNIST images, whereas backgrounds of MBG are filled with natural scene images randomly chosen from the SUN Database (Xiao et al., 2014). The training, validation and test sets contain 30,000, 10,000 and 10,000 images, respectively.

Experimental Settings  We implement the proposed network with and without the local context observation, referred to as PAN-CTX and PAN, respectively. In addition, a soft attention network (SAN), a hard attention network (HAN) (Xu et al., 2015) and two variants of the spatial transformer network (STN-S and STN-M) (Jaderberg et al., 2015) are used as baseline models for comparisons. While STN-S is the model with a single transformer layer, STN-M contains multiple transformer layers in the network. We reimplemented SAN and STNs following the descriptions in (Xu et al., 2015) and (Jaderberg et al., 2015), respectively, and trained HAN by optimizing the marginal log-likelihood loss, as it is more accurate and feasible due to the small search space in our task. The architectures of the image encoding networks in SAN and HAN and of the localization networks in STNs are all identical for fair comparisons. The CNN in the proposed network also has the same architecture, except for the additional layers for hierarchical attention. The CNN is composed of four stacks of 3×3 convolutions with 32 channels (stride 1), each followed by a 2×2 max pooling layer (stride 2), as illustrated in Figure 5a. We used a single fc layer for classification because the task requires simple color prediction. The attention functions g^l_att(·) for all models are formed as multi-layer perceptrons with two layers (Figure 5b).

Table 1: Performance of attention models on MREF, MDIST, and MBG datasets.

(a) Color prediction accuracy [%]
          MREF    MDIST   MBG
STN-S     39.10   38.32   32.27
STN-M     93.89   85.09   52.25
SAN       82.94   75.73   53.77
HAN       81.84   78.49   55.84
PAN       95.92   91.65   69.46
PAN-CTX   98.51   96.02   85.55

(b) True-positive ratio [%]
          MREF    MDIST   MBG
Uniform   2.34    2.35    2.39
SAN       13.61   12.56   6.73
HAN       13.95   13.81   7.64
PAN       17.39   13.10   8.62
PAN-CTX   22.59   22.80   11.01

[Figure 6 residue: (a) accuracy against target object scale (0.5-3.0) for PAN-CTX, PAN, HAN, SAN and STN-M; (b) precision-recall curves for PAN-CTX, HAN and SAN; one column each for MREF, MDIST and MBG.]

(a) Attribute prediction accuracies of different models on the test subsets at different scales. (b) The precision-recall curves of object segmentation with attention probability.

Figure 6: Analysis of algorithms on MREF (left), MDIST (middle), and MBG (right).
STN-M achieves the improvement by learning multiple transformers fromgradients coming from different levels of features. In contrast to those parametric models, theproposed network can predict attention map with more fine-grained shapes capturing the spatialsupport of the target object better.To evaluate the scale sensitivity of each model, we divided the test images into five subsets based ontarget object scales with uniform interval and computed the accuracies of the models. The resultsare presented in Figure 6a, where SAN and HAN tend to predict the correct answers only in a scalerange between 1.0 and 2.0, while their performance is degraded significantly with wild scale changes.STN-M becomes vulnerable to scale variations in more realistic settings. In contrast, PAN andPAN-CTX are robust to scale variations due to their multi-scale attention machanism especially whenthe local contexts are incorporated.Unlike STNs whose attention is constrained to rhombic regions, those models based on feature-wiseattention maps can produce attention regions adaptive to the shapes of the target object. We evaluatethe attention quality of these models using two complementary criteria: true-positive ratio (TPR)7Under review as a conference paper at ICLR 2017query: 8answer: redSAN: whiteHAN: yellowPAN: redInput & Outputs SAN HANPAN -CTXattention 3 attention 2 attention 4(a)(b)(d)(c)Figure 7: Qualitative results of SAN, HAN and PAN-CTX. (a) Input images faded by attendedfeature map (c). (b) Magnitude of activations in feature maps fli;jbefore attention: the activations aremapped to original image space by spreading activations to their receptive fields. (c) Magnitude ofactivations in attended feature maps ^fli;jwhich shows the effect of attention in contrast to (b). (d)Magnitude of activations of the attended feature maps ^fli;jin its original resolution of the featuremap. For PAN-CTX, only last three attention layers are visualized and attentions of ealier layersare accumulated for visualizing higher attention layers. For HAN, (c) and (d) represent attentionprobability because attended feature map is not available. Every image except for input image isrescaled into [0;1]by(xmin)=(maxmin).and precision-recall (PR) curve. TPR measures how strong attention is given to proper location bycomputing the ratio of the aggregated attention probability within the desired area (a.k.a., ground-truth segmentation) to the attention probability in the whole image (Table 1b). PR measures theoverlaps between ground-truth segmentations and binarized segmentation predictions constructedwith different thresholds (Figure 6b). Note that the proposed model with the local context observationgives the best results with significant margin compared to all the other methods in terms of bothcriteria. These results suggest that PAN-CTX constructs more accurate shapes of attended regionsthan all other attention models.Figure 7 shows the qualitative results of the proposed method and two baselines on the MBG dataset.The proposed model yields accurate attention regions eventually by gradually augmenting attentionand suppressing irrelevant regions in the image. We can observe that the proposed model couldmaintain the high attention resolution through the progressive attention process. In contrast, thebaseline models attend to the target objects only once at the top layer resulting in a coarse attention insize and shape. 
4.2 ATTRIBUTE PREDICTION ON VISUAL GENOME

Dataset  Visual Genome (VG) (Krishna et al., 2016) is an image dataset containing several types of annotations: question/answer pairs, image captions, objects, object attributes and object relationships. We formulate object attribute prediction as a multi-label classification task with reference. Given an input image and a query (i.e., an object category), we predict the binary attributes of the individual objects specified by the query. We used 827 object classes and 749 attribute classes that appear more than 100 times. A total of 86,674 images with 667,882 object attribute labels are used for our experiment, and they are split into training, validation and test sets containing 43,337, 8,667 and 34,670 images, respectively. The task is challenging because the scales of objects vary widely and the attributes may be associated with very small objects.

Table 2: Weighted mAP of the attribute prediction and TPR of attentions measured with ground-truth bounding boxes on the VG dataset.

           attention only     w/ prior
           mAP     TPR        mAP     TPR
SAN        27.62   15.01      31.84   17.65
HAN        27.72   17.24      31.93   19.70
PAN-CTX    29.38   18.01      32.50   20.17

[Figure 8 residue: query shoe; input image, masked images and attention maps for HAN and PAN-CTX.]

Figure 8: Visualization of example attentions of HAN and PAN-CTX on the VG dataset. Attention maps present the magnitude of attended features, and red boxes show the ground-truth bounding boxes of the query.

Experimental Settings and Results  We mainly compare our algorithm with SAN and HAN, since STNs could not learn a proper attention process on VG; the transformer layers of STNs generated padded images of different sizes and rotations to encode the query vector, fitting query-specific biases. All the networks share the same CNN architecture of the VGG-16 network (Simonyan & Zisserman, 2015), which is pretrained on ImageNet (Deng et al., 2009) and further fine-tuned on the VG dataset for attribute prediction. For SAN and HAN, an attention layer is attached to the last pooling layer in VGG-16, while PAN stacks an additional attention layer, with the local contexts F^l_{i,j} with δ = 2, on top of each of the last three pooling layers in VGG-16. We skip placing attention layers at the first two pooling layers (pool1 and pool2), because the features in those layers are not discriminative enough to filter out. We also test models with an object class conditional prior. In these models, the final attended feature is fused with the query once more by a fully connected layer, allowing the network to reflect the conditional distribution of the attributes given the query. Refer to Appendix B for more detailed descriptions of the network architectures.

All three models are evaluated in terms of mean average precision (mAP) weighted by the frequencies of the attribute labels in the test set, where the computation of mAP follows the PASCAL VOC protocol (Everingham et al., 2010). The proposed method consistently achieves the best weighted mAP scores in both experimental settings, as shown in Table 2, but the gain reduces with the object class conditional prior. Table 2 also shows the TPR of each model measured with the ground-truth bounding box for evaluating attention quality, and the proposed method shows the best TPR.
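A sketch of frequency-weighted mAP over attribute classes follows; scikit-learn's average precision is used here as a stand-in for the PASCAL VOC AP computation, so the numbers it produces are an approximation under that assumption.

```python
# Sketch of frequency-weighted mAP over attribute classes.
# sklearn's average_precision_score stands in for the PASCAL VOC AP here.
import numpy as np
from sklearn.metrics import average_precision_score

def weighted_map(y_true, y_score):
    """y_true: (num_samples, num_attrs) binary; y_score: same shape, scores."""
    freq = y_true.sum(axis=0)                  # attribute label frequencies
    aps = np.array([
        average_precision_score(y_true[:, k], y_score[:, k])
        for k in range(y_true.shape[1]) if freq[k] > 0
    ])
    w = freq[freq > 0]                         # weight AP by frequency
    return np.sum(aps * w) / w.sum()
```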
Figure 8 presents the qualitative results of the proposed network and HAN on the VG dataset.

5 CONCLUSION

We proposed a novel hierarchical attention network which progressively attends to regions of interest through multiple layers of a CNN. As the model is recursively applied to multiple layers of the CNN with an inherent feature hierarchy, it accurately predicts regions of interest with variable sizes and shapes. We also incorporate local contexts into our attention network for more robust estimation. The proposed network can be trained end-to-end with standard error backpropagation. We tested the model on both synthetic and real datasets, and demonstrated significant performance improvement over existing attention methods.

REFERENCES

Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Deep compositional question answering with neural module networks. In CVPR, 2016.
Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. Multiple object recognition with visual attention. In ICLR, 2015.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.
Jan Chorowski, Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. End-to-end continuous speech recognition using attention-based recurrent NN: first results. arXiv preprint arXiv:1412.1602, 2014.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
Mark Everingham, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. The PASCAL visual object classes (VOC) challenge. International Journal of Computer Vision, 88(2):303-338, 2010.
Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014.
Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW: A recurrent neural network for image generation. In ICML, pp. 1462-1471, 2015.
Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In NIPS, pp. 2008-2016, 2015.
Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, et al. Visual Genome: Connecting language and vision using crowdsourced dense image annotations. arXiv preprint arXiv:1602.07332, 2016.
Hugo Larochelle and Geoffrey E. Hinton. Learning to combine foveal glimpses with a third-order Boltzmann machine. In NIPS, pp. 1243-1251, 2010.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. In EMNLP, 2015.
Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. Recurrent models of visual attention. In NIPS, pp. 2204-2212, 2014.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
Marijn F. Stollenga, Jonathan Masci, Faustino Gomez, and Jürgen Schmidhuber. Deep networks with internal selective attention through feedback connections. In NIPS, pp. 3545-3553, 2014.
Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In ICLR, 2015.
Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.
Machine Learning, 8(3–4):229–256, 1992.
Jianxiong Xiao, Krista A. Ehinger, James Hays, Antonio Torralba, and Aude Oliva. SUN database: Exploring a large collection of scene categories. International Journal of Computer Vision, pp. 1–20, 2014.
Huijuan Xu and Kate Saenko. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. arXiv preprint arXiv:1511.05234, 2015.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015.
Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. Stacked attention networks for image question answering. arXiv preprint arXiv:1511.02274, 2015.
Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural Turing machines. arXiv preprint arXiv:1505.00521, 2015.

Appendices

A SOFT ATTENTION MODEL

In this appendix, we explain the soft attention network introduced in Xu et al. (2015) and used as one of the baseline models in the experiments. Given a feature map, the soft attention network calculates an attention probability map and uses it to compute the attended feature for classification or other tasks. Given a feature map f ∈ R^{H×W×C} and a query q containing information about where to attend, a soft attention model first obtains an attended feature map \hat{f} ∈ R^{H×W×C}, where W is the width, H is the height, and C is the number of channels. The input feature map f is generally the CNN output for an input image I, given by

$$f = \mathrm{CNN}(I). \tag{6}$$

For each feature f_{i,j} ∈ R^C at location (i, j) of the feature map f and the query q, the attention probability map, denoted by α = [α_{i,j}], is given by

$$s_{i,j} = g_{\mathrm{att}}(f_{i,j}, q; \theta_{\mathrm{att}}) \tag{7}$$

$$\alpha_{i,j} = \mathrm{softmax}_{i,j}(s), \quad 0 \le \alpha_{i,j} \le 1 \tag{8}$$

where g_att(·) is the attention network parameterized by θ_att and s = [s_{i,j}] is an attention score map. The attention score map is normalized with a softmax to produce the attention probabilities α_{i,j}. Note that g_att(·) can be any kind of network, such as a multilayer perceptron.

Let \hat{f}_{i,j} ∈ R^C be a vector of the attended feature map \hat{f} at (i, j). Then, the attended feature, denoted by f_att ∈ R^C, is computed as a weighted sum of features:

$$f_{\mathrm{att}} = \sum_{i}^{H} \sum_{j}^{W} \hat{f}_{i,j} = \sum_{i}^{H} \sum_{j}^{W} \alpha_{i,j} f_{i,j}. \tag{9}$$

Ideally, the locations in the feature map corresponding to the receptive fields containing an object of interest should receive the maximum attention probability while the others receive zero probability, similarly to hard attention. This holds only if the target object is perfectly aligned with the receptive fields in terms of position and scale. In practice, however, object location and size vary whereas the structure of the receptive fields is fixed. Note that there is a trade-off between attention resolution and representation power: if we choose to extract deep, high-level features, we give up high resolution in attention; on the other hand, we must rely on shallow representations to increase attention resolution. This trade-off limits the performance of existing attention models.
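A compact sketch of the soft attention model of Eqs. (6)–(9) is given below, written in PyTorch for illustration. The two-layer scorer standing in for g_att(·) and the hidden size are our own assumptions, since g_att can be any network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftAttention(nn.Module):
    """Eqs. (6)-(9): score every spatial location against the query, softmax
    over all H*W locations, and return the attention-weighted feature sum."""
    def __init__(self, feat_dim, query_dim, hidden=512):
        super().__init__()
        self.g_att = nn.Sequential(          # stands in for g_att(.; theta_att)
            nn.Linear(feat_dim + query_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, f, q):
        # f: (B, C, H, W) feature map from the CNN, q: (B, Q) query vector
        B, C, H, W = f.shape
        f_flat = f.view(B, C, H * W).permute(0, 2, 1)       # (B, HW, C)
        q_rep = q.unsqueeze(1).expand(-1, H * W, -1)        # broadcast the query
        s = self.g_att(torch.cat([f_flat, q_rep], dim=-1))  # scores s_{i,j}, Eq. (7)
        alpha = F.softmax(s, dim=1)                         # probabilities, Eq. (8)
        f_att = (alpha * f_flat).sum(dim=1)                 # weighted sum, Eq. (9)
        return f_att, alpha.view(B, H, W)
```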
B NETWORK ARCHITECTURES ON VISUAL GENOME

In PAN, the convolution and pooling layers of the VGG-16 network (Simonyan & Zisserman, 2015), pretrained on ImageNet (Deng et al., 2009), are used, and three additional attention layers att1, att2 and att3 are stacked on top of the last three pooling layers pool3, pool4 and pool5, respectively, as illustrated in Figure 9a. The attention functions of att1 and att2 take the local contexts F^l_{i,j}, in addition to the query q and the target feature f^l_{i,j}, to obtain the attention score s^l_{i,j}. The size of the local contexts is set to cover the receptive fields of the next three convolution layers before the next attention by setting δ = 3. Three convolution layers, identical to the next three convolution layers of the CNN, first encode the target feature and the local context, and are initialized with the same weights as in the CNN (Figure 9b). This embedding is then concatenated with the one-hot query vector and fed to two fully connected layers, one fusing the two modalities and the other estimating the attention score. In att3, the attention function takes the concatenation of the query and the target feature and feeds it to two fully connected layers (Figure 9c). The attended feature f_att obtained from the last attention layer att3 is finally fed to a classification layer to predict the attributes.

The baseline networks also share the same VGG-16 CNN architecture as PAN (Figure 9a). In SAN, the soft attention described in Appendix A is attached to the top of the CNN. In HAN, the hard attention (Xu et al., 2015) is attached to the top of the CNN instead.

Figure 9: Detailed illustration of the network architectures for the Visual Genome experiments. (a) Network architectures of the models. (b) Architecture of the intermediate attention functions g^l_att(·) in att1 and att2 of PAN. (c) Architecture of the attention functions of SAN and HAN, and of the last attention function of PAN.

The hard attention is implemented to maximize the marginal likelihood directly during training, whereas the original paper maximized the variational lower bound of the marginal likelihood because of the large attention search space. For testing, we also directly calculate the marginal likelihood instead of picking the single prediction with the highest attention probability. This is possible because of the relatively small attention search space in our problem compared to image captioning, where the search space of attention grows exponentially with the length of the sequence.
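To make the Figure 9b description concrete, the sketch below shows an intermediate attention function in the spirit of g^l_att for att1/att2: a small convolutional stack embeds the local context around each target position, the embedding is fused with the one-hot query, and a final layer emits the scalar score. The channel counts and the pooling step are illustrative assumptions, not the exact published layer sizes.

```python
import torch
import torch.nn as nn

class ContextAttentionScore(nn.Module):
    """Sketch of g^l_att in att1/att2: embed the local context F^l_{i,j}
    with three 3x3 convolutions, fuse with the one-hot query q, and
    output a scalar attention score s^l_{i,j}."""
    def __init__(self, in_ch, embed_ch, num_classes):
        super().__init__()
        self.embed = nn.Sequential(          # mirrors the CNN's next conv layers
            nn.Conv2d(in_ch, embed_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(embed_ch, embed_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(embed_ch, embed_ch, 3, padding=1), nn.ReLU())
        self.fuse = nn.Linear(embed_ch + num_classes, embed_ch)  # fusion layer
        self.score = nn.Linear(embed_ch, 1)                      # estimation layer

    def forward(self, context, q_onehot):
        # context: (B, in_ch, k, k) local patch around (i, j); q_onehot: (B, num_classes)
        h = self.embed(context).mean(dim=(2, 3))      # pooled context embedding
        h = torch.relu(self.fuse(torch.cat([h, q_onehot], dim=-1)))
        return self.score(h)                          # scalar s^l_{i,j} per example
```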
The attention functions in the baselines consist of two fully connected layers taking the concatenation of the query and the target feature, as in the attention function of att3 in PAN.

The proposed network and the baselines described above use the query only for obtaining the attention probabilities, which gives us the pure strength of the attention models. However, the target object class, represented by the query, provides much more information than just attention: it constrains the possible attributes and filters out irrelevant ones. For these reasons, we additionally experiment on a set of models that incorporate the target object class conditional prior for attribute prediction. In these models, the query is fused with the attended feature f_att by an additional fully connected layer, and the fused feature is used as the input of the classification layer.

C MORE QUALITATIVE RESULTS ON MNIST REFERENCE

Figure 10: Qualitative results of SAN, HAN and PAN-CTX on the MREF and MDIST datasets. For each example, attended images are shown in the first row and the corresponding attention maps are shown in the second row. In the case of the progressive attention network, the last three attention maps (attention 2, 3 and 4) are visualized. As can be seen, the attention maps at deeper layers reveal the evidence of aggregation over earlier attention maps.

Figure 11: More qualitative results of SAN, HAN and PAN-CTX on the MBG dataset.

Figure 12: Two common failure cases of attention models on the MBG dataset. (a) The models attend to a part of a larger structure which resembles the target object. (b) The models are confused by background distractors that are similar to the target object. Although failed, the examples show that the results of PAN-CTX are more visually interpretable (attended to query-like structures).

D MORE QUALITATIVE RESULTS ON VISUAL GENOME

Figure 13: Qualitative results of SAN, HAN and PAN-CTX on the VG dataset. For each example, the attended images are presented in the first row while their attended feature maps are shown in the second row. In the case of PAN, the last two attention maps are visualized; the attention maps at deeper layers reveal the evidence of aggregation of attention information over previous layers. The red boxes within the final attended images represent the ground-truth bounding boxes for the query object annotated in the VG dataset.
Each object may have multiple bounding boxes annotated by different annotators. The annotated answer is presented in the first column, and the percentage reported for each method is the probability that the method assigns to the ground-truth answer.

Figure 14: More qualitative results of SAN, HAN and PAN-CTX on the VG dataset.
SJ8BZTjeg
Under review as a conference paper at ICLR 2017

UNSUPERVISED LEARNING USING GENERATIVE ADVERSARIAL TRAINING AND CLUSTERING

Vittal Premachandran and Alan L. Yuille
Department of Computer Science
Johns Hopkins University
{vittalp, ayuille1}@jhu.edu

ABSTRACT
In this paper, we propose an unsupervised learning approach that makes use of two components: a deep hierarchical feature extractor, and a more traditional clustering algorithm. We train the feature extractor in a purely unsupervised manner using generative adversarial training and, in the process, study the strengths of learning using a generative model as an adversary. We also show that adversarial training as done in Generative Adversarial Networks (GANs) is not sufficient to automatically group data into categorical clusters. Instead, we use a more traditional grouping algorithm, k-means clustering, to cluster the features learned using adversarial training. We experiment on three well-known datasets: CIFAR-10, CIFAR-100 and STL-10. The experiments show that the proposed approach performs similarly to supervised learning approaches, and might even be better in situations with small amounts of labeled training data and large amounts of unlabeled data.

1 INTRODUCTION
Much of the recent work in machine learning and computer vision has focused on learning techniques for high-level tasks such as image classification (Krizhevsky et al. (2012); Simonyan & Zisserman (2014); He et al. (2015)). Many of the state-of-the-art models employ Convolutional Neural Networks (CNNs) to extract high-level feature representations by processing the input data using multiple layers of convolutions, usually followed by some non-linear transform. CNNs have been shown to yield high-quality feature representations that produce state-of-the-art results on a variety of tasks, not only on image classification (as mentioned above), but also on semantic segmentation (Long et al. (2015); Chen et al. (2016a)), boundary detection (Xie & Tu (2015); Premachandran et al. (2015)), and object detection (Girshick et al. (2014)), among others. These models are trained to produce high-quality features using backpropagation, usually by pretraining on a large dataset (such as ImageNet) and then fine-tuning on the relevant dataset. Unfortunately, supervised learning suffers from certain challenges, especially in terms of scalability, since it requires large amounts of labeled data. Labeling millions of images requires extensive effort and is time consuming. Moreover, supervised training with a predefined set of classes limits the generalizability of the learned feature representations to novel classes.

To overcome the difficulties of labeling large amounts of training data, effort has gone into the development of semi-supervised and unsupervised learning techniques. The goal of unsupervised learning techniques is to learn representations that are interpretable, easily transferable to novel tasks and novel object categories, and to disentangle the informative representation of the data from nuisance variables (e.g. lighting, viewpoint, etc.) purely from unlabeled data. A common and widely used method for unsupervised learning is clustering using k-means. k-means clustering is a simple method that groups input features into different clusters. Traditionally, this approach mainly used low-level features such as raw pixel intensities, HOG features, GIST features, SIFT features, etc. Although the performance of k-means on such features is usually poor,
Wang et al. (2015) used deep network features and employed k-means clustering to show strong results on grouping object parts. But the deep network that was used to extract the features was pre-trained on ImageNet using class-label supervision (so object knowledge was known). A natural extension is to see whether one can learn robust features using hierarchical feature learning in a purely unsupervised manner.

However, since the objectives of unsupervised learning are not as concrete as the objectives of supervised learning, optimizing deep hierarchical models using backpropagation becomes difficult. Attempts have been made to come up with "pretext" objective functions, usually driven by "common sense" requirements, to do unsupervised learning. Some examples of these objectives include minimizing the reconstruction error (Vincent et al. (2008)), training models to identify surrogate classes (Dosovitskiy et al. (2014)), predicting the spatial position of image patches (Doersch et al. (2015); Noroozi & Favaro (2016)), and minimizing the distance in the representation space for objects tracked over a time period in a video sequence (Wang & Gupta (2015)).

Recently, much interest has gone into adversarial training. Generative Adversarial Networks (GANs) (Goodfellow et al. (2014)) are of particular interest in this work. Progress in GANs has enabled significant improvement in the quality of images being generated in the past couple of years (Denton et al. (2015); Radford et al. (2015)). While much of the recent effort has gone into the development of better architectures and training procedures for modeling and training the generative network, in this work we systematically study the power of the representations learned by the generator's adversary, i.e., the discriminative model.

In this paper, we learn a deep network using generative adversarial training. We use the features extracted from the discriminative component and fuse them with traditional unsupervised learning algorithms like k-means to improve their performance. We perform various experiments over many different datasets (CIFAR-10, CIFAR-100 and STL-10) and show that the representations that can be learned purely by unsupervised learning from an adversarial signal help to learn meaningful representations of input data. Our experiments show that under situations with minimal amounts of supervised training examples (and large amounts of unsupervised data), the representations learned with adversarial training perform competitively in comparison to supervised training on a similar architecture. We now provide a brief summary of the adversarial training employed by GAN and InfoGAN.

2 BACKGROUND ON ADVERSARIAL TRAINING
Generative Adversarial Networks (Goodfellow et al. (2014)) are composed of two components: the generator, G(·), and the discriminator, D(·). The generator maps a latent encoding to the data space, while the discriminator distinguishes between samples generated by the generator and real data. The generator is trained to fool the discriminator, while the discriminator is trained to not get fooled by the generator.

More formally, given training data samples, x ∼ P_data(x), where P_data(x) is the true data distribution, the training of GANs proceeds by iterating between two steps.
In the first step, we fix the parameters of the generative model, sample a latent code, z ∼ P_noise(z), and generate data samples, G(z), which are then used to train the discriminator, D(·), by updating its parameters to distinguish between G(z) and x. The parameters of the discriminator can be updated by maximizing the expected log-likelihood,

$$\mathbb{E}_{x\sim P_{\text{data}}(x)}[\log(D(x))] + \mathbb{E}_{z\sim P_{\text{noise}}(z)}[\log(1 - D(G(z)))]. \tag{1}$$

In the second step, we fix the parameters of the discriminator and update the parameters of the generator to generate samples that get classified as real by the discriminator. The parameters of G(·) can be updated by minimizing

$$\mathbb{E}_{z\sim P_{\text{noise}}(z)}[\log(1 - D(G(z)))]. \tag{2}$$

The objective of this minimax game can be written as

$$\min_G \max_D V(G, D) = \mathbb{E}_{x\sim P_{\text{data}}(x)}[\log(D(x))] + \mathbb{E}_{z\sim P_{\text{noise}}(z)}[\log(1 - D(G(z)))]. \tag{3}$$

2.1 INFOGAN
The formulation described above uses a noise vector, z, which is used by the generator, G(·), to synthesize data. This noise vector does not impose any constraints on what the generated data should look like. Chen et al. (2016b) introduce a neat and simple idea to extend GANs into a feature-identifying system called InfoGAN. InfoGAN uses a structured latent code, c, which is input to the generator, G(·), in addition to the noise vector, z.
We modified the InfoGANcode released by the authors to enable support of the more realistic RGB data. We then trained themodel on the above mentioned datasets to experiment if it could automatically identify the categor-ical clusters present in the respective datasets. We found that while InfoGAN that we trained onthe above-mentioned datasets was successful in generating images that looked different for differentcategorical codes, it was unable to identify the class-level grouping that is present in these datasets.Instead, we adopt a hybrid strategy for unsupervised learning. We first use the generative networkas an adversary to train the discriminative network until convergence. Upon convergence, we ex-tract features from the penultimate layer of the D(.) network and run a more traditional clusteringalgorithm, i.e., k-means++. Surprisingly, this simple strategy turns out to be much more effectiveat grouping data from similar categories than the approach of directly predicting the categoricalgroups. Note that one can plug in more sophisticated unsupervised learning algorithms instead ofk-means++. We use k-means++ to show that even a simple approach can produce reasonable results.Another motivation for using the features from the penultimate layers is that it facilitates featuretransferability to novel classes and tasks. It is common in the supervised learning approaches to firsttrain a deep network on ImageNet images using class-level supervision, then to perform net surgeryto chop off the top level weights, and using this truncated network as a feature extractor for furtherfine tuning on different datasets and tasks. Doing so does not prevent the model from being trainedonly on the ultimate task that it might be used for. One can train the network on a “pretext” taskand transfer the learned weights to other novel tasks. This is especially crucial for unsupervisedlearning since the pretext task that is used to train the models is almost always much different fromthe specific task that the model will ultimately be used for.3Under review as a conference paper at ICLR 2017conv2dsize=5x5dim=64stride=2Conv2dsize=5x5dim=128stride=2Conv2dsize=5x5dim=256stride=2Conv2dsize=5x5dim=512stride=2fcdim=512LeakyReLULeakyReLUBatchNormLeakyReLUBatchNormLeakyReLUBatchNormT/FQ(c|x)DiscriminativeNetworkxdeconv2Dsize=5x5dim=256stride=2deconv2Dsize=5x5dim=128stride=2deconv2Dsize=5x5dim=64stride=2deconv2Dsize=5x5dim=3stride=2tanhReLUReLUReLUReLUBatchNormBatchNormBatchNormBatchNormfczcGenerativeNetworkG(z,c)Figure 1: Figure shows the InfoGAN architecture that was used in all our experiments. Notice thatthe input to G(.) is a combination of zandc. Also notice that most of the parameters are sharedbetween the Q(.) network and the D(.) network, thus improving the computational efficiency.3.1 N ETWORK ARCHITECTUREWe use the DCGAN architecture from Radford et al. (2015) since it is widely used for generatingimages. Figure 1 shows a visualization of the architecture.Generator: Note that the generator has been slightly modified to accept the structured latent code,c, in addition to the random noise, z. The first layer is a fully-connected (fc) layer,which is then reshaped into a 2-D grid of spatial resolution s=16s=16, wheresis the size ofthe output image to be produced. Subsequent to this reshaping, the architecture has four layers oftransposed convolution (sometimes referred to as deconvolution) with a stride of 2, eachof which upsamples the input features to twice the spatial resolution. 
3.1 NETWORK ARCHITECTURE
We use the DCGAN architecture from Radford et al. (2015), since it is widely used for generating images. Figure 1 shows a visualization of the architecture.

Generator: Note that the generator has been slightly modified to accept the structured latent code, c, in addition to the random noise, z. The first layer is a fully-connected (fc) layer, which is then reshaped into a 2-D grid of spatial resolution s/16 × s/16, where s is the size of the output image to be produced. Subsequent to this reshaping, the architecture has four layers of transposed convolution (sometimes referred to as deconvolution) with a stride of 2, each of which upsamples the input features to twice the spatial resolution. These layers are sandwiched by batch norm and ReLU layers. Finally, we use a tanh non-linearity to map the features into [−1, 1].

Discriminator: The discriminator is a standard CNN with a series of convolutional layers followed by non-linearities. The architecture uses four convolutional layers sandwiched by batch norm and leaky ReLU layers. We do not use max pooling to reduce the spatial resolution of the input. Instead, we convolve the feature maps with a stride of two, which results in the output of each convolution layer being half the spatial resolution of the input feature map. This base architecture is shared between D(·) and Q(·). On top of this shared network, we use an fc layer to extract the features, which are then used to predict the categorical distribution. Notice that most of the computational cost is shared between the D(·) and Q(·) networks, thereby making the entire training process computationally efficient.

3.2 UNSUPERVISED LEARNING WITH K-MEANS++
As mentioned previously, while InfoGAN has the ability to group data into multiple groups automatically, there is no constraint to enforce that the groups correspond to the various object-level categories that are present in the dataset. While this turned out to be true for the MNIST dataset (Chen et al. (2016b)), we believe that it was possible because the variations in the strokes that produce different digits correspond to the biggest source of variation in the dataset, which conveniently corresponds to the various digit categories, thereby enabling InfoGAN to act as a category recognition model. In more realistic datasets, the biggest sources of variation need not (and usually do not) correspond to variations in the object-level categories. Our experiments show this to be true. When we trained InfoGAN to automatically group the CIFAR-10 images into 10 categories, we found that while InfoGAN was able to group the images into different groups, the groups did not correspond to object category-level groupings. Figure 2 shows example samples generated by the model: each row corresponds to a different category and each column in a row corresponds to a different sample from that category (obtained by keeping c fixed and varying z). We can see that while the rows look different from each other, they do not correspond to the CIFAR-10 categories.

Therefore, we employ a hybrid approach to unsupervised clustering. We first train the discriminative network using either the vanilla GAN objective or the InfoGAN objective, until convergence. Upon convergence, we extract features for each image in the training set from the top of the shared network, labeled as Φ(x) in Figure 1, and do average pooling across the spatial resolution for each feature channel. We then cluster these features using k-means++ into a discrete set of k categories. We set k to be the number of object classes present in the respective dataset. The cluster centers learned by k-means++ clustering act as the templates for the k categories present in the dataset.

During testing, we extract the feature representation of the test images by passing them through the discriminative network trained using the generator as an adversary, do average pooling on Φ(x), and compute the distance of the test feature vector to each of the centers learnt by k-means++ clustering during the training phase. The test image is assigned the index of the closest center.
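The training/testing procedure just described amounts to a few lines on top of scikit-learn. A minimal sketch is given below; the array names are ours, and Φ(x) is assumed to be available as a NumPy array of shape (N, C, H, W).

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_centers(phi_train, k):
    """Average-pool Phi(x) spatially, then run k-means++ on the pooled features."""
    pooled = phi_train.mean(axis=(2, 3))                     # (N, C)
    return KMeans(n_clusters=k, init="k-means++", n_init=10).fit(pooled)

def assign(km, phi_test):
    """Assign each test image to its nearest k-means++ center."""
    return km.predict(phi_test.mean(axis=(2, 3)))
```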
Our experiments show that clustering on Φ(x) produces better results than directly using the recognition model of InfoGAN. Note that while we use the simple k-means++ algorithm for clustering, it could be replaced by more sophisticated unsupervised learning algorithms. We do not explore further down this route, since the scope of this work is to study the strength of the features learned by adversarial training.

Figure 2: Samples generated from InfoGAN trained on the CIFAR-10 dataset when the system was encouraged to identify 10 categories. Each row corresponds to a different cluster identified by InfoGAN, and each column corresponds to a different sample from that cluster. We can see that while InfoGAN can identify clusters that are different from each other, they do not correspond to the CIFAR-10 categories. See Sec. 4.1 for quantitative results.

An advantage of the hybrid approach is that it now allows us to use a variety of different "pretext" objectives. In other words, one can decouple the training objective from the testing requirements. In fact, we experimented with encouraging InfoGAN to identify more groups in the training data than the number of object categories in the dataset. For example, we trained InfoGAN on the CIFAR-10 dataset encouraging the system to identify [10, 20, 30, 35, 40, 50 and 75] groups. Of course, these groups do not correspond to category-level groupings. However, to our surprise, we found that when the features obtained from InfoGANs trained with a large number of categories were used for clustering, they performed better at object categorization than the features obtained from an InfoGAN trained with the same number of object categories as present in the dataset. Section 4 provides quantitative results on these experiments.

4 EXPERIMENTS
We perform experiments on multiple datasets: CIFAR-10, CIFAR-100 and STL-10.¹ We use ground-truth labels only for evaluation purposes and for training the supervised learning baseline. The training procedure is entirely unsupervised. We report results using two standard metrics for evaluating unsupervised learning algorithms: the Adjusted Rand Index (ARI) and the Normalized Mutual Information (NMI) score. We provide three baselines: (i) we report results using simple features such as raw pixel intensities, HOG and GIST, which we call low-level visual features; (ii) we report results on the features obtained using standard GAN training; (iii) as an upper bound, we report results using supervised learning, where we train the weights of a discriminator network with the same architecture using the category-level labels provided by the datasets.

¹We have released the code used in all our experiments at https://github.com/VittalP/UnsupGAN

It is important to remember that we are interested in comparing the quality of the learned features that can be used for transfer to novel images, and not just the classification score on a pre-defined set of categories. The classification accuracy captures only how well a test image was correctly classified; if it is incorrectly classified, it does not quantify how bad the mistake was. ARI, on the other hand, is a better metric for evaluating the properties of the features because it measures not only how accurately pairs of objects were correctly grouped together, but also takes into account how many pairs of data points were incorrectly grouped.
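Both metrics are available off the shelf. The toy example below (labels are our own) also shows that they are invariant to permutations of the cluster indices, which is why no label alignment is needed before evaluation.

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

y_true = np.array([0, 0, 1, 1, 2, 2])   # ground-truth categories (evaluation only)
y_pred = np.array([1, 1, 0, 0, 2, 2])   # cluster indices; a permutation of y_true
print(adjusted_rand_score(y_true, y_pred))           # 1.0: perfect grouping
print(normalized_mutual_info_score(y_true, y_pred))  # 1.0: perfect grouping
```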
Therefore, when comparing with the model trained using supervised learning, we ignore the top-level classification layer of that model and quantify the quality of the representations, i.e., the features extracted from the penultimate layer, using ARI after clustering on them.

Figure 3: All 64 filters from the first layer of the discriminative network trained on CIFAR-10. The visualization on the left corresponds to the filters learned using adversarial training; the visualization on the right corresponds to the filters learned for the same architecture using supervised learning. It is interesting to see that the filters on the left have more high-frequency components while the filters on the right are smoother.

Before we go into the quantitative results, we visualize the first-layer filters of the discriminative network and compare them across the two training procedures. Figure 3 shows the visualization. On the left are the filters from the network trained using adversarial training; on the right are the filters from a network with the same architecture but trained using class-level supervision. Both networks were trained on the CIFAR-10 dataset. We can see that while some of the filters look similar to each other, many are quite different, and it is clear that the filters on the right are smoother than the filters on the left. Recall that the filters on the left are trained to fit both the real images and the generated images. When the generated images are not of as high quality as the real images, the filters that D(·) learns might not be as regularized as the ones learnt using only real data. We hypothesize that improving the quality of the generated images can help regularize the first-layer filters in D(·). We leave this route of exploration for future work.

Figure 4: CIFAR-10: (a) Performance of the grouping algorithm when using the features learned from InfoGAN training with different numbers of categories. Zero groups corresponds to vanilla GAN; -32 and -64 correspond to the output sizes of the generated images; -InfoGAN corresponds to the results obtained with direct prediction using the recognition model in InfoGAN. (b) Note that InfoGAN features perform better than vanilla GAN features. However, supervised learning outperforms unsupervised learning on this database.

4.1 CIFAR-10
The CIFAR-10 dataset consists of 50k training images and 10k testing images, of size 32×32, divided among 10 categories. We trained the model for two different image sizes, 32×32 and 64×64, and trained InfoGAN with different numbers of categories {10, 20, 30, 35, 40, 50, 75}. Figure 4a shows a plot of the performance measures versus the number of groups InfoGAN was trained to identify. We can see from the figure that as we increase the number of categories, the performance of the model goes up to a certain point and drops after that. This indicates that there exist databases for which grouping into more categories than present in the ground truth might help. We also plot the performance of the InfoGAN model when used directly as a prediction model. We can see from the plots that k-means++ clustering produces better results (ARI-32 = 0.097; NMI-32 = 0.18) than direct prediction (ARI-32-InfoGAN: 0.085; NMI-32-InfoGAN: 0.14).
We label the direct prediction results with (-InfoGAN).

Figure 4b compares the performance when using different features. We can see that InfoGAN features trained with 50 clusters beat the features learned using vanilla GAN by a small margin. However, supervised training does much better (as one might have expected).

4.2 CIFAR-100
In this set of experiments, we use the images from the CIFAR-100 database for training. This database also contains 50k training examples and 10k test images, divided among 100 fine-scale categories and 20 coarse-level categories. We test the performance on the coarse categories. As before, we experiment with InfoGAN training using multiple numbers of categories {10, 20, 35, 50}. While the trend is not as noticeable as in the case of CIFAR-10, the best performance is obtained when we use 50 categories. Also as before, k-means++ clustering of the features produces better performance (ARI = 0.04) than the recognition model of InfoGAN (ARI = 0.036).

Figure 5: CIFAR-100: (a) The number of groups used to train InfoGAN has less of an effect on CIFAR-100 than it had on CIFAR-10. However, the performance of k-means++ clustering is still better than direct prediction using the recognition model of InfoGAN. Please see Fig. 4a for labeling conventions. (b) InfoGAN features and GAN features perform similarly on this dataset. However, supervised learning features are only slightly better than the unsupervised counterparts.

Figure 5b compares the performance when we use different features. Notice that the features obtained by adversarial training are as competitive as the features obtained using supervised training. We believe that this is because of two reasons: (i) the CIFAR-100 coarse-level categories are much harder to distinguish than the CIFAR-10 categories, making it difficult for the supervised model to learn good features; (ii) the number of training examples per category in CIFAR-100 is smaller than in CIFAR-10, because we are training using the 20 coarse categories compared with the 10 of CIFAR-10. We label the direct prediction results with (-InfoGAN).

4.3 STL-10
Finally, we also perform experiments on the STL-10 dataset. This database consists of 5000 labeled training images, 100000 unlabeled training images, and 8000 images for testing. The dataset consists of 10 categories, and all the images are of size 96×96. This dataset brings out the advantages of unsupervised learning algorithms: the database is more than two times bigger than the CIFAR-10 and CIFAR-100 datasets in terms of the number of images, and each image is 9 times the size of the CIFAR images. Figure 6b shows that unsupervised learning with adversarial training outperforms the same models trained using supervised learning. From Figure 6a, we also notice that the features learned using vanilla GAN do better than the features learned using InfoGAN. Increasing the complexity of the dataset makes it difficult for InfoGAN to group the images in the dataset.

5 CONCLUSION
In this paper, we explore an unsupervised feature learning technique where the model is trained using adversarial training from a generative network. We use a generative model to generate images that act as an adversary to the discriminative network. We explore the standard GAN architecture and the InfoGAN architecture for training the discriminative model. We also show that direct prediction using InfoGAN's recognition model does not always result in identifying object category-level information.
Instead, we fuse the features learned by adversarial training with a traditional unsupervised learning approach, k-means clustering, and show that this combination produces better results than direct prediction. We also show that, in situations where there are limited amounts of labeled training data and large amounts of unlabeled data, adversarial training has the potential to outperform supervised learning.

Figure 6: STL-10: (a) InfoGAN's performance drops with an increase in the number of groups. (b) Vanilla GAN's features outperform InfoGAN-trained features. Also notice that, with just 5000 labeled training images, supervised learning starts to reach its limits; our model, however, makes use of the additional 100000 unlabeled images and is able to learn representations that surpass the performance of the features learned using the supervised model.

REFERENCES
Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. arXiv preprint arXiv:1606.00915, 2016a.
Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In NIPS, 2016b.
Emily L. Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems, pp. 1486–1494, 2015.
Carl Doersch, Abhinav Gupta, and Alexei A. Efros. Unsupervised visual representation learning by context prediction. In International Conference on Computer Vision, 2015.
Alexey Dosovitskiy, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 766–774, 2014.
Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587, 2014.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440, 2015.
Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. 2016. URL http://arxiv.org/abs/1603.09246.
Vittal Premachandran, Boyan Bonev, Xiaochen Lian, and Alan L. Yuille. PASCAL boundaries: A class-agnostic semantic boundary dataset. CoRR, abs/1511.07951, 2015. URL http://arxiv.org/abs/1511.07951.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks.
arXiv preprint arXiv:1511.06434, 2015.
K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pp. 1096–1103. ACM, 2008.
Jianyu Wang, Zhishuai Zhang, Vittal Premachandran, and Alan Yuille. Discovering internal representations from object-CNNs using population encoding. arXiv preprint arXiv:1511.06855, 2015.
Xiaolong Wang and Abhinav Gupta. Unsupervised learning of visual representations using videos. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802, 2015.
Saining Xie and Zhuowen Tu. Holistically-nested edge detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1395–1403, 2015.
Sy6iJDqlx
Published as a conference paper at ICLR 2017

ATTEND, ADAPT AND TRANSFER: ATTENTIVE DEEP ARCHITECTURE FOR ADAPTIVE TRANSFER FROM MULTIPLE SOURCES IN THE SAME DOMAIN

Janarthanan Rajendran, University of Michigan, rjana@umich.edu
Aravind S. Lakshminarayanan, Indian Institute of Technology Madras, aravindsrinivas@gmail.com
Mitesh M. Khapra, Indian Institute of Technology Madras, miteshk@cse.iitm.ac.in
Prasanna P, McGill University, prasanna.p@cs.mcgill.ca
Balaraman Ravindran, Indian Institute of Technology Madras, ravi@cse.iitm.ac.in
(Authors contributed equally.)

ABSTRACT
Transferring knowledge from prior source tasks in solving a new target task can be useful in several learning applications. The application of transfer poses two serious challenges which have not been adequately addressed. First, the agent should be able to avoid negative transfer, which happens when the transfer hampers or slows down the learning instead of helping it. Second, the agent should be able to selectively transfer, which is the ability to select and transfer from different and multiple source tasks for different parts of the state space of the target task. We propose A2T (Attend, Adapt and Transfer), an attentive deep architecture which adapts and transfers from these source tasks. Our model is generic enough to effect transfer of either policies or value functions. Empirical evaluations on different learning algorithms show that A2T is an effective architecture for transfer by being able to avoid negative transfer while transferring selectively from multiple source tasks in the same domain.

1 INTRODUCTION
One of the goals of Artificial Intelligence (AI) is to build autonomous agents that can learn and adapt to new environments. Reinforcement Learning (RL) is a key technique for achieving such adaptability. The goal of RL algorithms is to learn an optimal policy for choosing actions that maximize some notion of long-term performance. Transferring knowledge gained from tasks solved earlier to solve a new target task can help, either in terms of speeding up the learning process or in terms of achieving a better solution, among other performance measures. When applied to RL, transfer could be accomplished in many ways (see Taylor & Stone (2009; 2011) for a very good survey of the field). One could use the value function from the source task as an initial estimate in the target task to cut down exploration [Sorg & Singh (2009)]. Alternatively, one could use policies from the source task(s) in the target task. This can take one of two forms: (i) the derived policies can be used as initial exploratory trajectories [Atkeson & Schaal (1997); Niekum et al. (2013)] in the target task, and (ii) the derived policy could be used to define macro-actions which may then be used by the agent in solving the target task [Mannor et al. (2004); Brunskill & Li (2014)].

While transfer in RL has been much explored, there are two crucial issues that have not been adequately addressed in the literature. The first is negative transfer, which occurs when the transfer results in a performance that is worse when compared to learning from scratch in the target task. This severely limits the applicability of many transfer techniques only to cases for which some measure of relatedness between source and target tasks can be guaranteed beforehand. This brings us to the second problem with transfer, which is the issue of identifying an appropriate source task from which to transfer.
In some scenarios, different source tasks might be relevant and useful for different parts of the state space of the target task. As a real-world analogy, consider multiple players (experts) who are good at different aspects of a game (say, tennis). For example, Player 1 is good at playing backhand shots while Player 2 is good at playing forehand shots. Consider the case of a new player (agent) who wants to learn tennis by selectively learning from these two experts. We handle such a situation in our architecture by allowing the agent to learn how to pick and use solutions from multiple and different source tasks while solving a target task, selectively applicable for different parts of the state space. We call this selective transfer. Our agent can transfer knowledge from Player 1 when required to play backhand shots and from Player 2 for playing forehand shots. Further, let us consider the situation that both Player 1 and Player 2 are bad at playing drop shots. Apart from the source tasks, we maintain a base network that learns from scratch on the target task. The agent can pick and use the solution of the base network when solving the target task in the parts of the state space where transferring from the source tasks is negative. Such a situation could arise when the source task solutions are irrelevant for solving the target task over a specific portion of the state space, or when transferring from the source tasks is negative over a specific portion of the state space (for example, transferring the bad drop-shot abilities of Players 1 and 2). This situation also entails the first problem of avoiding negative transfer. Our framework allows an agent to avoid transferring from both Players 1 and 2 while learning to play drop shots, and rather acquire the drop-shot skill by learning to use the base network. The architecture is trained such that the base network uses not just the experience obtained through the usage of its own solutions in the target task, but the overall experience acquired using the combined knowledge of the source tasks and itself. This enables the base network solutions to get closer to the behavior of the overall architecture (which uses the source task solutions as well). This makes it easier for the base network to assist the architecture in fine-tuning the useful source task solutions to suit the target task perfectly over time.

The key contribution of the architecture is a deep attention network that decides which solutions to attend to for a given input state. The network learns solutions as a function of the current state, thereby aiding the agent in adopting different solutions for different parts of the state space in the target task.

To this end, we propose A2T: Attend, Adapt and Transfer, an attentive deep architecture for adaptive transfer, that avoids negative transfer while performing selective transfer from multiple source tasks in the same domain. In addition to the tennis example, A2T is a fairly generic framework that can be used to selectively transfer different skills available from different experts as appropriate to the situation. For instance, a household robot can appropriately use skills from different experts for different household chores. This would require the ability to transfer manipulation skills across objects, tasks and robotic actuators. With a well-developed attention mechanism, the most appropriate and helpful combination of object-skill-controller can be identified for aiding the learning on a related new task.
Further, A2T is generic enough to effect transfer of either action policies or action-value functions, as the case may be. We also adapt different algorithms in reinforcement learning as appropriate for the different settings, and empirically demonstrate that A2T is effective for transfer learning in each setting.

2 RELATED WORK
As mentioned earlier, transfer learning approaches could deal with transferring policies or value functions. For example, Banerjee & Stone (2007) describe a method for transferring value functions by constructing a Game tree. Similarly, Sorg & Singh (2009) use the value function from a source task as the initial estimate of the value function in the target task.

Another method to achieve transfer is to reuse policies derived in the source task(s) in the target task. Probabilistic Policy Reuse, as discussed in Fernández & Veloso (2006), maintains a library of policies and selects a policy based on a similarity metric, or a random policy, or a max-policy from the knowledge obtained. This is different from the proposed approach in that the proposed approach can transfer policies at the granularity of individual states, which is not possible in policy reuse, rendering it unable to learn a customized policy at that granularity. Atkeson & Schaal (1997); Niekum et al. (2013) evaluated the idea of having the transferred policy from the source tasks as explorative policies instead of having a random exploration policy. This provides better exploration behavior provided the tasks are similar. Talvitie & Singh (2007) try to find the promising policy from a set of candidate policies that are generated using different action mappings to a single solved task. In contrast, we make use of one or more source tasks to selectively transfer policies at the granularity of states. Apart from policy transfer and value transfer as discussed above, Ferguson & Mahadevan (2006) discuss representation transfer using Proto-Value Functions.

The ideas of negative and selective transfer have been discussed earlier in the literature. For example, Lazaric & Restelli (2011) address the issue of negative transfer in transferring samples for a related task in a multi-task setting. Konidaris et al. (2012) discuss the idea of exploiting shared common features across related tasks. They learn a shaping function that can be used in later tasks.

The two recent works that are very relevant to the proposed architecture are discussed in Parisotto et al. (2015) and Rusu et al. (2016). Parisotto et al. (2015) explore transfer learning in RL across Atari games by trying to learn a multi-task network over the source tasks available and directly fine-tuning the learned multi-task network on the target task. However, fine-tuning as a transfer paradigm cannot address the issue of negative transfer, which they do observe in many of their experiments. Rusu et al. (2016) try to address the negative transfer issue by proposing a sequential learning mechanism where the filters of the network being learned for an ongoing task depend, through lateral connections, on the lower-level filters of the networks already learned for previous tasks. The idea is to ensure that dependencies that characterize similarity across tasks can be learned through these lateral connections.
Even though they do observe better transfer results than direct fine-tuning, they are still not able to avoid negative transfer in some of their experiments.

3 PROPOSED ARCHITECTURE
Let there be N source tasks and let K_1, K_2, ..., K_N be the solutions of these source tasks 1, ..., N respectively. Let K_T be the solution that we learn in the target task T. Source tasks refer to tasks that we have already learnt to perform, and the target task refers to the task that we are interested in learning now. These solutions could be, for example, policies or state-action values. Here the source tasks should be in the same domain as the target task, having the same state and action spaces. We propose a setting where K_T is learned as a function of K_1, ..., K_N, K_B, where K_B is the solution of a base network which starts learning from scratch while acting on the target task. In this work, we use a convex combination of the solutions to obtain K_T:

$$K_T(s) = w_{N+1,s} K_B(s) + \sum_{i=1}^{N} w_{i,s} K_i(s) \tag{1}$$

$$\sum_{i=1}^{N+1} w_{i,s} = 1, \quad w_{i,s} \in [0, 1] \tag{2}$$

w_{i,s} is the weight given to the ith solution at state s.

The agent uses K_T to act in the target task. Figure 1a shows the proposed architecture. While the source task solutions K_1, ..., K_N remain fixed, the base network solutions are learnt and hence K_B can change over time. There is a central network which learns the weights (w_{i,s}, i ∈ 1, 2, ..., N+1) given the input state s. We refer to this network as the attention network. The [0, 1] weights determine the attention each solution gets, allowing the agent to selectively accept or reject the different solutions depending on the input state. We adopt a soft-attention mechanism whereby more than one weight can be non-zero [Bahdanau et al. (2014)], as opposed to a hard-attention mechanism [Mnih et al. (2014)] where we are forced to have only one non-zero weight:

$$w_{i,s} = \frac{\exp(e_{i,s})}{\sum_{j=1}^{N+1} \exp(e_{j,s})}, \quad i \in \{1, 2, \ldots, N+1\} \tag{3}$$

Figure 1: (a) A2T architecture. The dotted arrows represent the path of back-propagation. (b) Actor-Critic using A2T.

$$(e_{1,s}, e_{2,s}, \ldots, e_{N+1,s}) = f(s; \theta_a) \tag{4}$$

Here, f(s; θ_a) is a deep neural network (the attention network), which could consist of convolution layers and fully connected layers depending on the representation of the input. It is parametrized by θ_a and takes as input a state s and outputs a vector of length N+1, which gives the attention scores for the N+1 solutions at state s. Eq. (3) normalizes this score to get the weights that follow Eq. (2).

If the ith source task solution is useful at state s, then w_{i,s} is set to a high value by the attention network. Working at the granularity of states allows the attention network to attend to different source tasks for different parts of the state space of the target task, thus giving it the ability to perform selective transfer. For parts of the state space in the target task where the source task solutions cause negative transfer or are not relevant, the attention network learns to give a high weight to the base network solution (which can be learnt and improved), thus avoiding negative transfer.

Depending on the feedback obtained from the environment upon following K_T, the attention network's parameters θ_a are updated to improve performance.

As mentioned earlier, the source task solutions, K_1, ..., K_N, remain fixed. Updating these source tasks' parameters would cause a significant amount of unlearning in the source task solutions and result in a weaker transfer, which we observed empirically. This also enables the use of source task solutions as long as we have their outputs alone, irrespective of how and where they come from.
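A sketch of Eqs. (1)–(4) in PyTorch is given below. The module layout, the two-layer MLP and the hidden width are our own illustrative choices; the source networks are frozen exactly as described.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class A2T(nn.Module):
    """Convex combination of N frozen source solutions and a learnable base
    solution, with state-dependent soft-attention weights (Eqs. 1-4)."""
    def __init__(self, state_dim, source_nets, base_net, hidden=128):
        super().__init__()
        self.sources = source_nets     # list of N frozen networks, K_1..K_N
        self.base = base_net           # base network K_B, trained from scratch
        self.attn = nn.Sequential(     # f(s; theta_a): N+1 attention scores e_{i,s}
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, len(source_nets) + 1))

    def forward(self, s):
        with torch.no_grad():                    # source solutions stay fixed
            ks = [net(s) for net in self.sources]
        ks.append(self.base(s))                  # K_B(s) is learnable
        K = torch.stack(ks, dim=1)               # (B, N+1, out_dim)
        w = F.softmax(self.attn(s), dim=-1)      # Eq. (3): convex weights
        k_t = (w.unsqueeze(-1) * K).sum(dim=1)   # Eq. (1): K_T(s)
        return k_t, w
```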
Keeping the source task solutions fixed also enables their use as long as we have access to their outputs alone, irrespective of how and where they come from.

Even though the agent follows K_T, we update the parameters of the base network that produces K_B as if the action taken by the agent was based only on K_B. Due to this special way of updating K_B, apart from the experience gained through the unique and individual contribution of K_B to K_T in parts of the state space where the source task solutions are not relevant, K_B also uses the valuable experience gained by using K_T, which draws on the solutions of the source tasks as well.

This also means that if there is a source task whose solution K_j is useful for the target task in some parts of its state space, then K_B tries to replicate K_j in those parts of the state space. In practice, the source task solutions, though useful, might need to be modified to suit the target task perfectly. The base network takes care of the modifications required to make the useful source task solutions perfect for the target task, and the special way of training the base network helps the architecture achieve this faster. Note that the agent can follow/use K_j through K_T even before K_B has replicated it in the corresponding parts of the state space. This allows for good performance of the agent in the earlier stages of training itself, when a useful source task is available and identified.

Since the attention is soft, our model has the flexibility to combine multiple solutions. The use of deep neural networks allows the model to work even for large, complex RL problems. The deep attention network allows the agent to learn complex selection functions without worrying about representation issues a priori. To summarise, for a given state, A2T learns to attend to specific solutions and adapts this attention over different states, hence attaining useful transfer. A2T is general and can be used for transfer of solutions such as policies and value functions.

3.1 POLICY TRANSFER

The solutions that we transfer here are the source task policies, taking advantage of which we learn a policy for the target task. Thus, we have K_1, ..., K_N, K_B, K_T ← π_1, ..., π_N, π_B, π_T. Here π represents a stochastic policy, a probability distribution over all the actions. The agent acts in the target task by sampling actions from the probability distribution π_T. The target task policy π_T is obtained as described in Eq. (1) and Eq. (2). The attention network that produces the weights for the different solutions is trained on the feedback received after taking an action following π_T. The base network that produces π_B is trained as if the sampled action came from π_B (though it originally came from π_T), the implications of which were discussed in the previous section. When the attention network's weight for the policy π_B is high, the mixture policy π_T is dominated by π_B, and the base network learning is nearly on-policy. In the other cases, π_B undergoes off-policy learning. But even in the latter case, since π_B moves towards π_T, it stays nearly on-policy most of the time. Empirically, we observe that π_B converges. This architecture for policy transfer can be used alongside any algorithm that has an explicit representation of the policy.
Here we describe two instantiations of A2T for policy transfer: one for direct policy search using the REINFORCE algorithm, and another in the Actor-Critic setup.

3.1.1 POLICY TRANSFER IN REINFORCE ALGORITHMS USING A2T:

REINFORCE algorithms (Williams (1992)) can be used for direct policy search by making weight adjustments in a direction that lies along the gradient of the expected reinforcement. The full architecture is the same as the one shown in Fig. 1a with K ← π. We do direct policy search, and the parameters are updated using REINFORCE. Let the attention network be parametrized by θ_a and the base network which outputs π_B be parametrized by θ_b. The updates are given by:

θ_a ← θ_a + α_a (r − b) ∂[Σ_{t=1}^{M} log π_T(s_t, a_t)] / ∂θ_a    (5)

θ_b ← θ_b + α_b (r − b) ∂[Σ_{t=1}^{M} log π_B(s_t, a_t)] / ∂θ_b    (6)

where α_a, α_b are non-negative learning-rate factors, r is the return obtained in the episode, b is some baseline, and M is the length of the episode. a_t is the action sampled by the agent at state s_t following π_T. Note that while π_T(s_t, a_t) is used in the update of the attention network, π_B(s_t, a_t) is used in the update of the base network.

3.1.2 POLICY TRANSFER IN ACTOR-CRITIC USING A2T:

Actor-Critic methods (Konda & Tsitsiklis (2000)) are Temporal Difference (TD) methods with two separate components, viz., an actor and a critic. The actor proposes a policy, whereas the critic estimates the value function to critique the actor's policy. The actor is updated through the TD error, the one-step estimation error that helps in reinforcing the agent's behaviour.

We use A2T for the actor part of the Actor-Critic. The architecture is shown in Fig. 1b. The actor, A2T, is aware of all the previously learnt tasks and tries to use their solution policies for its benefit. The critic evaluates the action selection from π_T on the basis of the performance on the target task. With the same notation as for REINFORCE for s_t, a_t, θ_a, θ_b, α_a, α_b, π_B, π_T, let the action a_t dictated by π_T lead the agent to the next state s_{t+1} with a reward r_{t+1}, let V(s_t) represent the value of state s_t, and let γ be the discount factor. Then the update equations for the actor are:

δ_t = r_{t+1} + γ V(s_{t+1}) − V(s_t)    (7)

θ_a ← θ_a + α_a δ_t ∂log π_T(s_t, a_t) / ∂θ_a    (8)

θ_b ← θ_b + α_b δ_t ∂log π_B(s_t, a_t) / ∂θ_b    (9)

Here, δ_t is the TD error. The state-value function V of the critic is learnt using TD learning.
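A minimal PyTorch sketch of one A2T REINFORCE update, covering Eqs. (5)-(6); the names episode, attention_net, base_net and source_pis are hypothetical placeholders. The actor-critic variant of Eqs. (7)-(9) is obtained by replacing the scaled return with the TD error δ_t.

import torch

def a2t_reinforce_update(episode, source_pis, attention_net, base_net,
                         opt_a, opt_b, baseline=0.0):
    # episode: list of (state, action, reward) tuples gathered by following pi_T
    R = sum(r for (_, _, r) in episode)                    # episode return
    loss_a = loss_b = 0.0
    for s, a, _ in episode:
        w = torch.softmax(attention_net(s), dim=-1)        # Eq. (3)
        pi_b = torch.softmax(base_net(s), dim=-1)
        # source policies and pi_B are held fixed for the attention loss
        pis = [p(s).detach() for p in source_pis] + [pi_b.detach()]
        pi_t = sum(w[i] * pis[i] for i in range(len(pis))) # Eq. (1)
        loss_a = loss_a - (R - baseline) * torch.log(pi_t[a])  # Eq. (5): log pi_T
        loss_b = loss_b - (R - baseline) * torch.log(pi_b[a])  # Eq. (6): log pi_B
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()     # update theta_a only
    opt_b.zero_grad(); loss_b.backward(); opt_b.step()     # update theta_b only

Note the asymmetry discussed above: the attention loss is built from the mixture π_T, while the base loss pretends the sampled action came from π_B.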
3.2 VALUE TRANSFER

In this case, the solutions being transferred are the source tasks' action-value functions, which we will call Q functions. Thus, K_1, ..., K_N, K_B, K_T ← Q_1, ..., Q_N, Q_B, Q_T. Let A represent the discrete action space for the tasks and Q_i(s) = {Q(s, a_j) ∀ a_j ∈ A}. The agent acts using Q_T in the target task, which is obtained as described in Eq. (1) and Eq. (2). The attention network and the base network of A2T are updated as described in the architecture.

3.2.1 VALUE TRANSFER IN Q-LEARNING USING A2T:

The state-action value function Q is used to guide the agent in selecting the optimal action a at a state s, where Q(s, a) is a measure of the long-term return obtained by taking action a at state s. One way to learn optimal policies for an agent is to estimate the optimal Q(s, a) for the task. Q-learning (Watkins & Dayan (1992)) is an off-policy Temporal Difference (TD) learning algorithm that does so. The Q-values are updated iteratively through the Bellman optimality equation (Puterman (1994)) with the rewards obtained from the task:

Q(s, a) ← E[r(s, a, s′) + γ max_{a′} Q(s′, a′)]

In high-dimensional state spaces, it is infeasible to update the Q-value for all possible state-action pairs. One way to address this issue is to approximate Q(s, a) through a parametrized function approximator Q(s, a; θ), thereby generalizing over states and actions by operating on higher-level features (Sutton & Barto (1998)). The DQN (Mnih et al. (2015)) approximates the Q-value function with a deep neural network so as to predict Q(s, a) over all actions a, for all states s. The loss function used for learning a Deep Q-Network is:

L(θ) = E_{s,a,r,s′}[(y^DQN − Q(s, a; θ))²],  with  y^DQN = r + γ max_{a′} Q(s′, a′; θ⁻)

Here, L represents the expected TD error with respect to the current parameter estimate θ. θ⁻ represents the parameters of a separate target network, while θ represents the parameters of the online network. The target network is used to improve the stability of the learning updates. The gradient descent step is:

∇_θ L(θ) = E_{s,a,r,s′}[(y^DQN − Q(s, a; θ)) ∇_θ Q(s, a; θ)]

To avoid correlated updates from learning on the same transitions that the current network simulates, an experience replay (Lin (1993)) D (of fixed maximum capacity) is used, where the experiences are pooled in a FIFO fashion.

We use DQN to learn our experts Q_i, i ∈ 1, 2, ..., N, on the source tasks. Q-learning is used to ensure that Q_T(s) is driven to a good estimate of the Q function for the target task. Taking advantage of the off-policy nature of Q-learning, both Q_B and Q_T can be learned from the experiences gathered by an ε-greedy behavioral policy based on Q_T. Let the attention network that outputs w be parametrised by θ_a and the base network outputting Q_B be parametrised by θ_b. Let θ_a⁻ and θ_b⁻ represent the parameters of the respective target networks. Note that the usage of target here signifies the parameters (θ_a⁻, θ_b⁻) used to calculate the target value in the Q-learning update, and is different from its usage in the context of the target task. The update equations are:

y_{Q_T} = r + γ max_{a′} Q_T(s′, a′; θ_a⁻, θ_b⁻)    (10)

L_{Q_T}(θ_a, θ_b) = E_{s,a,r,s′}[(y_{Q_T} − Q_T(s, a; θ_a, θ_b))²]    (11)

L_{Q_B}(θ_b) = E_{s,a,r,s′}[(y_{Q_T} − Q_B(s, a; θ_b))²]    (12)

∇_{θ_a} L_{Q_T} = E[(y_{Q_T} − Q_T(s, a)) ∇_{θ_a} Q_T(s, a)]    (13)

∇_{θ_b} L_{Q_B} = E[(y_{Q_T} − Q_B(s, a)) ∇_{θ_b} Q_B(s, a)]    (14)

θ_a and θ_b are updated with the above gradients using RMSProp. Note that the Q-learning updates for both the attention network (Eq. (11)) and the base network (Eq. (12)) use the target value generated by Q_T. We use target networks for both Q_B and Q_T to stabilize the updates and reduce non-stationarity, as in DQN training. The parameters of the target networks are periodically updated to those of the online networks.
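The following PyTorch sketch illustrates the targets and losses of Eqs. (10)-(12). Here q_t (the attention-combined network), q_b and the target copy q_t_target are hypothetical module names; the actual training additionally uses experience replay, reward clipping and RMSProp as described above.

import torch

def a2t_value_losses(batch, q_t, q_b, q_t_target, gamma=0.99):
    # batch: tensors (s, a, r, s2, done) sampled from the replay memory;
    # q_t(s) returns the combined Q_T(s, .), q_b(s) returns Q_B(s, .).
    s, a, r, s2, done = batch
    with torch.no_grad():                       # Eq. (10): frozen target copies
        y = r + gamma * (1.0 - done) * q_t_target(s2).max(dim=1).values
    q_t_sa = q_t(s).gather(1, a.unsqueeze(1)).squeeze(1)
    q_b_sa = q_b(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss_t = ((y - q_t_sa) ** 2).mean()         # Eq. (11): drives theta_a
    loss_b = ((y - q_b_sa) ** 2).mean()         # Eq. (12): drives theta_b
    return loss_t, loss_b

Both losses regress towards the same y_{Q_T} target, which is what lets the base network inherit experience gathered while following the attention-combined behavioral policy.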
4 EXPERIMENTS AND DISCUSSION

Figure 2: Different worlds for policy transfer experiments: (a) Chain World, (b) Puddle World 1, (c) Puddle World 2.

We evaluate the performance of our architecture A2T on policy transfer using two simulated worlds, viz., a chain world and a puddle world, as described below. The main goal of these experiments is to test the consistency of the results with the motivation of the algorithm.

Chain world: Figure 2a shows the chain world, where the goal of the agent is to go from one point in the chain (the starting state) to another point (the goal state) in the least number of steps. At each state the agent can choose to move one position either to the left or to the right. After reaching the goal state, the agent gets a reward that is inversely proportional to the number of steps taken to reach the goal.

Puddle worlds: Figures 2b and 2c show the discrete version of the standard puddle world that is widely used in the reinforcement learning literature. In this world, the goal of the agent is to go from a specified start position to the goal position, maximising its return. At each state the agent can choose one of four actions: move one position to the north, south, east or west. With probability 0.9 the agent moves in the chosen direction, and with probability 0.1 it moves in a random direction irrespective of its choice of action. On reaching the goal state, the agent gets a reward of +10. On reaching other parts of the grid, the agent gets different penalties as mentioned in the legend of the figures.

We evaluate the performance of our architecture on value transfer using the Arcade Learning Environment (ALE) platform (Bellemare et al. (2012)). Atari 2600: ALE provides a simulator for Atari 2600 games. This is one of the most commonly used benchmarks for deep reinforcement learning algorithms (Mnih et al. (2015), Mnih et al. (2016), Parisotto et al. (2015), Rusu et al. (2016)). We perform our adaptive transfer learning experiments on the Atari 2600 game Pong.

4.1 ABILITY TO DO SELECTIVE TRANSFER

In this section, we consider the case where multiple partially favorable source tasks are available, such that each of them can assist the learning process for different parts of the state space of the target task. The objective here is to first show the effectiveness of the attention network in learning to focus only on the source task relevant to the state the agent encounters while trying to complete the target task, and then to evaluate the full architecture with an additional randomly initialised base network.

Figure 3: Results of the selective policy transfer experiments: (a) the weights given by the attention network (selective transfer in REINFORCE); (b) selective transfer in Actor-Critic.

This is illustrated for the policy transfer setting using the chain world shown in Fig. 2a. Consider that the target task L_T is to start in A or B with uniform probability and reach C in the least number of steps. Now, consider that two learned source tasks, viz., L_1 and L_2, are available. L_1 is the source task where the agent has learned to reach the left end (A) starting from the right end (B). In contrast, L_2 is the source task where the agent has learned to reach the right end (B) starting from the left end (A). Intuitively, it is clear that the target task should benefit from the policies learnt for tasks L_1 and L_2. We learn to solve the task L_T using REINFORCE given the policies learned for L_1 and L_2. Figure 3a (i) shows the weights given by the attention network to the two source task policies for different parts of the state space at the end of learning. We observe that the attention network has learned to ignore L_1 and L_2 for the left and right halves of the state space of the target task, respectively. Next, we add the base network and evaluate the full architecture on this task. Figure 3a (ii) shows the weights given by the attention network to the different source policies for different parts of the state space at the end of learning. We again observe that the attention network has learned to ignore L_1 and L_2 for the left and right halves of the state space of the target task, respectively.
As the base network replicates π_T over time, it has a high weight throughout the state space of the target task.

We also evaluate our architecture in the relatively more complex puddle world shown in Figure 2c. In this case, L_1 is the task of moving from S1 to G1, and L_2 is the task of moving from S2 to G1. In the target task L_T, the agent has to learn to move to G1 starting from either S1 or S2, chosen with uniform probability. We learn the task L_T using the Actor-Critic method, where the following are available: (i) the learned policy for L_1, (ii) the learned policy for L_2, and (iii) a randomly initialized policy network (the base network). Figure 3b shows the performance results. We observe that Actor-Critic using A2T is able to use the policies learned for L_1 and L_2 and performs better than a network learning from scratch without any knowledge of the source tasks.

We do a similar evaluation of the attention network, followed by our full architecture, for value transfer as well. We create partially useful source tasks through a modification of the Atari 2600 game Pong. We take inspiration from a real-world scenario in the sport of tennis, where one could imagine two different right-handed (or left-handed) players: the first an expert on the forehand but weak on the backhand, and the second an expert on the backhand but weak on the forehand. For someone learning to play tennis with the same style (right/left) as the experts, it is easy to follow the forehand expert whenever a ball arrives on the forehand and the backhand expert whenever a ball arrives on the backhand.

We try to simulate this scenario in Pong. The trick is to blur the part of the screen where we want to force the agent to be weak at returning the ball. The blurring we use simply blacks out all pixels in the specified region. To make sure the blurring does not contrast with the background, we modify Pong to be played with a black background (pixel value 0) instead of the existing gray (pixel value 87). We construct two partially helpful source task experts, L_1 and L_2. L_1 is constructed by training a DQN on Pong with the upper quadrant (the agent's side) blurred, while L_2 is constructed by training a DQN with the lower quadrant (the agent's side) blurred. This essentially results in the ball being invisible when it is in the upper quadrant for L_1 and in the lower quadrant for L_2. We therefore expect L_1 to be useful in guiding the agent to return balls in the lower quadrant, and L_2 in the upper quadrant. The goal of the attention network is to learn suitable filters and parameters so that it focuses on the correct source task for a specific situation in the game. The source task experts L_1 and L_2 scored an average of 9.2 and 8, respectively, on Pong game play with a black background. With an attention network to suitably weigh the value functions of L_1 and L_2, an average performance of 17.2 was recorded just after a single epoch (250,000 frames) of training. (The score in Pong is in the range [−21, 21].)
This clearly shows that the attention mechanism has learned to take advantage of the experts adaptively. Fig. 4 shows a visualisation of the attention weights for the same.

Figure 4: Visualisation of the attention weights in the Selective Transfer with Attention Network experiment: green and blue bars signify the attention probabilities for Expert-1 (L_1) and Expert-2 (L_2), respectively. In the first two snapshots, the ball is in the lower quadrant and, as expected, the attention is high on Expert-1, while in the third and fourth snapshots, as the ball bounces back into the upper quadrant, the attention increases on Expert-2.

We then evaluate our full architecture (A2T) in this setting, i.e., with the addition of a DQN learning from scratch (the base network) to the above setting. The architecture can take advantage of the knowledge of the source task experts selectively early on during training, while using the expertise of the base network wherever required to perform well on the target task. Figure 5 summarizes the results: learning with both partially useful experts is better than learning with only one of them, which in turn is better than learning from scratch without any additional knowledge.

Figure 5: Selective Value Transfer.

4.2 ABILITY TO AVOID NEGATIVE TRANSFER AND ABILITY TO TRANSFER FROM A FAVORABLE TASK

We first consider the case where only one learned source task is available, such that its solution K_1 (policy or value) can hamper the learning process on the new target task. We refer to such a source task as an unfavorable source task. In this scenario, the attention network shown in Figure 1a should learn to assign a very low weight to (i.e., ignore) K_1. We also consider a modification of this setting by adding another source task whose solution K_2 is favorable to the target task. In that scenario, the attention network should learn to assign a high weight to (i.e., attend to) K_2 while ignoring K_1.

We now define an experiment using the puddle world from Figure 2b for policy transfer. The target task in our experiment is to maximize the return in reaching the goal state G1 starting from any one of the states S1, S2, S3, S4. We artificially construct an unfavorable source task by first learning to solve the above task and then negating the weights of the topmost layer of the actor network. We then add a favorable task to the above setting: we artificially construct a favorable source task simply by learning to solve the target task and using the learned actor network. Figure 6 shows the results.

The target task for the value transfer experiment is to reach expert-level performance on Pong. We construct two kinds of unfavorable source tasks for this experiment. Inverse-Pong: a DQN on Pong trained with negated reward functions, that is, with R′(s, a) = −R(s, a), where R(s, a) is the reward provided by the ALE emulator for choosing action a at state s. Freeway: an expert DQN on another Atari 2600 game, Freeway, which has the same range of optimal value functions and the same action space as Pong.

Figure 7: Avoiding negative transfer and transferring value from a favorable task (higher is better): (a) Pong, (b) Freeway. Specific training and architecture details are mentioned in the Appendix. The plots are averaged over two runs with different random seeds.
We empirically verified that the Freeway expert DQN leads to negative transfer when directly initialized and fine-tuned on Pong, which makes it a good proxy for a negative source task expert even though the target task Pong has a different state space.

Figure 6: Avoiding negative transfer and transferring policy from a favorable task (lower is better).

We artificially construct a favorable source task by training a DQN to achieve expertise on the target task (Pong) and using the learned network. Figure 7a compares the performance of the various scenarios when the unfavorable source task is Inverse-Pong, while Figure 7b offers a similar comparison with the negative expert being Freeway.

From all the above results, we can clearly see that A2T is not hampered by the unfavorable source task: it learns to ignore it and performs competitively with a randomly initialized network learning on the target task without any expert available. Secondly, in the presence of an additional favorable source task, A2T learns to transfer useful knowledge from it while ignoring the unfavorable task, thereby reaching expertise on the target task much faster than in the other scenarios.

4.3 VISUALIZATION: EVOLUTION OF ATTENTION WEIGHTS WITH ONE POSITIVE AND ONE NEGATIVE EXPERT

We present the evolution of the attention weights for the experiment described in Section 4.2, where we focus on the efficacy of the A2T framework in providing an agent the ability to avoid negative transfer and to transfer from a favorable source task (a perfect expert). Figure 8 depicts the evolution of the attention weights (normalised to the range [0, 1]) during the training of the A2T framework. The corresponding experiment is the case where the target task is to solve Pong, while there are two source task experts: one a perfectly trained Pong-playing DQN (serving as the positive expert), and the other the Inverse-Pong DQN trained with negated reward functions (serving as the negative expert). Additionally, there is also the base network, which learns from scratch using the experience gathered by the attentively combined behavioral policy from the expert networks, the base network and itself.
This visualization clearly shows that A2T is a powerful framework in ignoring a negativeexpert throughout and using a positive expert appropriately to learn quickly from the experiencegathered and acquire sufficient expertise on the target task.4.4 W HEN A PERFECT EXPERT IS NOT AVAILABLE AMONG THE SOURCE TASKSFigure 9: Partial Positive Expert ExperimentIn our experiments in the previous subsectiondealing with prevention of negative transfer andusing a favorable source task, we consider thepositive expert as a perfect (close to optimal) ex-pert on the same task we treat as the target task.This raises the question of relying on the pres-ence of a perfect expert as a positive expert. Ifwe have such a situation, the obvious solution isto execute each of the experts on the target taskand vote for them with probabilities proportionalto the average performance of each.The A2T framework is however generic and notintended to just do source task selection . We il-lustrate this with an additional baseline experi-ment, where the positive source task is an im-perfect expert on the target task . In such a case,just having a weighted average voting among theavailable source task networks based on their in-dividual average rewards is upper bounded by theperformance of the best available positive expert, which happens to be an imperfect expert on the tar-get task. Rather, the base network has to acquire new skills not present in the source task networks.We choose a partially trained network on Pong, that scores an average of 8(max: 21). The graphin figure 9 clearly shows that the A2T framework with a partial Pong expert and a negative expertperforms better than i) learning from scratch, ii) A2T with only one negative expert, and performsworse than A2T with one perfect positive expert and one negative expert. This is expected because11Published as a conference paper at ICLR 2017a partial expert cannot provide as much of expert knowledge as a perfect expert, but still providessome useful knowledge in speeding the process of solving the target task. An important conclusionfrom this experiment is that the A2T framework is capable of discovering new skills not availableamong any of the experts when such skills are required for optimally solving the target task. Tomaintain consistency, we perform the same number of runs for averaging scores and experimentedwith both learning rates and pick the better performing one (0.00025).5 C ONCLUSION AND FUTURE WORKIn this paper we present a very general deep neural network architecture, A2T, for transfer learningthat avoids negative transfer while enabling selective transfer from multiple source tasks in the samedomain. We show simple ways of using A2T for policy transfer and value transfer. We empiricallyevaluate its performance with different algorithms, using simulated worlds and games, and showthat it indeed achieves its stated goals. Apart from transferring task solutions, A2T can also be usedfor transferring other useful knowledge such as the model of the world.While in this work we focused on transfer between tasks that share the same state and action spacesand are in the same domain, the use of deep networks opens up the possibility of going beyond thissetting. For example, a deep neural network can be used to learn common representations [Parisottoet al. (2015)] for multiple tasks thereby enabling transfer between related tasks that could possiblyhave different state-action spaces. 
A hierarchical attention over the lower level filters across sourcetask networks while learning the filters for the target task network is another natural extension totransfer across tasks with different state-action spaces. The setup from Progressive Neural Networks[Rusu et al. (2016)] could be borrowed for the filter transfer, while the A2T setup can be retained forthe policy/value transfer. Exploring this setting for continuous control tasks so as to transfer frommodular controllers as well avoid negative transfer is also a potential direction for future research.The nature of tasks considered in our experiments is naturally connected to Hierarchical Reinforce-ment Learning and Continual Learning. For instance, the blurring experiments inspired from Tennisbased on experts for specific skills like Forehand and Backhand could be considered as learning fromsub-goals (program modules) like Forehand and Backhand to solve a more complex and broadertask like Tennis by invoking the relevant sub-goals (program modules). This structure could be veryuseful to build a household robot for general purpose navigation and manipulation whereby specificskills such as manipulation of different objects, navigating across different source-destination points,etc could be invoked when necessary. The attention network in the A2T framework is essentiallyasoft meta-controller and hence presents itself as a powerful differentiable tool for Continual andMeta Learning. Meta-Controllers have typically been been designed with discrete decision struc-ture over high level subgoals. This paper presents an alternate differentiable meta-controller with asoft-attention scheme. We believe this aspect can be exploited for differentiable meta-learning ar-chitectures for hierarchical reinforcement learning. Over all, we believe that A2T is a novel way toapproach different problems like Transfer Learning, Meta-Learning and Hierarchical ReinforcementLearning and further refinements on top of this design can be a good direction to explore.ACKNOWLEDGEMENTSThanks to the anonymous reviewers of ICLR 2017 who have provided thoughtful remarks andhelped us revise the paper. We would also like to thank Sherjil Ozair, John Schulman, YoshuaBengio, Sarath Chandar, Caglar Gulchere and Charu Chauhan for useful feedback about the work.12Published as a conference paper at ICLR 2017REFERENCESChristopher G Atkeson and Stefan Schaal. Robot learning from demonstration. In In Proceedingsof International Conference on Machine Learning , volume 97, 1997.Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointlylearning to align and translate. arXiv preprint arXiv:1409.0473 , 2014.Bikramjit Banerjee and Peter Stone. General game learning using knowledge transfer. In In The20th International Joint Conference on Artificial Intelligence , 2007.Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environ-ment: An evaluation platform for general agents. arXiv preprint arXiv:1207.4708 , 2012.Emma Brunskill and Lihong Li. Pac-inspired option discovery in lifelong reinforcement learning. InProceedings of the 31st International Conference on Machine Learning (ICML-14) , pp. 316–324,2014.Kimberly Ferguson and Sridhar Mahadevan. Proto-transfer learning in markov decision processesusing spectral methods. Computer Science Department Faculty Publication Series , pp. 151, 2006.Fernando Fern ́andez and Manuela Veloso. Probabilistic policy reuse in a reinforcement learningagent. 
In Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, pp. 720–727. ACM, 2006.

Vijay Konda and John Tsitsiklis. Actor-critic algorithms. In SIAM Journal on Control and Optimization, pp. 1008–1014. MIT Press, 2000.

George Konidaris, Ilya Scheidwasser, and Andrew G Barto. Transfer in reinforcement learning via shared features. The Journal of Machine Learning Research, 13(1):1333–1371, 2012.

Alessandro Lazaric and Marcello Restelli. Transfer from multiple MDPs. In Advances in Neural Information Processing Systems, pp. 1746–1754, 2011.

Long-Ji Lin. Reinforcement learning for robots using neural networks. Technical report, DTIC Document, 1993.

Shie Mannor, Ishai Menache, Amit Hoze, and Uri Klein. Dynamic abstraction in reinforcement learning via clustering. In Proceedings of the Twenty-First International Conference on Machine Learning, pp. 71. ACM, 2004.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.

Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. Recurrent models of visual attention. In Advances in Neural Information Processing Systems, pp. 2204–2212, 2014.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.

Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.

Scott Niekum, Sachin Chitta, Andrew G Barto, Bhaskara Marthi, and Sarah Osentoski. Incremental semantically grounded learning from demonstration. In Robotics: Science and Systems, volume 9, 2013.

Emilio Parisotto, Jimmy Ba, and Ruslan Salakhutdinov. Actor-mimic: Deep multitask and transfer reinforcement learning. CoRR, abs/1511.06342, 2015.

Martin L Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. 1994.

Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. CoRR, abs/1606.04671, 2016.

Jonathan Sorg and Satinder Singh. Transfer via soft homomorphisms. In Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems - Volume 2, pp. 741–748. International Foundation for Autonomous Agents and Multiagent Systems, 2009.

Richard S. Sutton and Andrew G. Barto. Introduction to Reinforcement Learning. MIT Press, Cambridge, MA, USA, 1st edition, 1998. ISBN 0262193981.

Erik Talvitie and Satinder Singh. An experts algorithm for transfer learning. In Proceedings of the 20th International Joint Conference on Artificial Intelligence, pp. 1065–1070. Morgan Kaufmann Publishers Inc., 2007.

Matthew E Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. The Journal of Machine Learning Research, 10:1633–1685, 2009.

Matthew E Taylor and Peter Stone. An introduction to intertask transfer for reinforcement learning. AI Magazine, 32(1):15, 2011.

Christopher JCH Watkins and Peter Dayan. Q-learning. Machine Learning, 8(3):279–292, 1992.

Ronald J Williams.
Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.

APPENDIX A: DETAILS OF THE NETWORK ARCHITECTURE IN VALUE TRANSFER EXPERIMENTS

For the source task expert DQNs, we use the same architecture as Mnih et al. (2015), where the input is 84x84x4, followed by 32 convolution filters of dimensions 8x8 with stride 4, then 64 convolution filters of dimensions 4x4 with stride 2, and again 64 convolution filters of size 3x3 with stride 1. This is followed by a fully connected layer of 512 units, and finally by a fully connected output layer with as many units as the number of actions in Pong (Freeway), which is 3. We use the ReLU nonlinearity in all hidden layers.

With respect to the A2T framework architecture, we experimented with two possible options: (i) the base and attention networks following the NIPS architecture of Mnih et al. (2013), except that the output layer is a softmax for the attention network; and (ii) the base and attention networks following the Nature architecture of Mnih et al. (2015), with a softmax output layer for the attention network.

Specifically, the NIPS architecture of Mnih et al. (2013) takes in a batch of 84x84x4 inputs, followed by 16 convolution filters of dimensions 8x8 with stride 4, 32 convolution filters of dimensions 4x4 with stride 2, and a fully connected hidden layer of 256 units, followed by the output layer. For the Selective Transfer with Blurring experiments described in Section 4.1, we use the second option above. For the other experiments in Section 4.2 and the additional experiments in the Appendix, we use the first option. The attention network has N+1 outputs, where N is the number of source tasks.

APPENDIX B: TRAINING DETAILS

TRAINING ALGORITHM

For all our value transfer experiments, we used RMSProp as in Mnih et al. (2015) for the gradient updates. For policy transfer, since the tasks were simple, stochastic gradient descent was sufficient to provide stable updates. We also use reward clipping, target networks and experience replay for our value transfer experiments in exactly the same way (all hyperparameters retained) as Mnih et al. (2015). A training epoch is 250,000 frames, and after each training epoch we evaluate the networks with a testing epoch that lasts 125,000 frames. We report the average score over the completed episodes for each testing epoch. The average scores obtained this way are averaged over 2 runs with different random seeds. In the testing epochs, we use ε = 0.05 in the ε-greedy policy.

LEARNING RATE

In all our experiments, we trained the architecture using the learning rates 0.0025 and 0.0005. In general, the lower learning rate provided more stable (lower-variance) training curves. When comparing across algorithms, we picked the better-performing learning rate of the two (0.0025 and 0.0005) for each training curve.
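For concreteness, a PyTorch sketch of the NIPS-style network described in Appendix A is given below. This is a hypothetical rendering, not the original code; the attention variant simply adds a softmax over its N+1 outputs.

import torch.nn as nn

class NipsDQN(nn.Module):
    # Base or attention network following the NIPS architecture of
    # Mnih et al. (2013): 16 8x8/4 filters, 32 4x4/2 filters, FC-256, output.
    def __init__(self, n_outputs, softmax_head=False):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU())
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 9 * 9, 256), nn.ReLU(),   # 84x84 input -> 9x9 maps
            nn.Linear(256, n_outputs))
        self.softmax_head = softmax_head             # True for attention net

    def forward(self, x):                            # x: (batch, 4, 84, 84)
        out = self.head(self.features(x))
        return out.softmax(dim=-1) if self.softmax_head else out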
APPENDIX C: BLURRING EXPERIMENTS ON PONG

The experts are trained with blurring (hiding the ball) and a black background, as illustrated in APPENDIX A. Therefore, to compare the learning with that of a random network without any additional knowledge, we ran the baseline DQN on Pong with a black background too. Having a black background provides a rich contrast between the white ball and the black background, thereby making training easier and faster, which is why the performance curves in that setting differ from the other two settings reported for the Inverse-Pong and Freeway negative transfer experiments, where no blacking out is done and Pong is played with a gray background. The blurring mechanism in Pong is illustrated in APPENDIX E.

APPENDIX E: BLURRING MECHANISM IN PONG - DETAILS

Figure 10: The figures explain the blurring mechanism for the selective transfer experiments on Pong: (a) ball in the upper quadrant, (b) blurred upper quadrant, (c) ball in the lower quadrant, (d) blurred lower quadrant. The background of the screen is made black. Let X (84x84) denote an array containing the pixels of the screen. The paddle controlled by the agent is the one on the right. We focus on the two quadrants X1 = X[: 42, 42 :] and X2 = X[42 :, 42 :] of the Pong screen relevant to the agent-controlled paddle. To simulate an expert that is weak at returning balls in the upper quadrant, the portion of X1 up to the horizontal location of the agent's paddle, i.e. X1[:, : 31], is blacked out; similarly, to simulate weakness in the bottom quadrant, we blur the portion of X2 up to the agent-paddle's horizontal location, i.e. X2[:, : 31] = 0. Figures 10a and 10b illustrate the upper quadrant before and after blurring; 10c and 10d do the same for the lower quadrant. Effectively, blurring this way with a black screen is equivalent to hiding the ball (white pixels) in the quadrant where weakness is to be simulated. Hence, Figures 10b and 10d show the mechanisms used while training a DQN on Pong to hide the ball in the respective quadrants, so as to create the partially useful experts analogous to forehand-backhand experts in Tennis. X[:a, :b] denotes the subarray of X with all rows up to row index a and all columns up to column index b.
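A minimal NumPy sketch of this blacking-out operation; screen is assumed to be the 84x84 black-background frame described above.

import numpy as np

def blur_pong(screen, quadrant):
    # screen: (84, 84) array with a black background (pixel value 0).
    # Blacks out the region up to the agent-paddle's horizontal location
    # in the chosen quadrant, hiding the ball there (see Figure 10).
    X = screen.copy()
    if quadrant == "upper":      # simulate weakness in the upper quadrant
        X[:42, 42:][:, :31] = 0  # i.e. X1[:, :31] = 0
    else:                        # simulate weakness in the lower quadrant
        X[42:, 42:][:, :31] = 0  # i.e. X2[:, :31] = 0
    return X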
APPENDIX D: BLURRING EXPERIMENTS ON BREAKOUT

Similar to our blurring experiment on Pong, we ran an additional experiment on the Atari 2600 game Breakout to validate the efficiency of our attention mechanism. We consider a setup with two experts, L_1 and L_2, along with our attention network. The experts L_1 and L_2 were trained by blurring the lower left and lower right quadrants of the Breakout screen, respectively. We do not have to make the background black as in the case of Pong, because the background is already black in Breakout and direct blurring is sufficient to hide the ball in the respective regions without introducing any contrasts. We blur only the lower part so as to make it easy for the agent to at least anticipate the ball based on the movement at the top. We empirically observed that blurring the top half as well makes it hard to learn any meaningful partially useful experts L_1 and L_2.

The goal of this experiment is to show that the attention network can learn suitable filters so as to dynamically adapt and select the expert appropriate to the situation (game screen) in the task. The expert L_1, which was blurred on the bottom left half, is bound to be weak at returning balls in that region, while L_2 is expected to be weak on the right. This is in the same vein as the forehand-backhand example in Tennis and its synthetic simulation for Pong by blurring the upper and lower quadrants. During game play, the attention mechanism is expected to ignore L_2 when the ball is in the bottom right half (while focusing on L_1), and similarly to ignore L_1 (while focusing on L_2) when the ball is in the bottom left half. We learn experts L_1 and L_2, which score 42.2 and 39.8, respectively. Using the attention mechanism to select the correct expert, we were able to achieve a score of 94.5 after training for 5 epochs. Each training epoch corresponds to 250,000 decision steps, while the scores are averaged over completed episodes run for 125,000 decision steps. This shows that the attention mechanism learns to select the suitable expert. Though the performance is limited by the weaknesses of the respective experts, our goal is to show that the attention paradigm is able to take advantage of both experts appropriately, which is evident from the scores achieved by the standalone experts and by the attention mechanism. Additionally, we present a visualization of the attention weights assigned to the experts L_1 and L_2 during game play in APPENDIX G. The weights assigned are in agreement with what we expect in terms of selective attention. The blurring mechanism is visually illustrated in APPENDIX F.

APPENDIX F: BLURRING MECHANISM IN BREAKOUT - DETAILS

Figure 11: The figures explain the blurring mechanism used for the selective transfer experiments on Breakout: (a) ball in the lower-left quadrant, (b) blurred lower-left quadrant, (c) ball in the lower-right quadrant, (d) blurred lower-right quadrant. The background of the screen is already black. Let X (84x84) denote an array containing the pixels of the screen. We focus on the two quadrants X1 = X[31 : 81, 4 : 42] and X2 = X[31 : 81, 42 : 80]. We perform blurring by setting X1 = 0 or X2 = 0 for all pixels within them when training L_1 and L_2, respectively. Effectively, this is equivalent to hiding the ball in the appropriate quadrant. Blurring X1 simulates weakness in the lower left quadrant, while blurring X2 simulates weakness in the lower right quadrant. We do not blur all the way down to the last row, to ensure that the paddle controlled by the agent remains visible on the screen. We also do not black out the rectangular border, four pixels wide, surrounding the screen. Figures 11a and 11b illustrate the lower left quadrant before and after blurring; 11c and 11d do the same for the lower right quadrant.

APPENDIX G: BLURRING ATTENTION VISUALIZATION ON BREAKOUT

Figure 12: Visualisation of the attention weights in the Selective Transfer with Attention experiment for Breakout: green and blue bars signify the attention probabilities for Expert-1 (L_1) and Expert-2 (L_2), respectively, on a scale of [0, 1]. In the first two snapshots, the ball is in the lower right quadrant and, as expected, the attention is high on Expert-1, while in the third and fourth snapshots the ball is in the lower left quadrant and hence the attention is high on Expert-2.

APPENDIX J: CASE STUDY OF TARGET TASK PERFORMANCE LIMITED BY DATA AVAILABILITY

Figure 13: (a) Comparison of Sparse Pong to Normal Pong; (b) A2T with a positive and a negative expert. This experiment is a case study on a target task where performance is limited by data availability.

So far, we focused on experiments where the target task is to solve Pong (normal or black background) for value transfer, and puddle worlds for policy transfer.
In both these cases, a randomly initialized value (or policy) network learning without the aid of any expert network is able to solve the target task within a reasonable number of epochs (or iterations). We want to illustrate a case where solving the target task in reasonable time is hard and where the presence of a favorable source task significantly impacts the speed of learning. To do so, we consider a variant of Pong as our target task. In this variant, transition tuples (s, a, r, s′) with non-zero reward r are added to the replay memory (and used for learning through random batch sampling) only with a small probability. This way, performance on the target task is limited by the availability of rewarding (positive or negative) transitions in the replay memory. This synthetically turns the target task of Pong into a sparse-reward problem, because the replay memory is largely filled with transition tuples that have zero reward. We do not use any prioritized sampling, so as to make sure the sparsity has a negative effect on learning to solve the target task. We use a version of Pong with a black background (as used in Section 4.1 for the blurring experiments) for faster experimentation. An admission probability of 0.1 was used for the plots illustrated above. Figure 13a clearly shows the difference between the normal Pong task without any synthetic sparsity and the new variant we introduce: learning is much slower and is clearly limited by data availability even after 20 epochs (20 million frames) due to reward sparsity. Figure 13b compares the A2T setting with one positive expert that expertly solves the target task and one negative expert, learning from scratch, and direct fine-tuning of a negative expert. We clearly see the effect of having the positive expert among the source tasks: it speeds up the learning process significantly compared to learning from scratch, while fine-tuning on top of a negative expert severely limits learning even after 20 epochs of training. We also see that the A2T framework works well in sparse-reward settings and avoids negative transfer even in such cases, while clearly learning to benefit from the presence of a target task expert among the source task networks. Importantly, this experiment demonstrates that transfer learning has a significant effect on tasks which may be hard (infeasible to solve within a reasonable training time) without any expert available. Further, A2T is also beneficial in such (sparse-reward) situations when accessing the weights of an expert network is not possible and only the outputs of the expert (policy or value function) can be used. Such synthetic sparse variants of existing tasks are a good way to explore future directions at the intersection of inverse reinforcement learning and reward-based learning, with A2T providing a viable framework for off-policy and on-policy learning.
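For clarity, a small Python sketch of the synthetic sparsification of the replay memory described above; SparseReplay and the admission-probability name rho are illustrative, not the exact implementation.

import random
from collections import deque

class SparseReplay:
    # Rewarding transitions are admitted only with probability rho
    # (0.1 in the plots above); zero-reward transitions are always stored.
    def __init__(self, capacity, rho=0.1):
        self.memory = deque(maxlen=capacity)   # FIFO, fixed maximum capacity
        self.rho = rho

    def add(self, s, a, r, s2, done):
        if r == 0 or random.random() < self.rho:
            self.memory.append((s, a, r, s2, done))

    def sample(self, batch_size):
        return random.sample(self.memory, batch_size)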
Under review as a conference paper at ICLR 2017

SONG FROM PI: A MUSICALLY PLAUSIBLE NETWORK FOR POP MUSIC GENERATION

Hang Chu, Raquel Urtasun, Sanja Fidler
Department of Computer Science
University of Toronto
Ontario, ON M5S 3G4, Canada
{chuhang1122, urtasun, fidler}@cs.toronto.edu

ABSTRACT

We present a novel framework for generating pop music. Our model is a hierarchical Recurrent Neural Network, where the layers and the structure of the hierarchy encode our prior knowledge about how pop music is composed. In particular, the bottom layers generate the melody, while the higher levels produce the drums and chords. We conduct several human studies that show a strong preference for our generated music over that produced by the recent method by Google. We additionally show two applications of our framework: neural dancing and karaoke, as well as neural story singing.

1 INTRODUCTION

Neural networks have revolutionized many fields. They have not only proven to be powerful at perception tasks such as image classification and language understanding, but have also shown to be surprisingly good "artists". In Gatys et al. (2015), photos were turned into paintings by exploiting particular drawing styles such as Van Gogh's; Kiros et al. (2015) produced stories about images biased by writing style (e.g., romance books); Karpathy et al. (2016) wrote Shakespeare-inspired novels; and Simo-Serra et al. (2015) gave fashion advice.

Music composition is another artistic domain where neural approaches have been proposed. Early approaches exploiting Recurrent Neural Networks (Bharucha & Todd (1989); Mozer (1996); Chen & Miikkulainen (2001); Eck & Schmidhuber (2002)) date back to the 80's. The main variation between the different models is in the representation of the notes and the outputs they produce, which typically encode melody and chord. Most of these approaches were single-track, in that they produced only one note per time step. The exception is Boulanger-Lewandowski et al. (2012), which generated polyphonic music, i.e., simultaneous independent melodies.

In this paper, we aim to generate pop music, where the melody together with chords and other instruments makes up what is typically called a song. We draw inspiration from Song from π by Macdonald (https://youtu.be/OMq9he-5HUU), a piano video on YouTube, where pleasing music is created from a sequence of digits of π. This video shows both the randomness and the regularity of music. On one hand, since any possible digit sequence is a subsequence of the digit sequence of π, this implies that pleasing music can be created even from a totally random base signal. On the other hand, the composer uses specific rules, such as the A Harmonic Minor scale and harmonies, to convert the digit sequence into a music sheet. It is these rules that play the key role in converting randomness into music.

Following the ideas of Song from π, we aim to generate both the melody as well as accompanying effects such as chords and drums. Arguably, these can turn even a not particularly pleasing melody into a well-sounding song. We propose a hierarchical approach, where each level is a Recurrent Neural Network producing a key aspect of the song. The bottom layers generate the melody, while the higher levels produce drums and chords. This enables the drum and chord layers to compensate for the melody in order to produce pleasing music.
Adopting the key idea from Song from π, we condition our model on the scale type, allowing the melody generator to learn the notes that are typically played in a particular scale.

We train our model on 100 hours of midi music containing user-composed pop songs and video game music. We conduct human studies with music generated by our approach and compare it against a recent approach by Google, showing that our songs are strongly preferred over the baseline. In our human study we also perform an ablation analysis of our model. We additionally show two new applications: neural dancing and karaoke, as well as neural music singing. As part of the first application we generate a stickman dancing to our music together with lyrics that can be sung along with, while in the second application we condition on the output of Kiros et al. (2015), which writes a story about an image, and convert it into a pop song. We refer the reader to http://www.cs.toronto.edu/songfrompi/ for our demos and results.

2 RELATED WORK

Generating music has been an active research area for decades. It brings together machine learning researchers who aim to capture the complex structure of music (Eck & Schmidhuber (2002); Boulanger-Lewandowski et al. (2012)), as well as music professionals (Chan et al. (2006)) and enthusiasts (Johnson; Sun) who want to see how far a computer can get towards being a real composer. Real-time music generation has also been explored for gaming (Engels et al. (2015)).

Early approaches mostly instilled knowledge from music theory into generation, using rules for how music segments can be stitched together in a plausible way, e.g., Chan et al. (2006). On the other hand, neural networks have been used for music generation since the 80's (Bharucha & Todd (1989); Mozer (1996); Chen & Miikkulainen (2001); Eck & Schmidhuber (2002)). Mozer (1996) used a Recurrent Neural Network that produced pitch, duration and chord at each time step. Unlike most other neural network approaches, this work encodes music knowledge into the representation. Eck & Schmidhuber (2002) were the first to use LSTMs to generate both melody and chord. Compared to Mozer (1996), the LSTM captured more global music structure across the song.

Like us, Kang et al. (2012) built upon the randomness of melody by trying to accompany it with drums. However, in their model the scale type is enforced. No details about the model are given, and thus it is virtually impossible to compare to. Boulanger-Lewandowski et al. (2012) propose to learn complex polyphonic musical structure, which has multiple notes playing in parallel through the song. The model is single-track in that it only produces melody, whereas in our work we aim to produce multi-track songs. Just recently, Huang & Wu (2016) proposed a 2-layer LSTM that, like Boulanger-Lewandowski et al. (2012), produces music that is more complex than a single note sequence and is able to produce chords. The main novelty of our work over existing approaches is a hierarchical model that incorporates knowledge from music theory to build the neural architecture and produces multi-track pop music (melody, chord, drum). We also present two novel fun applications.

3 CONCEPTS FROM MUSIC THEORY

We start by introducing basic notation and definitions from music theory. A note defines the basic unit that music is composed of. Music follows the 12-tone system, i.e., 12 is the cycle length of all notes. The 12 tones are: C, C♯=D♭, D, D♯=E♭, E, F, F♯=G♭, G, G♯=A♭, A, A♯=B♭, B.
A bar is a short segment of time that corresponds to a specific number of beats (notes). The boundaries of the bar are indicated by vertical bar lines.

A scale is a subset of notes. There are four types of scales most commonly used: Major (Minor), Harmonic Minor, Melodic Minor and Blues. Each scale type specifies a sequence of relative intervals (or shifts) which act relative to the starting note. For example, the sequence for the scale type Major is 2 → 2 → 1 → 2 → 2 → 2 → 1. Thus, C Major specifies the starting note to be C, and applying the relative sequence of shifts yields C →(+2) D →(+2) E →(+1) F →(+2) G →(+2) A →(+2) B →(+1) C. The subset of notes specified by C Major is thus C, D, E, F, G, A, and B (a subset of seven notes). All scale types have a subset of seven notes except for Blues, which has six. In total we have 48 unique scales, i.e., 4 scale types and 12 possible starting notes. We treat Major and Minor as one type, as for every Major scale there is a Minor scale that has exactly the same set of notes. In music theory, this is referred to as the Relative Minor.

Figure 1: Overview of our framework (key, press, chord and drum layers). Only skip connections for the current time step t are plotted.

A chord is a group of notes that sound good together. Similarly to a scale, a chord has a start note and a type defining a set of intervals. There are six main types of triad chords: Major Chord, Minor Chord, Augmented Chord, Diminished Chord, Suspended 2nd Chord, and Suspended 4th Chord.

The Circle of Fifths is often used to produce a chord progression. It maps the 12 chord starting notes onto a circle. When changing from one chord to another, moving to a nearby chord on the circle is often preferred, as this forms a strong chord progression that produces a sense of harmony.
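A small Python sketch of how a scale's note subset follows from its interval sequence; TONES and INTERVALS are illustrative names, and only the Major pattern from the text is filled in.

TONES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
INTERVALS = {'Major': [2, 2, 1, 2, 2, 2, 1]}   # relative shifts of the scale

def scale_notes(start, scale_type):
    idx = TONES.index(start)
    notes = [start]
    for step in INTERVALS[scale_type][:-1]:    # the last shift returns to the root
        idx = (idx + step) % 12                # 12-tone cycle
        notes.append(TONES[idx])
    return notes

print(scale_notes('C', 'Major'))   # ['C', 'D', 'E', 'F', 'G', 'A', 'B']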
4 HIERARCHICAL RECURRENT NETWORKS FOR POP MUSIC GENERATION

We follow the high-level idea behind Song from π to define our model. In particular, we generate music with a hierarchical Recurrent Neural Network, where the layers and the structure of the hierarchy encode our prior knowledge about how pop music is composed. We first outline the model and then describe the details and justifications for our choices in the subsections that follow.

We condition our generation on the scale type, as this helps the model pick up the regularities in pop songs. We encode melody with two random variables at each time step, representing which key is being played (the key layer) and the duration that the key will be pressed (the press layer). The melody is generated conditioned on the scale, which does not vary across the song, as is typically the case in pop music. We assume the drums and the chords are independent given the melody. Thus, conditioned on the melody, at each time step we generate the chord (the chord layer) as well as the drums (the drum layer). The outputs of all layers together yield the final song. We refer the reader to Fig. 1 for an illustration of our hierarchical model.

4.1 THE ROLE OF SCALE

It is known from music theory that while in principle each song has 12 tones to choose from, most of the notes in fact use only the six (for Blues) or seven (for other scales) tone subsets specified by the scale rule. We found that by conditioning the music generator on the scale, it captures these regularities more easily. However, we do not enforce the notes to be generated from the subset, and allow our model to generate notes outside the scale.

We confirm the above musical fact by analysing over 100 hours of pop song music from the midi man dataset. Since scale is defined relative to a starting note, we first factor out its influence and normalize all songs to have an identical start note. To identify the scale of a song, we compute the histogram over the 12 tones and match it with the 48 tone subsets of the 4 scale types with 12 different start notes. We then normalize all songs to have start note C by applying a constant shift to all notes. This allows us to categorize any song into one of the 4 scale types. Since this shift affects all notes at once, it does not affect how the song sounds (its harmony). Our analysis shows that for all notes in all Major scale songs, 94.66% are within the tone subset. For Harmonic Minor, Melodic Minor, and Blues, the percentage of notes that belong to the main tone set is 87.16%, 85.11%, and 90.93%, respectively. We refer the reader to Fig. 2, where the x-axis denotes the percentage of within-scale notes of a song and the y-axis indicates how many songs in the dataset have that percentage. Note that the majority of the notes follow the scale rule. Furthermore, different scale types have different inlier distributions.

Figure 2: Distribution of the within-scale note ratio for the four scale types. x-axis: percentage of tones within the scale type's tone set; y-axis: percentage of songs of the scale type. (a)-(d) show Major (Minor), Harmonic Minor, Melodic Minor, and Blues, respectively.

We thus represent scale with a single random variable s ∈ {1, ..., 4}, which is fixed for the whole song, and condition the model on it. (For readers with a musical background, the Twelve-Tone Serialism technique of Schoenberg & Newlin (1951) prevents emphasis of any one tone. However, our data analysis indicates that pop music is not influenced by it.)
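A Python sketch of this scale-identification and normalization step. Here scale_subsets, mapping each of the 48 (scale type, start note) pairs to its tone subset, is assumed to be precomputed, e.g. with the interval expansion sketched in Section 3.

import numpy as np

def identify_scale(note_sequence, scale_subsets):
    # 12-bin histogram of the song's tones; pick the candidate subset
    # that covers the largest note mass.
    hist = np.bincount([n % 12 for n in note_sequence], minlength=12)
    best = max(scale_subsets.items(),
               key=lambda kv: hist[list(kv[1])].sum())
    (scale_type, start), _ = best
    return scale_type, start

def normalize_to_c(note_sequence, start):
    # Constant shift so every song starts its scale at C (tone index 0);
    # shifting all notes at once leaves the song's harmony unchanged.
    return [n - start for n in note_sequence]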
4.2 TWO-LAYER RNN FOR MELODY GENERATION

We represent the melody with two random variables per time step: which key is pressed, and the duration of the press. We use an RNN to generate the keys conditioned on the scale. Then, conditioned on the output of the key layer, a second RNN generates the duration of the press at each time step.

In particular, we model the key layer with a two-layer LSTM (Hochreiter & Schmidhuber, 1997) with a 512-dimensional hidden state, which outputs a note (key) at each time step. Note that we condition on scale s, thus we have a different set of weights for each scale. We only allow notes between C3 and C6, as notes outside this range are usually too low or too high to sound good. We remind the reader that given a scale, seven (or six for Blues) of the twelve notes (per octave) are statistically more plausible; however, we allow the model to choose from all 12. This results in a 37-dimensional output, as there are 36 possible notes corresponding to 3 octaves with 12 notes per octave, plus silence. Let h_key^t be the hidden state of the second key decoder layer at time t. We compute the probability of each key using the softmax:

P(y_key^t) ∝ exp(v_{y_key^t} · h_key^t)    (1)

where v_{y_key^t} is the row of V (the output embedding matrix of notes) corresponding to note y_key^t.

As input to the LSTM we use a vector that concatenates multiple features: a one-hot encoding of the previously generated note y_key^{t-1}, Lookback features, and the melody profile. The Lookback features were proposed by Google Magenta (Waite et al.) to make it easier for the model to memorize recently produced notes and potentially repeat them. They include skip connections from one and two bars ago (a bar is 8 consecutively played notes), i.e., y_key^{t-8} and y_key^{t-16}. They also contain two additional features indicating whether the last generated key was copied from one or two bars ago, i.e., 𝟙(y_key^{t-1}, y_key^{t-1-8}) and 𝟙(y_key^{t-1}, y_key^{t-1-16}). Finally, they add a 5-dimensional feature giving a binary encoding of the current time t. This helps the model keep track of where it is within a 4-bar range, and thus produce music accordingly.

In addition, we introduce a new feature which we refer to as the melody profile. Intuitively, the profile represents the high-level music flow. To get the profile for each song, we compute the local note histogram at each time step with a width of two bars, and cluster all local histograms within the song into 10 clusters via k-means. We order the 10 clusters by mean note from low to high as clusters 1 to 10, and apply a moving average on the cluster-id sequence to encourage local smoothness (see the sketch below). This results in a 10-dimensional one-hot vector representation of the cluster id for each time step. This additional information allows the user to set the melody's ups and downs of the song.
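A possible implementation of the melody profile is sketched below, under the assumption that scikit-learn's k-means is an acceptable stand-in for the clustering step; function and variable names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def melody_profile(pitches, steps_per_bar=8, n_clusters=10, smooth=4):
    """pitches: per-time-step MIDI note of one song (0 = silence).
    Returns a per-step cluster id in {1..n_clusters}, ordered low to high pitch."""
    T, w = len(pitches), 2 * steps_per_bar          # two-bar local window
    hists = np.zeros((T, 12))
    for t in range(T):
        lo, hi = max(0, t - w // 2), min(T, t + w // 2)
        window = [p % 12 for p in pitches[lo:hi] if p > 0]
        hists[t] = np.bincount(np.asarray(window, dtype=int), minlength=12)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(hists)
    # order clusters from low to high by the mean pitch of their members
    means = np.array([np.mean([p for p, l in zip(pitches, labels)
                               if l == c and p > 0] or [0.0])
                      for c in range(n_clusters)])
    ids = np.argsort(np.argsort(means))[labels] + 1  # cluster -> rank in 1..10
    # moving average on the id sequence encourages local smoothness
    pad = np.pad(ids.astype(float), smooth, mode='edge')
    sm = np.convolve(pad, np.ones(smooth) / smooth, mode='same')[smooth:-smooth]
    return np.rint(sm).astype(int)
```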
This forms a long tail distribution, where 94:60% comesfrom the top 100 common patterns. We generate drum conditioned on the key layer using a two-layer LSTM with 512 dimensional hidden states. Drum ytdrmis represented as one-hot encodingwith of the 100 unique one-bar-long drum patterns. The input is yt4drmconcatenated with the notesfrom the previous three times steps yt3:tkey.4.4 L EARNINGWe use cross-entropy as our loss function to train each layer. We follow the typical training strategywhere we make predictions at each layer and time step but feed in ground-truth information to thenext. This effectively decomposes training, and allows to train all layers in parallel. We use theAdam optimizer, a learning rate of 2e-3 and a learning rate decay of 0.99 after each epoch for 10epochs.4.5 M USIC SYNTHESIS : PUTTING ALL THE OUTPUTS TOGETHERTo synthesize music we first randomly choose a scale and a profile xprf. For generating xprf, werandomly choose one cluster id with a random duration, and repeat until we get the desired totallength of the music sequence. We then perform inference in our model conditioned on the chosenscale, and use xprfas input to our key layer. At each time step, we sample a key according toP(ytkey). We encode it as a one-hot vector and pass to the press, chord and drum layers. We samplethe press, chords and drums at each time step in a similar fashion.5Under review as a conference paper at ICLR 2017Figure 4: Example of our music generation. From top to bottom: melody, chord and drum respectively.Before putting the outputs across layers together, we further adjust the generated sequences at thebar level. For melody, we first check at each bar if the first step is a continuation of a previous noteor silence. If it is the latter, we find the first newly pressed note within the bar and move it to thebeginning of the bar. We do similarly for the windows of two half-bars as well as the four quarter-bars. This makes the melody more likely to be on the beat, and generally sounds better. We verifythis in our experiments.For chord, we generate one chord at each half bar, which is the majority of all single step chordgenerations. Furthermore, we incorporate the rule of chord progression in the Circle of Fifths asbetween chords pairwise smooth terms, and compute the final chord using dynamic programming.For drum, we generate one pattern at each half bar.Our model generates with scale starting note C, and then applies a constant shift to generate musicwith other starting notes. Besides scale, which instrument to use is also customizable. However, wesimply set all instruments as grand piano in all experiments, as the effect and musical meaning ofdifferent instrument combinations is beyond the scope of this paper.5 E XPERIMENTSTo train our model, we took 100 hours of pop music from midi manwhich consists of user-composedpop songs and video game music. In our generation, we always use 120 beats per minute with 4 timesteps per beat. However, songs in the dataset can have arbitrary speed. To neutralize the effect ofthis, we detect the most frequent interval between two adjacent notes for each song, and iterativelydivide or multiply this interval by 2 until it falls in the range between 0:25sand0:5s. We use thisas a measure of the song’s beat duration. We then adjust the song’s temporal axis so that all songshave the same beat duration of 0:5s.A MIDI file can be separated into different channels/tracks, where the 9th channel is specificallypreserved for drums. 
5 EXPERIMENTS

To train our model, we took 100 hours of pop music from midi_man, which consists of user-composed pop songs and video game music. In our generation, we always use 120 beats per minute with 4 time steps per beat. However, songs in the dataset can have arbitrary speed. To neutralize the effect of this, we detect the most frequent interval between two adjacent notes for each song, and iteratively divide or multiply this interval by 2 until it falls in the range between 0.25 s and 0.5 s. We use this as a measure of the song's beat duration (see the sketch below). We then adjust the song's temporal axis so that all songs have the same beat duration of 0.5 s.

A MIDI file can be separated into different channels/tracks, where the 9th channel is specifically reserved for drums. We categorize the remaining non-drum tracks into melody, chord, and other by simply setting thresholds on the average number of unique notes within a bar and the average number of note changes within a bar, as chords are by definition repetitive. Fig. 4 shows an example of our music generation.
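A minimal sketch of the beat-duration estimation described above; the rounding granularity and the helper name are illustrative choices, not taken from the paper.

```python
from collections import Counter

def beat_duration(note_onsets):
    """note_onsets: sorted onset times (seconds) of one song.
    Returns the estimated beat duration, folded into [0.25 s, 0.5 s]."""
    gaps = [round(b - a, 3) for a, b in zip(note_onsets, note_onsets[1:]) if b > a]
    iv = Counter(gaps).most_common(1)[0][0]  # most frequent inter-onset interval
    while iv < 0.25:
        iv *= 2
    while iv > 0.5:
        iv /= 2
    return iv

# Each song's time axis is then rescaled by 0.5 / beat_duration(...) so that
# all songs share a 0.5 s beat (120 beats per minute).
```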
To evaluate the quality of our music generation, we conducted a human survey with 27 participants. All subjects are university students who did not have any prior knowledge about the content of our project. In the survey, participants are presented with several pairs of 30-second music clips, and are asked to vote which clip in the pair sounds better. We gave no other information about what they are listening to. They are also allowed to submit a neutral vote in case they cannot decide between the two choices. In our study, we consider three cases: our full method versus Magenta (Waite et al.), our method with melody only versus Google Magenta (Waite et al.), and our method versus our method without the temporal alignment described in Sec. 4.5. We randomly generated 10 songs per method and randomly shuffled each pair. For the Magenta baseline we used its Lookback version, which was the latest version at the time of our submission.

Table 1: Human evaluation of music generated by different methods: ours and Waite et al.'s Magenta. Ours-MO and Ours-NA are short for Ours Melody Only and Ours No Alignment. We allowed neutral votes, thus the sum within each pair is less than 100%.

Method       Ours    Magenta  |  Ours-MO  Magenta  |  Ours    Ours-NA
% of votes   81.6%   14.4%    |  69.6%    13.6%    |  75.2%   12.0%

As shown in Table 1, most participants prefer songs produced by our method over Magenta. Participants also made comments such as "music sounds better with percussion than piano alone" and "multiple instruments with continuous play is much better". This confirms that our multi-layer generation improves music quality. A few participants also pointed out that "drums sound too different and do not fit the melody perfectly", which indicates that further improvements can still be made. In the second comparison, we study whether the quality improvement of our method is only caused by adding chords and drums, or is also related to our two-layer melody generation with alignment. It can be seen that without chords and drums the score drops, as expected, but is still much higher than the Magenta baseline. This is because our method produces "less recursion and silence" and a "faster and more accurate tempo", as mentioned by the participants. In the last comparison, most participants prefer our full method to the no-alignment version, since "beats are more subtle and better timed". This demonstrates the usefulness of temporal alignment. We performed significance tests on the evaluation results in Table 1. All comparisons passed the significance test at the 5% significance level. The lowest alpha values required to reject the null hypothesis are 1e-19, 1e-14, and 1e-19, respectively. Further experimental results of removing the music scale from our method and adding temporal alignment to the baseline can be found on our project page.

To examine the suitability of the four scale types we use, we collected the list of all existing musical scales from Wikipedia and measured the scale distribution of the dataset. 37.8% of the data belongs to our four scales; 47.7% belongs to Acoustic, Algerian, Lydian, Adonai Malakh, and Ukrainian; and 14.5% belongs to the remaining 31 uncommon scales such as Insen, Iwato, Yo, and Enigmatic. We also found that the five scales that account for the 47.7% are each only one or two degrees away from one of our scales (all notes are the same except one, which is one or two steps away). This experiment shows that, even under the most rigorous musical definition, at least 85.5% of online songs are very close to the four scales that we use.

Finally, we study our model's capability to generate new music. Towards this goal, we generated 100 sequences of 50 seconds in length using different random initializations. We perform two evaluations. First, for each sequence, we search for the longest sub-sequence of keys that matches part of the training data and record its length (a sketch of this measure follows below). This evaluates how much the model copies the training data. Second, we break each generated melody into segments of 2 bars in length (inspired by the common definition of music plagiarism). We then compare each segment to all segments in the rest of the 100 generated songs, and record the repeat count. This evaluates how much the model repeats itself. For comparison, we repeat the same evaluation for the Magenta baseline and for human-composed music. Table 2 reports the results. It can be seen that our method performs similarly to Magenta in terms of copying (sub-seq). It is somewhat surprising that human composers in fact tend to copy more from other songs, which indicates that both generation approaches could be further relaxed in terms of copying. Our method is less likely to generate recurring melodies (repeat) compared to Magenta, and is closer to the statistics of human-produced songs.

Table 2: Evaluation of the longest matching sub-sequence with the training data, and self-repeat counts.

           Human   Magenta   Ours
sub-seq    7.06    4.39      4.65
repeat     4.04    17.08     2.33
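The sub-seq measure can be computed with a standard longest-common-substring dynamic program, sketched below under the assumption that melodies are compared as sequences of key ids.

```python
def longest_match(generated, training_songs):
    """Length of the longest contiguous run of keys that a generated melody
    shares with any training song (the sub-seq measure). O(n*m) per pair."""
    best = 0
    for train in training_songs:
        n, m = len(generated), len(train)
        prev = [0] * (m + 1)
        for i in range(1, n + 1):
            cur = [0] * (m + 1)
            for j in range(1, m + 1):
                if generated[i - 1] == train[j - 1]:
                    cur[j] = prev[j - 1] + 1      # extend the common run
                    best = max(best, cur[j])
            prev = cur
    return best
```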
6 APPLICATIONS

In this section we demonstrate two novel applications of our pop music generation framework. We refer the reader to http://www.cs.toronto.edu/songfrompi/ for the music videos.

6.1 NEURAL DANCING AND KARAOKE

In our first application, we attempt to generate both music and a stickman dancing to it, as well as a sequence of karaoke-like text that people can sing along with. To learn the relationship between music and dance, we downloaded 1 hour of video from the game Just Dance, as well as the MIDI files for the songs included in the video from different sources. We use the method of Newell et al. (2016) to track single-frame 2D human pose in the videos. We process the single-frame tracking result to ensure left-right body consistency through time, and then use the method of Zhou et al. (2016) to convert the 2D pose sequence into 3D. Example results are shown in Fig. 5. We observe that our pose processing pipeline is able to extract reasonable human poses most of the time. However, the quality is not perfect, due to tracking failures and video effects.

[Figure 5: Examples from Just Dance and 3D human pose tracking results. (a) and (b) are success cases; pose tracking fails in (c), and (d) shows the defect in the video which makes tracking difficult.]

We define pose similarity as the average Euclidean distance over all joints, and cluster the poses into 456 clusters. We used the method of Frey & Dueck (2007), as the number of clusters is large. We learn to generate a dancing stickman by adding another dancing layer on top of the key layer, just as for drums and chords. We generate one pose per beat, which is equivalent to 4 time steps or 0.5 seconds in 120 beat-per-minute music. In particular, we predict one of the 456 pose clusters using a linear projection layer followed by a softmax. We use cross-entropy at each time step as our loss function. At inference time, we further apply a moving average to temporally smooth the generated 3D pose sequence.

To learn the relationship between music and lyrics, we collected 51 hours of lyrics data from the internet. This data contains 50 hours of text without music; the remaining 1 hour comes from songs we collected from Just Dance. For the music part, we temporally align each sentence in the lyrics with the MIDI music by using the widely used lrc format, which records a time tag at the beginning of every sentence (see the parsing sketch at the end of this section). We select words that appear at least 4 times, which yields a vocabulary of size 3390, including unknown and end-of-sentence tokens. Just as for dance, we generate one word per beat using another lyrics layer on top of the key layer.

6.2 NEURAL STORY SINGING

In this application our aim is to sing a song about a photo. We first generate a story about the photo with the neural storyteller of Kiros et al. (2015) and try to accompany the generated text with music. We utilize the same 1-hour dataset of temporally aligned lyrics and music. We further include the phoneme list of our 3390-word vocabulary, as we also want to sing the story. Starting from the text produced by the neural storyteller, we arrange it into a temporal sequence with 1 beat per word and a short pause at each end-of-sentence, where the pause length is chosen such that the next sentence starts from a new bar. As our dataset is relatively small, we generate the profile conditioned on the text, which has fewer dimensions than the key. This is done by a 2-layer LSTM that takes as input the generated profile at the last time step concatenated with a one-hot vector of the current word, and outputs the current profile. We then generate the song with our model given the generated profile. The generated melody key is then used to decide the pitch frequency of a virtual singer, assuming the key-to-pitch correspondence of a grand piano. We further constrain the singer's final pitch to always be in the range of E3 to G4, which we empirically found to be the natural pitch range. We then replace all words outside the vocabulary with the sound "Ooh", and play the rendered singing together with the generated music.
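A sketch of how the lrc time tags mentioned in Section 6.1 can be parsed into (time, sentence) pairs for the alignment step. The regular expression assumes the common [mm:ss.xx] tag format; this is an illustration, not the authors' pipeline.

```python
import re

LRC_TAG = re.compile(r'\[(\d+):(\d+(?:\.\d+)?)\]')

def parse_lrc(text):
    """Parse .lrc lyrics into (start_time_seconds, sentence) pairs, using the
    time tag that precedes every sentence."""
    aligned = []
    for line in text.splitlines():
        m = LRC_TAG.match(line.strip())
        if m:
            minutes, seconds = int(m.group(1)), float(m.group(2))
            aligned.append((60 * minutes + seconds, line[m.end():].strip()))
    return aligned
```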
7 CONCLUSION AND FUTURE WORK

We have presented a hierarchical approach to pop song generation which exploits music theory in the model design. In contrast to past work, our approach is able to generate multi-track music. Our human studies show the strength of our framework compared to an existing strong baseline. We additionally proposed two new applications: neural dancing & karaoke, and neural story singing. In this paper, we show that incorporating knowledge from music theory into the model, as well as capturing multiple aspects of music, results in better-sounding songs. However, generating appealing and interesting music that captures structure, rhythm, and mood is challenging, and there is an exciting road ahead to improve on these aspects in the future.

REFERENCES

Jamshed J. Bharucha and Peter M. Todd. Modeling the perception of tonal structure with neural nets. Computer Music Journal, 13(4):44-53, 1989.

Nicolas Boulanger-Lewandowski, Yoshua Bengio, and Pascal Vincent. Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription. In ICML, 2012.

Michael Chan, John Potter, and Emery Schubert. Improving algorithmic music composition with machine learning. In 9th International Conference on Music Perception and Cognition, 2006.

Chun-Chi J. Chen and Risto Miikkulainen. Creating melodies with evolving recurrent neural networks. In International Joint Conference on Neural Networks, 2001.

Douglas Eck and Juergen Schmidhuber. A first look at music composition using LSTM recurrent neural networks. 2002.

Steve Engels, Fabian Chan, and Tiffany Tong. Automatic real-time music generation for games. In AIIDE Workshop, 2015.

Brendan J. Frey and Delbert Dueck. Clustering by passing messages between data points. Science, 315:972-976, 2007.

Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. A neural algorithm of artistic style. arXiv:1508.06576, 2015.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Allen Huang and Raymond Wu. Deep learning for music. arXiv preprint arXiv:1606.04930, 2016.

Daniel Johnson. Composing music with recurrent neural networks. https://goo.gl/YP9QyR.

Semin Kang, Soo-Yol Ok, and Young-Min Kang. Automatic Music Generation and Machine Learning Based Evaluation, pp. 436-443. Springer Berlin Heidelberg, 2012.

Andrej Karpathy, Justin Johnson, and Li Fei-Fei. Visualizing and understanding recurrent networks. In ICLR 2016 Workshop, 2016.

Ryan Kiros, Yukun Zhu, Ruslan R. Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Skip-thought vectors. In NIPS, 2015.

David Macdonald. Song from π. https://youtu.be/OMq9he-5HUU.

Reddit midi man. MIDI collection. https://goo.gl/4moEZ3.

Michael C. Mozer. Neural network music composition by prediction: Exploring the benefits of psychoacoustic constraints and multi-scale processing. Connection Science, 6(2-3), 1996.

Alejandro Newell, Kaiyu Yang, and Jia Deng. Stacked hourglass networks for human pose estimation. In ECCV, 2016.

Arnold Schoenberg and Dika Newlin. Style and idea. Technical report, Williams and Norgate, London, 1951.

Edgar Simo-Serra, Sanja Fidler, Francesc Moreno-Noguer, and Raquel Urtasun. Neuroaesthetics in fashion: Modeling the perception of beauty. In CVPR, 2015.

Felix Sun. DeepHear - composing and harmonizing music with neural networks. https://goo.gl/7OTZZL.

Elliot Waite, Douglas Eck, Adam Roberts, and Dan Abolafia. Project Magenta. https://magenta.tensorflow.org/.

Wikipedia. List of musical scales and modes. https://goo.gl/5kvXLP.

Xiaowei Zhou, Menglong Zhu, Spyridon Leonardos, Kosta Derpanis, and Kostas Daniilidis. Sparseness meets deepness: 3D human pose estimation from monocular video. In CVPR, 2016.
Under review as a conference paper at ICLR 2017

SEMANTIC NOISE MODELING FOR BETTER REPRESENTATION LEARNING

Hyo-Eun Kim* and Sangheum Hwang
Lunit Inc., Seoul, South Korea
{hekim, shwang}@lunit.io

Kyunghyun Cho
Courant Institute of Mathematical Sciences and Centre for Data Science
New York University, New York, NY 10012, USA
kyunghyun.cho@nyu.edu

*Corresponding author

ABSTRACT

Latent representation learned from multi-layered neural networks via hierarchical feature abstraction enables the recent success of deep learning. Under the deep learning framework, generalization performance depends highly on the learned latent representation. In this work, we propose a novel latent space modeling method to learn better latent representations. We design a neural network model based on the assumption that a good base representation for supervised tasks can be attained by maximizing the sum of hierarchical mutual informations between the input, latent, and output variables. From this base model, we introduce a semantic noise modeling method which enables semantic perturbation on the latent space to enhance the representational power of the learned latent features. During training, the latent vector representation can be stochastically perturbed by a modeled additive noise while preserving its original semantics. This implicitly brings the effect of semantic augmentation on the latent space. The proposed model can be easily learned by back-propagation with common gradient-based optimization algorithms. Experimental results show that the proposed method helps achieve performance benefits over various previous approaches. We also provide empirical analyses of the proposed latent space modeling method, including t-SNE visualization.

1 INTRODUCTION

Enhancing generalization performance against unseen data given some sample data is the main objective of machine learning. From this point of view, deep learning has achieved many breakthroughs in several domains such as computer vision (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016), natural language processing (Collobert & Weston, 2008; Bahdanau et al., 2015), and speech recognition (Hinton et al., 2012; Graves et al., 2013). Deep learning is realized on deep layered neural network architectures, and it learns an appropriate task-specific latent representation based on given training data. Better latent representations learned from training data result in better generalization over future unseen data. Representation learning, or latent space modeling, has therefore become one of the key research topics in deep learning. During the past decade, researchers focused on unsupervised representation learning and achieved several remarkable landmarks in deep learning history (Vincent et al., 2010; Hinton et al., 2006; Salakhutdinov & Hinton, 2009). In terms of utilizing good base features for supervised learning, the base representation learned from unsupervised learning can be a good solution for supervised tasks (Bengio et al., 2007; Masci et al., 2011).

The definition of a 'good' representation, however, differs according to the target task. In unsupervised learning, a model is learned from unlabelled examples.
Its main objective is to build a model that estimates the true data distribution given the examples available for training, so the learned latent representation normally includes broadly informative components of the raw input data (e.g., the mutual information between the input and the latent variable can be maximized for this objective). In supervised learning, however, a model is learned from labelled examples. In the case of classification, a supervised model learns to discriminate input data in terms of the target task using the corresponding labels. The latent representation is therefore obtained so as to maximize performance on the target supervised task.

Since the meaning of a good representation varies according to the target task (unsupervised or supervised), pre-trained features from an unsupervised model are not guaranteed to be useful for subsequent supervised tasks. Instead of the two-stage learning strategy (unsupervised pre-training followed by supervised fine-tuning), several works focused on a joint learning model which optimizes unsupervised and supervised objectives concurrently, resulting in better generalization performance (Goodfellow et al., 2013; Larochelle & Bengio, 2008a; Rasmus et al., 2015; Zhao et al., 2015; Zhang et al., 2016; Cho & Chen, 2014).

In this work, we propose a novel latent space modeling method for supervised learning as an extension of the joint learning approach. We define a good latent representation of standard feed-forward neural networks on the basis of information theory. We then introduce a semantic noise modeling method in order to enhance generalization performance. The proposed method stochastically perturbs the latent representation of a training example by injecting a modeled semantic additive noise. Since the additive noise is randomly sampled from a pre-defined probability distribution at every training iteration, different latent vectors from a single training example can be fully utilized during training. The multiple different latent vectors produced from a single training example are semantically similar under the proposed latent space modeling method, so we can expect a semantic augmentation effect on the latent space.

Experiments are performed on two datasets: MNIST and CIFAR-10. The proposed model results in better classification performance compared to previous approaches through a notable generalization effect (stochastically perturbed training examples better cover the distribution of unseen data).

2 METHODOLOGY

The proposed method starts from the existing joint learning viewpoint. This section first explains the process of obtaining a good base representation for supervised learning, which is the basis of the proposed latent space modeling method. We then describe how the proposed semantic noise modeling method perturbs the latent space while maintaining the original semantics.

2.1 BASE JOINT LEARNING MODEL

In a traditional feed-forward neural network model (Figure 1(a)), the output Y of input data X is compared with its true label, and the error is propagated backward from top to bottom, which implicitly learns a task-specific latent representation Z of the input X. As an extension of a joint learning approach, the objective to be optimized can be described in general as (Larochelle & Bengio, 2008b):

min_θ L_unsup + λ L_sup    (1)

where L_unsup and L_sup are an unsupervised loss and a supervised loss, respectively, θ is the set of model parameters to be optimized during training, and λ is a loss weighting coefficient.

In terms of modeling L_unsup in Eq. (1), we assume that a good latent representation Z is attained by maximizing the sum of hierarchical mutual informations between the input, latent, and output variables, i.e., the sum of the mutual information between the input X and Z and the mutual information between Z and the output Y. Each mutual information decomposes into an entropy and a conditional entropy term, so the sum of hierarchical mutual informations can be expressed as:

I(X;Z) + I(Z;Y) = H(X) - H(X|Z) + H(Z) - H(Z|Y)    (2)

where I(·;·) is the mutual information between random variables, and H(·) and H(·|·) are the entropy and the conditional entropy of random variables, respectively. Note that the sum of these mutual informations is equivalent to the total correlation of X, Z, and Y under the graphical structure of the general feed-forward model described in Figure 1(a), P(X,Z,Y) = P(Y|Z)P(Z|X)P(X); the total correlation is equal to the sum of all pairwise mutual informations (Watanabe, 1960).
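For completeness, a short verification of this equivalence under the deterministic mappings used here (this derivation is ours, condensed from the Watanabe (1960) identity):

```latex
% With P(X,Z,Y) = P(Y|Z) P(Z|X) P(X) and deterministic mappings
% Z = f_{\theta_1}(X), Y = f_{\theta_2}(Z), so that H(Z|X) = H(Y|Z) = 0:
\begin{aligned}
C(X,Z,Y) &= H(X) + H(Z) + H(Y) - H(X,Z,Y)\\
         &= H(X) + H(Z) + H(Y) - \bigl(H(X) + H(Z \mid X) + H(Y \mid Z)\bigr)\\
         &= H(Z) + H(Y) = I(X;Z) + I(Z;Y),
\end{aligned}
```

since I(X;Z) = H(Z) - H(Z|X) = H(Z) and I(Z;Y) = H(Y) - H(Y|Z) = H(Y).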
Our objective is to find the model parameters which maximize I(X;Z) + I(Z;Y). Since H(X) and H(Z) are non-negative and H(X) is constant in this case, the lower bound on I(X;Z) + I(Z;Y) can be reduced to:¹

I(X;Z) + I(Z;Y) ≥ -H(X|Z) - H(Z|Y)    (3)

¹Although H(Z) is an upper bound of H(Z|Y), H(Z) is in any case affected by the process of H(Z|Y) being minimized in Eq. (3). In Section 4, we experimentally show that we can obtain a good base model even from the relatively loose lower bound defined in Eq. (3).

It is known that maximizing -H(X|Z) can be formulated as minimizing the reconstruction error between the input x(i) (the i-th example sampled from X) and its reconstruction x_R(i) under the general auto-encoder framework (Vincent et al., 2010). Since H(X|Z) + H(Z|Y) is proportional to the sum of the reconstruction errors of x(i) (with its reconstruction x_R(i)) and z(i) (with its reconstruction z_R(i)), the target objective can be expressed as follows (refer to Appendix (A1) for the detailed mathematical derivation):

min_θ Σ_i L_rec(x(i), x_R(i)) + L_rec(z(i), z_R(i))    (4)

where L_rec is a reconstruction loss.

[Figure 1: (a) Standard feed-forward neural network model, (b) feed-forward neural network model with reconstruction paths, and (c) feed-forward neural network model with reconstruction and stochastic perturbation paths.]

Figure 1(b) shows the target model obtained from the assumption that a good latent representation Z can be obtained by maximizing the sum of hierarchical mutual informations. Given an input sample x, the feed-forward vectors and their reconstructions are attained deterministically by:

z = f_θ1(x)
y = f_θ2(f_θ1(x))
x_R = g_θ1'(z) = g_θ1'(f_θ1(x))
z_R = g_θ2'(y) = g_θ2'(f_θ2(f_θ1(x)))    (5)

Given a set of training pairs (x(i), t(i)), where x(i) and t(i) are the i-th input example and its label, the target objective in Eq. (1) under the model described in Figure 1(b) can be organized as below (with real-valued input samples, the L2 loss L_L2 is a proper choice for the reconstruction loss L_rec):

min_{θ1, θ1', θ2, θ2'} Σ_i λ{ L_L2(x(i), x_R(i)) + L_L2(z(i), z_R(i)) } + L_NLL(y(i), t(i))    (6)

where L_NLL is a negative log-likelihood loss for the target supervised task. Note that Eq. (6) represents 'proposed-base' in our experiments (see Section 4.3).
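A minimal sketch of this objective in a modern framework (the paper's experiments use TensorFlow; PyTorch is used here purely for brevity). The module names are placeholders for f_θ1, f_θ2 and the tied decoders g_θ1', g_θ2', not the authors' code.

```python
import torch
import torch.nn.functional as F

def base_joint_loss(x, t, enc1, enc2, dec1, dec2, lam=0.03):
    """One step of the proposed-base objective, Eq. (6)."""
    z = enc1(x)                  # latent representation z = f_theta1(x)
    y = enc2(z)                  # output logits y = f_theta2(z)
    x_rec = dec1(z)              # reconstruction of the input
    z_rec = dec2(y)              # reconstruction of the latent vector
    recon = F.mse_loss(x_rec, x) + F.mse_loss(z_rec, z)   # L2 terms
    nll = F.cross_entropy(y, t)  # negative log-likelihood of the labels
    return lam * recon + nll
```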
2.2 SEMANTIC NOISE MODELING

Based on the architecture shown in Figure 1(b) with the target objective in Eq. (6), we conjecture that stochastic perturbation of the latent space during training helps to achieve better generalization performance on supervised tasks. Figure 1(c) shows this strategy, which integrates the stochastic perturbation process into training. Suppose that Z_P is a perturbed version of Z, and Y_P is the output feed-forwarded from Z_P. Given a latent vector z = f_θ1(x) from an input sample x,

z' = z + z_e  and  ŷ = f_θ2(z')    (7)

where z' and ŷ are a perturbed latent vector and its output, respectively, and z_e is the additive noise used in the perturbation of z. Based on the architecture shown in Figure 1(c), the target objective can be modified as:

min_{θ1, θ1', θ2, θ2'} Σ_i λ1{ L_L2(x(i), x_R(i)) + L_L2(z(i), z_R(i)) } + λ2 L_NLL(y(i), t(i)) + L_NLL(ŷ(i), t(i))    (8)

Using random additive noise directly for z_e is the most intuitive approach ('proposed-perturb (random)' in Section 4.3). However, preserving the semantics of the original latent representation z cannot be guaranteed under direct random perturbation of the latent space. While the latent space is not directly interpretable in general, the output logit y of the latent representation z is interpretable, because the output logit is tightly coupled to the prediction of the target label. In order to preserve the semantics of the original latent representation after perturbation, we indirectly model a semantic noise on the latent space by adding small random noise directly on the output space.

Based on the output (pre-softmax) logit y, a semantic-preserving variation y' of y can be modeled by y' = y + y_e, where y_e is a random noise vector stochastically sampled from a zero-mean Gaussian with small standard deviation σ: N(0, σ²I). The semantic perturbation z' can then be reconstructed from the random perturbation y' through the decoding path g_θ2' in Figure 1(c). From the original output logit y and the randomly perturbed output logit y', the semantic additive noise z_e on the latent space can be approximately modeled as:

z_R = g_θ2'(y)
z_R' = g_θ2'(y') = g_θ2'(y + y_e)
z_e ≈ z_R' - z_R = g_θ2'(y + y_e) - g_θ2'(y)    (9)

Using the modeled semantic additive noise z_e and the original latent representation z, we obtain the semantic perturbation z' as well as its output ŷ via Eq. (7) for our target objective, Eq. (8). From the described semantic noise modeling process ('proposed-perturb (semantic)' in Section 4.3), we expect to achieve better representations on the latent space. The effect of the proposed model on the learned latent representation is explained in more detail in Section 4.4.
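The semantic perturbation of Eqs. (7)-(9) can be sketched as follows, continuing the placeholder modules above. Whether gradients should flow through the modeled noise z_e is not specified in the text, so treating it as a constant below is an assumption; the min-max scaling applied before adding noise (Section 4.2) is omitted here for clarity.

```python
import torch
import torch.nn.functional as F

def semantic_perturb_loss(x, t, enc1, enc2, dec1, dec2,
                          lam1=0.03, lam2=1.0, sigma=0.2):
    """One step of the objective in Eq. (8) with the noise model of Eq. (9)."""
    z = enc1(x)
    y = enc2(z)
    x_rec, z_rec = dec1(z), dec2(y)
    y_e = sigma * torch.randn_like(y)         # small noise on the output logits
    with torch.no_grad():                     # assumption: the modeled noise
        z_e = dec2(y + y_e) - dec2(y)         # is treated as a constant
    y_hat = enc2(z + z_e)                     # prediction from the perturbed z
    recon = F.mse_loss(x_rec, x) + F.mse_loss(z_rec, z)
    sup = lam2 * F.cross_entropy(y, t) + F.cross_entropy(y_hat, t)
    return lam1 * recon + sup
```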
[Figure 2: Previous works for supervised learning; (a) traditional feed-forward model, and (b) joint learning model with both supervised and unsupervised losses.]

3 RELATED WORKS

Previous works on deep neural networks for supervised learning can be categorized into two types, as shown in Figure 2: (a) a general feed-forward neural network model (LeCun et al., 1998; Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016), and (b) a joint learning model which optimizes unsupervised and supervised objectives at the same time (Zhao et al., 2015; Zhang et al., 2016; Cho & Chen, 2014). The corresponding objective functions are:

min_{θ1, θ2} Σ_i L_NLL(y(i), t(i))    (10)

min_{θ1, θ1', θ2} Σ_i λ L_L2(x(i), x_R(i)) + L_NLL(y(i), t(i))    (11)

where λ is a loss weighting coefficient between the unsupervised and supervised losses.

Since feed-forward neural network models are normally implemented with multiple layers in a deep learning framework, the joint learning model can be sub-classified into two types according to the type of reconstruction: reconstruction only of the input data x (Eq. (11)), and reconstruction of all the intermediate features including the input data x:

min_θ Σ_i { λ0 L_L2(x(i), x_R(i)) + Σ_j λj L_L2(h_j(i), h_jR(i)) + L_NLL(y(i), t(i)) }    (12)

where h_j(i) and h_jR(i) are the j-th hidden representation of the i-th training example and its reconstruction.

Another type of joint learning model, the ladder network (Figure 3), was introduced for semi-supervised learning (Rasmus et al., 2015). The key concept of the ladder network is to obtain robust features by learning de-noising functions (g') of the representations at every layer of the model via reconstruction losses; the supervised loss is combined with the reconstruction losses in order to build the semi-supervised model. The ladder network achieved the best performance on semi-supervised tasks, but it is not appropriate for supervised tasks with small-scale training sets (an experimental analysis for supervised learning on permutation-invariant MNIST is summarized in Appendix (A2)).

[Figure 3: Ladder network; a representative model for semi-supervised learning (Rasmus et al., 2015).]

The proposed model in this work can be extended to semi-supervised learning, but our main focus is to enhance the representational power of the latent space given labelled data for supervised learning. We leave the study of the semi-supervised learning scenario based on the proposed methodology for future research.

4 EXPERIMENTS

For quantitative analysis, we compare the proposed methodology with the previous approaches described in Section 3: a traditional feed-forward supervised learning model and a joint learning model with two different types of reconstruction losses (reconstruction only of the first layer, or of all the intermediate layers including the first layer). The proposed methodology includes the baseline model in Figure 1(b) as well as the stochastic perturbation model in Figure 1(c). For the stochastic perturbation model in particular, we compare the random and semantic perturbations and present qualitative analysis of the meaning of the proposed perturbation methodology.

4.1 DATASETS

We experiment with two public datasets: MNIST (including a permutation-invariant MNIST case) and CIFAR-10. MNIST (10 classes) consists of 50k, 10k, and 10k 28×28 gray-scale images for the training, validation, and test datasets, respectively. CIFAR-10 (10 classes) consists of 50k and 10k 32×32 3-channel images for the training and test sets, respectively. We split the 50k CIFAR-10 training images into 40k and 10k for training and validation. Experiments are performed with different sizes of training set (from 10 examples per class to the entire training set) in order to verify the effectiveness of the proposed model in terms of generalization performance under varying sizes of training set.

4.2 IMPLEMENTATION

Figure 4 shows the architecture of the neural network model used in this experiment. The W's are convolution or fully-connected weights (biases are excluded for visual brevity).
Three convolution layers (3×3 (stride 2) 32, 3×3 (2) 64, and 3×3 (2) 96, where each item gives the filter kernel size, (stride), and the number of filters) and two fully-connected layers (with 128 and 10 output nodes, respectively) are used for MNIST. For the permutation-invariant MNIST setting, fully-connected layers with 784-512-256-256-128-10 nodes are used. Four convolution layers (5×5 (1) 64, 3×3 (2) 64, 3×3 (2) 64, and 3×3 (2) 96) and three fully-connected layers (128, 128, and 10 nodes) are used for CIFAR-10. Weights on the decoding (reconstruction) path are tied to the corresponding weights on the encoding path, as shown in Figure 4 (transposed convolution for tied convolution layers and transposed matrix multiplication for tied fully-connected layers).

[Figure 4: Target network architecture; 3 convolution and 2 fully-connected layers were used for MNIST, 5 fully-connected layers were used for permutation-invariant MNIST, and 4 convolution and 3 fully-connected layers were used for CIFAR-10.]

In Figure 4, z' is perturbed directly from z by adding Gaussian random noise for random perturbation. For semantic perturbation, z' is generated indirectly from y', which is perturbed by adding Gaussian random noise on y, based on Eq. (9). For perturbation, the base activation vector (z is the base vector for random perturbation and y is the base vector for semantic perturbation) is scaled to [0.0, 1.0], and zero-mean Gaussian noise with a standard deviation of 0.2 is added (via element-wise addition) to the normalized base activation. The perturbed scaled activation is then de-scaled with the original min and max activations of the base vector (see the sketch below).

Initial learning rates are 0.005 and 0.001 for MNIST and permutation-invariant MNIST, and 0.002 for CIFAR-10. The learning rates are decayed by a factor of 5 every 40 epochs until the 120th epoch. For both datasets, the minibatch size is set to 100, and the target objective is optimized using the Adam optimizer (Kingma & Ba, 2015) with momentum 0.9. All λ's for the reconstruction losses in Eq. (11) and Eq. (12) are 0.03 and 0.01 for MNIST and CIFAR-10, respectively. The same weighting factors for the reconstruction losses (0.03 for MNIST and 0.01 for CIFAR-10) are used for λ1 in Eq. (8), and 1.0 is used for λ2.

Input data is first scaled to [0.0, 1.0] and then whitened by the average across all training examples. For CIFAR-10, random cropping (a 24×24 image randomly cropped from the original 32×32 image) and random horizontal flipping (mirroring) are used for data augmentation. We select the network that performs best on the validation dataset for evaluation on the test dataset. All experiments are performed with TensorFlow (Abadi et al., 2015).
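A sketch of the scale-perturb-descale step described above; the epsilon guard against a zero-range vector is an added safety detail not mentioned in the text.

```python
import torch

def scaled_gaussian_perturb(v, sigma=0.2):
    """Scale the base activation vector to [0, 1], add element-wise zero-mean
    Gaussian noise, then de-scale with the original min and max (Section 4.2).
    v: the base vector (z for random, y for semantic perturbation)."""
    lo, hi = v.min(), v.max()
    span = (hi - lo).clamp_min(1e-8)        # guard against a constant vector
    v01 = (v - lo) / span                   # min-max scale to [0, 1]
    v01 = v01 + sigma * torch.randn_like(v01)
    return v01 * span + lo                  # de-scale to the original range
```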
Table 1: Error rate (%) on the test set using the model with the best performance on the validation set. The numbers in the header row of each sub-table are the number of randomly chosen per-class training examples. The average performance and the standard deviation over three different random-split datasets (except for the case using the entire training set in the last column) are reported; the error rate on each random set is summarized in Appendix (A3). Performance of the three previous approaches (previous-1, 2, 3: the feed-forward model of Figure 2(a), the joint learning model with recon-one of Figure 2(b), and the joint learning model with recon-all of Figure 2(b), respectively) and of the proposed methods (proposed-1, 2, 3: the baseline of Figure 1(b), random perturbation of Figure 1(c), and semantic perturbation of Figure 1(c), respectively) is summarized.

MNIST        10            20            50            100          200          500          1k           2k           (all) 50k
previous-1   24.55 (3.04)  16.00 (1.33)  10.35 (0.66)  6.58 (0.42)  4.71 (0.28)  2.94 (0.23)  1.90 (0.27)  1.45 (0.08)  1.04
previous-2   21.67 (3.19)  13.60 (0.99)  7.85 (0.10)   5.44 (0.37)  4.14 (0.08)  2.50 (0.15)  1.84 (0.07)  1.45 (0.07)  1.12
previous-3   20.11 (2.81)  13.69 (0.62)  9.15 (0.15)   6.77 (0.25)  5.39 (0.11)  3.89 (0.27)  2.91 (0.17)  2.28 (0.10)  1.87
proposed-1   21.35 (1.16)  11.65 (1.15)  6.33 (0.10)   4.32 (0.31)  3.07 (0.11)  1.98 (0.11)  1.29 (0.09)  0.94 (0.02)  0.80
proposed-2   20.17 (1.52)  11.68 (0.81)  6.24 (0.29)   4.12 (0.24)  3.04 (0.13)  1.88 (0.05)  1.24 (0.03)  0.96 (0.08)  0.65
proposed-3   20.11 (0.81)  10.59 (0.74)  5.92 (0.12)   3.79 (0.23)  2.72 (0.09)  1.78 (0.05)  1.15 (0.01)  0.88 (0.03)  0.62

CIFAR-10     10            20            50            100           200           500           1k            2k            (all) 40k
previous-1   73.82 (1.43)  68.99 (0.54)  61.30 (0.83)  54.93 (0.56)  46.97 (0.59)  33.69 (0.43)  26.63 (0.39)  20.97 (0.09)  17.80
previous-2   75.68 (1.56)  69.05 (1.13)  61.44 (0.63)  55.02 (0.34)  46.18 (0.51)  33.62 (0.38)  26.78 (0.48)  21.25 (0.40)  17.68
previous-3   73.33 (1.06)  67.63 (0.56)  62.59 (0.76)  56.37 (0.20)  50.51 (0.61)  41.26 (0.73)  32.55 (1.20)  26.38 (0.08)  22.71
proposed-1   71.63 (0.69)  66.17 (0.40)  58.91 (0.86)  52.65 (0.28)  43.46 (0.30)  31.86 (0.54)  25.76 (0.31)  21.06 (0.18)  17.45
proposed-2   71.69 (0.25)  66.75 (0.54)  58.95 (0.63)  53.01 (0.26)  43.71 (0.19)  31.80 (0.18)  25.50 (0.33)  20.81 (0.27)  17.43
proposed-3   71.50 (1.14)  66.87 (0.17)  58.30 (0.62)  52.32 (0.08)  42.98 (0.34)  30.91 (0.23)  24.81 (0.26)  20.19 (0.25)  16.16
4.3 QUANTITATIVE ANALYSIS

Three previous approaches (a traditional feed-forward model, a joint learning model with the input reconstruction loss, and a joint learning model with reconstruction losses at all intermediate layers including the input layer) are compared with the proposed methods (the baseline model in Figure 1(b), and the stochastic perturbation model in Figure 1(c) with two different perturbation methods, random and semantic). We measure classification performance for varying sizes of training set (examples randomly chosen from the original training dataset). Performance is averaged over three different random trials.

Table 1 summarizes the classification performance for MNIST and CIFAR-10. As expected, the base model obtained by maximizing the sum of mutual informations ('proposed-base') mostly performs better than the previous approaches, and the model with semantic perturbation ('proposed-perturb (semantic)') performs best among all comparison targets. Notably on MNIST, the error rate of 'proposed-perturb (semantic)' with 2k per-class training examples is lower than the error rate of all types of previous works trained on the entire training set (approximately 5k per-class examples).

We further verify the proposed method on the permutation-invariant MNIST task with a standard feed-forward neural network. Classification performance is measured for three different sizes of training set (1k, 2k, and 5k per-class training examples). 'Proposed-perturb (semantic)' achieves the best performance among all configurations: 2.57%, 1.82%, and 1.28% error rates for 1k, 2k, and 5k per-class training examples, respectively. The joint learning model with the input reconstruction loss performs best among the three previous approaches: 2.72%, 1.97%, and 1.38% error rates for 1k, 2k, and 5k per-class training examples, respectively.

4.4 QUALITATIVE ANALYSIS

As mentioned before, random perturbation that adds unstructured noise directly to the latent representation cannot guarantee preservation of the semantics of the original representation. We compare the two perturbation methods (random and semantic) by visualizing examples reconstructed from the perturbed latent vectors (Figure 5). The top row shows original examples selected from the training set (among 2k per-class training examples), and the rest are reconstructions of their perturbed latent representations. Based on the architecture described in Figure 1(b), we generated five different perturbed latent representations for each type of perturbation, and reconstructed the perturbed latent vectors through the decoding path.

[Figure 5: Examples reconstructed from the perturbed latent vectors via (a) random perturbation, and (b) semantic perturbation (the top row shows the original training examples). More examples are provided in Appendix (A4.1).]

Figure 5(a) and (b) show examples reconstructed after random and semantic perturbation, respectively. In both cases, zero-mean Gaussian random noise (0.2 standard deviation) is used for perturbation. As shown in Figure 5(a), random perturbation partially destroys the original semantics; for example, the semantics of '1' are mostly destroyed under random perturbation, and some examples of '3' are reconstructed to look more like '8' than the original '3'. Figure 5(b) shows examples reconstructed after semantic perturbation.
The reconstructed examples show subtle semantic variations while preserving the original semantic content; for example, a thickness difference in '3' (example in the third row) or a writing-style difference in '8' (the openness of the top-left corner).

Figure 6 shows the overall effect of the perturbation. In this analysis, 100 per-class MNIST examples are used for training. From the model trained with the architecture described in Figure 1(b), the latent representations z of all 50k examples (of which only 1k were used for training) are visualized using t-SNE (Maaten & Hinton, 2008). Only the training examples of three classes (0, 1, and 9) among the ten classes are depicted as black circles for visual discrimination in Figure 6(a). The rest of the examples, which were not used for training (approximately 4.9k examples per class), are depicted as a background with different colors. We treat the colored background examples (not used for training) as a proxy for the true distribution of unseen data in order to estimate the generalization level of the learned representation under each type of perturbation. Figure 6(b) and (c) show the training examples (100 examples per class, yellow circles) and their perturbed versions (3× sampled from each example, blue crosses) under random and semantic perturbation, respectively.

[Figure 6: Training examples (circles or crosses, colored as described below) over the examples not used for training (depicted as background with different colors); (a) training examples (black circles), (b) training examples (yellow circles) with 3× random-perturbed samples (blue crosses), and (c) training examples (yellow circles) with 3× semantic-perturbed samples (blue crosses). Best viewed in color.]
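The visualization protocol can be reproduced along the following lines, assuming latent vectors extracted from the trained model; scikit-learn's t-SNE is used here as a stand-in for the original implementation, and the variable names are illustrative.

```python
import numpy as np
from sklearn.manifold import TSNE

def embed_latents(latents):
    """Embed (N, D) latent vectors into 2D for plotting."""
    return TSNE(n_components=2, init='random').fit_transform(latents)

# Assumed inputs: `latents` (N, D) from the trained encoder, and a boolean
# mask `is_train` (N,) marking the 100-per-class training examples.
# emb = embed_latents(latents)
# background = emb[~is_train]   # proxy for the distribution of unseen data
# foreground = emb[is_train]    # training examples (and, analogously, their
#                               # perturbed copies for panels (b) and (c))
```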
We leave the extension of the presented approach to semi-supervised learning for the future.REFERENCESMart ́ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S.Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, AndrewHarp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, ManjunathKudlur, Josh Levenberg, Dan Man ́e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah,Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vin-9Under review as a conference paper at ICLR 2017cent Vanhoucke, Vijay Vasudevan, Fernanda Vi ́egas, Oriol Vinyals, Pete Warden, Martin Watten-berg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learningon heterogeneous systems, 2015. URL http://tensorflow.org/ . Software available fromtensorflow.org.Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointlylearning to align and translate. In International Conference on Learning Representations (ICLR) ,2015.Yoshua Bengio, Pascal Lamblin, Dan Popovici, Hugo Larochelle, et al. Greedy layer-wise trainingof deep networks. In Advances in Neural Information Processing Systems (NIPS) , 2007.Kyunghyun Cho and Xi Chen. Classifying and visualizing motion capture sequences using deepneural networks. In International Conference on Computer Vision Theory and Applications , 2014.Ronan Collobert and Jason Weston. A unified architecture for natural language processing: Deepneural networks with multitask learning. In International Conference on Machine Learning(ICML) , 2008.Ian Goodfellow, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Multi-prediction deep boltz-mann machines. In Advances in Neural Information Processing Systems (NIPS) , 2013.Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recur-rent neural networks. In International conference on acoustics, speech and signal processing ,2013.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog-nition. In Computer Vision and Pattern Recognition (CVPR) , 2016.Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly,Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networksfor acoustic modeling in speech recognition: The shared views of four research groups. SignalProcessing Magazine, IEEE , 29(6):82–97, 2012.Geoffrey E. Hinton, Simon Osindero, and Yee Whye Teh. A fast learning algorithm for deep beliefnets. Neural Computation , 18:1527–1554, 2006.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In InternationalConference on Learning Representations (ICLR) , 2015.Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convo-lutional neural networks. In Advances in Neural Information Processing Systems (NIPS) , 2012.Hugo Larochelle and Yoshua Bengio. Classification using discriminative restricted boltzmann ma-chines. In International Conference on Machine Learning (ICML) , 2008a.Hugo Larochelle and Yoshua Bengio. Classification using discriminative restricted boltzmann ma-chines. In International Conference on Machine Learning (ICML) , 2008b.Yann LeCun, L ́eon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied todocument recognition. Proceedings of the IEEE , 86(11):2278–2324, 1998.Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. 
Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research (JMLR), 9(Nov):2579-2605, 2008.

Jonathan Masci, Ueli Meier, Dan Cireşan, and Jürgen Schmidhuber. Stacked convolutional auto-encoders for hierarchical feature extraction. In International Conference on Artificial Neural Networks, 2011.

Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems (NIPS), 2015.

Ruslan Salakhutdinov and Geoffrey E. Hinton. Deep Boltzmann machines. In Artificial Intelligence and Statistics Conference (AISTATS), 2009.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations (ICLR), 2015.

Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research (JMLR), 11:3371-3408, 2010.

Satosi Watanabe. Information theoretical analysis of multivariate correlation. IBM Journal of Research and Development, 4(1):66-82, 1960.

Yuting Zhang, Kibok Lee, and Honglak Lee. Augmenting supervised neural networks with unsupervised objectives for large-scale image classification. In International Conference on Machine Learning (ICML), 2016.

Junbo Zhao, Michael Mathieu, Ross Goroshin, and Yann LeCun. Stacked what-where auto-encoders. In International Conference on Learning Representations (ICLR), 2015.

APPENDIX

(A1) DERIVATION OF RECONSTRUCTION ERRORS FROM CONDITIONAL ENTROPY TERMS

Extended from Section 2. From the lower bound in Eq. (3), we consider the following optimization problem (refer to 'Section 2. From mutual information to autoencoders' in Vincent et al. (2010)):

max_{θ1, θ1', θ2, θ2'} E_{q(X,Z,Y)}[log q(X|Z)] + E_{q(X,Z,Y)}[log q(Z|Y)]    (13)

Here, q(X,Z,Y) denotes an unknown joint distribution. Note that Z and Y are the variables obtained from the parametric mappings Z = f_θ1(X) and Y = f_θ2(Z) (see Fig. 1). q(X,Z,Y) therefore reduces to q(X) via q(Z|X; θ1) = δ(Z - f_θ1(X)) and q(Y|Z; θ2) = δ(Y - f_θ2(Z)), where δ denotes the Dirac delta function.

Since the Kullback-Leibler divergence satisfies D_KL(q||p) ≥ 0 for any two distributions p and q, the optimization in Eq. (13) corresponds to the following optimization problem, where p(·) denotes a parametric distribution:

max_{θ1, θ1', θ2, θ2'} E_{q(X)}[log p(X|Z; θ1')] + E_{q(X)}[log p(Z|Y; θ2')]    (14)

Replacing q(X) with a sample distribution q⁰(X) and making all parametric dependencies between X, Z, and Y explicit, we have:

max_{θ1, θ1', θ2, θ2'} E_{q⁰(X)}[log p(X|Z = f_θ1(X); θ1')] + E_{q⁰(X)}[log p(Z|Y = f_θ2(f_θ1(X)); θ2')]    (15)

For a given input sample x of X, it is common to interpret x_R and z_R as the parameters of the distributions p(X|X_R = x_R) and p(Z|Z_R = z_R) which reconstruct x and z with high probability (i.e., x_R and z_R are not exact reconstructions of x and z). Since x_R and z_R are real-valued, we assume Gaussian distributions for these conditionals:

p(X|X_R = x_R) = N(x_R, σ0²I)
p(Z|Z_R = z_R) = N(z_R, σ0²I)    (16)

These assumptions yield log p(·|·) ∝ -L_L2(·,·).
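To spell out why the Gaussian assumption in Eq. (16) turns the log-likelihoods into L2 reconstruction losses (a one-line expansion, added here for completeness):

```latex
% For a d-dimensional Gaussian p(X | X_R = x_R) = N(x_R, \sigma_0^2 I):
\log p\!\left(x \mid X_R = x_R\right)
  = -\frac{1}{2\sigma_0^2}\,\lVert x - x_R \rVert_2^2
    - \frac{d}{2}\log\!\left(2\pi\sigma_0^2\right),
```

so maximizing the log-likelihood is, up to constants, minimizing L_L2(x, x_R); the same holds for z and z_R, which yields Eq. (18) below.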
With the following relations for the log terms in Eq. (15),

p(X|Z = f_θ1(x); θ1') = p(X|X_R = g_θ1'(f_θ1(x)))
p(Z|Y = f_θ2(f_θ1(x)); θ2') = p(Z|Z_R = g_θ2'(f_θ2(f_θ1(x))))    (17)

the optimization problem in Eq. (15) corresponds to the minimization of reconstruction errors over the input examples x(i):

min_{θ1, θ1', θ2, θ2'} Σ_i L_L2(x(i), x_R(i)) + L_L2(z(i), z_R(i))    (18)

(A2) LADDER NETWORK, A REPRESENTATIVE SEMI-SUPERVISED LEARNING MODEL

Extended from Section 3. We performed experiments with a ladder network model (Rasmus et al., 2015) in order to estimate its performance on pure supervised tasks for different sizes of training set. We used the code at https://github.com/rinuboney/ladder.git for this experiment; the network architecture implemented in the source code was used as-is (784-1000-500-250-250-250-10). Based on the same network architecture, we implemented the proposed stochastic perturbation model described in Figure 1(c) and compared classification performance with the ladder network, as reported in Table 2 (we did not focus on searching for optimal hyperparameters for the proposed model in this experiment). As summarized at the bottom of the table (mean over 3 random trials), the proposed semantic noise modeling method shows a fairly large performance gain over the ladder network model on small-scale datasets (e.g., with 10 per-class training examples, the proposed method achieves a 22.11% error rate, while the ladder network shows 29.66%).

Table 2: Classification performance (error rate in %) of the ladder network and the proposed model on three different sets of randomly chosen training examples (MNIST).

set No.1 (# examples per class)            10     20     50     100   200   500   1k    2k    (all) 5k
ladder network model; Figure 3             25.85  16.48  9.26   6.00  4.66  3.07  2.15  1.26  0.91
proposed-perturb (semantic); Figure 1(c)   19.76  12.33  8.77   6.06  4.59  2.93  1.87  1.31  0.93

set No.2 (# examples per class)            10     20     50     100   200   500   1k    2k
ladder network model; Figure 3             33.14  17.46  10.44  6.67  4.43  2.82  1.94  1.37
proposed-perturb (semantic); Figure 1(c)   23.36  15.35  9.43   5.75  4.43  2.99  1.87  1.39

set No.3 (# examples per class)            10     20     50     100   200   500   1k    2k
ladder network model; Figure 3             29.99  16.99  9.73   7.34  4.39  3.00  2.12  1.47
proposed-perturb (semantic); Figure 1(c)   23.21  13.98  8.83   6.51  4.32  2.94  2.22  1.49

mean over 3 random trials                  10     20     50     100   200   500   1k    2k    (all) 5k
ladder network model; Figure 3             29.66  16.98  9.81   6.67  4.49  2.96  2.07  1.37  0.91
proposed-perturb (semantic); Figure 1(c)   22.11  13.89  9.01   6.11  4.45  2.95  1.99  1.40  0.93

(A3) QUANTITATIVE ANALYSIS

Extended from Section 4.3. Among the total of 50k and 40k training examples in MNIST and CIFAR-10, we randomly select the examples used for training. Classification performance for the three different randomly chosen training sets is summarized in Table 3 (MNIST) and Table 4 (CIFAR-10). Further experiments with denoising constraints are also included; zero-mean Gaussian random noise with 0.1 standard deviation is used for noise injection.
(A3) QUANTITATIVE ANALYSIS

Extended from Section 4.3. Among the total of 50k and 40k training examples in MNIST and CIFAR-10 respectively, we randomly select the examples used for training. Classification performance on three different randomly chosen training sets is summarized in Table 3 (MNIST) and Table 4 (CIFAR-10). Further experiments with denoising constraints are also included; zero-mean Gaussian random noise with 0.1 standard deviation is used for noise injection. The denoising function helps to achieve slightly better performance on MNIST, but it results in performance degradation on CIFAR-10 (we did not focus on searching for optimal noise-injection parameters in these experiments).

Table 3: Classification performance (error rate in %) on three different sets of randomly chosen training examples (MNIST).

Set No.1 (# train examples per class)                           10     20     50     100   200   500   1k    2k    5k (all)
feed-forward model; Figure 2(a)                                 22.61  14.20  11.25  6.37  4.34  2.63  1.83  1.56  1.04
joint learning model with recon-one; Figure 2(b)                18.69  12.21  7.84   5.17  4.02  2.58  1.79  1.47  1.12
joint learning model with recon-one with denoising constraints  20.39  11.91  7.41   4.64  3.65  2.57  1.97  1.53  0.97
joint learning model with recon-all; Figure 2(b)                18.82  12.82  9.34   6.43  5.23  4.12  2.68  2.42  1.87
joint learning model with recon-all with denoising constraints  17.93  11.76  7.32   4.78  3.91  3.04  2.52  1.99  1.36
proposed-base; Figure 1(b)                                      20.23  10.18  6.47   3.89  3.04  1.89  1.33  0.91  0.80
proposed-base with denoising constraints                        19.88  10.89  6.62   4.26  3.40  2.44  2.11  1.54  1.13
proposed-perturb (random); Figure 1(c)                          18.38  10.58  6.64   3.78  3.14  1.90  1.21  0.89  0.65
proposed-perturb (semantic); Figure 1(c)                        19.33  9.72   5.98   3.47  2.84  1.84  1.16  0.84  0.62

Set No.2 (# train examples per class)                           10     20     50     100   200   500   1k    2k
feed-forward model; Figure 2(a)                                 28.84  17.36  10.14  6.20  4.78  3.02  1.61  1.41
joint learning model with recon-one; Figure 2(b)                26.09  14.40  7.98   5.18  4.17  2.29  1.94  1.52
joint learning model with recon-one with denoising constraints  27.69  13.11  6.95   5.07  3.54  2.37  1.83  1.28
joint learning model with recon-all; Figure 2(b)                24.01  14.13  8.98   6.84  5.44  3.51  2.98  2.18
joint learning model with recon-all with denoising constraints  23.05  13.29  7.79   5.12  3.92  3.01  2.27  1.84
proposed-base; Figure 1(b)                                      22.95  12.98  6.27   4.43  3.22  2.14  1.37  0.96
proposed-base with denoising constraints                        26.96  12.21  6.45   4.62  3.13  2.53  1.88  1.49
proposed-perturb (random); Figure 1(c)                          22.10  12.52  5.97   4.26  2.86  1.94  1.23  0.92
proposed-perturb (semantic); Figure 1(c)                        21.22  11.52  5.75   3.91  2.61  1.73  1.14  0.89

Set No.3 (# train examples per class)                           10     20     50    100   200   500   1k    2k
feed-forward model; Figure 2(a)                                 22.20  16.43  9.67  7.16  5.02  3.17  2.25  1.39
joint learning model with recon-one; Figure 2(b)                20.23  14.19  7.73  5.96  4.22  2.62  1.79  1.35
joint learning model with recon-one with denoising constraints  19.32  12.25  7.44  5.39  3.58  2.37  1.49  1.56
joint learning model with recon-all; Figure 2(b)                17.51  14.12  9.12  7.04  5.49  4.05  3.08  2.25
joint learning model with recon-all with denoising constraints  17.07  12.50  7.86  5.48  4.05  2.97  2.02  1.98
proposed-base; Figure 1(b)                                      20.86  11.79  6.25  4.63  2.96  1.91  1.16  0.96
proposed-base with denoising constraints                        19.89  11.30  6.26  4.57  3.50  2.63  1.61  1.47
proposed-perturb (random); Figure 1(c)                          20.02  11.94  6.12  4.32  3.13  1.81  1.28  1.08
proposed-perturb (semantic); Figure 1(c)                        19.78  10.53  6.03  4.00  2.70  1.76  1.14  0.92
Table 4: Classification performance (error rate in %) on three different sets of randomly chosen training examples (CIFAR-10).

Set No.1 (# train examples per class)                           10     20     50     100    200    500    1k     2k     4k (all)
feed-forward model; Figure 2(a)                                 73.30  69.25  62.42  55.65  47.71  34.30  27.04  21.06  17.80
joint learning model with recon-one; Figure 2(b)                75.19  70.38  62.25  55.30  46.89  34.12  26.63  21.05  17.68
joint learning model with recon-one with denoising constraints  73.72  68.20  61.99  55.23  46.64  36.37  29.78  25.53  21.73
joint learning model with recon-all; Figure 2(b)                74.79  68.33  62.92  56.24  51.37  40.30  30.91  26.49  22.71
joint learning model with recon-all with denoising constraints  76.56  69.67  64.53  57.88  52.74  42.24  36.90  30.93  27.41
proposed-base; Figure 1(b)                                      70.79  66.57  59.91  52.98  43.29  32.25  26.19  20.92  17.45
proposed-base with denoising constraints                        71.03  67.49  60.37  53.52  44.28  33.40  28.00  25.06  21.34
proposed-perturb (random); Figure 1(c)                          71.89  67.12  59.22  52.79  43.87  31.82  25.04  20.97  17.43
proposed-perturb (semantic); Figure 1(c)                        71.59  66.90  58.64  52.34  42.74  30.94  24.45  20.10  16.16

Set No.2 (# train examples per class)                           10     20     50     100    200    500    1k     2k
feed-forward model; Figure 2(a)                                 72.39  69.49  60.45  54.85  46.91  33.39  26.73  21.00
joint learning model with recon-one; Figure 2(b)                74.06  69.14  60.71  54.54  45.70  33.54  27.43  20.90
joint learning model with recon-one with denoising constraints  76.40  69.33  60.28  55.38  47.40  36.29  29.31  24.60
joint learning model with recon-all; Figure 2(b)                72.28  67.60  61.53  56.65  49.99  42.08  32.99  26.33
joint learning model with recon-all with denoising constraints  73.90  69.23  61.90  57.99  52.35  45.12  37.23  30.14
proposed-base; Figure 1(b)                                      72.49  65.62  57.82  52.66  43.20  32.24  25.60  21.32
proposed-base with denoising constraints                        72.99  66.75  57.78  53.81  44.33  33.56  28.40  25.03
proposed-perturb (random); Figure 1(c)                          71.84  65.98  58.08  53.37  43.44  31.56  25.69  21.03
proposed-perturb (semantic); Figure 1(c)                        72.85  66.65  57.44  52.21  42.74  31.17  24.99  20.54

Set No.3 (# train examples per class)                           10     20     50     100    200    500    1k     2k
feed-forward model; Figure 2(a)                                 75.78  68.24  61.02  54.29  46.28  33.38  26.11  20.85
joint learning model with recon-one; Figure 2(b)                77.79  67.62  61.37  55.22  45.96  33.21  26.29  21.81
joint learning model with recon-one with denoising constraints  76.60  69.27  61.13  55.10  47.50  37.12  29.63  24.88
joint learning model with recon-all; Figure 2(b)                72.92  66.97  63.31  56.23  50.16  41.41  33.75  26.31
joint learning model with recon-all with denoising constraints  76.83  68.53  65.58  58.29  52.43  45.42  39.01  32.32
proposed-base; Figure 1(b)                                      71.60  66.31  58.99  52.30  43.88  31.10  25.48  20.95
proposed-base with denoising constraints                        72.39  67.20  60.60  52.64  44.62  33.52  28.01  25.25
proposed-perturb (random); Figure 1(c)                          71.34  67.15  59.55  52.86  43.81  32.01  25.78  20.42
proposed-perturb (semantic); Figure 1(c)                        70.06  67.07  58.83  52.41  43.47  30.61  25.00  19.94

(A4.1) QUALITATIVE ANALYSIS

Extended from Section 4.4. Figure 7 shows reconstructed examples from perturbed (random or semantic) latent representations (refer to Figure 5 and the analysis described in Section 4.4).

[Figure 7, with panels: Example 1 random / semantic perturbation; Example 2 random / semantic perturbation. Caption: For each example, the top row shows the original examples selected from the training set, and the remaining rows are reconstructed from the perturbed representations via random (left) and semantic (right) perturbations.]

(A4.2) QUALITATIVE ANALYSIS

Extended from Section 4.4. Figure 8 shows the t-SNE results per class on MNIST. The overall tendency is similar to the description in Section 4.4.

[Figure 8. Caption: From top to bottom: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. From left to right: training examples (circle), training examples (circle) + random-perturbed samples (cross), and training examples (circle) + semantic-perturbed samples (cross). Best viewed in color.]
FASTTEXT.ZIP: COMPRESSING TEXT CLASSIFICATION MODELS

Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hervé Jégou & Tomas Mikolov
Facebook AI Research
{ajoulin,egrave,bojanowski,matthijs,rvj,tmikolov}@fb.com

ABSTRACT

We consider the problem of producing compact architectures for text classification, such that the full model fits in a limited amount of memory. After considering different solutions inspired by the hashing literature, we propose a method built upon product quantization to store word embeddings. While the original technique leads to a loss in accuracy, we adapt this method to circumvent quantization artefacts. Combined with simple approaches specifically adapted to text classification, our approach derived from fastText requires, at test time, only a fraction of the memory compared to the original fastText, without noticeably sacrificing quality in terms of classification accuracy. Our experiments carried out on several benchmarks show that our approach typically requires two orders of magnitude less memory than fastText while being only slightly inferior with respect to accuracy. As a result, it outperforms the state of the art by a good margin in terms of the compromise between memory usage and accuracy.

1 INTRODUCTION

Text classification is an important problem in Natural Language Processing (NLP). Real-world use cases include spam filtering and e-mail categorization. It is a core component in more complex systems such as search and ranking. Recently, deep learning techniques based on neural networks have achieved state-of-the-art results in various NLP applications. One of the main successes of deep learning is due to the effectiveness of recurrent networks for language modeling and their application to speech recognition and machine translation (Mikolov, 2012). However, in other cases, including several text classification problems, it has been shown that deep networks do not convincingly beat the prior state-of-the-art techniques (Wang & Manning, 2012; Joulin et al., 2016).

In spite of being (typically) orders of magnitude slower to train than traditional techniques based on n-grams, neural networks are often regarded as a promising alternative due to compact model sizes, in particular for character-based models. This is important for applications that need to run on systems with limited memory such as smartphones.

This paper specifically addresses the compromise between classification accuracy and model size. We extend our previous work implemented in the fastText library [1]. It is based on n-gram features, dimensionality reduction, and a fast approximation of the softmax classifier (Joulin et al., 2016). We show that a few key ingredients, namely feature pruning, quantization, hashing, and re-training, allow us to produce text classification models with tiny size, often less than 100kB when trained on several popular datasets, without noticeably sacrificing accuracy or speed.

We plan to publish the code and scripts required to reproduce our results as an extension of the fastText library, thereby providing strong reproducible baselines for text classifiers that optimize the compromise between model size and accuracy. We hope that this will help the engineering community to improve existing applications by using more efficient models.

This paper is organized as follows. Section 2 introduces related work, Section 3 describes our text classification model and explains how we drastically reduce the model size. Section 4 shows the effectiveness of our approach in experiments on multiple text classification benchmarks.

[1] https://github.com/facebookresearch/fastText
2 RELATED WORK

Models for text classification. Text classification is a problem that has its roots in many applications such as web search, information retrieval and document classification (Deerwester et al., 1990; Pang & Lee, 2008). Linear classifiers often obtain state-of-the-art performance while being scalable (Agarwal et al., 2014; Joachims, 1998; Joulin et al., 2016; McCallum & Nigam, 1998). They are particularly interesting when associated with the right features (Wang & Manning, 2012). They usually require storing embeddings for words and n-grams, which makes them memory inefficient.

Compression of language models. Our work is related to compression of statistical language models. Classical approaches include feature pruning based on entropy (Stolcke, 2000) and quantization. Pruning aims to keep only the most important n-grams in the model, leaving out those with probability lower than a specified threshold. Further, the individual n-grams can be compressed by quantizing the probability value, and by storing the n-gram itself more efficiently than as a sequence of characters. Various strategies have been developed, for example using tree structures or hash functions, and are discussed in (Talbot & Brants, 2008).

Compression for similarity estimation and search. There is a large body of literature on how to compress a set of vectors into compact codes, such that the comparison of two codes approximates a target similarity in the original space. The typical use case of these methods considers an indexed dataset of compressed vectors, and a query for which we want to find the nearest neighbors in the indexed set. One of the most popular is Locality-Sensitive Hashing (LSH) by Charikar (2002), which is a binarization technique based on random projections that approximates the cosine similarity between two vectors through a monotonous function of the Hamming distance between the two corresponding binary codes. In our paper, LSH refers to this binarization strategy [2]. Many subsequent works have improved this initial binarization technique, such as spectral hashing (Weiss et al., 2009), or Iterative Quantization (ITQ) (Gong & Lazebnik, 2011), which learns a rotation matrix minimizing the quantization loss of the binarization. We refer the reader to two recent surveys by Wang et al. (2014) and Wang et al. (2015) for an overview of the binary hashing literature.

Beyond these binarization strategies, more general quantization techniques derived from Jegou et al. (2011) offer better trade-offs between memory and the approximation of a distance estimator. The Product Quantization (PQ) method approximates the distances by calculating, in the compressed domain, the distance between their quantized approximations. This method is statistically guaranteed to preserve the Euclidean distance between the vectors within an error bound directly related to the quantization error. The original PQ has been concurrently improved by Ge et al. (2013) and Norouzi & Fleet (2013), who learn an orthogonal transform minimizing the overall quantization loss. In our paper, we will consider the Optimized Product Quantization (OPQ) variant (Ge et al., 2013).

[2] In the literature, LSH refers to multiple distinct strategies related to the Johnson-Lindenstrauss lemma. For instance, LSH sometimes refers to a partitioning technique with random projections allowing for sublinear search via cell probes, see for instance the E2LSH variant of Datar et al. (2004).
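To make the binarization strategy concrete, here is a minimal numpy sketch of sign-of-random-projection LSH in the spirit of Charikar (2002). The function names are ours, and plain Gaussian projections are used for brevity; the improved baseline used later in this paper relies on random orthogonal matrices instead (Jégou et al., 2008).

```python
import numpy as np

def lsh_codes(X, n_bits, seed=0):
    """Binarize each row of X with the sign of n_bits random projections."""
    rng = np.random.default_rng(seed)
    R = rng.normal(size=(X.shape[1], n_bits))   # random projection directions
    return X @ R > 0                            # boolean code, one bit per projection

def cosine_estimate(c1, c2, n_bits):
    """The normalized Hamming distance h estimates the angle theta/pi,
    so cos(pi * h) is a monotonous estimate of the cosine similarity."""
    hamming = np.count_nonzero(c1 != c2)
    return np.cos(np.pi * hamming / n_bits)

x, y = np.random.randn(128), np.random.randn(128)
cx, cy = lsh_codes(np.stack([x, y]), n_bits=64)
print(cosine_estimate(cx, cy, 64))
```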
Softmax approximation. The aforementioned works approximate either the Euclidean distance or the cosine similarity (both being equivalent in the case of unit-norm vectors). However, in the context of fastText, we are specifically interested in approximating the maximum inner product involved in a softmax layer. Several approaches derived from LSH have been recently proposed to achieve this goal, such as Asymmetric LSH by Shrivastava & Li (2014), subsequently discussed by Neyshabur & Srebro (2015). In our work, since we are not constrained to purely binary codes, we resort to a more traditional encoding by employing a magnitude/direction parametrization of our vectors. Therefore we only need to encode/compress a unit-norm d-dimensional vector, which fits the aforementioned LSH and PQ methods well.

Neural network compression models. Recently, several research efforts have been conducted to compress the parameters of architectures involved in computer vision, namely for state-of-the-art Convolutional Neural Networks (CNNs) (Han et al., 2016; Lin et al., 2015). Some use vector quantization (Gong et al., 2014) while others binarize the network (Courbariaux et al., 2016). Denil et al. (2013) show that such classification models are easily compressed because they are over-parametrized, which concurs with early observations by LeCun et al. (1990).

Some of these works aim at reducing both the model size and the speed. In our case, since the fastText classifier upon which our proposal is built is already very efficient, we are primarily interested in reducing the size of the model while keeping a comparable classification efficiency.

3 PROPOSED APPROACH

3.1 TEXT CLASSIFICATION

In the context of text classification, linear classifiers (Joulin et al., 2016) remain competitive with more sophisticated, deeper models, and are much faster to train. On top of standard tricks commonly used in linear text classification (Agarwal et al., 2014; Wang & Manning, 2012; Weinberger et al., 2009), Joulin et al. (2016) use a low-rank constraint to reduce the computation burden while sharing information between different classes. This is especially useful in the case of a large output space, where rare classes may have only a few training examples. In this paper, we focus on a similar model, i.e., one that minimizes the softmax loss $\ell$ over $N$ documents:

$$\sum_{n=1}^{N} \ell(y_n, B A x_n), \quad (1)$$

where $x_n$ is a bag of one-hot vectors and $y_n$ the label of the $n$-th document. In the case of a large vocabulary and a large output space, the matrices $A$ and $B$ are big and can require gigabytes of memory. Below, we describe how we reduce this memory usage.
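A minimal sketch of the loss in Eq. (1) follows, assuming dense numpy arrays for readability (fastText itself operates on sparse bags of features in C++); all names and dimensions here are illustrative.

```python
import numpy as np

def softmax_loss(A, B, X, y):
    """Average softmax loss l(y_n, B A x_n) over a batch of bag-of-features rows X."""
    H = X @ A.T                                     # low-rank hidden representation
    S = H @ B.T                                     # class scores
    S -= S.max(axis=1, keepdims=True)               # numerical stability
    log_p = S - np.log(np.exp(S).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(y)), y].mean()

V, d, C, n = 10000, 16, 10, 8                       # vocab buckets, dim, classes, batch
rng = np.random.default_rng(0)
A = rng.normal(scale=0.1, size=(d, V))              # input embedding matrix (the big one)
B = rng.normal(scale=0.1, size=(C, d))              # output / classifier matrix
X = rng.integers(0, 2, size=(n, V)).astype(float)   # bags of one-hot features
y = rng.integers(0, C, size=n)
print(softmax_loss(A, B, X, y))
```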
3.2 BOTTOM-UP PRODUCT QUANTIZATION

Product quantization is a popular method for compressed-domain approximate nearest neighbor search (Jegou et al., 2011). As a compression technique, it approximates a real-valued vector by finding the closest vector in a pre-defined structured set of centroids, referred to as a codebook. This codebook is not enumerated, since it is extremely large. Instead it is implicitly defined by its structure: a $d$-dimensional vector $x \in \mathbb{R}^d$ is approximated as

$$\hat{x} = \sum_{i=1}^{k} q_i(x), \quad (2)$$

where the different subquantizers $q_i : x \mapsto q_i(x)$ are complementary in the sense that their respective centroids lie in distinct orthogonal subspaces, i.e., $\forall i \neq j, \forall x, y: \langle q_i(x), q_j(y) \rangle = 0$. In the original PQ, the subspaces are aligned with the natural axes, while OPQ learns a rotation, which amounts to relaxing this constraint and removing the dependence on the original coordinate system. Another way to see this is to consider that PQ splits a given vector $x$ into $k$ subvectors $x^i$, $i = 1 \ldots k$, each of dimension $d/k$: $x = [x^1 \ldots x^i \ldots x^k]$, and quantizes each subvector using a distinct k-means quantizer. Each subvector $x^i$ is thus mapped to the closest centroid amongst $2^b$ centroids, where $b$ is the number of bits required to store the quantization index of the subquantizer, typically $b = 8$. The reconstructed vector can take $2^{kb}$ distinct reproduction values, and is stored in $kb$ bits.

PQ estimates the inner product in the compressed domain as

$$x^\top y \approx \hat{x}^\top y = \sum_{i=1}^{k} q_i(x^i)^\top y^i. \quad (3)$$

This is a straightforward extension of the square L2 distance estimation of Jegou et al. (2011). In practice, the vector estimate $\hat{x}$ is trivially reconstructed from the codes, i.e., from the quantization indexes, by concatenating these centroids.

The two parameters involved in PQ, namely the number of subquantizers $k$ and the number of bits $b$ per quantization index, are typically set to $k \in [2, d/2]$ and $b = 8$ to ensure byte-alignment.

Discussion. PQ offers several interesting properties in our context of text classification. Firstly, the training is very fast because the subquantizers have a small number of centroids, i.e., 256 centroids for $b = 8$. Secondly, at test time it allows the reconstruction of the vectors with almost no computational and memory overhead. Thirdly, it has been successfully applied in computer vision, offering much better performance than binary codes, which makes it a natural candidate to compress relatively shallow models. As observed by Sánchez & Perronnin (2011), using PQ just before the last layer incurs a very limited loss in accuracy when combined with a support vector machine.

In the context of text classification, the norms of the vectors are widely spread, typically with a ratio of 1000 between the max and the min. Therefore k-means performs poorly, because it optimizes an absolute error objective and thus maps all low-norm vectors to 0. A simple solution is to separate the norm and the angle of the vectors and to quantize them separately. This allows a quantization with no loss of performance, yet requires an extra $b$ bits per vector.

Bottom-up strategy: re-training. The first works aiming at compressing CNN models, like the one proposed by Gong et al. (2014), used the reconstruction from off-the-shelf PQ, i.e., without any re-training. However, as observed in Sablayrolles et al. (2016), when using quantization methods like PQ, it is better to re-train the layers occurring after the quantization, so that the network can re-adjust itself to the quantization. There is a strong argument for this re-training strategy: the square magnitude of the vectors is reduced, on average, by the average quantization error for any quantizer satisfying the Lloyd conditions; see Jegou et al. (2011) for details. This suggests a bottom-up learning strategy where we first quantize the input matrix, then retrain and quantize the output matrix (the input matrix being frozen). Experiments in Section 4 show that it is worth adopting this strategy.

Memory savings with PQ. In practice, the bottom-up PQ strategy offers a compression factor of 10 without any noticeable loss of performance. Without re-training, we notice a drop in accuracy between 0.1% and 0.5%, depending on the dataset and setting; see Section 4 and the appendix.
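The sketch below illustrates plain PQ with numpy and scipy's k-means, under our own naming; it is an illustration of the technique, not the fastText code. For the NPQ variant discussed above, one would quantize the norm $\|x\|$ separately and apply pq_encode to the direction $x / \|x\|$.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def pq_train(X, k, b=8):
    """Train k subquantizers, each with 2**b centroids, on the k subvectors of X.
    Needs at least 2**b training rows; results are not deterministic across runs."""
    ds = X.shape[1] // k
    codebooks = []
    for i in range(k):
        sub = X[:, i * ds:(i + 1) * ds]
        centroids, _ = kmeans2(sub, 2 ** b, minit='points')
        codebooks.append(centroids)
    return codebooks

def pq_encode(X, codebooks):
    """Store each vector as k one-byte centroid indices (b = 8)."""
    ds = codebooks[0].shape[1]
    codes = np.empty((X.shape[0], len(codebooks)), dtype=np.uint8)
    for i, C in enumerate(codebooks):
        sub = X[:, i * ds:(i + 1) * ds]
        dists = ((sub[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        codes[:, i] = dists.argmin(1)               # nearest centroid per subvector
    return codes

def pq_decode(codes, codebooks):
    """Reconstruct x_hat by concatenating the selected centroids."""
    return np.hstack([C[codes[:, i]] for i, C in enumerate(codebooks)])

X = np.random.randn(1000, 16)
codebooks = pq_train(X, k=8)                        # k = d/2 subquantizers
X_hat = pq_decode(pq_encode(X, codebooks), codebooks)
print(np.mean((X - X_hat) ** 2))                    # quantization error
```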
3.3 FURTHER TEXT SPECIFIC TRICKS

The memory usage strongly depends on the size of the vocabulary, which can be large in many text classification tasks. While it is clear that a large part of the vocabulary is useless or redundant, directly reducing the vocabulary to the most frequent words is not satisfactory: most of the frequent words, like "the" or "is", are not discriminative, in contrast to some rare words, e.g., in the context of tag prediction. In this section, we discuss a few heuristics to reduce the space taken by the dictionary. They lead to major memory reductions, in extreme cases by a factor 100. We experimentally show that this drastic reduction is complementary with the PQ compression method, meaning that the combination of both strategies reduces the model size by a factor up to 1000 for some datasets.

Pruning the vocabulary. Discovering which words or n-grams must be kept to preserve the overall performance is a feature selection problem. While many approaches have been proposed to select groups of variables during training (Bach et al., 2012; Meier et al., 2008), we are interested in selecting a fixed subset of $K$ words and n-grams from a pre-trained model. This can be achieved by selecting the $K$ embeddings that preserve as much of the model as possible, which can be reduced to selecting the $K$ words and n-grams associated with the highest norms.

While this approach offers major memory savings, it has one drawback occurring in some particular cases: some documents may not contain any of the $K$ best features, leading to a significant drop in performance. It is thus important to keep the $K$ best features under the condition that they cover the whole training set. More formally, the problem is to find a subset $S$ of the feature set $V$ that maximizes the sum of the norms $w_s$ under the constraint that all the documents in the training set $D$ are covered:

$$\max_{S \subseteq V} \sum_{s \in S} w_s \quad \text{s.t.} \quad |S| \le K, \; P \, \mathbf{1}_S \ge \mathbf{1}_D,$$

where $P$ is a matrix such that $P_{ds} = 1$ if the $s$-th feature is in the $d$-th document, and 0 otherwise. This problem is directly related to set covering problems that are NP-hard (Feige, 1998). Standard greedy approaches require storing an inverted index or doing multiple passes over the dataset, which is prohibitive on very large datasets (Chierichetti et al., 2010). This problem can be cast as an instance of online submodular maximization with a rank constraint (Badanidiyuru et al., 2014; Bateni et al., 2010).

[Figure 1: Accuracy as a function of the memory per vector/embedding on 3 datasets from Zhang et al. (2015): Sogou, Yahoo and Yelp full. Note that an extra byte is required when we encode the norm explicitly ("norm").]

In our case, we use a simple online parallelizable greedy approach, sketched below: for each document, we verify if it is already covered by a retained feature and, if not, we add the feature with the highest norm to our set of retained features. If the number of features is below $K$, we add the features with the highest norm that have not yet been picked.
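The following is a plain-Python sketch of this one-pass greedy coverage pruning, with hypothetical container types; ties and the corner case where coverage alone needs more than K features are handled naively here.

```python
def greedy_cover_prune(docs, norms, K):
    """One-pass greedy pruning: first cover every document, then spend the
    remaining budget on the largest-norm features.

    docs  : iterable of sets of feature ids (the training documents)
    norms : dict mapping feature id -> embedding norm
    K     : number of features to retain
    """
    kept = set()
    for features in docs:                       # a single online pass
        if kept.isdisjoint(features):           # document not covered yet
            kept.add(max(features, key=lambda f: norms[f]))
    # fill the remaining budget with the largest-norm features not yet picked
    rest = sorted((f for f in norms if f not in kept),
                  key=lambda f: norms[f], reverse=True)
    kept.update(rest[:max(0, K - len(kept))])
    return kept

docs = [{1, 2}, {3}, {2, 4}]
norms = {1: 0.5, 2: 2.0, 3: 0.1, 4: 1.5}
print(greedy_cover_prune(docs, norms, K=3))     # {2, 3, 4}
```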
Hashing trick & Bloom filter. On small models, the dictionary can take a significant portion of the memory. Instead of saving it, we extend the hashing trick used in Joulin et al. (2016) to both words and n-grams. This strategy is also used in Vowpal Wabbit (Agarwal et al., 2014) in the context of online training. This allows us to save around 1-2Mb with almost no overhead at test time (just the cost of computing the hashing function).

Pruning the vocabulary while using the hashing trick requires keeping a list of the indices of the $K$ remaining buckets. At test time, a binary search over the list of indices is required. It has a complexity of $O(\log K)$ and a memory overhead of a few hundred kilobytes. Using Bloom filters instead reduces the complexity to $O(1)$ at test time and saves a few hundred kilobytes. However, in practice, it degrades performance.
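A toy sketch of the hashing trick for n-grams follows (illustrative only; the function name and bucket count are ours, and Python's built-in hash is salted per process, so a stable hash such as FNV-1a would be used in a real system):

```python
def ngram_buckets(tokens, n, n_buckets=2_000_000):
    """Map each n-gram to a bucket id instead of storing a dictionary entry."""
    ids = []
    for i in range(len(tokens) - n + 1):
        ngram = ' '.join(tokens[i:i + n])
        ids.append(hash(ngram) % n_buckets)   # any fast string hash works here
    return ids

print(ngram_buckets("this movie was really great".split(), n=2))
```

With pruning, only the K retained bucket ids are stored sorted, and membership is tested with a binary search (or, at the cost of some accuracy, with a Bloom filter).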
4 EXPERIMENTS

This section evaluates the quality of our model compression pipeline and compares it to other compression methods on different text classification problems, and to other compact text classifiers.

Evaluation protocol and datasets. Our experimental pipeline is as follows: we train a model using fastText with the default setting unless specified otherwise, that is, 2M buckets, a learning rate of 0.1 and 10 training epochs. The dimensionality $d$ of the embeddings is set to powers of 2 to avoid border effects that could make the interpretation of the results more difficult. As baselines, we use Locality-Sensitive Hashing (LSH) (Charikar, 2002), PQ (Jegou et al., 2011) and OPQ (Ge et al., 2013) (the non-parametric variant). Note that we use an improved version of LSH where random orthogonal matrices are used instead of random matrix projections (Jégou et al., 2008). In a first series of experiments, we use the 8 datasets and evaluation protocol of Zhang et al. (2015). These datasets contain a few million documents and have at most 10 classes. We also explore the limit of quantization on a dataset with an extremely large output space, that is, a tag dataset extracted from the YFCC100M collection (Thomee et al., 2016) [3], referred to as FlickrTag in the rest of this paper.

[3] Data available at https://research.facebook.com/research/fasttext/

[Figure 2: Loss of accuracy as a function of the model size, on AG, Amazon full, Amazon polarity, DBPedia, Sogou, Yahoo, Yelp full and Yelp polarity. We compare the compressed model with different levels of pruning with NPQ and the full fastText model. We also compare with Zhang et al. (2015) and Xiao & Cho (2016). Note that the size axis is in log scale.]

4.1 SMALL DATASETS

Compression techniques. We compare three popular methods used for similarity estimation with compact codes: LSH, PQ and OPQ on the datasets released by Zhang et al. (2015). Figure 1 shows the accuracy as a function of the number of bytes used per embedding, which corresponds to the number $k$ of subvectors in the case of PQ and OPQ. See more results in the appendix. As discussed in Section 2, LSH reproduces the cosine similarity and is therefore not adapted to un-normalized data; we thus only report its results with normalization. Once normalized, PQ and OPQ are almost lossless even when using only $k = 4$ subquantizers per embedding (equivalently, four bytes). We observe that using $k = d/2$, i.e., half of the components of the embeddings, works well in practice. In the rest of the paper, if not stated otherwise, we focus on this setting. The difference between the normalized versions of PQ and OPQ is limited and depends on the dataset. Therefore we adopt the normalized PQ (NPQ) for the rest of this study, since it is faster to train.

word   Entropy  Norm   |  word           Entropy  Norm
.      1        354    |  mediocre       1399     1
,      2        176    |  disappointing  454      2
the    3        179    |  so-so          2809     3
and    4        1639   |  lacks          1244     4
i      5        2374   |  worthless      1757     5
a      6        970    |  dreadful       4358     6
to     7        1775   |  drm            6395     7
it     8        1956   |  poorly         716      8
of     9        2815   |  uninspired     4245     9
this   10       3275   |  worst          402      10

Table 1: Best ranked words w.r.t. entropy (left) and norm (right) on the Amazon full review dataset. We give the rank for both criteria. The norm ranking filters out words carrying little information.

Dataset       full        64KiB  32KiB  16KiB
AG            65M   92.1  91.4   90.6   89.1
Amazon full   108M  60.0  58.8   56.0   52.9
Amazon pol.   113M  94.5  93.3   92.1   89.3
DBPedia       87M   98.4  98.2   98.1   97.4
Sogou         73M   96.4  96.4   96.3   95.5
Yahoo         122M  72.1  70.0   69.0   69.2
Yelp full     78M   63.8  63.2   62.4   58.7
Yelp pol.     77M   95.7  95.3   94.9   93.2
Average diff. [%]   0     -0.8   -1.7   -3.5

Table 2: Performance of very small models. We use a quantization with $k = 1$, hashing and an extreme pruning. The last row shows the average drop of performance for the different sizes.

Pruning. Figure 2 shows the performance of our model at different sizes. We fix $k = d/2$ and use different pruning thresholds. NPQ offers a compression rate of 10 compared to the full model. As the pruning becomes more aggressive, the overall compression can increase up to 1,000 with little drop of performance and no additional overhead at test time. In fact, using a smaller dictionary makes the model faster at test time. We also compare with character-level Convolutional Neural Networks (CNNs) (Zhang et al., 2015; Xiao & Cho, 2016). They are attractive models for text classification because they achieve similar performance with less memory usage than linear models (Xiao & Cho, 2016). Even though fastText with the default setting uses more memory, NPQ is already on par with CNNs' memory usage. Note that the CNNs are not quantized, and it would be worth seeing how much they can be quantized with no drop of performance; such a study is beyond the scope of this paper. Our pruning is based on the norm of the embeddings according to the guidelines of Section 3.3. Table 1 compares the ranking obtained with norms to the ranking obtained using entropy, which is commonly used in unsupervised settings (Stolcke, 2000).

Extreme compression. Finally, in Table 2, we explore the limit of quantized models by looking at the performance obtained for models under 64KiB. Surprisingly, even at 64KiB and 32KiB, the drop of performance is only around 0.8% and 1.7% despite a compression rate of 1,000-4,000.

4.2 LARGE DATASET: FLICKRTAG

In this section, we explore the limit of compression algorithms on very large datasets. Similar to Joulin et al. (2016), we consider a hashtag prediction dataset containing 312,116 labels. We set the minimum count for words at 10, leading to a dictionary of 1,427,667 words. We take 10M buckets for n-grams and a hierarchical softmax. We refer to this dataset as FlickrTag.

Output encoding. We are interested in understanding how the performance degrades if the classifier is also quantized (i.e., the matrix $B$ in Eq. 1) and when the pruning is at the limit of the minimum number of features required to cover the full dataset.

Model                k    norm  retrain  Acc.  Size
full (uncompressed)                      45.4  12 GiB
Input                128                 45.0  1.7 GiB
Input                128  x              45.3  1.8 GiB
Input                128  x     x        45.5  1.8 GiB
Input+Output         128  x              45.2  1.5 GiB
Input+Output         128  x     x        45.4  1.5 GiB

Table 3: FlickrTag: Influence of quantizing the output matrix on performance. We use PQ for quantization with an optional normalization. We also retrain the output matrix after quantizing the input one. The "norm" refers to the separate encoding of the magnitude and angle, while "retrain" refers to the re-training bottom-up PQ method described in Section 3.2.
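As a rough, self-contained illustration of the bottom-up strategy evaluated in Table 3 (quantize the input matrix, re-train the output matrix with the quantized input frozen, then quantize the output matrix), here is a numpy sketch. A toy uniform scalar quantizer stands in for NPQ, and all names and hyperparameters are assumptions, not the paper's pipeline.

```python
import numpy as np

def quantize_rows(M, levels=256):
    """Toy stand-in for PQ: uniform scalar quantization of each row."""
    lo, hi = M.min(axis=1, keepdims=True), M.max(axis=1, keepdims=True)
    step = (hi - lo) / (levels - 1) + 1e-12
    return lo + np.round((M - lo) / step) * step

def retrain_output(A_q, B, X, y, lr=0.5, epochs=50):
    """Gradient steps on B only, with the quantized input matrix frozen."""
    H = X @ A_q.T
    for _ in range(epochs):
        S = H @ B.T
        S -= S.max(axis=1, keepdims=True)
        P = np.exp(S); P /= P.sum(axis=1, keepdims=True)
        P[np.arange(len(y)), y] -= 1.0          # dLoss/dScores for softmax + NLL
        B -= lr * (P.T @ H) / len(y)
    return B

rng = np.random.default_rng(0)
V, d, C, n = 1000, 16, 5, 256
A = rng.normal(size=(d, V)); B = rng.normal(size=(C, d))
X = (rng.random((n, V)) < 0.01).astype(float)
y = rng.integers(0, C, size=n)

A_q = quantize_rows(A)               # 1) quantize the input matrix, freeze it
B   = retrain_output(A_q, B, X, y)   # 2) re-adjust the classifier to the quantization
B_q = quantize_rows(B)               # 3) quantize the output matrix as well
```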
Table 3 shows that quantizing both the "input" matrix (i.e., $A$ in Eq. 1) and the "output" matrix (i.e., $B$) does not degrade the performance compared to the full model. We use embeddings with $d = 256$ dimensions and $k = d/2$ subquantizers. We do not use any text-specific tricks, which leads to a compression factor of 8. Note that even if the output matrix is not retrained over the embeddings, the performance is only 0.2% away from the full model. As shown in the Appendix, using fewer subquantizers significantly decreases the performance for a small memory gain.

Model        full   Entropy pruning  Norm pruning    Max-Cover pruning
#embeddings  11.5M  2M      1M       2M      1M      2M      1M
Memory       12GiB  297MiB  174MiB   305MiB  179MiB  305MiB  179MiB
Coverage [%] 88.4   70.5    70.5     73.2    61.9    88.4    88.4
Accuracy     45.4   32.1    30.5     41.6    35.8    45.5    43.9

Table 4: FlickrTag: Comparison of the entropy pruning, norm pruning and max-cover pruning methods. We show the coverage of the test set for each method.

Pruning. Table 4 shows how the performance evolves with pruning. We measure this effect on top of a fully quantized model. The full model misses 11.6% of the test set because of missing words (some documents are either only composed of hashtags or have only rare words). There are 312,116 labels and thus it seems reasonable to keep embeddings in the order of the million. A naive pruning with 1M features misses about 30-40% of the test set, leading to a significant drop of performance. On the other hand, even though the max-coverage pruning approach was set on the train set, it does not suffer from any coverage loss on the test set. This leads to a smaller drop of performance. If the pruning is too aggressive, however, the coverage decreases significantly.

5 FUTURE WORK

It may be possible to obtain further reductions of the model size in the future. One idea is to condition the size of the vectors (both for the input features and the labels) on their frequency (Chen et al., 2015; Grave et al., 2016). For example, it is probably not worth representing the rare labels by full 256-dimensional vectors in the case of the FlickrTag dataset. Thus, conditioning the vector size on the frequency and norm seems like an interesting direction to explore in the future.

We may also consider combining the entropy and norm pruning criteria: instead of keeping the features in the model based just on the frequency or the norm, we can use both to keep a good set of features. This could help to keep features that are both frequent and discriminative, and thereby to reduce the coverage problem that we have observed.

Additionally, instead of pruning out the less useful features, we can decompose them into smaller units (Mikolov et al., 2012). For example, this can be achieved by splitting every non-discriminative word into a sequence of character trigrams. This could help in cases where training and test examples are very short (for example just a single word).
6 CONCLUSION

In this paper, we have presented several simple techniques to reduce, by several orders of magnitude, the memory complexity of certain text classifiers without sacrificing accuracy or speed. This is achieved by applying a discriminative pruning which aims to keep only important features in the trained model, and by performing quantization of the weight matrices and hashing of the dictionary. We will publish the code as an extension of the fastText library. We hope that our work will serve as a baseline for the research community, where there is an increasing interest in comparing the performance of various deep learning text classifiers for a given number of parameters. Overall, compared to recent work based on convolutional neural networks, fastText.zip is often more accurate, while requiring several orders of magnitude less time to train on common CPUs, and incurring a fraction of the memory complexity.

REFERENCES

Alekh Agarwal, Olivier Chapelle, Miroslav Dudík, and John Langford. A reliable effective terascale linear learning system. Journal of Machine Learning Research, 15(1):1111–1133, 2014.

Francis Bach, Rodolphe Jenatton, Julien Mairal, and Guillaume Obozinski. Optimization with sparsity-inducing penalties. Foundations and Trends in Machine Learning, 4(1):1–106, 2012.

Ashwinkumar Badanidiyuru, Baharan Mirzasoleiman, Amin Karbasi, and Andreas Krause. Streaming submodular maximization: Massive data summarization on the fly. In SIGKDD, pp. 671–680. ACM, 2014.

Mohammad Hossein Bateni, Mohammad Taghi Hajiaghayi, and Morteza Zadimoghaddam. Submodular secretary problem and extensions. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, pp. 39–52. Springer, 2010.

Moses S. Charikar. Similarity estimation techniques from rounding algorithms. In STOC, pp. 380–388, May 2002.

Welin Chen, David Grangier, and Michael Auli. Strategies for training large vocabulary neural language models. arXiv preprint arXiv:1512.04906, 2015.

Flavio Chierichetti, Ravi Kumar, and Andrew Tomkins. Max-cover in map-reduce. In International Conference on World Wide Web, 2010.

Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks: Training neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016.

M. Datar, N. Immorlica, P. Indyk, and V. S. Mirrokni. Locality-sensitive hashing scheme based on p-stable distributions. In Proceedings of the Symposium on Computational Geometry, pp. 253–262, 2004.

Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 1990.

Misha Denil, Babak Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, and Nando de Freitas. Predicting parameters in deep learning. In NIPS, pp. 2148–2156, 2013.

Uriel Feige. A threshold of ln n for approximating set cover. JACM, 45(4):634–652, 1998.

Tiezheng Ge, Kaiming He, Qifa Ke, and Jian Sun. Optimized product quantization for approximate nearest neighbor search. In CVPR, June 2013.

Yunchao Gong and Svetlana Lazebnik. Iterative quantization: A procrustean approach to learning binary codes. In CVPR, June 2011.

Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.

Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou. Efficient softmax approximation for GPUs. arXiv preprint arXiv:1609.04309, 2016.

Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In ICLR, 2016.
Hervé Jégou, Matthijs Douze, and Cordelia Schmid. Hamming embedding and weak geometric consistency for large scale image search. In ECCV, October 2008.

Hervé Jégou, Matthijs Douze, and Cordelia Schmid. Product quantization for nearest neighbor search. IEEE Trans. PAMI, January 2011.

Thorsten Joachims. Text categorization with support vector machines: Learning with many relevant features. Springer, 1998.

Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759, 2016.

Yann LeCun, John S. Denker, and Sara A. Solla. Optimal brain damage. NIPS, 2:598–605, 1990.

Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. Neural networks with few multiplications. arXiv preprint arXiv:1510.03009, 2015.

Andrew McCallum and Kamal Nigam. A comparison of event models for naive Bayes text classification. In AAAI Workshop on Learning for Text Categorization, 1998.

Lukas Meier, Sara Van De Geer, and Peter Bühlmann. The group lasso for logistic regression. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(1):53–71, 2008.

Tomas Mikolov. Statistical language models based on neural networks. PhD thesis, VUT Brno, 2012.

Tomas Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and J. Cernocky. Subword language modeling with neural networks. Preprint, 2012.

Behnam Neyshabur and Nathan Srebro. On symmetric and asymmetric LSHs for inner product search. In ICML, pp. 1926–1934, 2015.

Mohammad Norouzi and David Fleet. Cartesian k-means. In CVPR, June 2013.

Bo Pang and Lillian Lee. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2008.

Alexandre Sablayrolles, Matthijs Douze, Hervé Jégou, and Nicolas Usunier. How should we evaluate supervised hashing? arXiv preprint arXiv:1609.06753, 2016.

Jorge Sánchez and Florent Perronnin. High-dimensional signature compression for large-scale image classification. In CVPR, 2011.

Anshumali Shrivastava and Ping Li. Asymmetric LSH for sublinear time maximum inner product search. In NIPS, pp. 2321–2329, 2014.

Andreas Stolcke. Entropy-based pruning of backoff language models. arXiv preprint cs/0006025, 2000.

David Talbot and Thorsten Brants. Randomized language models via perfect hash functions. In ACL, 2008.

Bart Thomee, David A. Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li. YFCC100M: The new data in multimedia research. In Communications of the ACM, 2016.

Jingdong Wang, Heng Tao Shen, Jingkuan Song, and Jianqiu Ji. Hashing for similarity search: A survey. arXiv preprint arXiv:1408.2927, 2014.

Jun Wang, Wei Liu, Sanjiv Kumar, and Shih-Fu Chang. Learning to hash for indexing big data - a survey. CoRR, abs/1509.05472, 2015.

Sida Wang and Christopher D. Manning. Baselines and bigrams: Simple, good sentiment and topic classification. In ACL, 2012.

Kilian Q. Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. Feature hashing for large scale multitask learning. In ICML, 2009.

Yair Weiss, Antonio Torralba, and Rob Fergus. Spectral hashing. In NIPS, December 2009.

Yijun Xiao and Kyunghyun Cho. Efficient character-level document classification by combining convolution and recurrent layers. arXiv preprint arXiv:1602.00367, 2016.

Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In NIPS, 2015.
APPENDIX

In the appendix, we show some additional results. The model used in these experiments only had 1M n-gram buckets. In Table 5, we show a thorough comparison of LSH, PQ and OPQ on 8 different datasets. Table 7 summarizes the comparison with CNNs in terms of accuracy and size. Table 8 shows a thorough comparison of the hashing trick and the Bloom filters.

Quant.  k  norm |  AG        |  Amz. f.   |  Amz. p.   |  DBP       |  Sogou     |  Yah.      |  Yelp f.   |  Yelp p.
full            |  92.1 36M  |  59.8 97M  |  94.5 104M |  98.4 67M  |  96.3 47M  |  72   120M |  63.7 56M  |  95.7 53M
full,nodict     |  92.1 34M  |  59.9 78M  |  94.5 83M  |  98.4 56M  |  96.3 42M  |  72.2 91M  |  63.6 48M  |  95.6 46M
LSH     8       |  88.7 8.5M |  51.3 20M  |  90.3 21M  |  92.7 14M  |  94.2 11M  |  54.8 23M  |  56.7 12M  |  92.2 12M
PQ      8       |  91.7 8.5M |  59.3 20M  |  94.4 21M  |  97.4 14M  |  96.1 11M  |  71.3 23M  |  62.8 12M  |  95.4 12M
OPQ     8       |  91.9 8.5M |  59.3 20M  |  94.4 21M  |  96.9 14M  |  95.8 11M  |  71.4 23M  |  62.5 12M  |  95.4 12M
LSH     8  x    |  91.9 9.5M |  59.4 22M  |  94.5 24M  |  97.8 16M  |  96.2 12M  |  71.6 26M  |  63.4 14M  |  95.6 13M
PQ      8  x    |  92.0 9.5M |  59.8 22M  |  94.5 24M  |  98.4 16M  |  96.3 12M  |  72.1 26M  |  63.7 14M  |  95.6 13M
OPQ     8  x    |  92.1 9.5M |  59.9 22M  |  94.5 24M  |  98.4 16M  |  96.3 12M  |  72.2 26M  |  63.6 14M  |  95.6 13M
LSH     4       |  88.3 4.3M |  50.5 9.7M |  88.9 11M  |  91.6 7.0M |  94.3 5.3M |  54.6 12M  |  56.5 6.0M |  92.9 5.7M
PQ      4       |  91.6 4.3M |  59.2 9.7M |  94.4 11M  |  96.3 7.0M |  96.1 5.3M |  71.0 12M  |  62.2 6.0M |  95.4 5.7M
OPQ     4       |  91.7 4.3M |  59.0 9.7M |  94.4 11M  |  96.9 7.0M |  95.6 5.3M |  71.2 12M  |  62.6 6.0M |  95.4 5.7M
LSH     4  x    |  92.1 5.3M |  59.2 13M  |  94.4 13M  |  97.7 8.8M |  96.2 6.6M |  71.1 15M  |  63.1 7.4M |  95.5 7.2M
PQ      4  x    |  92.1 5.3M |  59.8 13M  |  94.5 13M  |  98.4 8.8M |  96.3 6.6M |  72.0 15M  |  63.6 7.5M |  95.6 7.2M
OPQ     4  x    |  92.2 5.3M |  59.8 13M  |  94.5 13M  |  98.3 8.8M |  96.3 6.6M |  72.1 15M  |  63.7 7.5M |  95.6 7.2M
LSH     2       |  87.7 2.2M |  50.1 4.9M |  88.9 5.2M |  90.6 3.5M |  93.9 2.7M |  51.4 5.7M |  56.6 3.0M |  91.3 2.9M
PQ      2       |  91.1 2.2M |  58.7 4.9M |  94.4 5.2M |  87.1 3.6M |  95.3 2.7M |  69.5 5.7M |  62.1 3.0M |  95.4 2.9M
OPQ     2       |  91.4 2.2M |  58.2 4.9M |  94.3 5.2M |  91.6 3.6M |  94.2 2.7M |  69.6 5.7M |  62.1 3.0M |  95.4 2.9M
LSH     2  x    |  91.8 3.2M |  58.6 7.3M |  94.3 7.8M |  97.1 5.3M |  96.1 4.0M |  69.7 8.6M |  62.7 4.5M |  95.5 4.3M
PQ      2  x    |  91.9 3.2M |  59.6 7.3M |  94.5 7.8M |  98.1 5.3M |  96.3 4.0M |  71.3 8.6M |  63.4 4.5M |  95.6 4.3M
OPQ     2  x    |  92.1 3.2M |  59.5 7.3M |  94.5 7.8M |  98.1 5.3M |  96.2 4.0M |  71.5 8.6M |  63.4 4.5M |  95.6 4.3M

Table 5: Comparison between standard quantization methods (accuracy in %, model size). The original model has a dimensionality of 8 and 2M buckets. Note that all of the methods are without dictionary.
k  co          |  AG        |  Amz. f.   |  Amz. p.   |  DBP       |  Sogou     |  Yah.      |  Yelp f.   |  Yelp p.
full, nodict   |  92.1 34M  |  59.8 78M  |  94.5 83M  |  98.4 56M  |  96.3 42M  |  72.2 91M  |  63.7 48M  |  95.6 46M
8  full        |  92.0 9.5M |  59.8 22M  |  94.5 24M  |  98.4 16M  |  96.3 12M  |  72.1 26M  |  63.7 14M  |  95.6 13M
4  full        |  92.1 5.3M |  59.8 13M  |  94.5 13M  |  98.4 8.8M |  96.3 6.6M |  72   15M  |  63.6 7.5M |  95.6 7.2M
2  full        |  91.9 3.2M |  59.6 7.3M |  94.5 7.8M |  98.1 5.3M |  96.3 4.0M |  71.3 8.6M |  63.4 4.5M |  95.6 4.3M
8  200K        |  92.0 2.5M |  59.7 2.5M |  94.3 2.5M |  98.5 2.5M |  96.6 2.5M |  71.8 2.5M |  63.3 2.5M |  95.6 2.5M
8  100K        |  91.9 1.3M |  59.5 1.3M |  94.3 1.3M |  98.5 1.3M |  96.6 1.3M |  71.6 1.3M |  63.4 1.3M |  95.6 1.3M
8  50K         |  91.7 645K |  59.7 645K |  94.3 644K |  98.5 645K |  96.6 645K |  71.5 645K |  63.2 645K |  95.6 644K
8  10K         |  91.3 137K |  58.6 137K |  93.2 137K |  98.5 137K |  96.5 137K |  71.3 137K |  63.3 137K |  95.4 137K
4  200K        |  92.0 1.8M |  59.7 1.8M |  94.3 1.8M |  98.5 1.8M |  96.6 1.8M |  71.7 1.8M |  63.3 1.8M |  95.6 1.8M
4  100K        |  91.9 889K |  59.5 889K |  94.4 889K |  98.5 889K |  96.6 889K |  71.7 889K |  63.4 889K |  95.6 889K
4  50K         |  91.7 449K |  59.6 449K |  94.3 449K |  98.5 450K |  96.6 449K |  71.4 450K |  63.2 449K |  95.5 449K
4  10K         |  91.5 98K  |  58.6 98K  |  93.2 98K  |  98.5 98K  |  96.5 98K  |  71.2 98K  |  63.3 98K  |  95.4 98K
2  200K        |  91.9 1.4M |  59.6 1.4M |  94.3 1.4M |  98.4 1.4M |  96.5 1.4M |  71.5 1.4M |  63.2 1.4M |  95.5 1.4M
2  100K        |  91.6 693K |  59.5 693K |  94.3 693K |  98.4 694K |  96.6 693K |  71.1 694K |  63.2 693K |  95.6 693K
2  50K         |  91.6 352K |  59.6 352K |  94.3 352K |  98.4 352K |  96.5 352K |  71.1 352K |  63.2 352K |  95.6 352K
2  10K         |  91.3 78K  |  58.5 78K  |  93.2 78K  |  98.4 79K  |  96.5 78K  |  70.8 78K  |  63.2 78K  |  95.3 78K

Table 6: Comparison of different quantization and pruning levels. "co" is the cut-off parameter of the pruning.

Dataset  |  Zhang et al. (2015)  |  Xiao & Cho (2016)  |  fastText+PQ, k=d/2
AG       |  90.2 108M            |  91.4 80M           |  91.9 889K
Amz. f.  |  59.5 10.8M           |  59.2 1.6M          |  59.6 449K
Amz. p.  |  94.5 10.8M           |  94.1 1.6M          |  94.3 449K
DBP      |  98.3 108M            |  98.6 1.2M          |  98.5 98K
Sogou    |  95.1 108M            |  95.2 1.6M          |  96.5 98K
Yah.     |  70.5 108M            |  71.4 80M           |  71.7 889K
Yelp f.  |  61.6 108M            |  61.8 1.4M          |  63.3 98K
Yelp p.  |  94.8 108M            |  94.5 1.2M          |  95.5 449K

Table 7: Comparison between CNNs and fastText with and without quantization. The numbers for Zhang et al. (2015) are reported from Xiao & Cho (2016). Note that for the CNNs, we report the size of the model under the assumption that they use float32 storage. For fastText (+PQ) we report the memory used in RAM at test time.

Quant.  Bloom  co   |  AG        |  Amz. f.   |  Amz. p.   |  DBP       |  Sogou     |  Yah.      |  Yelp f.   |  Yelp p.
full,nodict         |  92.1 34M  |  59.8 78M  |  94.5 83M  |  98.4 56M  |  96.3 42M  |  72.2 91M  |  63.7 48M  |  95.6 46M
NPQ            200K |  91.9 1.4M |  59.6 1.4M |  94.3 1.4M |  98.4 1.4M |  96.5 1.4M |  71.5 1.4M |  63.2 1.4M |  95.5 1.4M
NPQ     x      200K |  92.2 830K |  59.3 830K |  94.1 830K |  98.4 830K |  96.5 830K |  70.7 830K |  63.0 830K |  95.5 830K
NPQ            100K |  91.6 693K |  59.5 693K |  94.3 693K |  98.4 694K |  96.6 693K |  71.1 694K |  63.2 693K |  95.6 693K
NPQ     x      100K |  91.8 420K |  59.1 420K |  93.9 420K |  98.4 420K |  96.5 420K |  70.6 420K |  62.8 420K |  95.3 420K
NPQ            50K  |  91.6 352K |  59.6 352K |  94.3 352K |  98.4 352K |  96.5 352K |  71.1 352K |  63.2 352K |  95.6 352K
NPQ     x      50K  |  91.5 215K |  58.8 215K |  93.6 215K |  98.3 215K |  96.5 215K |  70.1 215K |  62.7 215K |  95.1 215K
NPQ            10K  |  91.3 78K  |  58.5 78K  |  93.2 78K  |  98.4 79K  |  96.5 78K  |  70.8 78K  |  63.2 78K  |  95.3 78K
NPQ     x      10K  |  90.8 51K  |  56.8 51K  |  91.7 51K  |  98.1 51K  |  96.1 51K  |  68.7 51K  |  61.7 51K  |  94.5 51K

Table 8: Comparison with and without Bloom filters. For NPQ, we set d = 8 and k = 2.
Model                 k    norm  retrain  Acc.  Size
full                                      45.4  12G
Input                 128                 45.0  1.7G
Input                 128  x              45.3  1.8G
Input                 128  x     x        45.5  1.8G
Input+Output          128  x              45.2  1.5G
Input+Output          128  x     x        45.4  1.5G
Input+Output, co=2M   128  x     x        45.5  305M
Input+Output, co=1M   128  x     x        43.9  179M
Input                 64                  44.0  1.1G
Input                 64   x              44.7  1.1G
Input                 64   x     x        44.9  1.1G
Input+Output          64   x              44.6  784M
Input+Output          64   x     x        44.8  784M
Input+Output, co=2M   64   x              42.5  183M
Input+Output, co=1M   64   x              39.9  118M
Input+Output, co=2M   64   x     x        45.0  183M
Input+Output, co=1M   64   x     x        43.4  118M
Input                 32                  40.5  690M
Input                 32   x              42.4  701M
Input                 32   x     x        42.9  701M
Input+Output          32   x              42.3  435M
Input+Output          32   x     x        42.8  435M
Input+Output, co=2M   32   x              35.0  122M
Input+Output, co=1M   32   x              32.6  88M
Input+Output, co=2M   32   x     x        43.3  122M
Input+Output, co=1M   32   x     x        41.6  88M

Table 9: FlickrTag: Comparison for a large dataset of (i) different quantization methods and parameters, (ii) with or without re-training.